
Docker Login For Amazon AWS ECR Using Windows Powershell

My recent studies in .Net Core have led me to the new world of Docker (new for .Net developers, anyway). The idea of developing low-cost microservices while still working in my favorite development platform is very exciting. In the process, I began using the Amazon AWS Docker platform, Elastic Container Service (ECS).

I quickly found that documentation for using ECS from Windows is a bit scarce. In particular, the PowerShell tools are missing a pretty key helper method, get-login. Calling “aws ecr get-login” on a Linux box delivers a complete “docker login” command for authenticating to the Elastic Container Registry (ECR). There is currently no such helper for Windows. At least, not that I can find; someone correct me if I’m just missing it.
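
For reference, here is roughly what the Linux helper emits (the account ID below is a placeholder and the token is shortened; the exact output can vary by AWS CLI version):

docker login -u AWS -p <very-long-base64-token> -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com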

Instead, I’ve done a bit of digging and found how to authenticate programmatically. From that, I’ve created the helper code below for reuse.

# Get the authorization token
$token = Get-ECRAuthorizationToken -Region us-east-1 -AccessKey your-access-key -SecretKey your-secret-key
# Split the token into username and password segments
$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
# Get the host name without https, as this can confuse some Windows machines 
$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
# Perform login
docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName

This login should then be valid for 12 hours. Note that you will need to substitute your own Region, AccessKey, and SecretKey on the first line. Alternatively, you could use Set-DefaultAWSRegion and Set-AWSCredentials to store them in your PowerShell session. If you’re on a build server running in AWS, you could also use IAM roles to grant access directly to the build server.
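
For example, here is a minimal sketch of the session-default approach, using placeholder keys (once the defaults are set, Get-ECRAuthorizationToken can be called without passing credentials explicitly):

# Store credentials and region for the current PowerShell session
Set-AWSCredentials -AccessKey your-access-key -SecretKey your-secret-key
Set-DefaultAWSRegion -Region us-east-1

# The token request then picks up the session defaults
$token = Get-ECRAuthorizationToken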

Update 2/8/2017

You can add the function below to your PowerShell profile to get an easy-to-use cmdlet. To install it:

  1. Run “notepad $PROFILE”
  2. Paste the code below into the file and save it
  3. Run “. $PROFILE” or restart PowerShell
  4. Run “Auth-ECR your-access-key your-secret-key”

function Auth-ECR {
	[CmdletBinding()]
	param (
		[parameter(Mandatory=$true, Position=0)]
		[string]
		$AccessKey,
		
		[parameter(Mandatory=$true, Position=1)]
		[string]
		$SecretKey,
		
		[parameter()]
		[string]
		$Region = "us-east-1"
	)
	
	# Get the authorization token
	$token = Get-ECRAuthorizationToken -AccessKey $AccessKey -SecretKey $SecretKey -Region $Region `
		-ErrorAction Stop
	
	# Split the token into username and password segments
	$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
	
	# Get the host name without https, as this can confuse some Windows machines 
	$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
	
	# Perform login
	docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName
}

Note that this function defaults to the us-east-1 region. You can change the default in your profile, or use “-Region” on the command line.
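
For example, with placeholder keys and an alternate region (any valid AWS region name will work here):

Auth-ECR your-access-key your-secret-key -Region us-west-2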

Rebuild All Couchbase N1QL Indexes After Restore

Overview

When restoring a Couchbase cluster from a backup, the restore utility is kind enough to recreate the N1QL indexes for you. To improve speed and efficiency, the indexes are only created; they are not built automatically. Before they can be used, you must execute a build command such as this:

BUILD INDEX ON BucketName (IndexName1, IndexName2, IndexName3)

It is important that this query be issued as a single command for all indexes on a bucket.  This allows the indexes to be built together, resulting in only one read of the data from the cluster while building multiple indexes.

The Problem

Unfortunately, N1QL doesn’t currently offer a wildcard option, so there is no quick way to rebuild all indexes without typing all of their names. If you’re trying to script your environments for development or QA, this can be particularly problematic, as the list of indexes may not be constant. It could also be a problem when creating scripts for a disaster recovery plan.

The Solution

If you’re running on Linux (you should be for production clusters), the solution is to use this script:

#!/bin/sh

QUERY_HOST=http://localhost:8091

for i in "BucketName1" "BucketName2" "BucketName3"
do
  /opt/couchbase/bin/cbq -e $QUERY_HOST -s="$( \
    echo "BUILD INDEX ON $i (\`$( \
      /opt/couchbase/bin/cbq -e $QUERY_HOST -q=true -s="SELECT name FROM system:indexes where keyspace_id = '$i' AND state = 'deferred'" | \
        sed -n -e '/{/,$p' | \
        jq -r '.results[].name' | \
        sed ':a;/.*/{N;s/\n/\`,\`/;ba}')\`)")"

  # Wait for completion
  until [ `/opt/couchbase/bin/cbq -e $QUERY_HOST -q=true -s="SELECT COUNT(*) as unbuilt FROM system:indexes WHERE keyspace_id = '$i' AND state <> 'online'" | sed -n -e '/{/,$p' | jq -r '.results[].unbuilt'` -eq 0 ];
  do
    sleep 5
  done
done

Replace the QUERY_HOST parameter as needed to point to a query node, and replace the BucketName values with the names of the buckets where indexes must be built. The script processes each bucket one at a time, waiting for its indexes to finish building before continuing to the next bucket.
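
For example, the only lines that typically need editing are the query host and the bucket list at the top of the script (the host and bucket names below are placeholders):

QUERY_HOST=http://query01.example.com:8091

for i in "orders" "customers"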

The only dependency is the jq utility, which is a command-line JSON parser. On Ubuntu, it can be installed via:

sudo apt-get install -y jq

The script isn’t pretty, but it gets the job done. Hopefully N1QL will get a wildcard BUILD INDEX command in the future.

Note: Revised 9/15/2016 to better strip header information from the query output before the JSON object. Previously it only stripped the first line; now it strips all lines until it encounters the curly brace starting the JSON result.

Testing an SDK for Async/Await SynchronizationContext Deadlocks

Overview

The purpose of this post is to explain how to write unit and/or integration tests that ensure you don’t have synchronization context deadlocks in an SDK. A very detailed explanation of the problem and the solution can be found here. However, before explaining how to write the tests, I’ll give a brief summary of the problem and the solutions. Then we’ll get into how to test that the solution is correctly implemented now and won’t regress in the future.

The Problem

One of the common pitfalls for API developers in asynchronous development with the .Net TPL is the deadlock. Most commonly, it is caused by SDK consumers using your asynchronous SDK in a synchronous manner. For example:

public ActionResult Index()
{
    // Note to consumers: Where possible DON'T DO THIS.  Just make your MVC action async, it works MUCH better.
    var data = api.SomeActionAsync().Result;

    return View(data);
}

The example above is typically a problem because MVC runs actions inside a SynchronizationContext. The MVC synchronization context prevents more than one thread from operating within the context simultaneously. The process flow works as follows:

  1. Thread A runs the action above, and requests “SomeActionAsync”
  2. Thread A blocks waiting on “Result” for “SomeActionAsync”
  3. Thread B begins processing the work for “SomeActionAsync”
  4. At some point, Thread B attempts to synchronize onto the SynchronizationContext from the MVC action, and is blocked waiting for Thread A to release it.
  5. We have a deadlock!

So why does Step 4 above happen? I know I didn’t write any code that requested that the MVC SynchronizationContext be used! Well, if you are using the async/await programming model, you’re doing so without even knowing it.

public async Task<SomeResult> SomeActionAsync()
{
    // do some work

    var temp = await obj.SomeOtherActionAsync();

    // do more work

    return result;
}

The await above automatically captures the current SynchronizationContext and posts its continuation back to it. This is actually pretty important for writing async MVC actions: when an async method is awaited in the action, we really want the remainder of the action to run within the SynchronizationContext once it completes. But within our SDK, we probably don’t want that to happen, because it usually doesn’t have any value to us.

The Solution

There are two solutions to this problem: the good one and the bad one.

The Bad Solution: Require that SDK consumers use the SDK asynchronously

I don’t like this solution because it’s difficult to ensure that consumers use it correctly. It’s also a barrier to SDK use. I really believe that consumers should be given the option to consume the SDK how they like, even if it’s a bad way to consume it. The TPL provides a .Result call, so where possible we should make it work.

public async Task<ActionResult> Index()
{
    var data = await api.SomeActionAsync();

    return View(data);
}

An important note for SDK consumers, though. For you, this is the Good Solution. You should always use asynchronous API calls in an asynchronous manner whenever possible. This is only a Bad Solution if the SDK developer is assuming that you always do this.

The Good Solution: Fix the problem on the SDK side

Thankfully, the TPL provides us with a simple workaround in the SDK: ConfigureAwait(false). Calling this method on a task before awaiting it causes the continuation to ignore the SynchronizationContext.

public async Task<SomeResult> SomeActionAsync()
{
    // do some work

    var temp = await obj.SomeOtherActionAsync().ConfigureAwait(false);

    // do more work

    return result;
}

Most Importantly, The Test

The problem with the Good Solution is that it requires you to place a lot of ConfigureAwait(false) calls throughout the SDK. This can be cumbersome and easy to forget, though there is a ReSharper plugin, ConfigureAwaitHelper, that helps.

Any good SDK comes with a battery of unit and integration tests, so the trick is to add tests to the SDK that ensure we don’t forget any ConfigureAwait(false) calls. So how do we write a test that ensures we called ConfigureAwait(false)?

The trick is understanding how the SynchronizationContext works. Anytime it is used, a call will be made to either its Send or Post method. So all we need to do is make a mock and ensure that these methods never get called:

[Test]
public void Test_SomeActionNoDeadlock()
{
    // Arrange
    var context = new Mock<SynchronizationContext>
    {
        CallBase = true
    };

    // Do other arrange actions here

    SynchronizationContext.SetSynchronizationContext(context.Object);
    try
    {
        // Act
        // "classUnderTest" is an instance of the SDK class being tested
        classUnderTest.SomeActionAsync().Wait();

        // Assert

        // If the method is incorrectly awaiting on the current SynchronizationContext
        // We will see calls to Post or Send on the mock

        context.Verify(m => m.Post(It.IsAny<SendOrPostCallback>(), It.IsAny<object>()), Times.Never);
        context.Verify(m => m.Send(It.IsAny<SendOrPostCallback>(), It.IsAny<object>()), Times.Never);
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(null);
    }
}

This example uses NUnit and Moq, but it should work just as well with other testing frameworks. Now we have a way to guarantee that ConfigureAwait(false) was used throughout the SDK method, so long as we get 100% test coverage through the logical paths in the method.

Of course, you may ask “Why do I need this?  I just looked at the code, I’m always calling ConfigureAwait(false)!” The answer is preventing regressions. You might have remembered today, but next month when you’re making a change it’s very easy to forget. This test is your fallback plan in case you make a mistake in the future.

Couchbase and N1QL Security

As a developer at CenterEdge Software, I’ve had a lot of cause to use Couchbase as our NoSQL database platform over the last few years. I’ve gotten really excited about the potential of the new Couchbase query language in Couchbase 4.0, called N1QL. So excited that I’ve spent a lot of time contributing to the Linq2Couchbase library, which allows developers to use LINQ to transparently create N1QL queries.

In doing work with N1QL, I quickly realized that it may have some of the same security concerns as SQL. In particular, N1QL injection (what I call the N1QL equivalent of SQL injection) could be a new attack surface in Couchbase 4.0. I found that while the risks are lower in N1QL than in SQL, there are still some areas that need to be addressed by application developers using Couchbase.

As a result, I did some research and recently wrote a guest post on N1QL security for Couchbase users. It explores possible N1QL injection concerns, then goes into how to protect your applications when using N1QL.

http://blog.couchbase.com/2015/september/couchbase-and-n1ql-security-centeredgesoftware

Windows Domain Account Lockout Mystery

In addition to development, I sometimes get saddled with some domain administration. We recently encountered a strange mystery where a user’s account was being locked out every day as soon as they booted up their computer. They hadn’t even tried to log in yet, but their account was being magically locked out.

After lots of research, we excluded all of the obvious causes. We finally tracked it down by turning on Kerberos logging on the client computer (http://support.microsoft.com/kb/262177). We then found Event ID 14, stating “The password stored in Credential Manager is invalid”. But there were no passwords stored in the Credential Manager!

At this point, we found this very helpful forum discussion that explains it: http://social.technet.microsoft.com/Forums/windows/en-US/e1ef04fa-6aea-47fe-9392-45929239bd68/securitykerberos-event-id-14-credential-manager-causes-system-to-login-to-network-with-invalid?forum=w7itprosecurity.

Apparently, your user account credentials can get saved under the SYSTEM (a.k.a. local computer) account on the computer. Once there, they can’t be accessed through any normal UI to remove them. We think this probably had something to do with our RADIUS auth on the WiFi network, but we’re not sure. Fortunately, the instructions in the post were spot on.

Download PsExec.exe from http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx and copy it to C:\Windows\System32.

From a command prompt run:    psexec -i -s -d cmd.exe

From the new DOS window run:  rundll32 keymgr.dll,KRShowKeyMgr

The only additional note I would add is that if you have UAC enabled, you need to run the command prompt as an Administrator.