Cancelling Long-Running Couchbase N1QL Queries

Overview

The recent release of the Couchbase .NET SDK 2.4.0 added many new features.  One minor feature, however, is worth a mention: it’s now possible to cancel long-running N1QL queries.

For example, in a web application a user might browse away from the page out of impatience.  When they do, you don’t want the query to keep executing pointlessly.  Instead, you can cancel the query, freeing web server and Couchbase server resources for other requests.

How To Cancel

Using this new feature is very easy: when executing your query, simply supply a CancellationToken.  For web applications, one can be acquired by including a CancellationToken parameter on an asynchronous action method.

public async Task<ActionResult> Index(CancellationToken cancellationToken)
{
    var bucket = ClusterHelper.GetBucket("my-bucket");
    var query = new QueryRequest("SELECT * FROM `my-bucket` WHERE type = 'docType' LIMIT 1000");

    // Passing the token allows the N1QL query to be cancelled if the request is aborted
    var result = await bucket.QueryAsync<Document>(query, cancellationToken);
    if (result.Success)
    {
        return Json(result.Rows);
    }

    // Make sure every code path returns a result
    return new HttpStatusCodeResult(500);
}

Compatibility

Note: Documentation on which versions of ASP.NET MVC and Web API support CancellationToken is a bit sparse.  Apparently some versions only use it for timeouts (via AsyncTimeout), while others support cancellation triggered from the browser.  There is also a way to add browser-triggered cancellation yourself using CreateLinkedTokenSource.

The behavior may also depend on the web server you’re using (e.g. IIS versus Kestrel, and the Kestrel version). For example, this change to Kestrel appears to make client disconnects do a better job of triggering the CancellationToken. If anyone knows more details about version support, please let me know and I’ll update the post.
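
If you want client disconnects to trigger cancellation regardless of framework support, you can link the action’s token to the client disconnect token yourself. Here is a minimal sketch, assuming classic ASP.NET hosted on IIS 7.5+ in integrated pipeline mode (where Response.ClientDisconnectedToken is available):

public async Task<ActionResult> Index(CancellationToken cancellationToken)
{
    // Combine the framework-provided token (e.g. from AsyncTimeout) with the
    // client disconnect token so that either one cancels the query
    using (var cts = CancellationTokenSource.CreateLinkedTokenSource(
        cancellationToken, Response.ClientDisconnectedToken))
    {
        var bucket = ClusterHelper.GetBucket("my-bucket");
        var query = new QueryRequest("SELECT * FROM `my-bucket` WHERE type = 'docType' LIMIT 1000");

        var result = await bucket.QueryAsync<Document>(query, cts.Token);
        if (result.Success)
        {
            return Json(result.Rows);
        }

        return new HttpStatusCodeResult(500);
    }
}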

Docker Login For Amazon AWS ECR Using Windows PowerShell

My recent studies in .Net Core have led me to the new world of Docker (new for .Net developers, anyway).  The idea of developing low-cost microservices while still working on my favorite development platform is very exciting.  In the process, I began using the Amazon AWS Docker platform, Elastic Container Service (ECS).

I quickly found that documentation for using ECS from Windows is a bit scarce.  In particular, the PowerShell tools are missing a pretty key helper method, get-login.  Calling “aws ecr get-login” on a Linux box delivers you a complete “docker login” command for authenticating to the Elastic Container Registry (ECR).  There is currently no such helper for Windows.  At least, not that I can find; someone correct me if I’m just missing it.

Instead, I’ve done a bit of digging and found out how to authenticate programmatically.  From that, I’ve created the helper code below for reuse.

# Get the authorization token
$token = Get-ECRAuthorizationToken -Region us-east-1 -AccessKey your-access-key -SecretKey your-secret-key
# Split the token into username and password segments
$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
# Get the host name without https, as this can confuse some Windows machines 
$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
# Perform login
docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName

This login should then be valid for 12 hours. Be sure to use your own Region, AccessKey, and SecretKey on the first line. Alternatively, you could use Set-DefaultAWSRegion and Set-AWSCredentials to store them in your PowerShell session. If you’re on a build server running in AWS, you could also use IAM roles to grant access directly to the build server.

Update 2/8/2017

You can add the function below to your PowerShell profile to get an easy-to-use command. To install it:

  1. Run “notepad $PROFILE”
  2. Paste the code below into the file and save
  3. Run “. $PROFILE” or restart PowerShell
  4. Run “Auth-ECR your-access-key your-secret-key”

function Auth-ECR {
	[CmdletBinding()]
	param (
		[parameter(Mandatory=$true, Position=0)]
		[string]
		$AccessKey,
		
		[parameter(Mandatory=$true, Position=1)]
		[string]
		$SecretKey,
		
		[parameter()]
		[string]
		$Region = "us-east-1"
	)
	
	# Get the authorization token
	$token = Get-ECRAuthorizationToken -AccessKey $AccessKey -SecretKey $SecretKey -Region $Region `
		-ErrorAction Stop
	
	# Split the token into username and password segments
	$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
	
	# Get the host name without https, as this can confuse some Windows machines 
	$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
	
	# Perform login
	docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName
}

Note that this script defaults to using the us-east-1 region. You can change the default in your profile, or use “-Region” on the command line.

Positron – HTML 5 UI For .Net Desktop Applications

At CenterEdge Software, the development department recently had management walk into a meeting and drop a bombshell on us. They wanted us to completely rebuild the UI from the ground up and “make it sexy”. Oh, and “make it last until 2025”. Don’t you just love wild management direction?

Well, we scratched our collective heads for a while and tried to come up with a solution. Preferably one that didn’t leave us clawing out our eyeballs, nor made management come asking about how we blew the budget. We came up with three key criteria we had to meet.

  1. We need to leverage our existing .Net code as much as possible.
  2. We need to be able to deliver the UI redesign in phases over time, rather than operate in a completely independent branch for a couple of years.
  3. We need to support the current hardware infrastructure at our 600+ clients, so that hardware upgrades and networking redesigns are not required before upgrading.

What We Aren’t Doing (And Why)

Based on these criteria, our first thought was WPF. We could continue writing .Net desktop client/server applications to operate our Point of Sale and other local applications. This would allow us to easily maintain our current low-level hardware integrations (read: lots of serial ports and USB serial emulation – ugh). It would also allow us to easily phase in portions of our application as WPF while other portions remain WinForms.

The downside to WPF is the size of the developer pool. There just aren’t that many WPF developers out there, especially great ones, and they tend to be expensive. But what about HTML5 and JavaScript? There’s all sorts of great work happening around web technologies and user interfaces. And there’s a much larger pool of developers with these skills for us to draw on.

For HTML5 UI options, we looked at and discarded the two obvious solutions:

  • Convert to a cloud application. We already have cloud components to our suite. But the core on-premise system, if converted, would become too reliant on stable internet connections, and too far from our current architecture to convert easily. This would also be very difficult to implement in a phased manner.
  • Operate an on-premises web server. For our larger clients this wouldn’t be an issue. But a significant portion of our client base are smaller businesses that use workstations as their servers. IIS isn’t an option, and a computer with a couple extra GB of RAM might work for running SQL Express, but not for running the whole application for a half dozen stations.

Can We Have Our Cake And Eat It, Too?

Is it possible to have the best of both worlds? Can we run a desktop client/server application like WPF, but yet build our UI using HTML5? Well, this question stumped us for a bit. But there are systems that do this, like Electron, just not for .Net.

Enter Positron! Positron is a solution for building a .Net desktop application using HTML5 user interface components (rendered using Chromium). It hosts an in-process version of MVC 6 (yes, that’s the new .Net Core flavor), which is then wired in-process to Chromium running within a WPF window.

Fun Facts

  • All requests from Chromium to MVC are handled in-process; there’s no network stack or HTTP involved. This keeps performance and security very high.
  • The window itself is the only WPF involved; the entire window content is the Chromium browser.
  • ASP.Net Core MVC, despite having “Core” in its name, isn’t actually specific to .Net Core. You can use it in traditional .Net Framework applications as well.
  • All resources (images, views, CSS, JS, etc) are embedded in the DLL, making application distribution easy.
  • Positron even supports Chromium developer tools (via a separate Chrome window).
  • Positron is agnostic about how you build your HTML5 application. We’re currently using React and TypeScript at CenterEdge, with Browserify and Gulp to build the JS files. But any web technology will work; pick your favorite flavor.
  • There is currently one significant issue: the Razor view editor in Visual Studio doesn’t recognize that we’re working with MVC, so it’s a squiggle fest. I’m sure support for this will be forthcoming, and it works fine after you compile and run. If there are any Visual Studio experts out there, we could use some help with this!

What About Automated Testing?

All the QA and QE people out there are yelling at me now. They’re saying their test frameworks won’t work with Positron. We have a solution for that, too. You’re building an MVC application, just hosting it in-process. There’s nothing stopping you from spinning up a Kestrel server instead to support automated testing over HTTP. Just use Chrome as the test browser for parity with Chromium.
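
As a rough sketch, a test host might do nothing more than run the same MVC configuration over Kestrel (the names below are illustrative, and assume the ASP.Net Core 1.x hosting APIs and a conventional Startup class):

using Microsoft.AspNetCore.Hosting;

// Hypothetical test entry point: hosts the MVC application over HTTP on
// Kestrel so that browser-based test frameworks can drive it.
public static class TestHost
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://localhost:5000")
            .UseStartup<Startup>()   // your MVC application's Startup class
            .Build();

        host.Run();
    }
}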

It’s also possible to install plugins in Chromium, so it might be possible to get some of the testing frameworks up and running directly against Positron if their plugin is installed. But we haven’t vetted this out yet.

Open Source

Our specific use case is probably not common. But at CenterEdge we do feel like there is a need for desktop applications with HTML UI. There are many .Net desktop applications that could benefit from the plethora of great UI tools and frameworks available in the web development community. Therefore, we’ve decided to make Positron open source. It’s available on NuGet (there are 4 packages), and the code is available on GitHub.

At this point it’s an early version (0.2.0), and there’s lots of room for improvement. We are already working on integrating it into our application suite and learning from the pain points to make improvements.

We’d also welcome community feedback. And feel free to fork the repo and send back pull requests.

Rebuild All Couchbase N1QL Indexes After Restore

Overview

When restoring a Couchbase cluster from a backup, the restore utility is kind enough to recreate the N1QL indexes for you.  To improve speed and efficiency, the indexes are only created; they are not built automatically.  Before they can be used, you must execute a build command such as this:

BUILD INDEX ON BucketName (IndexName1, IndexName2, IndexName3)

It is important that this query be issued as a single command for all indexes on a bucket.  This allows the indexes to be built together, resulting in only one read of the data from the cluster while building multiple indexes.

The Problem

Unfortunately, N1QL doesn’t currently offer a wildcard option, so there is no quick way to rebuild all indexes without typing all of their names.  If you’re trying to script your environments for development or QA, this can be particularly problematic, as the list of indexes may not be constant. It could also be a problem when creating scripts for a disaster recovery plan.

The Solution

If you’re running on Linux (you should be for production clusters), the solution is to use this script:

#!/bin/sh

QUERY_HOST=http://localhost:8091

for i in "BucketName1" "BucketName2" "BucketName3"
do
  /opt/couchbase/bin/cbq -e $QUERY_HOST -s="$( \
    echo "BUILD INDEX ON $i (\`$( \
      /opt/couchbase/bin/cbq -e $QUERY_HOST -q=true -s="SELECT name FROM system:indexes where keyspace_id = '$i' AND state = 'deferred'" | \
        sed -n -e '/{/,$p' | \
        jq -r '.results[].name' | \
        sed ':a;/.*/{N;s/\n/\`,\`/;ba}')\`)")"

  # Wait for completion
  until [ `/opt/couchbase/bin/cbq -e $QUERY_HOST -q=true -s="SELECT COUNT(*) as unbuilt FROM system:indexes WHERE keyspace_id = '$i' AND state <> 'online'" | sed -n -e '/{/,$p' | jq -r '.results[].unbuilt'` -eq 0 ];
  do
    sleep 5
  done
done

Replace the QUERY_HOST parameter as needed to point to a query node, and replace the BucketName values with the names of the buckets where indexes must be built.  It will process each bucket one at a time, waiting for the indexes to be built before continuing to the next bucket.

The only dependency is the jq utility, which is a command-line JSON parser. On Ubuntu, this can be installed via:

sudo apt-get install -y jq

The script isn’t pretty, but it gets the job done. Hopefully N1QL will get a wildcard BUILD INDEX command in the future.

Note: Revised 9/15/2016 to better strip header information from the query output before the JSON object. Previously it only stripped the first line, now it strips all lines until it encounters the curly brace starting the JSON result.
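
If you’d rather drive the rebuild from .NET instead of a shell script, a rough equivalent using the 2.x SDK could look like the sketch below (bucket names are placeholders, and error handling and the wait-for-online loop are omitted for brevity):

// Requires: using System.Linq; and an initialized ClusterHelper
foreach (var bucketName in new[] { "BucketName1", "BucketName2", "BucketName3" })
{
    var bucket = ClusterHelper.GetBucket(bucketName);

    // Find indexes that were created but not built (state = 'deferred')
    var deferred = bucket.Query<dynamic>(
        $"SELECT RAW name FROM system:indexes WHERE keyspace_id = '{bucketName}' AND state = 'deferred'");

    if (deferred.Success && deferred.Rows.Count > 0)
    {
        // Build them all in a single statement, as recommended above
        var names = string.Join(", ", deferred.Rows.Select(name => $"`{name}`"));
        bucket.Query<dynamic>($"BUILD INDEX ON `{bucketName}` ({names})");
    }
}

As with the shell script, you would still want to poll system:indexes until every index reports state = 'online' before moving on to dependent work.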

Couchbase Server and Windows 10 Anniversary Edition Problems

Update: This issue has been resolved in the Couchbase Server 4.6 Developer Preview. You can certainly continue to use Docker, but it is no longer required on Windows 10 Anniversary Edition.

The Problem

Recently, I ran into some problems with my Couchbase Server 4.5 installation on my Windows 10 development box. The memcached process would crash over and over again with an error code 255.

After doing some research (and getting some assistance, thanks @ingenthr), I determined it’s a known bug in Couchbase Server exposed by the recent release of Windows 10 Anniversary Edition. Apparently, Couchbase Server relies on a third-party library that incorrectly uses some private Windows APIs for memory allocation. The Windows 10 Anniversary Edition update removed these API calls, causing the crashes. The bug report is filed with Couchbase as MB-20519.

The Workaround

The only known direct workaround is to uninstall the Windows 10 Anniversary Update. Personally, I don’t find this to be a very good solution. Additionally, based on the bug report, I’m not optimistic about a quick fix from Couchbase. It seems like there’s a lot of work involved, and it understandably isn’t urgent because Windows is only supported for development, not production.

I decided instead to play with Docker, and I was very pleasantly surprised at how easy it was to use Docker to get Couchbase Server running on a Windows box. It only took me a few minutes.

  1. Be sure that Hyper-V is installed on your machine via “Turn Windows features on or off” in Control Panel
  2. Install Docker for Windows (I used the Stable Channel)
  3. Start Docker (I did this as the last step of the installation)
  4. Right-click the Docker icon in your system tray (next to the clock), and open Settings.  Go to Shared Drives, and share your C drive.  This will require your Windows password.
  5. Open PowerShell and run this command to make a data folder:

    mkdir $env:userprofile\Couchbase

  6. Then run this command to startup the Docker container:

    docker run -d --name db -p 8091-8094:8091-8094 -p 11207:11207 -p 11210-11211:11210-11211 -p 18091-18093:18091-18093 -v $env:userprofile/Couchbase:/opt/couchbase/var couchbase

  7. Once complete, open http://localhost:8091/ to complete server configuration

Notes

This configuration will always create the Docker container with the latest version of Couchbase Server, currently 4.5.  You can specify a different image tag on the docker run command line to alter this; see the Docker Hub page for Couchbase for more information.

This configuration puts all Couchbase data in your C:\Users\myusername\Couchbase folder.  If you remove the Docker container and recreate, it will start up with your configuration and data already intact.  If you want to start from scratch, delete this folder before recreating the Docker container.

There are a few compatibility requirements for this solution:

  1. Hyper-V is incompatible with VirtualBox. If you are using VirtualBox, you should use a different solution.
  2. The client and management ports used by Couchbase must be available on your local machine.
  3. This setup only supports running a single Couchbase node, otherwise there would be network port contention.