Who Says C# Interfaces Can’t Have Implementations?

This post is the seventh installment of the 2017 C# Advent Calendar operated by Matthew Groves. Thanks for letting me participate!

Overview

According to the C# Programming Guide, an interface cannot contain an implementation:

When a class or struct implements an interface, the class or struct must provide an implementation for all of the members that the interface defines. The interface itself provides no functionality that a class or struct can inherit in the way that it can inherit base class functionality.

There is a new feature in the works for C# 8 that will provide default interface implementations. However, I’m known for my impatience, and I don’t want to wait. The fact is, interfaces can have implementations in C# today.

Note: At this point, I’d like to give a shout out to the ASP.NET Core team. I can’t really take credit for this concept, I learned it by digging through the ASP.NET Core source on GitHub.

The Magic of Extension Methods

When I first encountered extension methods, I viewed them as a helpful way to add functionality to third-party classes. Adding logic to System.DateTime is helpful, right? However, another powerful use case is to create extensions for your own interfaces. When you write an extension method for an interface, you’re actually providing it with an implementation. Sure, the consumer may need an extra using statement at the top of the file, but it’s still an implementation! Taking this approach has several advantages:

  • Extension methods can be safely added to interfaces without risking backward compatibility for consumers (especially if they’re in a separate namespace).
  • The interface becomes easier to implement because there are fewer methods on the interface itself which must be implemented.
  • There is guaranteed consistency in method implementation across all class implementations.

There is, however, one rule you must follow: the extension method must only use members exposed by the interface being extended. While hacks like typecasting back to a particular implementation will work, they risk the purity of the implementation. What happens when the extension is used against a different implementation of the interface? Generally speaking, this rule should be broken only for performance optimizations, and even then there should be a fallback that works against any implementation.
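To make that concrete, here’s a minimal sketch of the “optimize, but fall back” pattern (all of the type names below are hypothetical):

using System.Collections.Generic;
using System.Linq;

public interface IItemSource
{
    IEnumerable<int> GetItems();
}

// One particular implementation that happens to track a count already
public class CountedItemSource : IItemSource
{
    private readonly List<int> _items = new List<int>();

    public int Count => _items.Count;

    public IEnumerable<int> GetItems() => _items;
}

public static class ItemSourceExtensions
{
    public static bool HasItems(this IItemSource source)
    {
        if (source is CountedItemSource counted)
        {
            // Performance optimization: skip enumeration for this implementation
            return counted.Count > 0;
        }

        // Fallback that works against any implementation of the interface
        return source.GetItems().Any();
    }
}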

The Elephant In The Room

Perhaps the best-known example of extension methods in .NET is LINQ. The vast majority of the API surface of LINQ is extension methods on the IEnumerable<T> and IQueryable<T> interfaces.

LINQ was added in .NET Framework 3.5. If the LINQ team had decided to add the LINQ methods directly to IEnumerable<T>, this would have constituted a breaking change. Every class which implemented IEnumerable<T> would have been broken upon upgrading to .NET 3.5 until the entire suite of LINQ methods was implemented. Just imagine implementing that across every collection class. C# developers everywhere would have chased the LINQ team around with pitchforks.

Instead, the LINQ team implemented their system as extension methods. The result is that .NET 3.5 and LINQ were fully backward compatible with libraries written for .NET 2.0. Additionally, a whole lot of headaches and duplicated code were avoided. And it was done by, effectively, adding implementations to interfaces.
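For illustration, here’s a simplified sketch of how an operator like Where can be written as an extension method on IEnumerable<T> (the real System.Linq.Enumerable version adds argument validation and performance optimizations):

using System;
using System.Collections.Generic;

public static class MyEnumerableExtensions
{
    // Simplified stand-in for System.Linq.Enumerable.Where
    public static IEnumerable<TSource> Where<TSource>(
        this IEnumerable<TSource> source, Func<TSource, bool> predicate)
    {
        foreach (var item in source)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}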

So, Teach Me The Magic

Writing extension methods is actually relatively painless. They’re basically just syntactic sugar layered on top of static methods.

Let’s start with this interface:

using System.Collections.Generic;

namespace MyNamespace
{
    public interface IMyInterface
    {
        IList<int> Values { get; set; }
    }
}

Now, let’s add an extension method that returns the count of all values in the list greater than a threshold.

using System.Linq;

namespace MyNamespace
{
    public static class MyInterfaceExtensions
    {
        public static int CountGreaterThan(this IMyInterface myInterface, int threshold)
        {
            // Note: only members exposed by IMyInterface are used here
            return myInterface.Values?.Where(p => p > threshold).Count() ?? 0;
        }
    }
}

The extension method can be consumed like this:

using MyNamespace;

// ...

public void DoSomething()
{
    var myImplementation = new MyInterfaceImplementation();

    // Note that there's no typecast to IMyInterface required
    var countGreaterThanFive = myImplementation.CountGreaterThan(5);    
}

// ...

There are four key pieces to the puzzle:

  1. MyInterfaceExtensions and CountGreaterThan are both public (though they could be internal if you want to use them only within your library).
  2. MyInterfaceExtensions and CountGreaterThan are both static.
  3. The first parameter of CountGreaterThan is the interface and is preceded by the “this” keyword.
  4. The file where DoSomething is declared includes a using statement for the namespace where the extensions are declared (Visual Studio will help by adding this automatically).
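Because extension methods are just static methods under the hood, the call above can also be written as a plain static method call; the compiler emits the same invocation either way:

// These two lines are equivalent
var countViaExtension = myImplementation.CountGreaterThan(5);
var countViaStaticCall = MyInterfaceExtensions.CountGreaterThan(myImplementation, 5);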

Note: There are many different approaches for code organization surrounding these extension methods. Some teams may prefer them in the same file, others in a separate file in the same folder, and others may want extensions in a separate folder/namespace. Just be sure your team picks a pattern and sticks with it. For teams that choose separate files, including comments on the interface that point to the extension files is a good idea.

Making Mocks Easy, One Extension Method At A Time

My favorite use case is for supporting unit tests, especially when I’m providing lots of method overloads. The reason I point out this specific use case is to show that extension methods can be very useful outside of developing reusable libraries. Almost all development today requires unit testing.

When writing unit tests, it’s often best to test against mocks of interfaces, rather than real implementations. The reasons why are beyond the scope of this post, so just trust me on this. I used to think it was silly until I learned the hard way it wasn’t. Creating mocks can be greatly simplified by adding implementation to interfaces.

For example, imagine an interface named ICartItem which represents an item in an online shopping cart. It needs several different methods to support changing the quantity in the cart.

public interface ICartItem
{
    int Quantity { get; set; }
    void IncrementQuantity();
    void IncrementQuantity(int delta);
    void DecrementQuantity();
}

The Quantity property can be set directly if the user enters a value, but there are also up and down arrows on the UI which tie to the IncrementQuantity and DecrementQuantity methods.

Continuing the example, ICartItem is used by a CartManager class that manages the shopping cart, and CartManager contains lots of business logic that requires unit tests. These tests require mock implementations of ICartItem. Since the interface defines a property and three methods, every mock needs to include four implementations. However, the three methods have a very basic function, and their implementation should be consistent across all classes that implement the interface. Additionally, they can be written to be supported by the Quantity property.

public interface ICartItem
{
    int Quantity { get; set; }
}

public static class CartItemExtensions
{
    public static void IncrementQuantity(this ICartItem cartItem)
    {
        cartItem.IncrementQuantity(1);
    }

    public static void IncrementQuantity(this ICartItem cartItem, int delta)
    {
        cartItem.Quantity += delta;
    }

    public static void DecrementQuantity(this ICartItem cartItem)
    {
        cartItem.IncrementQuantity(-1);
    }
}

public class CartManagerTests
{
    [Fact]
    public void Some_Test()
    {
        // Arrange
    
        // Note: This example uses Moq (https://www.nuget.org/packages/Moq/), your syntax may vary
        var mockCartItem = new Mock<ICartItem>();
        mockCartItem.SetupAllProperties();

        // ...
    }
}

Now every mock of ICartItem is greatly simplified, and they will have support for the IncrementQuantity and DecrementQuantity methods without any specific code added to the tests.
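For example, a test can exercise the extension methods against a bare mock; SetupAllProperties makes Quantity store whatever is assigned to it, and the extensions supply the rest:

var mockCartItem = new Mock<ICartItem>();
mockCartItem.SetupAllProperties();

ICartItem cartItem = mockCartItem.Object;
cartItem.Quantity = 2;

// The extension methods run their real logic against the mock
cartItem.IncrementQuantity();   // Quantity == 3
cartItem.IncrementQuantity(5);  // Quantity == 8
cartItem.DecrementQuantity();   // Quantity == 7

Assert.Equal(7, cartItem.Quantity);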

More Advanced, Real-world Examples

There are lots of great examples of this pattern found throughout the ASP.NET Core source code on GitHub. Here is a small selection:

  • ConsoleLoggerFactoryExtensions – Note how each extension implements overloads by forwarding to other extensions with more detailed parameters, until finally a call to ILoggerFactory.AddProvider(ILoggerProvider) is reached (see the sketch after this list).
  • ServiceCollectionServiceExtensions – A veritable avalanche of extensions, which eventually call the most powerful method, IServiceCollection.Add(ServiceDescriptor).
  • ResponseCachingExtensions – In this case, a descendant assembly rather than the original assembly adds an extension to the interface. This helps reduce unnecessary dependencies: users add only the packages they need to the application. Yet ease of use is maintained by extending the original interface.
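The overload-forwarding style used by these classes boils down to something like the following sketch (hypothetical types, not the actual framework signatures):

public interface IWidgetRegistry
{
    // The single member every implementation must provide
    void Register(WidgetDescriptor descriptor);
}

public class WidgetDescriptor
{
    public string Name { get; set; }
    public int Priority { get; set; }
}

public static class WidgetRegistryExtensions
{
    // Each simpler overload fills in defaults and forwards onward...
    public static void Register(this IWidgetRegistry registry, string name)
    {
        registry.Register(name, priority: 0);
    }

    // ...until the call reaches the one real interface member
    public static void Register(this IWidgetRegistry registry, string name, int priority)
    {
        registry.Register(new WidgetDescriptor { Name = name, Priority = priority });
    }
}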

Summary

That’s how to add implementations to interfaces in a nutshell. It’s a very powerful tool, especially useful for writing shared libraries and for writing code that is easy to unit test. Key use cases to watch out for include:

  • Overloads where simpler methods are just forwarding the requests to methods with more parameters and filling in default values. The simpler methods can be extensions.
  • Helper methods that support more common use cases by forwarding method calls to more powerful but less frequently used methods.
  • Other methods where the implementation will always be the same for every class that implements the interface, and which are supported by other members of the interface.

Remove Untagged Images From Docker for Windows

Here’s a quick note for Docker for Windows users. This is based on Jim Hoskin’s post Remove Untagged Images From Docker.

I’ve simply reformatted his two scripts for use on Docker for Windows via Powershell. To delete all stopped containers:

docker ps -a -q | % { docker rm $_ }

To delete all untagged local images:

docker images | ConvertFrom-String | where {$_.P2 -eq "<none>"} | % { docker rmi $_.P3 }
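Here, ConvertFrom-String splits each line of the docker images output on whitespace into auto-named properties (P1, P2, P3, and so on), so P2 is the TAG column and P3 is the IMAGE ID; the filter then removes every image whose tag is “<none>”.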

Restful API Integration Testing For .NET Core Using Docker

Overview

I love unit tests. There’s nothing quite like writing a class and feeling 100% confident it will work as described because the tests are all passing. But integration testing is also important. Sometimes I need to test the full stack and make sure it works as described.

For a recent project I have been creating .NET Core RESTful microservices. Along with these services I have been creating client SDKs that abstract away the RESTful requests. The client SDK is then published via an internal Nuget server. This makes it easy for services in the architecture to communicate with each other using the SDK rather than using HttpClient. For easy organization and version control, the SDK is located in the same solution as the service.

The question that followed quickly was “How do I test this SDK?” Unit tests can help cover some of the functionality, but they don’t test the full stack from an SDK consumer request through HTTP to the actual service. I could make a test application that uses the SDK to call the service, but this would be cumbersome.

What if I could write tests using a standard test framework like xUnit.net or NUnit? Such tests could be easily executed in Visual Studio or even in a continuous integration step. But if I use this kind of test framework, how do I easily make sure that the latest version of the service is up and running so my client SDK tests can use it? And what about other services called by the service under test?

Enter Docker

Docker is a great tool that serves many purposes in the development pipeline, and since the release of .NET Core I’ve started to fall in love with it. Integration testing is another great use for Docker. Using Docker, it’s possible to launch the service being tested in a container (along with any dependencies) as part of the client SDK integration tests. After the tests are complete, the containers are stopped and the resources are freed.

Preparing The Service

The first step is to make sure the service can be started as a Docker container. Here’s a brief summary of the steps I followed:

  1. Ensure that Hyper-V is enabled in Windows
  2. Install Docker for Windows
  3. Configure a Shared Drive in Docker for the drive where the application lives
  4. Install Visual Studio Tools for Docker
  5. Ensure that Docker is started (it can be configured to autostart on login)
  6. Right click on the project in Visual Studio and Add Docker Support
(Screenshot: Add Docker Support in Visual Studio)

Configuring Docker Compose

When running on a development machine from within Visual Studio, a Docker Compose file (docker-compose.yml) is used to control the Docker containers which are created. The default file is a great starting point, but it will require some tweaking:

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80"

The first thing to do is change to a static port mapping. This will simplify accessing the service from the tests. By changing the port definition from “80” to “8080:80” the tests will be able to access the service on port 8080. Of course, any unused port will work.

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"

Next, the file needs to be updated to deploy additional dependent services. This could get rather complicated if there are a lot of dependencies, but here’s an example of adding a single dependency with a link from the service.

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    links:
      - mydependency
  mydependency:
    image: user/mydependency:latest
    expose:
      - "80"

Now when the service is started, it will also start the “mydependency” service. It will be accessible at http://mydependency/ from within “myservice”. Of course, how the dependencies communicate with each other can be adjusted depending on the architecture. The “image:” value should also be adjusted to refer to the correct Docker registry where the dependency is hosted.

Overriding With Test Specific Configuration

The settings in docker-compose.yml are generic and used to define the basic configuration for running the service. Additionally, docker-compose.dev.debug.yml and docker-compose.dev.release.yml provide overrides specific to running in Debug and Release mode in Visual Studio.

However, these configurations don’t start the service in the way most containers are started. They start the container as an executable that does nothing and never exits: “tail -f /dev/null”. Then the service is run out of band using “docker exec”. The container doesn’t even contain the service; it just reads the files from the host hard drive using Docker volumes. This is great for debugging in Visual Studio, but I found it problematic for running automated integration tests.

To address this, create an additional YAML file, docker-compose.test.yml, in the root of the service project. This file overrides the build context path so that it collects the service from the publication directory (which is created by using “dotnet publish”). It can also configure environment variables within the container which can override the default ASP.NET Core configuration from appsettings.json.

version: '2'

services:
  myservice:
    build:
      context: bin/${CONFIGURATION}/netcoreapp1.0/publish
    environment:
      - ASPNETCORE_ENVIRONMENT=Development

Starting The Service When Running Tests

To start the container, some commands should be run during test startup. How these commands are run will vary depending on the test framework, but the basic list is:

  1. Run “dotnet publish” against the service to build and publish the application
  2. Run “docker-compose build” to build an up-to-date image for the application
  3. Run “docker-compose up” to start the containers
  4. After tests are complete, run “docker-compose down” to shut down and remove the containers

If xUnit.net is being used as the test framework, this can be done using a collection fixture. First, define a test collection:

using System;
using Xunit;

namespace IntegrationTests
{
    [CollectionDefinition("ClientTests")]
    public class ClientTestCollection : ICollectionFixture<ServiceContainersFixture>
    {
    }
}

Note above that the ClientTestCollection class implements ICollectionFixture<ServiceContainersFixture>. There must also be a definition for the referenced ServiceContainersFixture. This class will be created once for all tests in the test collection, and then disposed when they are complete.

Note: The example below assumes that “dotnet” and “docker-compose” are accessible in the system PATH. They should be by default.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

namespace IntegrationTests
{
    public class ServiceContainersFixture : IDisposable
    {
        // Name of the service
        private const string ServiceName = "myservice";

        // Relative path to the root folder of the service project.
        // The path is relative to the target folder for the test DLL,
        // i.e. /test/MyTests/bin/Debug
        private const string ServicePath = "../../../../src/MyService";

        // Tag used for ${TAG} in docker-compose.yml
        private const string Tag = "test";

        // This URL should return 200 once the service is up and running
        private const string TestUrl = "http://localhost:8080/myservice/ping";

        // How long to wait for the test URL to return 200 before giving up
        private static readonly TimeSpan TestTimeout = TimeSpan.FromSeconds(60);

#if DEBUG
        private const string Configuration = "Debug";
#else
        private const string Configuration = "Release";
#endif

        public ServiceContainersFixture()
        {
            Build();

            StartContainers();

            var started = WaitForService().Result;

            if (!started)
            {
                throw new Exception($"Startup failed, could not get '{TestUrl}' after trying for '{TestTimeout}'");
            }
        }

        private void Build()
        {
            var process = Process.Start(new ProcessStartInfo
            {
                FileName = "dotnet",
                Arguments = $"publish {ServicePath} --configuration {Configuration}"
            });

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void StartContainers()
        {
            // First build the Docker container image

            var processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml build"
            };
            AddEnvironmentVariables(processStartInfo);

            var process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);

            // Now start the docker containers

            processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml -p {ServiceName} up -d"
            };
            AddEnvironmentVariables(processStartInfo);

            process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void StopContainers()
        {
            // Run docker-compose down to stop the containers
            // Note that "--rmi local" deletes the images as well to keep the machine clean
            // But it does so by deleting all untagged images, which may not be desired in all cases

            var processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml -p {ServiceName} down --rmi local"
            };
            AddEnvironmentVariables(processStartInfo);

            var process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void AddEnvironmentVariables(ProcessStartInfo processStartInfo)
        {
            processStartInfo.Environment["TAG"] = Tag;
            processStartInfo.Environment["CONFIGURATION"] = Configuration;
            processStartInfo.Environment["COMPUTERNAME"] = Environment.MachineName;
        }

        private async Task<bool> WaitForService()
        {
            using (var client = new HttpClient() { Timeout = TimeSpan.FromSeconds(1)})
            {
                var startTime = DateTime.Now;
                while (DateTime.Now - startTime < TestTimeout)
                {
                    try
                    {
                        var response = await client.GetAsync(new Uri(TestUrl)).ConfigureAwait(false);
                        if (response.IsSuccessStatusCode)
                        {
                            return true;
                        }
                    }
                    catch
                    {
                        // Ignore exceptions, just retry
                    }

                    await Task.Delay(1000).ConfigureAwait(false);
                }
            }

            return false;
        }

        public void Dispose()
        {
            StopContainers();
        }
    }
}

Note: Before each call to docker-compose, AddEnvironmentVariables is being called to set some environment variables in ProcessStartInfo. These are used by docker-compose to perform substitutions in the YAML file. For example, ${COMPUTERNAME} will be replaced with the name of the development computer. This could be easily extended with other environment variables as needed.

To specify that a test class needs the containers to be running, apply the Collection attribute to the test class. Note how the string “ClientTests” matches the CollectionDefinition attribute used earlier on ClientTestCollection.

[Collection("ClientTests")]
public class MyTests
{
    // Tests here...
}

Conclusion

It’s now possible to run .NET Core integration tests that depend on running services directly in Visual Studio using your test runner of choice (I use Resharper). It will build the .NET Core service, start it and its dependencies in Docker, run the integration tests, and then perform cleanup. This could also be done as part of a continuous integration process, so long as the test machines have Docker installed and access to any required Docker registries. Best of all, this simultaneously tests both the service and the client SDK, exposing problems in either.

Cancelling Long Running Couchbase N1QL Queries

Overview

The recent release of the Couchbase .NET SDK 2.4.0 has added many new features. There is a minor feature, however, that is worth a mention. It’s now possible to cancel long-running N1QL queries.

For example, in a web application a user might browse away from the page in impatience. When they do, you don’t want the query to keep executing pointlessly. Instead, you can cancel the query, freeing web server and Couchbase server resources for other requests.

How To Cancel

Using this new feature is very easy: when executing your query, simply supply a CancellationToken. For web applications, this can be acquired by including a CancellationToken as a parameter on an asynchronous action method.

public async Task<ActionResult> Index(CancellationToken cancellationToken)
{
    var bucket = ClusterHelper.GetBucket("my-bucket");
    var query = new QueryRequest("SELECT * FROM `my-bucket` WHERE type = 'docType' LIMIT 1000");

    var result = await bucket.QueryAsync<Document>(query, cancellationToken);
    if (!result.Success)
    {
        // Handle the failure case so that every code path returns a result
        return new HttpStatusCodeResult(500);
    }

    return Json(result.Rows);
}

Compatibility

Note: Documentation on which versions of ASP.NET MVC and Web API support CancellationToken is a bit sparse. Apparently some versions only use it for timeouts (via AsyncTimeout), while some versions support cancellations from the browser. There is also a way to add support for browser cancellation using CreateLinkedTokenSource.
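For example, here’s a sketch (based on the earlier example) of combining the action’s token with a server-side timeout via CancellationTokenSource.CreateLinkedTokenSource:

public async Task<ActionResult> Index(CancellationToken cancellationToken)
{
    // Cancel if either the browser disconnects or 30 seconds elapse
    using (var timeoutCts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
    using (var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(
        cancellationToken, timeoutCts.Token))
    {
        var bucket = ClusterHelper.GetBucket("my-bucket");
        var query = new QueryRequest("SELECT * FROM `my-bucket` WHERE type = 'docType' LIMIT 1000");

        var result = await bucket.QueryAsync<Document>(query, linkedCts.Token);
        if (!result.Success)
        {
            return new HttpStatusCodeResult(500);
        }

        return Json(result.Rows);
    }
}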

The behavior may also depend on the web server you’re using (e.g. IIS versus Kestrel, and the Kestrel version). For example, this change to Kestrel appears to cause client disconnects to do a better job of triggering the CancellationToken. If anyone knows more details about version support, please let me know and I’ll update the post.

Docker Login For Amazon AWS ECR Using Windows Powershell

My recent studies in .NET Core have led me to the new world of Docker (new for .NET developers, anyway). The idea of developing low-cost microservices while still working on my favorite development platform is very exciting. In the process, I began using the Amazon AWS Docker platform, Elastic Container Service (ECS).

I quickly found that documentation for using ECS from Windows is a bit scarce. In particular, the Powershell tools are missing a pretty key helper method: get-login. Calling “aws ecr get-login” on a Linux box delivers a complete “docker login” command for authenticating to the Elastic Container Registry (ECR). There is currently no such helper for Windows. At least, not that I can find; someone correct me if I’m just missing it.

Instead, I’ve done a bit of digging and found out how to authenticate programmatically. From that, I’ve created the helper code below for reuse.

# Get the authorization token
$token = Get-ECRAuthorizationToken -Region us-east-1 -AccessKey your-access-key -SecretKey your-secret-key
# Split the token into username and password segments
$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
# Get the host name without https, as this can confuse some Windows machines 
$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
# Perform login
docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName

This login should then be valid for 12 hours. Note that you should use your own Region, AccessKey, and SecretKey on the first line. Alternatively, you could use Set-DefaultAWSRegion and Set-AWSCredentials to store them in your Powershell session. If you’re on a build server running in AWS, you could also use IAM roles to grant access directly to the build server.

Update 2/8/2017

You can add the section below to your PowerShell profile to add an easy-to-use cmdlet. To install:

  1. Run “notepad $PROFILE”
  2. Paste the code below into the file and save
  3. Run “. $PROFILE” or restart Powershell
  4. Run “Auth-ECR your-access-key your-secret-key”.
function Auth-ECR {
	[CmdletBinding()]
	param (
		[parameter(Mandatory=$true, Position=0)]
		[string]
		$AccessKey,
		
		[parameter(Mandatory=$true, Position=1)]
		[string]
		$SecretKey,
		
		[parameter()]
		[string]
		$Region = "us-east-1"
	)
	
	# Get the authorization token
	$token = Get-ECRAuthorizationToken -AccessKey $AccessKey -SecretKey $SecretKey -Region $Region `
		-ErrorAction Stop
	
	# Split the token into username and password segments
	$tokenSegments = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token.AuthorizationToken)).Split(":")
	
	# Get the host name without https, as this can confuse some Windows machines 
	$hostName = (New-Object System.Uri $token.ProxyEndpoint).DnsSafeHost
	
	# Perform login
	docker login -u $($tokenSegments[0]) -p $($tokenSegments[1]) -e none $hostName
}

Note that this script defaults to using the us-east-1 region. You can change the default in your profile, or use “-Region” on the command line.