Category Archives: .NET Framework

Restful API Integration Testing For .NET Core Using Docker

Overview

I love unit tests. There’s nothing quite like writing a class and feeling 100% confident it will work as described because the tests are all passing. But integration testing is also important: sometimes I need to exercise the full stack and make sure all the pieces work together.

For a recent project I have been creating .NET Core RESTful microservices. Along with these services I have been creating client SDKs that abstract away the RESTful requests. Each client SDK is published via an internal NuGet server, which makes it easy for services in the architecture to communicate with each other using the SDK rather than using HttpClient directly. For easy organization and version control, the SDK is located in the same solution as the service.
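
As a rough illustration, such an SDK client is just a thin typed wrapper over the HTTP calls. The names below (WidgetClient, Widget, the route) are hypothetical, not taken from the actual project, and Newtonsoft.Json is assumed for serialization:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class WidgetClient
{
    private readonly HttpClient _httpClient;

    // baseAddress points at the service, e.g. http://localhost:8080/
    public WidgetClient(Uri baseAddress)
    {
        _httpClient = new HttpClient { BaseAddress = baseAddress };
    }

    public async Task<Widget> GetWidgetAsync(int id)
    {
        var response = await _httpClient.GetAsync($"widgets/{id}").ConfigureAwait(false);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
        return JsonConvert.DeserializeObject<Widget>(json);
    }
}

A consumer simply news up the client and calls GetWidgetAsync, never touching HttpClient or the route templates directly.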

The question that followed quickly was “How do I test this SDK?” Unit tests can help cover some of the functionality, but they don’t test the full stack from an SDK consumer request through HTTP to the actual service. I could make a test application that uses the SDK to call the service, but this would be cumbersome.

What if I could write tests using a standard test framework like xUnit.net or NUnit? Such tests could be easily executed in Visual Studio or even in a continuous integration step. But if I use this kind of test framework, how do I easily make sure that the latest version of the service is up and running so my client SDK tests can use it? And what about other services called by the service under test?

Enter Docker

Docker is a great tool that serves many purposes in the development pipeline, and since the release of .NET Core I’ve started to fall in love with it. Integration testing is another great use for Docker. Using Docker it’s possible to launch the service being tested in a container (along with any dependencies) as part of the client SDK integration tests. After the tests are complete, the containers are stopped and the resources are freed.

Preparing The Service

The first step is to make sure the service can be started as a Docker container. Here’s a brief summary of the steps I followed:

  1. Ensure that Hyper-V is enabled in Windows
  2. Install Docker for Windows
  3. Configure a Shared Drive in Docker for the drive where the application lives
  4. Install Visual Studio Tools for Docker
  5. Ensure that Docker is started (it can be configured to autostart on login)
  6. Right-click the project in Visual Studio and select Add Docker Support
[Screenshot: Add Docker Support in Visual Studio]

Configuring Docker Compose

When running on a development machine from within Visual Studio, a Docker Compose file (docker-compose.yml) is used to control the Docker containers which are created. The default file is a great starting point, but it will require some tweaking:

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80"

The first thing to do is change to a static port mapping. This will simplify accessing the service from the tests. By changing the port definition from “80” to “8080:80” the tests will be able to access the service on port 8080. Of course, any unused port will work.

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"

Next, the file needs to be updated to deploy additional dependent services. This could get rather complicated if there are a lot of dependencies, but here’s an example of adding a single dependency with a link from the service.

version: '2'

services:
  myservice:
    image: user/myservice${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    links:
      - mydependency
  mydependency:
    image: user/mydependency:latest
    expose:
      - "80"

Now when the service is started, it will also start the “mydependency” service. It will be accessible at http://mydependency/ from within “myservice”. Of course, how the dependencies communicate with each other can be adjusted depending on the architecture. The “image:” value should also be adjusted to refer to the correct Docker registry where the dependency is hosted.
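
From inside “myservice”, calling the dependency is then an ordinary HTTP request against the compose service name. A minimal sketch (the endpoint is hypothetical):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class DependencyCaller
{
    // "mydependency" resolves via Docker's internal DNS thanks to the link
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("http://mydependency/") };

    public static Task<string> GetValuesAsync()
    {
        // Hypothetical endpoint exposed by the dependent service
        return Client.GetStringAsync("api/values");
    }
}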

Overriding With Test Specific Configuration

The settings in docker-compose.yml are generic and used to define the basic configuration for running the service. Additionally, docker-compose.dev.debug.yml and docker-compose.dev.release.yml provide overrides specific to running in Debug and Release mode in Visual Studio.

However, these configurations don’t start the service in the way most containers are started. They start the container with an executable that does nothing and never exits: “tail -f /dev/null”. The service is then run out of band using “docker exec”. The container doesn’t even contain the service; it just reads the files from the host hard drive using Docker volumes. This is great for debugging in Visual Studio, but I found it problematic for running automated integration tests.

To address this, create an additional YAML file, docker-compose.test.yml, in the root of the service project. This file overrides the build context path so that it collects the service from the publication directory (created by “dotnet publish”). It can also set environment variables within the container, which can override the default ASP.NET Core configuration from appsettings.json.

version: '2'

services:
  myservice:
    build:
      context: bin/${CONFIGURATION}/netcoreapp1.0/publish
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
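
Those environment variables take effect because ASP.NET Core treats them as a configuration source. A sketch of the typical 1.x Startup arrangement (assuming the standard template layout), where providers registered later override earlier ones:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        // Providers registered later win, so environment variables set in
        // docker-compose.test.yml override values from appsettings.json.
        // (ASPNETCORE_ENVIRONMENT itself is consumed by the host to select
        // env.EnvironmentName before this code runs.)
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json")
            .AddEnvironmentVariables()
            .Build();
    }

    public IConfigurationRoot Configuration { get; }
}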

Starting The Service When Running Tests

To start the container, some commands should be run during test startup. How these commands are run will vary depending on the test framework, but the basic list is:

  1. Run “dotnet publish” against the service to build and publish the application
  2. Run “docker-compose build” to build an up-to-date image for the application
  3. Run “docker-compose up” to start the containers
  4. After tests are complete, run “docker-compose down” to shut down and remove the containers

If xUnit.net is being used as the test framework, this can be done using a collection fixture. First, define a test collection:

using System;
using Xunit;

namespace IntegrationTests
{
    [CollectionDefinition("ClientTests")]
    public class ClientTestCollection : ICollectionFixture<ServiceContainersFixture>
    {
    }
}

Note above that the ClientTestCollection class implements ICollectionFixture<ServiceContainersFixture>. There must also be a definition for the referenced ServiceContainersFixture. That class will be created once for all tests in the test collection, and then disposed when they are complete.

Note: The example below assumes that “dotnet” and “docker-compose” are accessible in the system PATH. They should be by default.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

namespace IntegrationTests
{
    public class ServiceContainersFixture : IDisposable
    {
        // Name of the service
        private const string ServiceName = "myservice";

        // Relative path to the root folder of the service project.
        // The path is relative to the target folder for the test DLL,
        // i.e. /test/MyTests/bin/Debug
        private const string ServicePath = "../../../../src/MyService";

        // Tag used for ${TAG} in docker-compose.yml
        private const string Tag = "test";

        // This URL should return 200 once the service is up and running
        private const string TestUrl = "http://localhost:8080/myservice/ping";

        // How long to wait for the test URL to return 200 before giving up
        private static readonly TimeSpan TestTimeout = TimeSpan.FromSeconds(60);

#if DEBUG
        private const string Configuration = "Debug";
#else
        private const string Configuration = "Release";
#endif

        public ServiceContainersFixture()
        {
            Build();

            StartContainers();

            var started = WaitForService().Result;

            if (!started)
            {
                throw new Exception($"Startup failed, could not get '{TestUrl}' after trying for '{TestTimeout}'");
            }
        }

        private void Build()
        {
            var process = Process.Start(new ProcessStartInfo
            {
                FileName = "dotnet",
                Arguments = $"publish {ServicePath} --configuration {Configuration}"
            });

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void StartContainers()
        {
            // First build the Docker container image

            var processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml build"
            };
            AddEnvironmentVariables(processStartInfo);

            var process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);

            // Now start the docker containers

            processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml -p {ServiceName} up -d"
            };
            AddEnvironmentVariables(processStartInfo);

            process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void StopContainers()
        {
            // Run docker-compose down to stop the containers
            // Note that "--rmi local" deletes the images as well to keep the machine clean
            // But it does so by deleting all untagged images, which may not be desired in all cases

            var processStartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments =
                    $"-f {ServicePath}/docker-compose.yml -f {ServicePath}/docker-compose.test.yml -p {ServiceName} down --rmi local"
            };
            AddEnvironmentVariables(processStartInfo);

            var process = Process.Start(processStartInfo);

            process.WaitForExit();
            Assert.Equal(0, process.ExitCode);
        }

        private void AddEnvironmentVariables(ProcessStartInfo processStartInfo)
        {
            processStartInfo.Environment["TAG"] = Tag;
            processStartInfo.Environment["CONFIGURATION"] = Configuration;
            processStartInfo.Environment["COMPUTERNAME"] = Environment.MachineName;
        }

        private async Task<bool> WaitForService()
        {
            using (var client = new HttpClient() { Timeout = TimeSpan.FromSeconds(1)})
            {
                var startTime = DateTime.Now;
                while (DateTime.Now - startTime < TestTimeout)
                {
                    try
                    {
                        var response = await client.GetAsync(new Uri(TestUrl)).ConfigureAwait(false);
                        if (response.IsSuccessStatusCode)
                        {
                            return true;
                        }
                    }
                    catch
                    {
                        // Ignore exceptions, just retry
                    }

                    await Task.Delay(1000).ConfigureAwait(false);
                }
            }

            return false;
        }

        public void Dispose()
        {
            StopContainers();
        }
    }
}

Note: Before each call to docker-compose, AddEnvironmentVariables is being called to set some environment variables in ProcessStartInfo. These are used by docker-compose to perform substitutions in the YAML file. For example, ${COMPUTERNAME} will be replaced with the name of the development computer. This could be easily extended with other environment variables as needed.

To specify that a test class needs the containers to be running, apply the Collection attribute to the test class. Note how the string “ClientTests” matches the CollectionDefinition attribute used earlier on ClientTestCollection.

[Collection("ClientTests")]
public class MyTests
{
    // Tests here...
}
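
Fleshed out, a test in the collection can then exercise the live container through the SDK, here via the hypothetical WidgetClient sketched in the overview:

using System;
using System.Threading.Tasks;
using Xunit;

[Collection("ClientTests")]
public class MyTests
{
    [Fact]
    public async Task GetWidget_ReturnsWidget()
    {
        // Port 8080 matches the static "8080:80" mapping in docker-compose.yml
        var client = new WidgetClient(new Uri("http://localhost:8080/"));

        var widget = await client.GetWidgetAsync(1);

        Assert.NotNull(widget);
    }
}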

Conclusion

It’s now possible to run .NET Core integration tests that depend on running services directly in Visual Studio using your test runner of choice (I use ReSharper). It will build the .NET Core service, start it and its dependencies in Docker, run the integration tests, and then perform cleanup. This could also be done as part of a continuous integration process, so long as the test machines have Docker installed and access to any required Docker registries. Best of all, this simultaneously tests both the service and the client SDK, exposing problems in either.

Positron – HTML 5 UI For .Net Desktop Applications

At CenterEdge Software, the development department recently had management walk into a meeting and drop a bombshell on us. They wanted us to completely rebuild the UI from the ground up and “make it sexy”. Oh, and “make it last until 2025”. Don’t you just love wild management direction?

Well, we scratched our collective heads for a while and tried to come up with a solution. Preferably one that didn’t leave us clawing out our eyeballs, nor made management come asking about how we blew the budget. We came up with three key criteria we had to meet.

  1. We need to leverage our existing .Net code as much as possible.
  2. We need to be able to deliver the UI redesign in phases over time, rather than operate in a completely independent branch for a couple of years.
  3. We need to support the current hardware infrastructure at our 600+ clients, so that hardware upgrades and networking redesigns are not required before upgrading.

What We Aren’t Doing (And Why)

Based on these criteria, our first thought was WPF. We could continue writing .Net desktop client/server applications to operate our Point of Sale and other local applications. This would allow us to easily maintain our current low-level hardware integrations (read: lots of serial ports and USB serial emulation – ugh). It would also allow us to easily phase in portions of our application as WPF while other portions remain WinForms.

The downside to WPF is the size of the developer pool. There just aren’t that many WPF developers out there, especially great ones, and they tend to be expensive. But what about HTML5 and Javascript? There’s all sorts of great work happening surrounding web technologies and user interfaces. And there’s a much larger pool of developers with these skills for us to draw on.

Looking at HTML5 UI options, we considered and discarded the two obvious solutions:

  • Convert to a cloud application. We already have cloud components in our suite. But the core on-premises system, if converted, would become too reliant on stable internet connections, and too far from our current architecture to convert easily. This would also be very difficult to implement in a phased manner.
  • Operate an on-premises web server. For our larger clients this wouldn’t be an issue. But a significant portion of our client base are smaller businesses that use workstations as their servers. IIS isn’t an option, and a computer with a couple extra GB of RAM might work for running SQL Express, but not for running the whole application for a half dozen stations.

Can We Have Our Cake And Eat It, Too?

Is it possible to have the best of both worlds? Can we run a desktop client/server application like WPF, but yet build our UI using HTML5? Well, this question stumped us for a bit. But there are systems that do this, like Electron, just not for .Net.

Enter Positron! Positron is a solution for building a .Net desktop application using HTML5 user interface components (rendered using Chromium). It hosts an in-process version of MVC 6 (yes, that’s the new .Net Core flavor), which is then wired in-process to Chromium running within a WPF window.

Fun Facts

  • All requests from Chromium to MVC are handled in-process; there’s no network stack or HTTP involved. This keeps performance and security very high.
  • The window itself is the only WPF involved; the entire window content is the Chromium browser.
  • ASP.Net Core MVC, despite having “Core” in its name, isn’t actually .Net Core specific. You can use it in traditional .Net Framework applications as well.
  • All resources (images, views, CSS, JS, etc) are embedded in the DLL, making application distribution easy.
  • Positron even supports Chromium developer tools (via a separate Chrome window).
  • Positron is agnostic about how you build your HTML5 application. We’re currently using React and TypeScript at CenterEdge. We’re even using Browserify and Gulp to build the JS files. But any web technology will work, so pick your favorite flavor.
  • There is currently one significant issue: the Razor view editor in Visual Studio doesn’t recognize that we’re working with MVC, so it’s a squiggle fest. I’m sure support for this will be forthcoming, and it works fine after you compile and run. If there are any Visual Studio experts out there, we could use some help with this!

What About Automated Testing?

All the QA and QE people out there are yelling at me now. They’re saying their test frameworks won’t work with Positron. We have a solution for that, too. You’re building an MVC application, just hosting it in-process. There’s nothing stopping you from spinning up a Kestrel server instead to support automated testing over HTTP. Just use Chrome as the test browser for parity with Chromium.
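
A minimal sketch of that approach, assuming the MVC wiring lives in a conventional Startup class:

using Microsoft.AspNetCore.Hosting;

public static class TestHost
{
    public static void Main()
    {
        // Host the same MVC application over real HTTP for test tooling,
        // instead of in-process behind Chromium
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://localhost:5000")
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Pointing the browser-based test framework at http://localhost:5000 then exercises the same controllers and views over real HTTP.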

It’s also possible to install plugins in Chromium, so it might be possible to get some of the testing frameworks up and running directly against Positron if their plugin is installed. But we haven’t vetted this yet.

Open Source

Our specific use case is probably not common. But at CenterEdge we do feel like there is a need for desktop applications with HTML UI. There are many .Net desktop applications that could benefit from the plethora of great UI tools and frameworks available in the web development community. Therefore, we’ve decided to make Positron open source. It’s available on NuGet (there are 4 packages), and the code is available on GitHub.

At this point it’s an early version (0.2.0), and there’s lots of room for improvement. We are already integrating it into our application suite and learning the pain points so we can make improvements.

We’d also welcome community feedback. And feel free to fork the repo and send back pull requests.

Simplify CSS and Javascript Compression In Visual Studio

I’ve released a new open source tool that performs design-time compression of your CSS and Javascript files in Visual Studio projects. This can be a big help, since it lets you compress the files directly in your project rather than as part of your build/publish process. And, since it leaves both the compressed and uncompressed versions in place, you can still use the uncompressed version for debugging.

Read more about this new tool or download it at http://btburnett.com/netcompressor.

Create A Self-Signed SSL Certificate In .NET

A problem that I have commonly run into is trying to secure communications using SSL or other encryption for an intranet application. In this scenario, it is unnecessary to have a certificate signed by an expensive Internet authority. And often the application is intended for deployment in a small-scale scenario where there might not be a Certification Authority running on a Windows Server. In this case, you want to create a self-signed certificate and use its thumbprint for phishing prevention.

Microsoft does provide a utility, makecert, which can create a self-signed certificate. However, it isn’t distributed with Windows, is command line only, and is definitely NOT end user friendly. I wanted a method for creating a certificate just by clicking a button, without using shell calls or distributing a copy of makecert with my applications.

To this end, I created a VB.Net class that calls out to the CryptoAPI and creates a self-signed certificate with a 2048-bit RSA key. The certificate and private key are stored in the Local Machine store, where they can be accessed by system processes and services. I’ve attached an example of the class to this post; feel free to use it as you see fit.

Download: Certificate Creator
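
The attached class uses CryptoAPI interop, which I won’t reproduce here. For comparison, newer frameworks (.NET Framework 4.7.2+ and .NET Core 2.0+) can achieve the same result with the built-in CertificateRequest API; this sketch is that alternative, not the attached class:

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class SelfSignedCertificate
{
    // Alternative to the CryptoAPI-based class attached to this post
    public static X509Certificate2 CreateAndInstall(string subjectName)
    {
        X509Certificate2 certificate;

        using (var rsa = RSA.Create(2048))
        {
            var request = new CertificateRequest(
                $"CN={subjectName}", rsa,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Self-signed, valid for five years
            certificate = request.CreateSelfSigned(
                DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(5));
        }

        // Store in the Local Machine store so system processes and services
        // can use it (requires administrative rights). Persisting the private
        // key across reboots may additionally require re-importing with
        // X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet.
        using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
        {
            store.Open(OpenFlags.ReadWrite);
            store.Add(certificate);
        }

        // The thumbprint can be distributed for phishing prevention
        Console.WriteLine(certificate.Thumbprint);

        return certificate;
    }
}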

Using Enumerations For Columns With LINQ

One of the great features of LINQ that DataSets didn’t support well is the ability to use enumerations for your columns instead of integers. It can make writing and working with your code a lot easier. This is accomplished by manually setting the Type of the column to the type of the enumeration in the visual designer.

There are two important notes to remember about doing this. First, it takes extra work if you want to use an enumeration that is in a different namespace from the LINQ classes. You must not only include the full namespace, but also the global:: prefix. So “global::EnumNamespace.EnumTypeName” refers to the enumeration EnumTypeName in the EnumNamespace namespace. Secondly, if you want to update the data structure from the database, you have to remember to manually set the Type back to the enum type from integer.
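
Once the column is retyped, queries compare directly against enumeration members. A self-contained sketch using LINQ to Objects as a stand-in for the designer-generated classes (OrderStatus and Order are illustrative names):

using System;
using System.Linq;

public enum OrderStatus
{
    Pending = 0,
    Shipped = 1
}

// Stand-in for the designer-generated row class with its Status
// column manually retyped to the enumeration
public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

public static class Example
{
    public static void Main()
    {
        var orders = new[]
        {
            new Order { Id = 1, Status = OrderStatus.Shipped },
            new Order { Id = 2, Status = OrderStatus.Pending }
        };

        // With the column typed as an enum, queries compare directly
        // against enumeration members instead of magic integers
        var shipped = from order in orders
                      where order.Status == OrderStatus.Shipped
                      select order;

        Console.WriteLine(shipped.Count()); // 1
    }
}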

This system works great most of the time, including when performing LINQ queries in your code. However, the one place it doesn’t seem to work well is in the Where clause of a LinqDataSource. If you try to select on the values of an enumeration column here, you’ll get various exceptions. If you try to refer to the enumeration constants, you get “No property or field ‘x’ exists in type ‘y’”. I’ve tried various namespace permutations with no success. If you try to use integers directly instead of enumeration members, you get “Argument types do not match”.

The only solution I have found to the problem is to typecast the column and compare to integers. For example, “Int32(ColumnName) == 1”. Personally, I don’t like this solution much because it can cause huge problems if your constants ever change, but it works.

Also, if you are dealing with a Flags enumeration, it gets even worse. I haven’t found any way to do bitwise operations in the Where clause, so you have to compare the column to all possible integers that contain the bits you are interested in.
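
To make that concrete, here is a small helper sketch (names are illustrative) that enumerates every value of a [Flags] enumeration containing the bits of interest, so they can be listed explicitly in the Where clause:

using System;
using System.Collections.Generic;

public static class FlagsHelper
{
    // Returns every value in 0..allBits that is a valid combination of the
    // enum's bits and contains all the bits in the requested mask
    public static IEnumerable<int> ValuesContaining(int mask, int allBits)
    {
        for (var value = 0; value <= allBits; value++)
        {
            if ((value & allBits) == value && (value & mask) == mask)
            {
                yield return value;
            }
        }
    }
}

Given flags A = 1, B = 2, C = 4 (allBits = 7), asking for everything containing A returns 1, 3, 5, and 7, each of which has to be compared against the column individually.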

If anybody knows of a better way to address this issue, please post a comment and let me know. Thanks.