Mike Lindegarde... Online

Things I'm likely to forget.

Validating API input in ASP.NET Core 1.1 with FluentValidation


It doesn't matter if your API is nothing more than a facade for simple CRUD operations built on top of an Active Record implementation, or if your API is merely the gateway into your complex Domain Driven Design that leverages the latest and greatest CQRS/ES patterns to scale on demand:  you need to validate your input.

Writing code to validate input is quite possibly one of the most tedious tasks ever created (right next to doing any sort of processing that involves any file format from the medical world).  Thankfully we are far from the days of having to handle that task manually.

When I started my latest ASP.NET Core 1.1 project I wanted a more expressive way to handle validation.  Enter FluentValidation:  a small library that does an excellent job handling input validation (high level validation before you get into the heart of your business logic).  Below I show you the three phases my validation code went through before I finally ended up where I probably should have started.

Getting Started

Before getting too far into this tutorial you'll want to make sure you:

The First Pass (Also Known as the Really Bad Idea)

I knew I wanted to use FluentValidation and I knew that ASP.NET has built-in model validation.  What I didn't know was how to bring them together.  Eventually I got there, but it took a few passes.  The first step was a given: add the required dependencies to my project.json file:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    // other dependencies removed for brevity
    "StructureMap.Microsoft.DependencyInjection": "1.3.0",
    "FluentValidation.AspNetCore": "6.4.0-beta9",
    "Serilog": "2.3.0",
    "Serilog.Extensions.Logging": "1.3.1",
    "Serilog.Sinks.Literate": "2.0.0"
  },
  // other sections removed for brevity
}

With that in place I wrote a few validators for the input into my POST and PUT action handlers.  You can find all of the documentation for FluentValidation here.  Below is an example command and its related validator:

public class AddRecipe
{
	public Guid Id {get; set;}
	public Guid CreatedBy {get; set;}
	public DateTime CreatedAt {get; set;}
	public string Title {get; set;}
	public string Instructions {get; set;}
	public List<string> Ingredients {get; set;}
}

public class AddRecipeValidator : AbstractValidator<AddRecipe>
{
	private static readonly DateTime MinDate = new DateTime(2000, 1, 1);
	private static readonly DateTime MaxDate = new DateTime(2100, 1, 1);

	public AddRecipeValidator()
	{
		RuleFor(cmd => cmd.Id).NotEmpty();
		RuleFor(cmd => cmd.CreatedBy).NotEmpty().NotEqual(cmd => cmd.Id);
		RuleFor(cmd => cmd.CreatedAt).Must(BeValidDate)
			.WithMessage("'Created At' must be a valid date");
		RuleFor(cmd => cmd.Title).Length(5, 100);
		RuleFor(cmd => cmd.Ingredients).NotEmpty();
	}

	private bool BeValidDate(DateTime input)
	{
		return input.Date > MinDate && input.Date < MaxDate;
	}
}

Next, I wanted a generic way to get the validator for a given class based on the class's type.  I didn't want to have to know what the exact type of the validator implementation was.  This seemed like a good time to use StructureMap (in reality, it wasn't).  Here's how you could use StructureMap if you really really wanted to (for some reason):

In your StructureMap scanner configuration add the following line:

s.ConnectImplementationsToTypesClosing(typeof(AbstractValidator<>));
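For a bit more context, here's a minimal sketch of what a full registry using that scanner call might look like (the registry name and assembly choice are assumptions, not the project's actual code):

using FluentValidation;
using StructureMap;

public class ValidatorRegistry : Registry
{
	public ValidatorRegistry()
	{
		Scan(s =>
		{
			// Look in the assembly that contains the validators
			s.TheCallingAssembly();

			// Register every AbstractValidator<T> implementation against its closed generic type
			s.ConnectImplementationsToTypesClosing(typeof(AbstractValidator<>));
		});
	}
}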

ConnectImplementationsToTypesClosing will allow you to get an instance of a given validator knowing only the type of the object the validator handles.  I wanted to be able to get an instance of a validator using the same syntax I would use to get any other object from my IoC container.  While I generally avoid extension methods, this seems like a good place to put the pattern to use:

public static class StructureMapUtilities
{
	public static IValidator TryGetValidatorInstance<T>(this IContainer container)
	{
		return container.TryGetInstance<AbstractValidator<T>>();
	}
}

With that in place, you can validate the input to your controller's action methods as follows:

[HttpPost]
public IActionResult Post([FromBody,Required] AddIngredient command)
{
	if(command == null)
		return BadRequest();

	ValidationResult result = 
		_container
			.TryGetValidatorInstance<AddIngredient>()?.Validate(command);

	if(result?.IsValid == false)
		return BadRequest(result);

	return CreatedAtRoute("GetIngredient", new {id = Guid.NewGuid()}, null);
}

That's, ummm.... not good.  There are a few major problems with this solution:

  • It requires injecting my IoC container into the controller (leaky abstraction, I might as well have my repositories return IQueryable while I'm at it)
  • I have to type the same few lines at the beginning of every method where I want to validate the input

The Second (Less Bad) Solution

Surface level validation is kind of a cross-cutting concern, right?  Aspect Oriented Programming is a way to handle cross-cutting concerns... AOP frequently uses attributes... I can use an attribute to handle my validation.  A few degrees short of Kevin Bacon and I have a new direction: ValidateInputAttribute:

public class ValidateInputAttribute : ActionFilterAttribute
{
	public override void OnActionExecuting(ActionExecutingContext context)
	{
		if(context.ModelState.IsValid)
			return;

		context.Result = new BadRequestObjectResult(context.ModelState);
	}
}

Progress.

This solution allowed me to pull the IoC container out of my controllers.  It also simplified the code for validating input:

[HttpPost]
[ValidateInput]
public IActionResult Post([FromBody,Required] AddReview command)
{
	return CreatedAtRoute("GetReview", new {id = Guid.NewGuid()}, null);
}

Unfortunately I still had one problem:  I couldn't inject my logging framework of choice into an Attribute.  Well, where there's a will there's a way... but if you have to work that hard to get something to work, there's probably a better solution...

The Third and Final Attempt (for Now)

Action Filters allow you to execute code in the MVC action pipeline after the model has been bound but before the action is executed.  This seems like the perfect place to solve my problem.

Before we can handle validation errors in a Filter we first need to update our Startup class's ConfigureServices method:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
	services.AddMvc()
		.AddFluentValidation(
			fv => fv.RegisterValidatorsFromAssemblyContaining<Startup>());

	return services.AddStructureMap();
}

The AddFluentValidation call above configures things so that FluentValidation handles validating input for you and updates the ModelState accordingly.  Note, using this approach does not require StructureMap.
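The AddStructureMap call above is an extension method that isn't shown in this post.  A minimal sketch of what such an extension can look like (assuming the StructureMap.Microsoft.DependencyInjection package from the dependencies listed earlier):

using System;
using Microsoft.Extensions.DependencyInjection;
using StructureMap;

public static class StructureMapExtensions
{
	public static IServiceProvider AddStructureMap(this IServiceCollection services)
	{
		Container container = new Container();

		// Hand the framework's registrations (MVC, FluentValidation, etc.) over to StructureMap
		container.Populate(services);

		// Let StructureMap act as the application's service provider
		return container.GetInstance<IServiceProvider>();
	}
}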

Next you need to add an implementation of IActionFilter to handle validation errors:

public class ValidateInputFilter : IActionFilter
{
	#region
	private readonly ILogger _logger;
	#endregion

	#region Constructor
	public ValidateInputFilter(ILogger logger)
	{
		_logger = logger.ForContext<ValidateInputFilter>();
	}
	#endregion

	#region IActionFilter Implementation
	public void OnActionExecuting(ActionExecutingContext context)
	{
		if(context.ModelState.IsValid)
			return;

		using(LogContext.PushProperties(BuildIdentityEnrichers(context.HttpContext.User)))
		{
			_logger.Warning("Model validation failed for {@Input} with validation {@Errors}",
				context.ActionArguments,
				context.ModelState?
					.SelectMany(kvp => kvp.Value.Errors)
					.Select(e => e.ErrorMessage));
		}

		context.Result = new BadRequestObjectResult(
			from kvp in context.ModelState
			from e in kvp.Value.Errors
			let k = kvp.Key
			select new ValidationError(ValidationError.Type.Input, null, k, e.ErrorMessage));
	}

	public void OnActionExecuted(ActionExecutedContext context)
	{
		// This filter doesn't do anything post action.
	}
	#endregion
}

The above code should be pretty straightforward.  If there are no errors in the given model, simply return and do nothing; let the next filter do its thing.  If there is a problem with the ModelState, log it and let the client know that a Bad Request was made (HTTP status code 400).
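The filter itself still has to be registered somewhere.  One way to do that (a sketch on my part, assuming Serilog's ILogger has been registered with the container) is to add it as a global filter when configuring MVC:

services.AddMvc(options =>
	{
		// Registering by type lets MVC resolve the filter's ILogger dependency for us
		options.Filters.Add(typeof(ValidateInputFilter));
	})
	.AddFluentValidation(
		fv => fv.RegisterValidatorsFromAssemblyContaining<Startup>());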

Note, in the current version of FluentValidation if a null object is passed into your action method context.ModelState.IsValid will return true.  Given what I read here, that's not what I expected.
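If you want to catch the null case in the same place, one possible workaround (my own addition, not something FluentValidation provides) is to check the bound action arguments at the top of the filter:

public void OnActionExecuting(ActionExecutingContext context)
{
	// Model binding leaves a null argument when the request body is missing or unreadable
	if(context.ActionArguments.Any(kvp => kvp.Value == null))
	{
		context.Result = new BadRequestResult();
		return;
	}

	if(context.ModelState.IsValid)
		return;

	// ... log and return the BadRequestObjectResult as shown above
}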

Conclusion

I doubt my current solution is perfect and I'm almost certain it'll go through another refactoring (or two) as I continue to work on the project.  Hopefully you found something useful above.  Either way, thanks for taking the time to read this article; I would appreciate any feedback you might have.

You can find a working example on GitHub: https://github.com/mlindegarde/examples-validation-api

Useful links

The links below may help answer any questions you may have:

Using Serilog, Elasticsearch 5, and Kibana 5 for Effective Error Logging

Why use Serilog over NLog


For the longest time I didn't understand why everyone was so excited about Serilog.  I've used NLog for a long time and it seemed more than capable of doing what I needed:  logging messages to some data store (log files, databases, etc...).

Then I started using Elasticsearch.  Suddenly I saw the light.  Structured event data always struck me as one of those neat features that wasn't really needed.  However, once you start using something like Elasticsearch the power of structured event data quickly becomes evident.

Adding the visualizations offered by Kibana takes your logging to the next level.  A quick glance at a dashboard and you instantly know if there has been an uptick in errors (or whatever you might be logging).  The interactive visualizations allow you to quickly filter out noise and identify the root cause of problems you or your users might be experiencing.

 

Installing the JDK

This part is frequently overlooked.  You need to have Java installed and running on the box you're planning to use as your Elasticsearch server.  For the purposes of this tutorial I'm going to assume that you're going to set things up on a Windows machine.  You'll also need to ensure that the JAVA_HOME environment variable is correctly set.

Step 1: Download the JDK

You can download the current JDK from Oracle.  You'll want to click on the "Java Platform (JDK) 8u111 / 8u112" link.  On the following page download the appropriate package (in my case it's jdk-8u111-windows-x64.exe).  Once the download completes run the installer and let it do its thing.  In most cases the default options (install location, etc...) are just fine.

Step 2: Setting the JAVA_HOME Environment Variable

In order for Elasticsearch to work you'll need to have the JAVA_HOME variable set.  In order to set the JAVA_HOME variable you'll need to access the Advanced System Settings in Windows.  You can do that by:

  1. Click on the Start Menu
  2. Right click on Computer
  3. Select Properties

In the window that appears, select Advanced System Settings in the upper left.  That will bring up a small dialog window with five tabs across the top:

  1. Select Advanced.
  2. Click on Environment Variables...
  3. In the System variables section click New...
  4. For the Variable Name use JAVA_HOME and for the value use the root of your JDK install directory (for example, C:\Program Files\Java\jdk1.8.0_111), not the bin folder

 

Setting up Elasticsearch and Kibana

I'm not going to teach you the ins and outs of Elasticsearch.  I'm going to give you just enough information to get things up and running.  Elasticsearch is incredibly powerful and I strongly encourage you to get a book or consult with someone more knowledgeable than me.

Step 1: Download and Install Elasticsearch 

Elastic does a great job walking you through the steps necessary to setup Elasticsearch on any platform.  Rather than repeating that information here, I'll simply point you in the right direction: https://www.elastic.co/downloads/elasticsearch.

Although the download page has some basic installation instructions, I found the instructions in the documentation to be much more helpful.  You can find that information here.

I would strongly recommend setting up Elasticsearch as a service.  However, you'll want to make sure you have Elasticsearch successfully running before setting up the service.  It's much easier to resolve problems when you can see the errors in the console window.

Step 2: Download and Install Kibana

Installing Kibana goes pretty much exactly like installing Elasticsearch.  Simply download the compressed file, decompress to wherever you like, then run the bat file.  You can download Kibana from here:  https://www.elastic.co/downloads/kibana.  You can find better installation documentation here.

If you want to set up Kibana to run as a service you can use the following command in the Windows Console or your preferred terminal (you can see my setup here):

sc create "ElasticSearch Kibana 4.0.1" binPath= "{path to batch file}" depend= "elasticsearch-service-x64"

That handy little line comes to you courtesy of Stack Overflow.

At this point you should be able to verify that Elasticsearch is running at http://localhost:9200 and that Kibana is running at http://localhost:5601 by visiting those URLs in your preferred browser.

 

Using Serilog

As mentioned in the introduction, we'll be using Serilog instead of NLog.  This is so that we can take advantage of the structured data Serilog gives us in our Elasticsearch indexes.  Setting up Serilog with .NET Core is pretty straightforward.

Step 1: Add the Required Packages

Add the following packages to your project.json file:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0",
      "type": "platform"
    },
    "Swashbuckle": "6.0.0-beta902",
    "Microsoft.ApplicationInsights.AspNetCore": "1.0.0",
    "Microsoft.AspNetCore.Mvc": "1.0.0",
    // removed for length
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
    "Serilog": "2.3.0",
    "Serilog.Extensions.Logging": "1.3.1",
    "Serilog.Sinks.Literate": "2.0.0",
    "Serilog.Sinks.ElasticSearch": "4.1.1"
  },
  // truncated to save space
}

Save your project.json file and let Visual Studio restore the packages.

Step 2: Modify Your Startup.cs

You'll need to modify your Startup.cs in two places: the constructor and the Configure method.  First we'll look at the changes to the constructor:

ElasticsearchConfig esConfig = new ElasticsearchConfig();
Configuration.GetSection("Elasticsearch").Bind(esConfig);

LoggerConfiguration loggerConfig = new LoggerConfiguration()
	.Enrich.FromLogContext()
	.Enrich.WithProperty("Application","App Name")
	.WriteTo.LiterateConsole();

if(esConfig.Enabled)
{
	loggerConfig.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(esConfig.Uri)
	{
		AutoRegisterTemplate = true,
		MinimumLogEventLevel = (LogEventLevel)esConfig.MinimumLogEventLevel,
		CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage:true),
		IndexFormat = esConfig.IndexFormat
	});
}

Log.Logger = loggerConfig.CreateLogger();

Looking at the code you should notice that I'm loading settings from my appsettings.json file.  If you need some help with that you can read my previous post.
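The ElasticsearchConfig class itself never shows up in this post.  It's just a POCO bound from an "Elasticsearch" section of appsettings.json; a minimal sketch (the exact property names are assumptions based on how the object is used above) would be:

public class ElasticsearchConfig
{
	public bool Enabled {get; set;}
	public Uri Uri {get; set;}
	public int MinimumLogEventLevel {get; set;}
	public string IndexFormat {get; set;}
}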

I've enriched my events in two ways.  First I've configured Serilog to use the LogContext.  For more information take a look at the Serilog documentation here.  The second enrichment simply puts the application name on every event generated by Serilog.

I always want Serilog to write to the console (at least while the application is being developed).  To accomplish that I'm using the LiterateConsole Sink.  If you want to know why I'm using the LiterateConsole over the ColoredConsole you can read more about it here.

Lastly, depending on the value in esConfig.Enabled I'm conditionally setting up the Elasticsearch sink.  You can find all the information about the various configuration options here.  Here is the short version:

  • AutoRegisterTemplate - Tells Serilog to automatically register a template for the indexes it creates using a template optimized for working with Serilog events.
  • MinimumLogEventLevel - Kind of straightforward.
  • CustomFormatter - In order to avoid deeply nested objects, the sink writes inner exceptions as an array of exceptions by default.  This can be problematic for visualizations and some queries.  You can change this behavior using the ExceptionAsObjectJsonFormatter.
  • IndexFormat - This is the pattern Serilog will use to generate the indexes it creates.  Typically it's something like "api-logs-{0:yyyy.MM.dd}".  If you do not provide a format Serilog will use its default value.

Finally, modify your Configure method:

public void Configure(
	IApplicationBuilder app, 
	IHostingEnvironment env, 
	ILoggerFactory loggerFactory,
	IApplicationLifetime appLifetime)
{
	loggerFactory.AddSerilog();

	app.UseMvc();
	app.UseSwagger();
	app.UseSwaggerUi();

	appLifetime.ApplicationStopped.Register(Log.CloseAndFlush);
}

That's it.  Now you're ready to start writing some events to Elasticsearch.

Step 3: Write some exceptions to Elasticsearch

You'll need to use Serilog's ILogger interface wherever you need to log an event.  I tend to use StructureMap as my IoC container instead of the default implementation Microsoft offers.  This means I need to register the interface in my StructureMap configuration:

_.For<Serilog.ILogger>().Use(Log.Logger);

Once that is done, I can easily inject Serilog into any object created via the IoC container (i.e. my controllers).  Writing an event with structured data to Elasticsearch is as simple as making the following call in your code wherever appropriate:

_logger.Error(ex, "Failed to create object requested by {@staff}", _staff);

For more information about the features Serilog offers please refer to their documentation.  I encourage you to take advantage of source contexts whenever possible.  Having the SourceContext property in your event data makes filtering a lot easier.
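As a quick example, attaching a source context is just a matter of creating a contextual logger from the one you inject (the controller name here is only for illustration):

public class IngredientsController : Controller
{
	private readonly ILogger _logger;

	public IngredientsController(ILogger logger)
	{
		// Every event written through _logger now carries a SourceContext property
		_logger = logger.ForContext<IngredientsController>();
	}
}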

 

Using Kibana

It's taken a while, but you've finally got Elasticsearch setup, Kibana installed and running, and your source code writing events to an Elasticsearch index.  Great... now it's time to start seeing the effort pay off.

Step 1: Setup Your Index Pattern

If this is the first time you've run Kibana you will most likely be looking at the screen where Kibana asks you to tell it about your index pattern:

The welcome screen

If you recall, back when we set up the Serilog Elasticsearch sink one of the properties we configured was the IndexFormat.  This is the value you'll want to use here, less the date format portion of the string.  If you used "api-logs-{0:yyyy.MM.dd}" for your IndexFormat, then the Index Pattern is "api-logs-*".

With the Index Pattern set you'll want to head over to the Discover tab.

Step 2: Save a Simple Query

Before you can discover anything you'll need to make sure you've logged at least a few events to Elasticsearch.  You'll also want to make sure that they occurred within the time frame you're currently viewing (look in the upper right corner of the Kibana window).  As long as you have some events stored in ES, clicking on Discover should display a window that looks something like this:

Discover tab with no filters

In order to create a visualization you're going to need to save a search.  You can find the full Discover documentation here.  For the purposes of moving forward, we'll save a simple search:

  1. In the Query bar type in: level:Error
  2. Click on the search button (magnifying glass)
  3. Click on Save in the upper right corner
  4. Give the search a slick name like All Errors
  5. Click Save

Step 3: Create a Simple Visualization

With the search saved it's time to move over to the Visualize section of Kibana.

There are several visualizations you can create.  In this example we'll create a simple Vertical Bar Chart using a Date Histogram to group the errors by date and time.  Creating this visualization is pretty straightforward:

  1. Select the Vertical Bar Chart option from the list of visualizations
  2. On the right you can select All Errors from the list of saved searches
  3. In the next window select X-Axis
  4. Under Aggregation choose Date Histogram
  5. Leave all of the default settings
  6. Click on the run button (Looks like a play button)
  7. Click Save in the upper right

Step 4: Build your dashboard

With the visualization saved you can easily add it to your dashboard.  You can find far more information about building dashboards in the official Kibana documentation than I can cover here.

 

Configuration using ASP.NET Core 1.0 and StructureMap

Update

Configuration.GetSection(string).Bind(object) has been moved to a new package in .NET Core 1.1:  Microsoft.Extensions.Configuration.Binder.  You will need Microsoft.Extensions.Options.ConfigurationExtensions for the services.Configure<TConfig>(Configuration.GetSection(string)) bits.

First, the bad news

Before .NET Core I used build specific web.config transforms.  When building MVC apps I took advantage of XML transforms to have build specific configurations (obvious examples being Debug vs. Release).  If the project type didn't have transforms out of the box I used something like SlowCheetah to handle the XML transform (for example WPF).

While just about every tutorial out there tells you how to setup environment specific appsettings.json files, I haven't found any information about build specific application settings.  Hopefully I'm just missing something.  While this isn't a huge loss, it was convenient to be able to select "Debug - Mock API" as my build configuration and have a transform in place to adjust my web.config as necessary.

A Basic Example

Microsoft's new approach to configuration makes it incredibly easy to use strongly typed configurations via the IOptions<T> interface. Let's start with the following appsettings.json file:

{
  "Logging": {
    "UseElasticsearch":  true, 
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "IdentityServer": {
    "Authority": "http://localhost:5000",
    "Scope": "Some Scope"
  }
}

In order to take advantage of strongly typed configuration you'll also need a simple POCO (Plain Old CLR Object) that matches the JSON you've added to the appsettings.json file:

namespace Project.Api.Config
{
    public class IdentityServerConfig
    {
        public string Authority {get; set;}
        public string Scope {get; set;}
    }
}

With those two things in place, it's simply a matter of adding the appropriate code to your project's Startup.cs.  The following example code includes several things that are not necessary for this basic example.  My hope is that you might see something that answers a question you may have that I don't explicitly address in this post.

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddJsonFile("appsettings.local.json", optional: true);

	builder.AddEnvironmentVariables();
	Configuration = builder.Build();
	
	LoggingConfig loggingConfig = new LoggingConfig();
	Configuration.GetSection("Logging").Bind(loggingConfig);

	LoggerConfiguration loggerConfig = new LoggerConfiguration()
		.Enrich.FromLogContext()
		.Enrich.WithProperty("application","Application Name")
		.WriteTo.LiterateConsole();

	if(loggingConfig.UseElasticsearch)
	{
		loggerConfig.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
		{
			AutoRegisterTemplate = true,
			CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage:true),
			IndexFormat="logs-{0:yyyy.MM.dd}"
		});
	}

	Log.Logger = loggerConfig.CreateLogger();
}

public IServiceProvider ConfigureServices(IServiceCollection services)
{
	services.Configure<IdentityServerConfig>(Configuration.GetSection("IdentityServer"));

	services.AddMvc().AddMvcOptions(options => 
	{
		options.Filters.Add(new GlobalExceptionFilter(Log.Logger));
	});
	
	services.AddSwaggerGen();
	services.ConfigureSwaggerGen();
	services.AddMemoryCache();

	return services.AddStructureMap(Configuration);
}

public void Configure(
	IApplicationBuilder app, 
	IHostingEnvironment env, 
	ILoggerFactory loggerFactory,
	IApplicationLifetime appLifetime)
{
	IdentityServerConfig idSrvConfig = new IdentityServerConfig();
	Configuration.GetSection("IdentityServer").Bind(idSrvConfig);

	loggerFactory.AddSerilog();

	app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
	{
		Authority = idSrvConfig.Authority,
		ScopeName = idSrvConfig.Scope,
		RequireHttpsMetadata = false,
		AutomaticAuthenticate = true,
		AutomaticChallenge = true
	});

	app.UseMvc();
	app.UseSwagger();
	app.UseSwaggerUi();

	appLifetime.ApplicationStopped.Register(Log.CloseAndFlush);
}

Let's take a closer look at what that code is doing...

StructureMap

By default you get Microsoft's IoC container.  While it does the job for simple projects, I much prefer the power that StructureMap gives me.  However, I was having trouble getting IOptions<IdentityServerConfig> properly injected into my controllers. 

The solution to my problem ended up being pretty straightforward.  Just make sure that all of your calls to services.Configure<T> come before your call to container.Populate(services):

// do this:
services.Configure<IdentityServerConfig>(Configuration.GetSection("IdentityServer"));

// before this:
container.Populate(services);

In hindsight that's a pretty obvious thing to do.  StructureMap won't know anything about what you've added to the default IoC container after you call container.Populate(services).

Using your settings

After the configuration has been loaded and StructureMap has been configured you can get access to the values from your appsettings.json file by injecting IOptions<T> (where T would be IdentityServerConfig in my example) into the controller (or whatever class you need).
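For example, a controller can take the strongly typed configuration straight from the container (the controller name is just for illustration):

public class AccountController : Controller
{
	private readonly IdentityServerConfig _config;

	public AccountController(IOptions<IdentityServerConfig> options)
	{
		// Populated from the "IdentityServer" section registered in ConfigureServices
		_config = options.Value;
	}
}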

That's great, unless you need to access the values in Startup.cs for some reason.  The solution to that problem is to use the following code after the configuration has been loaded (via builder.Build()):

IdentityServerConfig idSrvConfig = new IdentityServerConfig();
Configuration.GetSection("IdentityServer").Bind(idSrvConfig);

While that's pretty simple code, I had some trouble finding that information.

Overriding settings

If you look at the "Logging" section in my appsettings.json you'll notice there is a Boolean value indicating whether or not Elasticsearch should be used.  I have Elasticsearch running locally, but not in the development environment.

{
  "Logging": {
    "UseElasticsearch":  false, 
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "IdentityServer": {
    "Authority": "http://localhost:5000",
    "Scope": "Some Scope"
  }
}

To get around that problem I added a Boolean value to my configuration that I can override with a settings file that only exists on my computer:

{
  "Logging": {
    "UseElasticsearch": true
  }
}

Notice that this file only needs to have the values you're overriding.  You can then configure the configuration builder to load the local settings file if it exists:

var builder = new ConfigurationBuilder()
	.SetBasePath(env.ContentRootPath)
	.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
	.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
	.AddJsonFile("appsettings.local.json", optional: true);

The order you add the JSON files does matter.  I always set things up so that my local file will override any other change.

 

Debugging ASP.NET Core Web APIs with Swagger

Debugging APIs

Debugging .NET based RESTful APIs isn't really that difficult.  Once you have your code base successfully passing all unit tests it's just a matter of having the right tools and knowing the URLs for all of the endpoints you need to test.  Usually you're doing this to verify that your API works as expected (authentication / authorization, HTTP status codes, location headers, response bodies, etc...).

For a long time now I've been using an excellent Chrome App called Postman.  Postman offers a lot of great features:

  1. Slick user interface
  2. Ability to save API calls as Collections
  3. You can access your Collections from any computer (using Chrome)
  4. It supports Environments (which allow you to setup environment variables)
  5. You can share Collections and Environments
  6. Test automation

So why not just stick with Postman?  Simple, it doesn't lend itself well to exploring an API.  That's not a problem for the API developer (usually); however, it is a problem for third parties looking to leverage your API (be it another team or another company).  Swagger does an excellent job documenting your API and making it much easier for other users to explore and test.

Using Swagger with an ASP.NET Core 1.0 Web API

Like most things in the .NET world, adding Swagger boils down to adding a NuGet package to your project.  I would assume you could still use the NuGet Package Manager Console; however, we'll just add the required package to our project.json file:

dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0",
      "type": "platform"
    },
    "Swashbuckle": "6.0.0-beta901"
  },

Next you'll need to add a few lines to your Startup.cs file:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
    services.AddSwaggerGen();
}

and:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
 
    app.UseApplicationInsightsRequestTelemetry();
    app.UseApplicationInsightsExceptionTelemetry();
 
    app.UseMvc();
    app.UseSwagger();
    app.UseSwaggerUi();
}

Now you should be able to run your app and explore your API using Swagger by appending /swagger/ui to the Web API's base URL.  It would probably be a good idea to set your project's Launch URL to the Swagger UI's URL.  You can set this by right clicking on your project, selecting Properties, and navigating to the Debug tab.

Security via OperationFilters

In most situations you're going to need to add some sort of Authorization header to your API call.  Fortunately Swashbuckle provides a relatively easy way to add new fields to the Swagger UI.

The following class will take care of adding the Authorization field to the Swagger UI:

public class AuthorizationHeaderParameterOperationFilter : IOperationFilter
{
	public void Apply(Operation operation, OperationFilterContext context)
	{
		var filterPipeline = context.ApiDescription.ActionDescriptor.FilterDescriptors;
		var isAuthorized = filterPipeline.Select(filterInfo => filterInfo.Filter).Any(filter => filter is AuthorizeFilter);
		var allowAnonymous = filterPipeline.Select(filterInfo => filterInfo.Filter).Any(filter => filter is IAllowAnonymousFilter);

		if (isAuthorized && !allowAnonymous)
		{
			if (operation.Parameters == null)
				operation.Parameters = new List<IParameter>();

			operation.Parameters.Add(new NonBodyParameter
			{                    
				Name = "Authorization",
				In = "header",
				Description = "access token",
				Required = false,
				Type = "string"
			});
		}
	}
}

With that in place you simply need to tell Swashbuckle about it in your Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
    services.AddSwaggerGen();
    services.ConfigureSwaggerGen(options =>
	{
		options.SingleApiVersion(new Info
		{
			Version = "v1",
			Title = "Sample API",
			Description = "This is a sample API",
			Contact = new Contact
			{
				Name = "Mike",
				Email = "email@example.com"
			}
		});
		options.OperationFilter<AuthorizationHeaderParameterOperationFilter>();
		options.IncludeXmlComments(GetXmlCommentsPath());
		options.DescribeAllEnumsAsStrings();
	});
}

If you run your API project you should now see the Authorization field added to the "Try it out!" section of the Swagger UI for the selected end point.

That's all there is to it.  You now have a self documenting API that is both easy to explore and test using the Swagger UI.  To add even more value to the Swagger UI you should look into using the attributes and XML Documentation support that Swashbuckle offers.

Using WCF to Monitor Your Windows Services

Background...


Having come to age as a professional developer in an era where putting business logic in your database was considered sacrilege, I never used the database for anything more than storing data.  Using SQL Server was (is) a last resort.

A few months back I was working on a project at a company that has its roots firmly planted in a database oriented approach to development.  I get that it's impossible to rewrite a massive legacy system every time contemporary programming practices change.  However, I was surprised how quickly (and frequently) developers turned to the database or logging as a solution.

One such example, we needed a way to monitor and manage a Windows service.  Certainly logging provided a low level means of monitoring; however, it didn't provide an effective way to manage the Windows service.  One suggestion was to use a table in a database as a control mechanism.  That could work, but what about a more direct approach?

Setting up the solution

In this example we'll Create two console applications.  One will use TopShelf to start a Windows service (this post doesn't cover TopShelf).  The other will be a normal console application that'll communicate with the Windows service via WCF.  Generally I prefer to put a WPF application in the system tray; however, I'm keeping it simple for this example.

Create a blank Visual Studio 2015 solution named ServiceMonitorDemo.

The Windows service project

Add a new C# Console Application to your solution named ServiceMonitorDemo.Service.  The first thing you'll need to do is add TopShelf to the project:

Install-Package TopShelf
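Since this post doesn't cover TopShelf, here's a rough sketch of what the Program entry point might look like; the service class members and names are assumptions based on the rest of the example, not the repository's exact code:

using Topshelf;

public class Program
{
	public static void Main(string[] args)
	{
		HostFactory.Run(host =>
		{
			host.Service<DemoService>(svc =>
			{
				// DemoService is a singleton in this example; Instance is a hypothetical accessor
				svc.ConstructUsing(() => DemoService.Instance);
				svc.WhenStarted(s => s.Start());
				svc.WhenStopped(s => s.Stop());
			});

			host.RunAsLocalSystem();
			host.SetServiceName("ServiceMonitorDemo.Service");
			host.SetDisplayName("Service Monitor Demo");
		});
	}
}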

With that taken care of, you'll need to write two service contracts.  For this tutorial we're going to use a duplex channel.  You'll need one interface for each direction through the channel.

 First write the contract that other programs will use to communicate with the service:

using System.ServiceModel;

namespace ServiceMonitorDemo.Service.Contracts
{
    [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IDemoServiceCallbackChannel))]
    public interface IDemoServiceChannel
    {
        [OperationContract(IsOneWay = true)]
        void Connect();

        [OperationContract(IsOneWay = false)]
        bool DisplayMessage(string message);
    }
}

You'll notice this service contract is decorated with the ServiceContract attribute.  This is how you tell .NET what interface to use for the callback contract.  The callback contract is used by the service to communicate with connected clients.  You'll define the callback interface shortly.

Notice that the contract consists of two methods:

  • Connect - This method is used to add the calling client to our list of connected clients.
  • DisplayMessage - This is used as an example of bidirectional communication and to show how clients can control the service through WCF

The callback contract is pretty straightforward:

using System.ServiceModel;
using ServiceMonitorDemo.Model;

namespace ServiceMonitorDemo.Service.Contracts
{
    [ServiceContract]
    public interface IDemoServiceCallbackChannel
    {
        [OperationContract(IsOneWay = true)]
        void UpdateStatus(StatusUpdate status);

        [OperationContract(IsOneWay = true)]
        void ServiceShutdown();
    }
}

Again, our simple callback contract has just two methods:

  • UpdateStatus - Used by the service to push the service's status out to all connected clients.
  • ServiceShutdown - WCF does not handle shutdown cleanly on its own.  We need to make sure that the code takes care of opening and closing connections correctly.

With that out of the way we need to take care of writing the actual service.  For this example the service won't do anything exciting, it'll simply post a status object to all connected clients.  The code for this is pretty long, so I'm only going to post important sections here.  You can find the completed example solution on GitHub.

In order to accept connections to the service you'll need to initialize the named pipe:

_host = new ServiceHost(this);

NetNamedPipeBinding binding = new NetNamedPipeBinding();
binding.ReceiveTimeout = TimeSpan.MaxValue;

_host.AddServiceEndpoint(typeof(IDemoServiceChannel),
    binding,
    new Uri(Uri));

_host.Open();

This code simply creates a new host using the current object for the ServiceHost.  A named pipe binding is added to the host.  Clients connect to this endpoint via the Connect method on the channel service contract defined above.  Our service implements the Connect method as follows:

public void Connect()
{
    AddCallbackChannel(OperationContext.Current.GetCallbackChannel<IDemoServiceCallbackChannel>());
}

The only other important detail here is that DemoService class is implemented as a singleton and decorated with the following attribute:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]

In this case I have a single service running and I want to share data with all connected clients.  With this in mind, using InstanceContextMode.Single makes sense.  You can also use per session and per call context modes.  You can find a good overview of the differences on Code Project.
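Pushing the status out is then just a matter of walking the list of connected callback channels.  A trimmed-down sketch (the field and method names here are mine; the full version is in the repository):

private readonly List<IDemoServiceCallbackChannel> _callbacks = new List<IDemoServiceCallbackChannel>();

private void PushStatus(StatusUpdate status)
{
	// Iterate over a copy so dead channels can be removed while looping
	foreach(IDemoServiceCallbackChannel callback in _callbacks.ToList())
	{
		try
		{
			callback.UpdateStatus(status);
		}
		catch(CommunicationException)
		{
			// The client disconnected without telling us; stop pushing to it
			_callbacks.Remove(callback);
		}
	}
}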

The rest of the code for this project is pretty much boilerplate setup and tear down.

The Console Application

Fortunately this entire application is less than 100 lines long.  Again, I'll refer you to the GitHub repository to see the full implementation.  Below you'll find the most important section of the code:

private void Connect()
{
	while(!_isConnected)
	{
		try
		{
			DuplexChannelFactory<IDemoServiceChannel> channelFactory = new DuplexChannelFactory<IDemoServiceChannel>(
				new InstanceContext(this),
				new NetNamedPipeBinding(),
				new EndpointAddress(Uri));

			_channel = channelFactory.CreateChannel();
			_channel.Connect();

			_isConnected = true;
			Console.WriteLine("Channel connected.");
		}
		catch(Exception)
		{
			Console.WriteLine("Failed to connect to channel.");
		}

		Thread.Sleep(1000);
	}
}

I wouldn't recommend taking the above code and dumping it into your production code.  It's designed to demonstrate how to establish a connection to the Windows service via a WCF named pipe.

As long as the console application is connected to the service, the service will continue to trigger the UpdateStatus method.  In a real world implementation UpdateStatus would most likely toggle some sort of visual status indicator on a WPF application (e.g. a red / green light).  In tutorial land displaying a message in the console works just fine.
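On the client side, the console application itself implements IDemoServiceCallbackChannel, which is why new InstanceContext(this) works in the snippet above.  The callback methods are only a couple of lines each; a stripped-down sketch (based on the contract defined earlier, not the exact repository code):

public void UpdateStatus(StatusUpdate status)
{
	// In a real monitor this would drive a UI indicator; here a console message is enough
	Console.WriteLine("Status update received from service.");
}

public void ServiceShutdown()
{
	// The service is shutting down cleanly; mark the channel as disconnected
	Console.WriteLine("Service shutdown received.");
	_isConnected = false;
}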

Wrap Up

If you've cloned, forked, or downloaded a zip file of the repository you should be able to run the Windows service as a service by navigating to your binary folder for the project (Debug or Release depending on your active build configuration) in your preferred terminal / command prompt and running:

../ServiceMonitorDemo.Service install
../ServiceMonitorDemo.Service start

You can then run a few instances of the ServiceMonitorDemo.Monitor and see what happens.

GitHub repository Link