Wednesday, December 31, 2014

2014 Retrospective

Blog

This was the first year where I wrote four blog posts every month, and the unintended side effect was that the posts became shorter. I'm not sure how much I liked that; on the one hand I got to write more quick articles, each with a little helpful information, but to be honest I did not enjoy writing those little posts as much.

Next year I will be reducing this to only three medium length posts per month. I am hoping this will be a good compromise between quantity and quality.

QQ Cast

I absolutely loved doing the QQ-Cast with my friend Jordan. We started out the year strong, averaging three hour-long episodes a month. Unfortunately, as the year continued, "life got in the way" and we only recorded two episodes in the last three months of the year.

Next year Jordan and I will be following a release cadence of three shorter, half-hour episodes per month. Obviously this is similar to the blog schedule.

Professional

It has been a crazy year for me professionally. Without going into detail, I switched teams at work and now have a completely different set of responsibilities. I made this change in order to get out of my comfort zone and, as is a core value for my company, to learn and grow.

It has been very challenging, but very rewarding. I would highly encourage all software engineers who have been in the same role for three years or more to consider changing positions. What you learn in the first three months of a new job is often more than you will learn in the rest of the year.

Personal

I got married. I bought an FZ-09. And, as mentioned above, I changed positions at work. That really sums up 2014. Next year I clearly need to focus on a better work/life balance... maybe I should take more vacations?

Happy New Year,
Tom

Sunday, December 28, 2014

Why you should use more HTTP.

I am a big fan of Ayende Rahien. He leads Hibernating Rhinos, develops RavenDB, and contributes to NHibernate. Oh, and he is an avid blogger that puts my release schedule to shame.

Ayende recently blogged about over-the-wire protocol design, and I wanted to echo his opinion on this subject!

How to actually send it?

As applications become more complicated they frequently need to communicate with remote resources. So how should your applications talk to each other? If you had not figured it out already: I suggest HTTP!

Why use HTTP?

Here are just a few of the many reasons that I love working with HTTP:

  • It is supported by every language.
  • It is supported by every platform.
  • It is supported by every device.
  • It has standards for authorization.
  • It has standards for encryption.
  • It has standard response codes.
  • There are amazing tools for it.
  • There are more amazing tools for it.
  • There are still more amazing tools for it.

...need I go on?

Why NOT use HTTP?

I always look to successful companies for inspiration with technology. If you are making a store front, I suggest you look at Amazon or Newegg. If you are creating social media features, I suggest you look at Facebook or Twitter. If you are designing APIs, I suggest you look at Google. Do you know what Larry Page did not say after Google made it big? "Man, I really wish that we hadn't invested so heavily in all of that web stuff!"

Okay, seriously: depending on your application there may be reasons not to use HTTP as your communication protocol. However, for the vast majority of applications I think it will do the job, and I strongly urge you to consider using it instead of reinventing the wheel.

Just my two cents,
Tom

Wednesday, December 24, 2014

How to use Entity Framework and SQLite

SQLite is the definition of a lightweight database. Using SQLite you can run an entire database with only a 304 KB executable and a database file. It's fast, stable, and very easy to use. Entity Framework is Microsoft's official ORM, and it has support for SQLite!

SQLite Tools

To run SQLite you need only download the precompiled SQLite binaries for Windows:

You can easily manipulate the database via command line. However, if you would prefer to use a GUI, there is a wonderful Firefox plugin for managing your SQLite databases.

Entity Framework Setup

To get started using Entity Framework you will need to add two NuGet packages to your solution:

  1. EntityFramework
  2. System.Data.SQLite (x86/x64)

After that you will need to make sure that your app.config file has properly registered both a System.Data.SQLite provider and a provider factory.
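As a rough sketch, that registration usually looks something like the following (the exact type names and sections depend on the package versions you install, so treat this as an assumption to verify against your own packages):

```xml
<configuration>
  <system.data>
    <DbProviderFactories>
      <remove invariant="System.Data.SQLite" />
      <add name="SQLite Data Provider"
           invariant="System.Data.SQLite"
           description=".NET Framework Data Provider for SQLite"
           type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
    </DbProviderFactories>
  </system.data>
  <entityFramework>
    <providers>
      <provider invariantName="System.Data.SQLite"
                type="System.Data.SQLite.EF6.SQLiteProviderServices, System.Data.SQLite.EF6" />
    </providers>
  </entityFramework>
</configuration>
```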

Sunday, December 14, 2014

How much does RegexOptions.Compiled improve performance in .NET?

Just how much does the RegexOptions.Compiled flag improve regular expression performance in .NET? The answer: a lot! People have spoken about this before, but below are some more numbers to show you just how much it matters!

Performance Stats

Character Count   Regex Pattern
1                 [a-z]
3                 [b-y].[1-8]
5                 [b-y].[c-x].[1-8].[2-7]
7                 [b-y].[c-x].[d-w].[1-8].[2-7].[3-6]
9                 [b-y].[c-x].[d-w].[e-v].[1-8].[2-7].[3-6].[4-5]

RegexOptions       1        3        5        7        9
None               234176   285067   653016   690282   687343
Compiled           193945   235213   430609   452483   454625
Percent Gain       17%      17%      34%      34%      34%
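For reference, numbers like these can be gathered with a simple Stopwatch harness along these lines (the iteration count and input string here are illustrative assumptions, not the original test rig):

```csharp
using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

public static class RegexBenchmark
{
    public static long TimeMatches(Regex regex, string input, int iterations)
    {
        regex.IsMatch(input); // warm up the regex (and the JIT) first

        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
            regex.IsMatch(input);
        stopwatch.Stop();

        return stopwatch.ElapsedMilliseconds;
    }

    public static void Main()
    {
        const string pattern = "[b-y].[c-x].[1-8].[2-7]";
        const string input = "m.n.4.5";
        const int iterations = 1000000;

        var interpreted = TimeMatches(new Regex(pattern), input, iterations);
        var compiled = TimeMatches(
            new Regex(pattern, RegexOptions.Compiled), input, iterations);

        Console.WriteLine("None:     {0} ms", interpreted);
        Console.WriteLine("Compiled: {0} ms", compiled);
    }
}
```

Remember that Compiled pays a one-time cost to emit IL for the pattern, which is why the warm-up call matters before timing.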

Saturday, November 29, 2014

.NET 4.5 HttpClient is Thread Safe

Good news everyone, the .NET 4.5 HttpClient is thread safe!

This means that you can share instances of your HttpClients across your entire application. This is useful in that it allows you to reuse persistent connections. One of the best ways to do this is to create a class that can manage the object lifetime of those clients for you.

Below is a simple HttpClientManager that will create one HttpClient per authority. Why per authority? Because you might need different settings or credentials for different websites.
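A minimal sketch of what such a manager might look like (this is one possible implementation that satisfies the tests below, keyed on Uri.Authority so that scheme is ignored):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Http;

public class HttpClientManager : IDisposable
{
    private readonly ConcurrentDictionary<string, HttpClient> _clients =
        new ConcurrentDictionary<string, HttpClient>(StringComparer.OrdinalIgnoreCase);

    public HttpClient GetForAuthority(string uri)
    {
        // Uri.Authority is the host (and non-default port), so
        // http and https URLs for the same host share a client.
        var authority = new Uri(uri).Authority;
        return _clients.GetOrAdd(authority, _ => new HttpClient());
    }

    public bool TryRemoveForAuthority(string uri)
    {
        var authority = new Uri(uri).Authority;

        HttpClient client;
        if (!_clients.TryRemove(authority, out client))
            return false;

        client.Dispose();
        return true;
    }

    public void Dispose()
    {
        foreach (var client in _clients.Values)
            client.Dispose();

        _clients.Clear();
    }
}
```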

Sample Unit Tests

public class HttpClientManagerTests
{
    [Fact]
    public void GetForAuthority()
    {
        using (var manager = new HttpClientManager())
        {
            var client1 = manager.GetForAuthority("http://tomdupont.net/");
            var client2 = manager.GetForAuthority("https://tomdupont.net/");
            Assert.Same(client1, client2);
 
            var client3 = manager.GetForAuthority("http://google.com/");
            Assert.NotSame(client1, client3);
        }
    }
 
    [Fact]
    public void TryRemoveForAuthority()
    {
        const string uri = "http://tomdupont.net/";
 
        using (var manager = new HttpClientManager())
        {
            Assert.False(manager.TryRemoveForAuthority(uri));
 
            manager.GetForAuthority(uri);
 
            Assert.True(manager.TryRemoveForAuthority(uri));
 
        }
    }
}

Friday, November 28, 2014

Web API - Return Correct Status Codes for Exceptions

Returning the appropriate HTTP Response Codes back from your web server is a very important best practice. Fortunately for .NET developers, Web API makes it very easy to use Exception Filters to return the appropriate response codes from your exceptions.

By implementing a custom ExceptionFilterAttribute you can generically create and return HttpResponseMessages for unhandled exceptions based on type. This is great in that you do not have to wrap all of your controller actions in try catch blocks to handle exceptions from other application layers.
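Such a filter might look something like this (the filter name and the exception-to-status mapping here are my own illustrative choices, matched to the sample controller below):

```csharp
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;

public class ExceptionStatusCodeFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode statusCode;

        // Map exception types to response codes.
        if (context.Exception is KeyNotFoundException)
            statusCode = HttpStatusCode.NotFound;
        else if (context.Exception is ArgumentException)
            statusCode = HttpStatusCode.BadRequest;
        else
            return; // leave anything else to the default 500 behavior

        context.Response = context.Request.CreateErrorResponse(
            statusCode,
            context.Exception.Message);
    }
}
```

Register it once in your HttpConfiguration (config.Filters.Add(new ExceptionStatusCodeFilterAttribute());) and every action gets the behavior for free.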

Sample Controller

public class ValuesController : ApiController
{
    public string Get(int id)
    {
        switch (id)
        {
            case 1:
                throw new KeyNotFoundException("Hello World");
 
            case 2:
                throw new ArgumentException("Goodnight Moon");
 
            default:
                return "value";
        }
    }
}

Saturday, November 22, 2014

Web API - Bad Request when Model State is Invalid

When you are using Web API, would you like to always return a 400 (Bad Request) response whenever the model state is invalid? It is easy: just add the following filter attribute to your global list:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, 
    AllowMultiple = false, 
    Inherited = true)]
public class InvalidModelStateFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest, 
                actionContext.ModelState);
        }
    }
}

Enjoy,
Tom

Saturday, November 15, 2014

AutoCancellationTokenSource

With the following code you can create a CancellationTokenSource that will signal cancellation on dispose. This provides an alternative to having to wrap a normal CancellationTokenSource in a try finally block.

Code

public class AutoCancellationTokenSource : CancellationTokenSource
{
    private bool _isDisposed;
 
    public AutoCancellationTokenSource()
    {
    }
 
    public AutoCancellationTokenSource(params CancellationToken[] linkedTokens)
    {
        foreach (var linkedToken in linkedTokens)
            if (linkedToken.IsCancellationRequested)
                TryCancel();
            else
                linkedToken.Register(TryCancel, false);
    }
 
    protected override void Dispose(bool disposing)
    {
        if (_isDisposed)
            return;
 
        TryCancel();
 
        base.Dispose(disposing);
 
        _isDisposed = true;
    }
 
    private void TryCancel()
    {
        if (!_isDisposed && !IsCancellationRequested)
            Cancel();
    }
}

Friday, October 31, 2014

FireAndForget a Task with AggressiveInlining

When working with tasks you will get a warning if you do not use a task returned from a method. However, you might actually want to fire and forget that task. So what do you do?

One option is to create an extension method for your task to mark it as fire and forget. Aside from removing the warning, it also gives you the nice ability to find all usages.

When creating this method it is a good idea to mark it with the AggressiveInlining attribute. This hints to the just-in-time compiler that it should try to inline the method, avoiding the overhead of an extra (empty) method call.

Implementation and Unit Test

public static class TaskExtensions
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static void FireAndForget(this Task task)
    {
        // Do Nothing
    }
}
 
public class TaskExtensionTests
{
    [Fact]
    public void FireAndForget()
    {
        Task
            .Delay(100)
            .ContinueWith(t =>
            {
                // TODO: Stuff!
            })
            .FireAndForget();
    }
}

Enjoy,
Tom

Tuesday, October 14, 2014

Share ReSharper and StyleCop Configuration via NuGet

Would you like to share your coding style standards between projects?

Tools such as ReSharper and StyleCop both allow you to share your settings files between projects by placing a configuration file in the solution directory. However just sharing those settings across projects in a single solution might not be enough to truly enforce your coding standards for a whole team.

You can share your configuration settings across multiple solutions via a NuGet package. These configuration files cannot just be standard content files in your NuGet package; they need to be installed via the init.ps1 PowerShell script inside of the package.

Init.ps1

param($installPath, $toolsPath, $package)
 
Write-Host "==================================="
Write-Host "Initing: CSharpConventions"
 
# Get the active solution
$solution = Get-Interface $dte.Solution ([EnvDTE80.Solution2])
$solutionDir = Get-Item $solution.FullName
 
# Copy StyleCop settings file
$copFileName = "Settings.StyleCop"
$newCopPath = join-path $solutionDir.Directory $copFileName
$oldCopPath = join-path $toolsPath $copFileName
Write-Host "Copying " $oldCopPath " to " $newCopPath
Copy-Item $oldCopPath $newCopPath 
 
# Copy and rename DotSettings file
$newDotPath = $solution.FullName + ".DotSettings"
$oldDotPath = join-path $toolsPath "Solution.sln.DotSettings"
Write-Host "Copying " $oldDotPath " to " $newDotPath
Copy-Item $oldDotPath $newDotPath 
 
Write-Host "Completed: CSharpConventions"
Write-Host "====================================="

NuSpec File

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
    <metadata>
        <id>CSharpConventions</id>
        <version>1.0.0</version>
        <authors>tdupont</authors>
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>
            StyleCop and R# settings file to enforce C# coding standards.
        </description>
    </metadata>
    <files>
        <file src="tools\init.ps1" target="tools\init.ps1" />
        <file src="tools\Settings.StyleCop" 
            target="tools\Settings.StyleCop" />
        <file src="tools\Solution.sln.DotSettings" 
            target="tools\Solution.sln.DotSettings" />
    </files>
</package>

Enjoy,
Tom

Tuesday, October 7, 2014

Set a Property Value from an Expression in .NET

In .NET, both Action and Func are delegate types that can be invoked to execute code, whereas the Expression class represents an expression tree that can be compiled into executable code at run time. Expressions are particularly fun because they are malleable, and you can use them to dynamically create delegates.

For example, you can create a generic statement that allows you to assign values to properties. This means that one piece of code can use a lambda to select a property for assignment, and another unrelated piece of code can use that expression to dynamically assign the value.

Thanks to AnxiousdeV for coming up with this solution on Stack Overflow.
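Here is a sketch in the spirit of that answer (the method and parameter names are mine; the Convert unwrapping handles the case in the test below where an int property is selected through an int? expression):

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ExpressionExtensions
{
    public static Action<T, TValue> GetSetPropertyAction<T, TValue>(
        this Expression<Func<T, TValue>> expression)
    {
        // Unwrap the conversion node added when the property type
        // differs from the expression's return type (e.g. int vs int?).
        var body = expression.Body;
        if (body.NodeType == ExpressionType.Convert)
            body = ((UnaryExpression)body).Operand;

        var property = (PropertyInfo)((MemberExpression)body).Member;

        var instance = Expression.Parameter(typeof(T), "instance");
        var value = Expression.Parameter(typeof(TValue), "value");

        // Build "instance.Property = (PropertyType)value" and compile it.
        var assign = Expression.Assign(
            Expression.Property(instance, property),
            Expression.Convert(value, property.PropertyType));

        return Expression
            .Lambda<Action<T, TValue>>(assign, instance, value)
            .Compile();
    }
}
```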

Test

public class ExpressionTests
{
    public int X { get; set; }
 
    [Fact]
    public void GetSetPropertyAction()
    {
        // Create an expression.
        var expression = GetExpression<ExpressionTests, int?>(c => c.X);
        // Use our extension method to create the action.
        var assignAction = expression.GetSetPropertyAction();
 
        // Set the property.
        assignAction(this, 2);
        // Assert that the value was set correctly.
        Assert.Equal(2, X);
    }
 
    private static Expression<Func<T, U>> GetExpression<T, U>(
        Expression<Func<T, U>> expression)
    {
        // We are only using this method to create an expression.
        return expression;
    }
}

Sunday, September 28, 2014

xUnit Theory Data from Configuration

I've said it before and I'll say it again, I love xUnit!

In particular, I love xUnit's support for data driven tests. It offers several options for powering a data driven unit test right out of the box. Best of all, xUnit allows for easy extensibility.

I have written some simple extensions for xUnit that allow you to power your data driven tests from your configuration file. Not only that, but it allows you to optionally provide default data using inline attributes when no configuration is available.

Sample Config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="testData" 
type="Xunit.Extensions.Configuration.TestDataSection,xunit.extensions.config"/>
  </configSections>
  <testData>
    <tests>
      <add name="Demo.Tests.Basic">
        <data>
          <add index="0" p0="1" />
        </data>
      </add>
      <add name="Demo.Tests.FromConfig">
        <data>
          <add index="0" p0="4" />
        </data>
      </add>
    </tests>
  </testData>
</configuration>

Sample Tests

namespace Demo
{
    public class Tests
    {
        [Theory]
        [ConfigData]
        public void Basic(int i)
        {
            // This theory data comes from the config file.
            Assert.Equal(1, i);
        }
 
        [Theory]
        [ConfigOrInlineData(2)]
        public void FromInline(int i)
        {
            // This theory data comes from the attribute.
            Assert.Equal(2, i);
        }
 
        [Theory]
        [ConfigOrInlineData(3)]
        public void FromConfig(int i)
        {
            // This theory data comes from the config file
            // instead of the attribute.
            Assert.Equal(4, i);
        }
    }
}

Enjoy,
Tom

Saturday, September 20, 2014

await await Task.WhenAny

The await operator in C# automatically unwraps faulted tasks and rethrows their exceptions. You can await the completion of multiple tasks by using Task.WhenAll, and if any of those tasks are faulted, all of their exceptions will be aggregated and rethrown in a single exception by the await.

However, Task.WhenAny does not work the same way as Task.WhenAll. Task.WhenAny returns an awaitable Task<Task> where the child Task represents whichever Task completed first. A key difference being that the container task will not throw when awaited!

If an exception occurs inside of a Task.WhenAny, you can automatically rethrow the exception by awaiting twice (and I realize how weird this sounds): await await Task.WhenAny.

Sample Tests

[Fact]
public async Task AwaitWhenAnyWait()
{
    var t1 = Task.Run(async () =>
    {
        await Task.Delay(100);
 
        throw new InvalidOperationException();
    });
 
    // This await will NOT throw.
    var whenAny = await Task.WhenAny(t1);
 
    Assert.True(whenAny.IsFaulted);
    Assert.NotNull(whenAny.Exception);
    Assert.IsType<InvalidOperationException>(whenAny.Exception.InnerException);
}
 
[Fact]
public async Task AwaitWhenAll()
{
    var t1 = Task.Run(async () =>
    {
        await Task.Delay(100);
 
        throw new InvalidOperationException();
    });
 
    try
    {
        // This await WILL throw.
        await Task.WhenAll(t1);
 
        throw new AssertException();
    }
    catch (InvalidOperationException)
    {
    }
}
 
[Fact]
public async Task AwaitAwaitWhenAny()
{
    var t1 = Task.Run(async () =>
    {
        await Task.Delay(100);
 
        throw new InvalidOperationException();
    });
 
    try
    {
        // This await await WILL throw.
        await await Task.WhenAny(t1);
 
        throw new AssertException();
    }
    catch (InvalidOperationException)
    {
    }
}

Enjoy,
Tom

Monday, September 15, 2014

Possible Multiple Enumeration Warning

Someone recently asked me what the "Possible Multiple Enumeration" warning means. The IEnumerable interface only exposes a single method, GetEnumerator. This means that each and every time we want to traverse the enumerable, we have to start at the beginning and iterate the entire enumeration again.

public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}
 
public interface IEnumerable
{
    IEnumerator GetEnumerator();
}

Whenever you see the Possible Multiple Enumeration warning, the analyzer is trying to tell you that your code may be sub-optimal at run time because it will have to completely traverse the enumerable multiple times.

More importantly, the analyzer cannot be sure what that enumeration will entail!

Why could that be bad?

With a collection we know the contents and implementation of the enumerable, and we can know the run time implications of iterating over that collection. However, an IEnumerable is an abstraction and not a guaranteed implementation, meaning that it may represent a very inefficient enumeration.

For example, an object relational mapping (ORM) framework may expose an IEnumerable that loads its items from a database on each iteration. Other IEnumerables may be computing complex and CPU intensive operations during iteration. The simple fact is that when your code is iterating over an IEnumerable, you just cannot be sure what is actually happening behind the interface.

This is not necessarily a bad thing, but that uncertainty does merit a warning.
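A small demo makes the cost concrete; here an iterator counts how many times its source is (re)loaded, standing in for something expensive like a database round trip:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class MultipleEnumerationDemo
{
    public static int LoadCount;

    // Imagine this load being a database query or web request;
    // the counter increments once per full traversal.
    public static IEnumerable<int> LoadItems()
    {
        LoadCount++;
        yield return 1;
        yield return 2;
        yield return 3;
    }

    public static void Main()
    {
        var items = LoadItems();

        // Each LINQ call below walks the enumerable from the start.
        var any = items.Any();
        var count = items.Count();
        Console.WriteLine(LoadCount); // 2 - the source was enumerated twice

        // Materializing once avoids the repeated work.
        var list = LoadItems().ToList();
        var stillAny = list.Any();
        var sameCount = list.Count;
    }
}
```

Calling ToList (or ToArray) up front is the usual fix when you know you will traverse the sequence more than once.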

Thursday, September 11, 2014

Decompile Methods with ReSharper

ReSharper is an amazing tool that .NET developers should never be without. One of my favorite features that it offers is built-in support for decompiling methods when navigating to source. Enabling this option will literally allow you to view external source code simply by going to definition.

To enable this option, go to...

  1. Open the "ReSharper" Drop Down
  2. Select "Options"
  3. Expand "Tools"
  4. Select "External Sources"
  5. Select "Navigation to Sources"
  6. Check "Decompile methods"
  7. Save, and you're done!

Would you like to do this outside of Visual Studio as well? Check out JetBrains FREE decompiler tool, dotPeek.

Enjoy,
Tom

Sunday, August 31, 2014

Three steps to wire up your IOC container.

How can you dynamically and flexibly wire up your inversion of control container? Here are three easy steps to consider:

  1. Reflection
  2. Explicit
  3. Configuration

First, use reflection to help wire up your boilerplate or dynamic dependencies. Second, explicitly register and customize any additional dependencies that your application needs. Third, use configuration last to dynamically override any of your previous settings, allowing you to make changes to your application in a live environment without having to rebuild or redeploy.

Sample Code

Microsoft's Unity offers a last-in-wins container, so if you follow the steps above in order you will have a very flexible configuration for your container!
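The three steps might be sketched with Unity like so (IMyService and MyService are hypothetical placeholders; the convention-based RegisterTypes overloads shown here are from Unity 3, so adjust to your version):

```csharp
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

var container = new UnityContainer();

// 1. Reflection: convention-based registration for the boilerplate.
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies(),
    WithMappings.FromMatchingInterface,
    WithName.Default);

// 2. Explicit: customize the registrations that need special handling.
container.RegisterType<IMyService, MyService>(
    new ContainerControlledLifetimeManager());

// 3. Configuration: registered last, so the config file overrides
// everything above without a rebuild or redeploy.
container.LoadConfiguration();
```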

Sunday, August 24, 2014

xUnit Console Runner - Filter by Test Name

You can now filter by test name with the xUnit Console Runner.

Sample Code

namespace DemoProject
{
    public class ExampleTests
    {
        [Fact]
        public void HelloWorld()
        {
            Assert.True(true);
        }
 
        [Fact]
        public void GoodnightMoon()
        {
            Assert.True(false);
        }
    }
}

Sample Command Line

C:\>xunit.console.exe DemoProject.dll -testName "DemoProject.ExampleTests.HelloWorld"
xUnit.net console test runner (64-bit .NET 4.0.30319.18449)
Copyright (C) 2014 Outercurve Foundation.

Starting:  DemoProject.dll
Finished: DemoProject.dll

=== TEST EXECUTION SUMMARY ===
   DemoProject.dll  Total: 1, Failed: 0, Skipped: 0, Time: 0.276s, Errors: 0

Enjoy,
Tom

Saturday, August 16, 2014

System.Net.CredentialCache supports Digest Auth

In my last post I talked about implementing Digest Authentication in WebAPI. That was a server side implementation, but how do you make requests to that server? Good news: .NET's built-in CredentialCache supports Digest Authentication!

PreAuthenticate

Be sure to enable PreAuthenticate, otherwise each request will require a new digest token and will have to make an additional two requests to get it! Do not worry, the request will not send your credentials without having a token first.

PreAuthenticate = false

PreAuthenticate = true
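Putting it together, the client-side setup looks something like this (the URL and credentials are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Http;

var credentialCache = new CredentialCache();
credentialCache.Add(
    new Uri("http://example.com/"),
    "Digest", // the authentication type to use for this URI prefix
    new NetworkCredential("username", "password"));

var handler = new HttpClientHandler
{
    Credentials = credentialCache,
    // Reuse the digest token instead of re-challenging every request.
    PreAuthenticate = true
};

var client = new HttpClient(handler);
```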

Sunday, August 3, 2014

Basic and Digest mixed authentication with WebAPI

In my last post I talked about using both Basic and Digest authentication with WebAPI, but not at the same time. So what do you do when you want to use mixed authentication with both?

In principle you can support both Basic and Digest authentication at the same time, but your server has to issue the 401 challenge with Digest. This is because Basic requires no token or server information to authenticate, whereas Digest requires a nonce from the server.

I have updated Rick's Basic authentication and Badri's Digest authentication implementation to work together as a pair of AuthorizationFilterAttributes. Here is the source:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Filters.Add(new BasicAuthorizationFilterAttribute(false));
        config.Filters.Add(new DigestAuthorizationFilterAttribute());
 
        config.MapHttpAttributeRoutes();
 
        config.Routes.MapHttpRoute(
            "DefaultApi",
            "{controller}/{id}",
            new { controller = "data", id = RouteParameter.Optional }
        );
    }
}

Enjoy,
Tom

Thursday, July 31, 2014

WebAPI and Chrome Authentication Types

Google Chrome supports four HTTP authentication types:

  1. Basic
  2. Digest
  3. NTLM
  4. Negotiate

ASP.NET WebAPI has AuthorizationFilterAttributes which can be used to implement both Authentication and Authorization for your APIs. If you want to use Basic or Digest authentication, there are already several open source implementations available to help you out!

Do you need to use mixed authentication and support both Basic and Digest?
If so, be sure to check out my next blog post...

Enjoy,
Tom

Sunday, July 27, 2014

RavenDB 2.5 vs 3.0 Write Performance (so far)

Is Voron outperforming Esent in RavenDB 3.0?

...not yet, at least in terms of write speed. I ran a few tests on my home machine to compare the write and indexing speeds of Raven 2.5 (build 2908) against Raven 3.0 (build 3358). Unfortunately the results were not encouraging. However, it is worth pointing out that the Raven team saved their performance updates for last when releasing Raven 2.5, so I do expect this to improve before we see an RC.

Test Results

Here are the results of my (little) performance tests:

Document Count   2.5 Import   2.5 Index   3.0 Import   3.0 Index   Import %   Index %
0 - 100k         0:57.48      1:41.45     1:08.59      1:25.39     -19.33%    15.82%
100k - 200k      1:02.68      1:34.85     1:10.87      1:35.65     -13.08%    -0.84%
200k - 300k      1:00.34      2:17.84     1:12.94      1:47.20     -20.89%    22.22%
300k - 400k      1:00.85      1:38.59     1:13.46      1:45.61     -20.73%    -7.12%
400k - 500k      1:02.03      1:38.70     1:12.03      1:58.51     -16.12%    -20.07%

(Import = elapsed import time, Index = elapsed index time, in m:ss; percentages are the difference between 2.5 and 3.0.)

Saturday, July 19, 2014

Python 2.6 and HTTP Basic Authentication

I recently encountered an issue where adding basic authentication to some HTTP calls was breaking a Python application.

Come to find out, there is a bug in Python 2.6 that appends a newline character to base 64 encoded strings. That newline character then causes your HTTP request to be malformed, so that the body does not match the content length. When consuming these malformed requests in an ASP.NET server, the body content would be cut off early, and in the case of JSON content this meant that the JSON string was incomplete and could not be parsed.

So what's the fix? You can either update Python, or fix your string after encoding.

Enjoy,
Tom

Sunday, July 13, 2014

Use RavenDB to power Data Driven xUnit Theories

I love xUnit's data driven unit tests, I also really enjoy working with RavenDB, and now I can use them together!

Data driven unit tests are very powerful tools that allow you to execute the same test code against multiple data sets. Testing frameworks such as xUnit make this extremely easy to develop by offering an out-of-the-box set of attributes to quickly and easily annotate your test methods with dynamic data sources.

Below is some simple code that adds a RavenDataAttribute to xUnit. This attribute will pull arguments from a document database and pass them into your unit test, using the fully qualified method name as a key.

Example Unit Tests

public class RavenDataTests
{
    [Theory]
    [RavenData]
    public void PrimitiveArgs(int number, bool isDivisibleBytwo)
    {
        var remainder = number % 2;
        Assert.Equal(isDivisibleBytwo, remainder == 0);
    }
 
    [Theory]
    [RavenData]
    public void ComplexArgs(ComplexArgsModel model)
    {
        var remainder = model.Number % 2;
        Assert.Equal(model.IsDivisibleByTwo, remainder == 0);
    }
 
    [Fact(Skip = "Only run once for setup")]
    public void Setup()
    {
        var type = typeof(RavenDataTests);
 
        var primitiveArgsMethod = type.GetMethod("PrimitiveArgs");
        var primitiveArgs = new object[] { 3, false };
        RavenDataAttribute.SaveData(primitiveArgsMethod, primitiveArgs);
 
        var complexArgsMethod = type.GetMethod("ComplexArgs");
        var complexArgsModel = new ComplexArgsModel
        {
            IsDivisibleByTwo = true,
            Number = 4
        };
        RavenDataAttribute.SaveData(complexArgsMethod, complexArgsModel);
    }
 
    public class ComplexArgsModel
    {
        public int Number { get; set; }
        public bool IsDivisibleByTwo { get; set; }
    }
}

Monday, June 30, 2014

LearnerJS: Back to Basics Presentation

Thanks to everyone who came out for the inaugural meeting of LearnerJS!

Topic

Back to Basics: The importance of testable modular JavaScript components.

Summary

What do jQuery Plugins, Angular Directives, Knockout Components, and Ext JS classes all have in common? Modular Components! In this session we will discuss the importance of modular and reusable JavaScript components, define goals for abstraction and test-ability, and get into some demos showing how to achieve those goals.

Downloads

Enjoy,
Tom

Saturday, June 21, 2014

Waiting for Events with Tasks in .NET

Would you like to just await the next time that an event fires? It takes a little setup, but you can!

Migrating your code from using an Event-based Asynchronous Pattern to a Task-based Asynchronous Pattern can be very tricky. You can use a TaskCompletionSource to manage your Task, and then you just need to create the wire-up around registering and unregistering your event handler. Unfortunately this process is not nearly as generic as I would like.

Event-based Asynchronous Pattern Tests

Here is a way of waiting for an event to fire by simply sleeping while we wait.

This is a terrible solution because we can not know how long we will have to wait for the event. This means we have to wait for one long time period, or we have to periodically poll to see if the event has fired.

public delegate void SingleParamTestDelegate(int key, string value);
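For contrast, the Task-based wire-up described above generally looks like the sketch below (Publisher is a hypothetical stand-in for whatever type raises your event):

```csharp
using System;
using System.Threading.Tasks;

public class Publisher
{
    public event EventHandler<string> MessageReceived;

    public void Publish(string message)
    {
        var handler = MessageReceived;
        if (handler != null)
            handler(this, message);
    }
}

public static class PublisherExtensions
{
    // Wrap the next MessageReceived event in an awaitable Task.
    public static Task<string> NextMessageAsync(this Publisher publisher)
    {
        var tcs = new TaskCompletionSource<string>();

        EventHandler<string> handler = null;
        handler = (sender, message) =>
        {
            // Unregister so the handler only fires once.
            publisher.MessageReceived -= handler;
            tcs.TrySetResult(message);
        };

        publisher.MessageReceived += handler;
        return tcs.Task;
    }
}
```

Now a caller can simply write var message = await publisher.NextMessageAsync(); with no sleeping or polling.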
 

Saturday, June 14, 2014

NUnit TestCase, the Data Driven Unit Test

A while back I wrote a blog post about data driven unit testing with xUnit. Back then a reader had to correct me because I did not think that NUnit had support for such things.

NUnit 2.5 added a slew of great features for authoring your own data driven unit tests. Perhaps best of all is the amazing support that ReSharper offers for the NUnit test cases.

You really should be using these amazing features when authoring your unit tests!

Data Driven NUnit Samples

// Here is a simple example that is the equivalent of an
// inline data attribute from xUnit.
 
[TestCase(1, 2, 3)]
[TestCase(2, 3, 5)]
public void SimpleSumCase(int a, int b, int expected)
{
    var actual = a + b;
    Assert.AreEqual(expected, actual);
}
 

Sunday, June 8, 2014

How to stream a FileResult from one web server to another with ASP.NET MVC

MVC has a lot of great built-in tooling, including the ability to stream very large file results straight from disk without having to load the whole file into memory.

What about the scenario where you want to stream a large file from one web server to another?

For example, I have an ASP.NET MVC application that needs to expose a download for a file hosted on another server, but I can not just redirect my users directly to the other URL. For that, we need to create a custom ActionResult type!

WebRequestFileResult

Here is a simple example of what your controller might look like:

public class FileController : Controller
{
    public ActionResult LocalFile()
    {
        return new FilePathResult(@"c:\files\otherfile.zip", "application/zip");
    }
 
    public ActionResult RemoteFile()
    {
        return new WebRequestFileResult("http://otherserver/otherfile.zip");
    }
}
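And here is a minimal sketch of what the custom result itself might look like (buffering, error handling, and range support omitted; CopyTo streams in chunks, so the whole file is never held in memory at once):

```csharp
using System.Net;
using System.Web.Mvc;

public class WebRequestFileResult : ActionResult
{
    private readonly string _url;

    public WebRequestFileResult(string url)
    {
        _url = url;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var request = WebRequest.Create(_url);

        using (var remoteResponse = request.GetResponse())
        using (var remoteStream = remoteResponse.GetResponseStream())
        {
            var localResponse = context.HttpContext.Response;
            localResponse.ContentType = remoteResponse.ContentType;
            localResponse.AddHeader(
                "Content-Length",
                remoteResponse.ContentLength.ToString());

            // Stream the remote body straight through to the client.
            remoteStream.CopyTo(localResponse.OutputStream);
        }
    }
}
```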

Tuesday, May 27, 2014

Three Things All Applications SHOULD WANT to Have

This is the third in a three part series:

  1. Three Things that all Applications MUST Have
  2. Three Things that all Applications SHOULD Have
  3. Three Things that all Applications SHOULD WANT to Have

Somehow these three things are quite controversial. I have had many debates with people who are not using these practices and services, and they are not convinced of their usefulness. However everyone I have met that has used these tools is always a staunch defender of their value! I beg you to give these a chance, try them out and they will prove their merit to you!

Do you agree or disagree with these practices? Let me know in the comments!

1. Error Reporting

How do you know what is wrong with your application? Without proof you are just guessing!

Reporting errors to a central location helps you solve problems the moment they begin, not after they have negatively impacted your entire user base. Here is a fun Microsoft statistic: 80% of customer issues can be solved by fixing 20% of the top-reported bugs. So start reporting your exceptions today!

2. User Impersonation

The easiest way to recreate a bug reported by a specific user is to actually be that user. By adding user impersonation to your application you can save your QA team hours of time. While I completely understand the security concerns of this feature, I must still emphasize the value that it returns in the form of testing and debugging.

You may want to take this code out in production, but make sure you have it in QA and Dev!
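
As a rough illustration (my own sketch, not code from this post), a site using forms authentication could gate impersonation behind an admin-only action:

```csharp
using System.Web.Mvc;
using System.Web.Security;

public class AdminController : Controller
{
    // Only admins may impersonate, and ideally not in production.
    [Authorize(Roles = "Admin")]
    public ActionResult Impersonate(string userName)
    {
        // Issue the auth cookie for the target user; the rest of
        // the site now sees this browser session as that user.
        FormsAuthentication.SetAuthCookie(userName, false);
        return RedirectToAction("Index", "Home");
    }
}
```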

3. Continuous Deployment

There is a difference between continuous delivery and continuous deployment, and I am talking about continuous deployment. Just imagine: instant bug fixes, constant streams of new features, and best of all no more deployment schedules! This is not a cheap goal to achieve and it requires continuous maintenance of your tests and deployments, but so does all software development!

To be honest I have never worked in an environment where we made it all the way to continuous deployment, but I would really like to someday! The few people I have met who have been able to accomplish this task had nothing but great things to say about it. Like always I would suggest starting small with a new practice like this, pick an internal application or minor project and begin your foray into continuous deployment from there.

Miss a post in this series? Start over with part 1: Three Things that all Applications MUST Have

Enjoy,
Tom

Monday, May 26, 2014

Three Things that all Applications SHOULD Have

This is the second in a three part series:

  1. Three Things that all Applications MUST Have
  2. Three Things that all Applications SHOULD Have
  3. Three Things that all Applications SHOULD WANT to Have

These three things are all generally agreed upon as best practices, and few people will argue against their value. Unfortunately, not all teams take the time to set them up. Again, I cannot emphasize enough how much time these services will save you in the long run; the earlier you set them up, the more value they will return to you!

1. Dynamic Configuration

What the hell is "dynamic" configuration? It's configuration that is simply not static or hard coded. Do not use constant strings or compiler symbols to configure your application! Start by using configuration files and build transforms. If your system is very distributed, consider using remote or discovered configuration.
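
For instance, a value that would otherwise be a constant can come from App.config or Web.config, where build transforms can rewrite it per environment (the setting name here is illustrative):

```csharp
using System.Configuration;

public static class AppConfig
{
    // Bad: hard coded, requires a recompile to change.
    // private const string ApiBaseUrl = "http://localhost:1234/";

    // Better: read from configuration so that Dev, QA, and Prod
    // transforms can each supply their own value.
    public static string ApiBaseUrl
    {
        get { return ConfigurationManager.AppSettings["ApiBaseUrl"]; }
    }
}
```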

2. Continuous Integration

How do you know that the code in your source control is in a functional state? And if it is not, who broke the build? Continuous integration is the practice of consistently pulling and building the projects in source control in an automated fashion. Typically these builds will also execute any tests associated with the project and provide reports for each build. This is crucial for rapid development, and is a necessary first step on the road to continuous deployment.

3. Automated Deployment

Before we can get to continuous deployment we have to start with automated deployment. This is simply the act of having a service that deploys your applications to an environment without the need for any significant human interaction; i.e. you can deploy with the click of a button! Automated deployment is extremely useful because it drastically speeds up deployments, prevents human error, and restricts access to different environments (such as production). Please, do not underestimate the value that a deployment system can provide!

Continue reading part 3: Three Things that all Applications SHOULD WANT to Have

Enjoy,
Tom

Sunday, May 25, 2014

Three Things that all Applications MUST Have

This is the first in a three part series:

  1. Three Things that all Applications MUST Have
  2. Three Things that all Applications SHOULD Have
  3. Three Things that all Applications SHOULD WANT to Have

I feel very strongly that when you start a new project you should spend your first day or two just setting up a few basic utilities. For every hour you spend at the beginning of a project setting up these tools you will save yourself days down the line.

1. Logging

What is your application doing? How can you debug it? Will that work in all environments? The go to answer for these questions should always be logging!

I am constantly amazed at how many applications do not have a logger. To be fair, most of the time when I do not see a logger it is because the application is small or started out as a one off project. However, to me that is all the more reason to just take the time and set up a logger right from a project's inception; then you know it will always be there. Thick client, thin client, or back end service, it should have a logger!

2. Dependency Injection

Dependency Injection is a pattern that drives a lot of best practices: it allows you to loosely couple your modules, forces you to consider the number of dependencies any given module requires, and perhaps most importantly it makes your code very testable. The inversion of control that dependency injection provides also enables you to refactor and test in ways that are almost unachievable without it.

It can take a little while to fully understand dependency injection, especially the intricacies of lifetime management, but once you understand the fundamentals you can apply that knowledge to any language and any framework.

3. A Test Project

There is no reason not to have a test project as part of your solution. Regardless of how you feel about Test Driven Development (TDD) as a best practice we should all be able to agree that unit testing does provide value and is a good thing.

I encourage you to start simple: just create a test project with ONE unit test inside of it. Even if you do not have time to write tests right now, just having the project already setup will enable you to write them later. Additionally, just thinking about writing unit tests encourages you to author more testable code; so if absolutely nothing else then just use your test project as a best practices placebo!
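
That one starting test can be completely trivial; its only job is to prove the project builds and runs. A sketch, using NUnit since that is what I use elsewhere:

```csharp
using NUnit.Framework;

[TestFixture]
public class PlaceholderTests
{
    [Test]
    public void TestProjectIsWiredUp()
    {
        // Replace me with a real test; until then I only prove
        // that the test project compiles and executes.
        Assert.IsTrue(true);
    }
}
```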

Continue reading part 2: Three Things that all Applications SHOULD Have

Enjoy,
Tom

Sunday, May 11, 2014

Compile TypeScript On Request with BundleTransformer

I have talked before about BundleTransformer, and how you can use it to compile your LESS on a per request basis. Did you know that it supports TypeScript too?

BundleTransformer is an add-on for Microsoft's System.Web.Optimization framework for bundling and minification of web resources. It allows you to compile languages like LESS and TypeScript on the server per request, instead of needing to create and maintain compiled versions of resource files. Best of all, BundleTransformer uses a JavaScript engine to run the native compiler, ensuring that you will never lag behind the latest version.

Required NuGet Packages

The BundleTransformer package will bring in System.Web.Optimization, but then you need to specify a JavaScriptEngineSwitcher for the framework to run on, as well as a minifier to use on the final scripts when in release.

After including these packages you need only make three more updates...

Wednesday, April 30, 2014

How to make a Private Method in to a Public Method in .NET

Disclaimer: I actually recommend that you try to use this technique as little as possible.

Once in a while we all have to work with a poorly designed API, and sometimes you just really need access to a private method inside of their code. So when you are out of other options, what can you do to access a private method?

You can try to decompile the code and fork it or extend it, but that might not work due to type constraints, and even if it does then you have to maintain multiple versions. The most common thing to do is use reflection to access the private methods or members, but then you have to share that ugly reflection code everywhere.

Just make an extension method.

Use reflection, but expose it as an extension method. This gives the illusion that the method you are exposing is natively public. This solution is simple and reusable, but please do not abuse it!
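
Here is a sketch of the pattern; ThirdPartyThing and its private Frobnicate method are made up for illustration:

```csharp
using System.Reflection;

// Stand-in for the poorly designed third party class.
public class ThirdPartyThing
{
    private string Frobnicate(string input)
    {
        return "frobnicated: " + input;
    }
}

public static class ThirdPartyThingExtensions
{
    // Exposes the private Frobnicate(string) method as though
    // it were natively public.
    public static string Frobnicate(this ThirdPartyThing target, string input)
    {
        var method = typeof(ThirdPartyThing).GetMethod(
            "Frobnicate",
            BindingFlags.Instance | BindingFlags.NonPublic);

        return (string) method.Invoke(target, new object[] { input });
    }
}
```

Callers then simply write thing.Frobnicate("hello"), with the ugly reflection hidden in one reusable place.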

Tuesday, April 29, 2014

Should you use NuGet Package Restore on your Build Server?

In my opinion, no.

Recently, Mark Seemann wrote a terrific article regarding the dangers of using NuGet package restores on a build server. I highly recommend that you take a moment to read the whole article, but here are the major pros and cons that Mark offers to summarize the argument:

  • The Package Restore feature solves these problems:
    • It saves a nickel per repository in storage costs.
    • It saves time when you clone a new repository, which you shouldn't be doing that often.
  • On the other hand, it
    • adds complexity
    • makes it harder to use custom package sources
    • couples your ability to compile to having a network connection
    • makes it more difficult to copy a code base
    • makes it more difficult to set up your development environment
    • uses more bandwidth
    • leads to slower build times
    • just overall wastes your time

Having recently had this exact debate at work, I feel the need to chime in! I think that Mark's list really addresses the larger issue at the core of this debate:

Sunday, April 6, 2014

Deserialize Abstract Classes with Json.NET

Here is a fun problem: how do you deserialize an array of objects with different types, but all of which inherit from the same super class?

If you are using Newtonsoft's Json.NET, then this is actually rather easy to implement!

Example

Here are three classes...

public abstract class Pet { public string Name { get; set; } }
public class Dog : Pet { public string FavoriteToy { get; set; } }
public class Cat : Pet { public bool WantsToKillYou { get; set; } }

...here is an array with instances of those objects mixed together...

new Pet[]
{
    new Cat { Name = "Sql", WantsToKillYou = true },
    new Cat { Name = "Linq", WantsToKillYou = false },
    new Dog { Name = "Taboo", FavoriteToy = "Sql" }
}

...and now let's make it serialize and deserialize! :)

Extending the JsonConverter

This tactic is actually quite simple! You need to extend a JsonConverter for your specific super class that is able to somehow uniquely identify each child class. In this example we look for a specific property that only exists on the child class, and Newtonsoft's JObjects and JTokens make this very easy to do!
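
A sketch of such a converter for the Pet classes above might look like the following; discriminating on the FavoriteToy property is just one possible approach:

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class PetConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(Pet);
    }

    // Let Json.NET serialize normally; we only customize reading.
    public override bool CanWrite
    {
        get { return false; }
    }

    public override object ReadJson(
        JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        var jObject = JObject.Load(reader);

        // Sniff for a property that only exists on one child class.
        Pet pet = jObject["FavoriteToy"] != null
            ? (Pet) new Dog()
            : new Cat();

        serializer.Populate(jObject.CreateReader(), pet);
        return pet;
    }

    public override void WriteJson(
        JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException();
    }
}
```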

Tuesday, April 1, 2014

TypeScript Definition Files on NuGet: Always have the latest and greatest IntelliSense!

The strongly typed nature of TypeScript offers the potential for amazing IntelliSense!

Open Source TypeScript Definitions

Some people do not realize that there is already a vast library of community authored definition files for just about every JavaScript framework out there. By including these definition files in your project, you can unlock the full potential of TypeScript's IntelliSense.

The Original DefinitelyTyped Repository on GitHub

TypeScript Definitions on NuGet

The best thing about the open source community: whenever someone has a great idea, other people gladly line up to help improve it. To that end people have forked Boris's DefinitelyTyped, created NuGet packages, and automated their deployment!

DefinitelyTyped NuGet Repository on GitHub
jquery.TypeScript.DefinitelyTyped on NuGet

jQuery Example

If you want to use jQuery, just install the jquery.TypeScript.DefinitelyTyped NuGet package...

Saturday, March 15, 2014

String.Concat vs StringBuilder Performance

Time for yet another micro-optimization!

Everyone knows that strings are immutable in .NET; thus the StringBuilder class is very important for saving memory when manipulating large strings.

...but what about performance?

Interestingly, StringBuilder is just an all-around better way to combine strings! It is more memory efficient and less processor intensive, though not by much. Below is a comparison of the performance of different ways of combining strings.

Tuesday, March 11, 2014

Migrating from Moq to NSubstitute

Mocking is a necessary evil for unit testing.

Fortunately, frameworks like NSubstitute make it painless to set up your mock services. NSubstitute offers a fluent API that requires few lambdas and no calls to an Object property. You just get back the interface that you are substituting and work with it directly. Frankly, NSubstitute is so easy to work with that it almost seems like magic!

Below is a visual representation of equivalent commands between Moq and NSubstitute:

NSubstitute

[Fact]
public void NSubstitute()
{
    var tester = Substitute.For<ITester>();
 
    // Setup a callback for a void method. -------
    var voidArg = String.Empty;
 
    tester
        .When(t => t.Void(Arg.Any<string>()))
        .Do(i => voidArg = i.Arg<string>());
 
    tester.Void("A");
    Assert.Equal("A", voidArg);
 
    // Setup the result of a method. -------------
 
    tester
        .Bool()
        .Returns(true);
                
    var boolResult = tester.Bool();
    Assert.Equal(true, boolResult);
 
    // Setup the result of a property. -----------
 
    tester
        .Int
        .Returns(1);
 
    var intResult = tester.Int;
    Assert.Equal(1, intResult);
 
    // Ensure that a function was called. --------
 
    tester.Received(1).Void("A");
 
    // Ensure that a function was NOT called. ----
 
    tester.DidNotReceive().Void("B");
}

Moq

[Fact]
public void Moq()
{
    var tester = new Mock<ITester>();
            
    // Setup a callback for a void method. -------
 
    var voidArg = String.Empty;
 
    tester
        .Setup(t => t.Void(It.IsAny<string>()))
        .Callback<string>(s => voidArg = s);
 
    tester.Object.Void("A");
    Assert.Equal("A", voidArg);
 
    // Setup the result of a method. -------------
 
    tester
        .Setup(t => t.Bool())
        .Returns(true);
 
    var boolResult = tester.Object.Bool();
    Assert.Equal(true, boolResult);
 
    // Setup the result of a property. -----------
 
    tester
        .SetupGet(t => t.Int)
        .Returns(1);
 
    var intResult = tester.Object.Int;
    Assert.Equal(1, intResult);
 
    // Ensure that a function was called. --------
 
    tester.Verify(m => m.Void("A"), Times.Once);
 
    // Ensure that a function was NOT called. ----
 
    tester.Verify(m => m.Void("B"), Times.Never);
}

Enjoy,
Tom

Saturday, March 8, 2014

String.Concat vs String.Format Performance

Time for another micro-optimization!

When building strings it is almost always easiest to write and maintain a typical format statement. However, what is the cost of that over just concatenating strings? When building strings for cache keys (which I know are going to get called a lot) I try to use String.Concat instead of String.Format. Let's look at why!
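
For example, a cache key built in a hot path:

```csharp
using System;

var userId = 42;

// Both produce "user:42:profile", but Concat avoids parsing
// a format string on every call.
var formatKey = String.Format("user:{0}:profile", userId);
var concatKey = String.Concat("user:", userId, ":profile");
```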

Below is a table showing a comparison of the performance difference between String.Concat and String.Format. The Y axis is the number of arguments being concatenated. The X axis is the number of milliseconds it takes to complete 100,000 runs.

Number of Args | String.Concat ms | String.Format ms | Concat Percent Faster
2              | 4                | 10               | 150%
3              | 3                | 13               | 333%
4              | 4                | 16               | 300%
5              | 12               | 21               | 75%
6              | 14               | 24               | 71%
7              | 16               | 28               | 75%
8              | 18               | 31               | 72%

Sunday, March 2, 2014

Log Performance in a Using Block with Common.Logging

You should be using Common.Logging to share your logger between projects.

Common.Logging is a great and lightweight way to share dependencies without requiring that you also share implementation. It is how several of my projects that use Log4Net are able to share resources with another team that uses NLog. But that is not what I am here to talk about!

How do you log performance quickly and easily?

No, I do not mean performance counters. No, I do not mean interceptors for dependency injection. I want something far more lightweight and simplistic! What I want is the ability to simply log if too much time is spent in a specific block of code. For example...

public void MyMethod1(ILog log)
{
    // If this using block takes more than 100 milliseconds,
    // then I want it to write to my Info log. However
    // if this using block takes more than 1 second,
    // then I want it to write to my Warn log instead.
    using (log.PerfElapsedTimer("MyMethod took too long!"))
    {
        var obj = GetFromApi();
        SaveToDatabase(obj);
    }
}

The PerfElapsedTimer is just a simple little extension method that I wrote; under the hood it wraps a Stopwatch in an IDisposable. Feel free to grab the code from below and start using it yourself.
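
My actual implementation ships with this post; a minimal sketch of the same idea, using the thresholds described in the comment above, could look like:

```csharp
using System;
using System.Diagnostics;
using Common.Logging;

public static class LogExtensions
{
    public static IDisposable PerfElapsedTimer(this ILog log, string message)
    {
        var stopwatch = Stopwatch.StartNew();

        // DisposableAction runs a delegate when the using block exits.
        return new DisposableAction(() =>
        {
            stopwatch.Stop();

            if (stopwatch.ElapsedMilliseconds > 1000)
                log.Warn(message + " Elapsed: " + stopwatch.Elapsed);
            else if (stopwatch.ElapsedMilliseconds > 100)
                log.Info(message + " Elapsed: " + stopwatch.Elapsed);
        });
    }

    private class DisposableAction : IDisposable
    {
        private readonly Action _onDispose;

        public DisposableAction(Action onDispose)
        {
            _onDispose = onDispose;
        }

        public void Dispose()
        {
            _onDispose();
        }
    }
}
```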

Wednesday, February 26, 2014

ThoughtWorks Technology Radar: Adopt capturing client-side JavaScript errors

ThoughtWorks has released their 2014 Technology Radar.

The Technology Radar is a very cool concept: lock a bunch of very smart people in a room and have them evaluate and rank the top trending technologies of the day. While not everyone is going to agree with the resulting assessment, it is still a wonderful way to spread awareness and share opinions regarding all this new tech!

I was excited to see that capturing client-side JavaScript errors has made its way to the top of the adopt list!

In 2013 this technique was on the "assess" list, and now in 2014, only one year later, it has jumped right past the "trial" list and directly onto the "adopt" list. I could not agree more: this is a fantastic technique and I am surprised that it is not more widely adopted...so get to it!

How do you capture client-side JavaScript errors?

Last year I wrote a blog post about this very subject. That post describes the difficulties and pitfalls of implementing your own client-side error capturer, and includes a jQuery-specific implementation.

Report Unhandled Errors from JavaScript
JavaScriptErrorReporter on GitHub

So what are you going to do once you have captured these errors? You can start off by simply logging them, as that is always better than nothing. However, it would be ideal to aggregate these exceptions, send notifications regarding them, and even report on their frequency. Well good news: Exceptionless just went open source!

Exceptionless Homepage
Exceptionless on GitHub

Enjoy,
Tom

Saturday, February 8, 2014

Deserialize to ExpandoObject with Json.NET

I absolutely love Json.NET!

What I don't like is calling the non-generic DeserializeObject method and then having to deal with JToken wrappers. While these objects can be useful, I almost always want to just work directly with the data.

Good news, everyone! Newtonsoft natively supports deserializing to an ExpandoObject!

For anyone who does not know, ExpandoObject is what .NET uses to let you create your own dynamic objects whose members can be added and removed at run time. The following two lines of code are ALL that you need to deserialize straight to an ExpandoObject:

var converter = new ExpandoObjectConverter();
dynamic obj = JsonConvert.DeserializeObject<ExpandoObject>(json, converter);

So why is this useful? To find out, let's take a look at some unit tests!
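
As a taste, here is the kind of test I mean (a sketch, not this post's exact tests):

```csharp
using System.Dynamic;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Xunit;

public class ExpandoTests
{
    [Fact]
    public void DeserializesToExpandoObject()
    {
        const string json = @"{ ""Name"": ""Tom"", ""Tags"": [ ""a"", ""b"" ] }";

        var converter = new ExpandoObjectConverter();
        dynamic obj = JsonConvert.DeserializeObject<ExpandoObject>(json, converter);

        // No JToken wrappers: just plain dynamic member access.
        Assert.Equal("Tom", obj.Name);
        Assert.Equal(2, obj.Tags.Count);
    }
}
```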

Sunday, February 2, 2014

Understanding Unity Named Registration and ResolveAll

I have a love hate relationship with Microsoft Unity, the dependency injection container.

Unity is a very powerful tool; it is an extensible, industrial-strength container that comes equipped with a ton of features right out of the box. However, my big beef with it is that those features are not always discoverable, and often they are less than intuitive.

For example, let's talk about named registration. You can register a type with the container with or without a name. This means you can then ask the container to Resolve just the type itself, getting back the unnamed registration, or you can ask the container to Resolve that type for a particular name. That is great; it is a feature required to register multiple implementations of the same interface.

The ResolveAll method, however, is only for use with named registrations.

There is no way to resolve a collection of both named and unnamed registrations. That is not a bad thing in and of itself, but it does mean that if you want to register a "default" type you will need to register it twice. (For more help with that see my previous blog post: Understanding Unity Lifetime Managers)
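
For example, using Unity's standard registration API (IService and its implementations are illustrative):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Register the "default" twice: once unnamed for Resolve,
// and once named so that ResolveAll can also see it.
container.RegisterType<IService, DefaultService>();
container.RegisterType<IService, DefaultService>("default");
container.RegisterType<IService, OtherService>("other");

var single = container.Resolve<IService>();  // the unnamed DefaultService
var all = container.ResolveAll<IService>();  // only "default" and "other"
```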

Equally interesting is how ResolveAll returns you an IEnumerable by reference.

This means that the enumerable you get back will dynamically change as registrations are made against the container. This might sound neat, but it raises a few big questions...

Is it thread safe? Nope!
Is that documented? Nope!

Saturday, February 1, 2014

How to Combine Hash Codes: GetHashCodeAggregate

How do you efficiently combine hash codes?

When combining hash codes I started by generating a large string and then hashing that. However, that was inefficient, so instead I was referred to this simple arithmetic solution on StackOverflow, provided by Jon Skeet himself!

unchecked
{
    int hash = 17;
    hash = hash * 31 + firstField.GetHashCode();
    hash = hash * 31 + secondField.GetHashCode();
    return hash;
}

How much more efficient is this than string concatenation?

TLDR: Very! Below is a chart of the average number of ticks it takes to calculate a hash by both generating a string and Jon's method of adding ints. Each test is an average of 100,000 iterations.

Number of Keys | Avg Ticks for String | Avg Ticks for Int | Performance Increase
2              | 0.90                 | 0.15              | 500%
5              | 2.08                 | 0.25              | 732%
10             | 3.77                 | 0.37              | 918%
20             | 7.30                 | 0.64              | 1040%
50             | 18.97                | 1.33              | 1326%
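
Wrapped up as the extension method this post's title promises, a sketch might look like:

```csharp
using System.Collections.Generic;

public static class HashCodeExtensions
{
    // Aggregates the hash codes of a sequence of values using
    // Jon Skeet's 17/31 arithmetic, with no string allocations.
    public static int GetHashCodeAggregate<T>(this IEnumerable<T> source)
    {
        unchecked
        {
            int hash = 17;

            foreach (var item in source)
                hash = hash * 31 + (item == null ? 0 : item.GetHashCode());

            return hash;
        }
    }
}
```

A composite key class could then implement GetHashCode as new object[] { firstField, secondField }.GetHashCodeAggregate() (at the cost of boxing value types).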

Monday, January 20, 2014

Serializable PagedList for .NET

Developers have to page through data sets every day. For example: bring me back results 101-200 (the second page) of 1,000 results. So how do we move that data between our data, service, and UI layers? That is where a PagedList collection comes in!

The sooner you add this to your core library, the better off your whole team will be. No more creating extra models to hold page data, no more loading extra results just to pass that data around and then not consume it, and no more reinventing the wheel over and over! I really wish that Microsoft would add a native paged list to the .NET framework; but until that time we just have to roll our own.

So what are the qualities of a good PagedList?

  • It should be a generic collection.
  • It should support a non generic interface.
  • It should be easy to serialize.

Yes, there is already a PagedList project on NuGet and GitHub. Please do not misunderstand me, that is a good project! However, the code below is a bit more lightweight and easier to serialize. Additionally, I prefer the extension methods ToPagedList and TakePage, where the former creates a list as a page, and the latter selects a page from a super-set.

But hey, you can decide which you prefer! :)

Interfaces

public interface IPagedList
{
    ICollection Items { get; }
    int Count { get; }
    int PageIndex { get; }
    int PageSize { get; }
    int TotalCount { get; }
    int TotalPages { get; }
    bool HasPreviousPage { get; }
    bool HasNextPage { get; }
}
 
public interface IPagedList<T>: IPagedList
{
    new ICollection<T> Items { get; }
}
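
A sketch of a class satisfying both interfaces follows; the public setters are one possible design choice to keep serializers happy, and the ToPagedList and TakePage extension methods are left out:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

[Serializable]
public class PagedList<T> : IPagedList<T>
{
    public PagedList()
    {
        Items = new List<T>();
    }

    public PagedList(ICollection<T> items, int pageIndex, int pageSize, int totalCount)
    {
        Items = items;
        PageIndex = pageIndex;
        PageSize = pageSize;
        TotalCount = totalCount;
    }

    public ICollection<T> Items { get; set; }

    // Assumes the concrete collection (such as List<T>) also
    // implements the non generic ICollection interface.
    ICollection IPagedList.Items
    {
        get { return (ICollection) Items; }
    }

    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public int TotalCount { get; set; }

    public int Count
    {
        get { return Items.Count; }
    }

    public int TotalPages
    {
        get
        {
            return PageSize == 0
                ? 0
                : (int) Math.Ceiling(TotalCount / (double) PageSize);
        }
    }

    public bool HasPreviousPage
    {
        get { return PageIndex > 0; }
    }

    public bool HasNextPage
    {
        get { return PageIndex + 1 < TotalPages; }
    }
}
```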