Friday, December 27, 2013

Understanding Unity Lifetime Managers

I absolutely love inversion of control; it is a best practice that I encourage developers to use in every project that they work on. However, like any great tool, dependency injection can cause serious problems if you do not fully understand it, lifetime management in particular.

As a developer you need to be cognizant of how many instances of your classes the container is constructing. You need to know when a singleton is going to consume a transient or non-thread-safe resource. If you are not careful you can leak memory, share unsafe resources, or just make your garbage collector thrash.

Here is a series of examples regarding how to use lifetime managers in my favorite dependency injection container, Microsoft Unity.

  1. ResolveType
  2. Default Lifetime Manager
  3. TransientLifetimeManager
  4. RegisterInstance
  5. ContainerControlledLifetimeManager
  6. ExternallyControlledLifetimeManager
  7. ContainerControlledLifetimeManager with Multiple Keys
  8. RegisterType for Multiple Interfaces
  9. ContainerControlledLifetimeManager with Multiple Interfaces
  10. TransientLifetimeManager with Multiple Interfaces
  11. RegisterInstance Multiple Times
  12. RegisterInstance Multiple Times with External Lifetime Manager
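
The example code itself did not survive this excerpt, so here is a minimal sketch of the first few scenarios, assuming the standard Microsoft.Practices.Unity API (IExample and Example are illustrative placeholder types):

using Microsoft.Practices.Unity;
 
public interface IExample { }
public class Example : IExample { }
 
public static class LifetimeDemo
{
    public static void Main()
    {
        var container = new UnityContainer();
 
        // Default lifetime manager (TransientLifetimeManager):
        // every Resolve constructs a new instance.
        container.RegisterType<IExample, Example>();
        var a = container.Resolve<IExample>();
        var b = container.Resolve<IExample>();
        // a and b are two different instances.
 
        // ContainerControlledLifetimeManager: the container caches one
        // instance and hands it back on every Resolve (a singleton).
        container.RegisterType<IExample, Example>(
            new ContainerControlledLifetimeManager());
        var c = container.Resolve<IExample>();
        var d = container.Resolve<IExample>();
        // c and d are the same instance.
    }
}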

Wednesday, December 18, 2013

Injectable Dataflow Blocks

I really enjoy working with Dataflows, but I always want to resolve my blocks with dependency injection. Thus I have created some abstract wrapper classes around the sealed ActionBlock and TransformBlock classes. This way you can put your logic into the superclass and inject its dependencies via constructor injection. Additionally, the action method is public, making it even easier to test your code!

Update: I refactored to ditch the constructor parameters in favor of a new abstract BlockOptions class.
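
The wrapper classes themselves are not reproduced in this excerpt; here is a rough sketch of the approach (the class and member names are illustrative, not the actual library's):

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
 
// Wrap the sealed ActionBlock so subclasses can receive their
// dependencies via constructor injection and expose a public,
// directly testable Action method.
public abstract class InjectableActionBlock<TInput>
{
    private readonly ActionBlock<TInput> _block;
 
    protected InjectableActionBlock(ExecutionDataflowBlockOptions options)
    {
        // The inner block simply delegates to the public Action method.
        _block = new ActionBlock<TInput>(input => Action(input), options);
    }
 
    // Public so unit tests can invoke the logic directly,
    // without posting through the dataflow pipeline.
    public abstract void Action(TInput input);
 
    public bool Post(TInput input)
    {
        return _block.Post(input);
    }
 
    public void Complete()
    {
        _block.Complete();
    }
 
    public Task Completion
    {
        get { return _block.Completion; }
    }
}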

Sunday, December 15, 2013

Throttling Dataflow and the Task Parallel Library

The Task Parallel Library is an amazingly powerful and versatile library. However, knowledge of how dataflow blocks process their data is vital to using them correctly. Trying to link source and target blocks to each other without fully understanding them is like throwing a live grenade into your app domain; at some point it will tear it down!

I recently experienced a bug where linking two blocks together without managing their BoundedCapacity caused them to queue actions at an unsustainable rate; eventually the dataflow consumed all of the available memory on the server. This could have been easily avoided by throttling the source and target blocks.

How do you throttle your source block based on your target block?

Once linked together, a source block will produce messages as fast as its target block can consume them. To prevent a source block from being too greedy, you want to restrict the bounded capacity for both it and its consumer. Even then, you still need to understand that setting a bounded capacity could cause message producers to either block or fail to post.

...I'm sorry, this subject is very complex, but I think code will explain these details best! Below is a detailed set of tests to take you through all the basic scenarios for throttling a dataflow:
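
Those tests are not reproduced in this excerpt, but here is a minimal sketch of the core pattern (the block bodies are placeholders): cap both blocks with BoundedCapacity, and feed the pipeline with SendAsync, which waits for free capacity, rather than Post, which simply returns false when the target is full.

using System;
using System.Linq;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
 
public static class ThrottlingDemo
{
    public static async Task RunAsync()
    {
        // Cap each block at 10 buffered messages.
        var options = new ExecutionDataflowBlockOptions { BoundedCapacity = 10 };
 
        var transform = new TransformBlock<int, int>(i => i * 2, options);
        var action = new ActionBlock<int>(i => Console.WriteLine(i), options);
 
        transform.LinkTo(
            action,
            new DataflowLinkOptions { PropagateCompletion = true });
 
        foreach (var i in Enumerable.Range(0, 1000))
        {
            // SendAsync waits for capacity instead of failing like Post.
            await transform.SendAsync(i);
        }
 
        transform.Complete();
        await action.Completion;
    }
}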

Saturday, December 7, 2013

ConcurrentDictionary.GetOrAdd and Thread Safety

.NET 4.0 added the awesome System.Collections.Concurrent namespace, which includes the very useful ConcurrentDictionary class. While the ConcurrentDictionary is thread safe, it can experience problems with adding values during high concurrency...

ConcurrentDictionary.GetOrAdd may invoke the valueFactory multiple times per key.

This behavior only happens under high load, and even if the valueFactory does get invoked multiple times, the dictionary entry will only ever be set once. Normally this is not much of a problem. However, if you are using this dictionary to store large or expensive objects (such as unmanaged resources or database connections), then the accidental instantiation of multiple of these could be a real problem for your application.

Don't worry, there is a very simple solution to avoid this problem: just create Lazy wrappers for your expensive objects. That way it will not matter how many times the valueFactory is called, because only one instance of the resource itself will ever actually be accessed and instantiated.
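
Here is a minimal sketch of that pattern; ExpensiveResource is a hypothetical stand-in for whatever you are caching. Even if GetOrAdd builds several Lazy wrappers under contention, only the wrapper that actually lands in the dictionary ever has its Value materialized.

using System;
using System.Collections.Concurrent;
 
public class ExpensiveResource
{
    public ExpensiveResource(string key)
    {
        // Expensive work (open a connection, allocate a handle, etc.)
    }
}
 
public class ResourceCache
{
    private readonly ConcurrentDictionary<string, Lazy<ExpensiveResource>> _cache
        = new ConcurrentDictionary<string, Lazy<ExpensiveResource>>();
 
    public ExpensiveResource Get(string key)
    {
        // GetOrAdd may create extra Lazy wrappers under contention,
        // but only the wrapper stored in the dictionary ever has its
        // Value accessed, so the resource itself is built exactly once.
        return _cache.GetOrAdd(
            key,
            k => new Lazy<ExpensiveResource>(() => new ExpensiveResource(k)))
            .Value;
    }
}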

Sunday, November 24, 2013

Generic Enum Attribute Caching

Attributes are wonderful for decorating your Enums with additional markup and metadata. However, looking up attributes via reflection is not a fast operation. Additionally, no one likes typing that reflection code over and over again.

Well, not to worry; just use the following extension methods to cache your Enum attribute lookups and increase your application's performance! But how much faster is this? Good question...

Iterations | Average Elapsed Ticks (No Cache) | Average Elapsed Ticks (With Cache) | Difference
2          | 1182.00                          | 3934.00                            | 4x Slower
10         | 20.10                            | 7.10                               | 3x Faster
100        | 13.07                            | 1.37                               | 10x Faster
1,000      | 13.27                            | 1.40                               | 10x Faster
10,000     | 13.56                            | 1.45                               | 10x Faster
100,000    | 13.02                            | 1.33                               | 10x Faster
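
The extension methods themselves were not captured in this excerpt; a minimal sketch of the caching approach, using a hypothetical GetAttribute extension method, might look like this:

using System;
using System.Collections.Concurrent;
using System.Linq;
 
public static class EnumAttributeExtensions
{
    // Cache the full attribute array for each boxed enum value.
    private static readonly ConcurrentDictionary<Enum, Attribute[]> Cache
        = new ConcurrentDictionary<Enum, Attribute[]>();
 
    public static TAttribute GetAttribute<TAttribute>(this Enum value)
        where TAttribute : Attribute
    {
        var attributes = Cache.GetOrAdd(value, v =>
        {
            // Reflection only runs on the first lookup per value.
            var field = v.GetType().GetField(v.ToString());
            return field
                .GetCustomAttributes(false)
                .Cast<Attribute>()
                .ToArray();
        });
 
        return attributes.OfType<TAttribute>().FirstOrDefault();
    }
}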

Monday, November 18, 2013

XUnit.PhantomQ v.1.2 Released

Want to run client side QUnit tests from Visual Studio or your build server? Now it is easier than ever; just grab the newly updated XUnit.PhantomQ v1.2 from NuGet!

XUnit.PhantomQ will allow you to execute your QUnit tests as XUnit tests. It supports both library and web projects, and features the ability to easily specify test files and their dependencies by real relative path from the root of your project.

XUnit.PhantomQ on NuGet
XUnit.PhantomQ Source on GitHub

Change Log for v1.2

My thanks to James M Greene and the other authors of the PhantomJS Runner; their work served as the model for this version's improved test result information.

  • Significantly improved test result information and error details.
  • Added console.log support.
  • Added test timeout configuration support.
  • Added QUnit module support.
  • Added QUnit result details to QUnitTest.Context.

Saturday, November 16, 2013

jQuery Mobile: Touch Events Only

jQuery Mobile touch events are awesome.

In a recent project I did not need ALL of the features that the jQuery Mobile framework had to offer; I only needed the excellent touch events. While jQuery Mobile does not offer a custom feature build that includes only the touch system, it is actually quite easy to create your own build.

Just download the following files and include them in your project; but be sure to delete the define method wrappers (the first and last line of each file), as you do not need them with the complete jQuery build.

  1. jquery.mobile.ns.js
  2. jquery.mobile.vmouse.js
  3. jquery.mobile.support.touch.js
  4. touch.js

...that's it, you can now use only jQuery Mobile's touch events!

Saturday, October 26, 2013

Bootstrap 3, LESS, Bundling, and ASP.NET MVC

Until Twitter Bootstrap v3, I would have recommended that you use dotLess to compile and bundle your LESS files. However, it is now a known issue that the current build of dotLess does not support Bootstrap 3, or more specifically, that it does not support LESS 1.4; and worse yet, there is no fix in sight.

So, how can you use Bootstrap LESS with MVC?

I recommend using BundleTransformer. It is an amazingly feature rich set of extensions for System.Web.Optimization. The BundleTransformer.Less extension provides easy to use LESS transformations (already up to LESS 1.5) and bundling support that wires up straight into your pre-existing BundleCollection configuration. For more information about everything that BundleTransformer has to offer, check out this article.

Now here is how you setup Bootstrap 3 and BundleTransformer for ASP.NET:

Required NuGet Packages

  1. Twitter.Bootstrap.Less
  2. BundleTransformer.Less
  3. BundleTransformer.MicrosoftAjax
  4. JavaScriptEngineSwitcher.Msie
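
Registration then hooks straight into your existing BundleCollection. The sketch below shows the general shape; the transformer and orderer class names (CssTransformer, JsTransformer, NullOrderer) are from BundleTransformer.Core as I recall them for this era, so verify them against the version you install:

using System.Web.Optimization;
using BundleTransformer.Core.Orderers;
using BundleTransformer.Core.Transformers;
 
public static class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // The CssTransformer compiles .less files via BundleTransformer.Less.
        var bootstrapStyles = new Bundle("~/Bundles/BootstrapStyles")
            .Include("~/Content/bootstrap/bootstrap.less");
        bootstrapStyles.Transforms.Add(new CssTransformer());
        bootstrapStyles.Orderer = new NullOrderer();
        bundles.Add(bootstrapStyles);
 
        // Scripts are minified by BundleTransformer.MicrosoftAjax.
        var bootstrapScripts = new Bundle("~/Bundles/BootstrapScripts")
            .Include("~/Scripts/bootstrap.js");
        bootstrapScripts.Transforms.Add(new JsTransformer());
        bootstrapScripts.Orderer = new NullOrderer();
        bundles.Add(bootstrapScripts);
    }
}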

Sunday, October 20, 2013

Unit Testing and Dependency Injection, with xUnit InlineData and Unity

Inversion of control is great because it makes your code more testable; but you usually still have to write tests for each implementation of your interfaces. So what if your unit testing framework could just work directly with your container to make testing even easier? Well, xUnit can!

Below we define a custom data source for our xUnit theories by extending the InlineDataAttribute. This allows our data driven unit tests to resolve types via a container, and then inject those resolved objects straight into our unit tests.

Bottom line: This allows us to test more with less code!

The rest of the post is very code heavy, so I strongly recommend that you start out by taking a look at sections 1 and 2 to get an idea of what we are trying to accomplish. :)

  1. Example Interfaces and Classes
  2. Example Unit Tests
  3. IocInlineDataResolver
  4. UnityInlineDataAttribute
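
As a taste of where this is going, here is a minimal sketch of the attribute, built against the xUnit 1.x Xunit.Extensions API with the details assumed (see section 4 for the real implementation): any Type argument in the inline data is swapped for an instance resolved from the container before it reaches the test.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.Practices.Unity;
using Xunit.Extensions;
 
public class UnityInlineDataAttribute : DataAttribute
{
    private static readonly IUnityContainer Container = new UnityContainer();
 
    private readonly object[] _data;
 
    public UnityInlineDataAttribute(params object[] data)
    {
        _data = data;
    }
 
    public override IEnumerable<object[]> GetData(
        MethodInfo methodUnderTest,
        Type[] parameterTypes)
    {
        // Any Type in the inline data is resolved through the container,
        // so the test receives a fully constructed implementation.
        yield return _data
            .Select(d => d is Type ? Container.Resolve((Type)d) : d)
            .ToArray();
    }
}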

Friday, October 18, 2013

Check Properties of a Dynamic Object in .NET

How can you avoid a RuntimeBinderException when working with dynamics?

In JavaScript, checking if an object implements a property is easy; so why can't it be that easy to check dynamics in C#? Well, it sort of is!* If you are using an ExpandoObject, you need only cast it to an IDictionary<string, object> and check whether it contains the desired key.

* Offer only valid with ExpandoObject. **
** See sample code for participating interfaces.***
*** Visit your local Visual Studio installation for details.
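
Concretely, here is a minimal sketch of the ExpandoObject case:

using System.Collections.Generic;
using System.Dynamic;
 
public static class ExpandoDemo
{
    public static void Main()
    {
        dynamic expando = new ExpandoObject();
        expando.Name = "Tom";
 
        // ExpandoObject implements IDictionary<string, object>,
        // so a property check is just a key lookup.
        var dictionary = (IDictionary<string, object>)expando;
 
        var hasName = dictionary.ContainsKey("Name");   // true
        var hasAge = dictionary.ContainsKey("Age");     // false
    }
}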

Sunday, October 13, 2013

Unshelve to a Different Branch in TFS

Love it or hate it, TFS has a lot of features; some are just more discoverable than others.

Team Foundation Server has the ability to unshelve between branches, but it requires Microsoft Team Foundation Server Power Tools to do so. Once you have installed these, simply follow these two steps to move a shelveset from one branch to another:

  1. Navigate to the root of your project.
  2. Fill in and execute the following command:

Unshelve Command:

tfpt unshelve "[ShelveSetName]" /migrate /source:"[SourcePath]" /target:"[TargetPath]"

Example:

cd c:/code/
tfpt unshelve "Demo Shelveset" /migrate /source:"$/DemoProject/branch" /target:"$/DemoProject/trunk"

Enjoy,
Tom

Monday, September 30, 2013

LINQ to SQL DataContext BeginTransaction

LINQ to SQL supports Transactions, but there is no method directly off of the DataContext to initialize one. Fortunately, that functionality is just a simple extension method away!

public static class DataContextExtensions
{
    public static IDbTransaction BeginTransaction(
        this DataContext dataContext, 
        IsolationLevel isolationLevel = IsolationLevel.Unspecified)
    {
        if (dataContext.Connection.State != ConnectionState.Open)
            dataContext.Connection.Open();
 
        return dataContext.Transaction = dataContext.Connection
            .BeginTransaction(isolationLevel);
    }
}
 
public class TransactionTests
{
    [Fact]
    public void Example()
    {
        using (var dataContext = new FutureSimpleDataContext())
        using (dataContext.BeginTransaction())
        {
            // TODO: Stuff!
        }
    }
}

Enjoy!
Tom

Wednesday, September 18, 2013

How to Debug Minified JavaScript

I recently wrote a blog post about how to control minification per request. However, that strategy will not help you if the minification itself is causing a bug.

Fortunately, Chrome has an absolutely amazing set of developer tools that can help you debug any script, even one that has been minified! Just follow these very simple steps:

  1. Navigate to the page in Chrome.
  2. Launch the developer tools (by pressing F12).
  3. Open the JavaScript file in the Sources tab.
  4. Activate the amazing "Pretty print" feature.
  5. Debug those scripts!


Enjoy,
Tom

Sunday, September 15, 2013

TypeScript on your Build Server

Last week I wrote about making your TypeScript files compile on save by updating your project files. It is an easy update to make, but then what happens when you check into source control? You are probably going to get an error because your build server cannot resolve Microsoft.TypeScript.targets.

Two ways to make TypeScript compile on your build server

  1. You can install TypeScript on your build server.

The big problem with this solution is that it means you have to install a specific version of TypeScript on your build server, and thus make all of your projects depend on that single version.

  2. You can check the TypeScript compiler into Source Control.

This may seem like an odd solution, but for right now I feel that it is the best way to go. It allows all of your projects to be independent of each other, and you do not need to install anything new on any of your servers. (This solution has been proposed to the TypeScript team; thanks, josundt!)

How to Add TypeScript to Source Control

This may look like a lot of steps, but do not worry! All of these steps are small, simple, and they will only take you a minute or two. :)

  1. Create a TypeScript folder in the root of your solution folder.
  2. Create a SDKs folder inside of the TypeScript folder.
  3. Create a MSBuild folder inside of the TypeScript folder.
  4. Copy the contents of your TypeScript SDKs install (where the TypeScript compiler, tsc.exe, is located) to the TypeScript\SDKs folder that you have created.
    • By default, that will be located at:
      C:\Program Files (x86)\Microsoft SDKs\TypeScript
  5. Copy the contents of your TypeScript MSBuild folder (where your TypeScript target files are located) to the TypeScript\MSBuild folder that you have created.
    • By default, for Visual Studio 2012, that will be located at:
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\TypeScript
  6. Edit TypeScript\MSBuild\Microsoft.TypeScript.targets to make the TscToolPath point to your local SDKs folder.
    • The original path would be:
      <TscToolPath Condition="'$(TscToolPath)' == ''">$(MSBuildProgramFiles32)\Microsoft SDKs\TypeScript</TscToolPath>
    • The new path should be:
      <TscToolPath Condition="'$(TscToolPath)' == ''">..\TypeScript\SDKs</TscToolPath>
  7. Open your project file for editing.
  8. Update your TypeScript project import to follow a relative path to the local folder.
    • The original path would be:
      <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />
    • The new path should be:
      <Import Project="..\TypeScript\MSBuild\Microsoft.TypeScript.targets" />
  9. You're done! Reload your project and test it out; if that works, check it in.

Enjoy,
Tom

Thursday, September 5, 2013

How to Start Using TypeScript Today!

Here are several tips to get you started using TypeScript (currently on 0.9.1.1) today!

Compile on Save

For fast development you MUST compile your TypeScript files on save. While this is not built into the current pre 1.0 release of TypeScript, it is still very easy to enable. There is a very simple article on CodePlex that provides you the exact XML configuration to add to your Project file to compile on save.

Compile-On-Save

Here are the simple steps to enable compile on save:

  1. Right click on your project in Solution Explorer and unload it.
  2. Right click on your project file and open it for editing.
  3. Copy and paste the PropertyGroup and Import nodes from the link above.
  4. Save and close the project file.
  5. Right click and reload the project in solution explorer.
  6. Open the TypeScript Options
    • Tools -> Options -> Text Editor -> TypeScript -> Project -> General
  7. Check "Automatically compile TYpeScript files which are part of a project"
  8. Close the options menu, you are almost done!
  9. Open your TypeScript (.ts) file and save it...boom, it compiled on save. :)

Make Generated Files Dependent

Whenever you compile a TypeScript file it will generate a JavaScript file for you. However, as of 0.9, that JavaScript file will not be automatically included in your project. First, you should include those generated files in your project. Second, you should modify the project file to mark those generated files as being dependent upon their related TypeScript (.ts) files. This will ensure that no one accidentally modifies those files, and it will ensure that TFS automatically checks them out for edit before they are regenerated.

<TypeScriptCompile Include="Scripts\jqueryExtensions.ts" />
<Content Include="Scripts\jqueryExtensions.js">
  <DependentUpon>jqueryExtensions.ts</DependentUpon>
</Content>

Here are the simple steps to make your generated files dependent:

  1. Right click on your project in Solution Explorer and unload it.
  2. Open your project file in a text editor.
  3. Add a dependent node under the JavaScript file (see above for an example).
  4. Right click and reload the project in solution explorer.

Use a Master Definition File

Your TypeScript files require reference tags to include information about other TypeScript files. One of my favorite features of TypeScript is that the community builds definition files for other frameworks. However including multiple references in each file is a lot of work, and a bad idea in general.

I strongly suggest keeping one master definition file that references all of your other definition files; then your code files need only reference that one file to inherit information regarding all of your code and dependencies.

Master Definition File

/// <reference path="jquery.d.ts" />
/// <reference path="jqueryui.d.ts" />
 
interface JQuery {
    alertOnClick(): JQuery;
}
 
interface JQueryStatic {
    stringFormat(format: string, ...args: Array<any>): string;
}

Normal Code File

/// <reference path="../Definitions/typeScriptDemo.d.ts" />
 
(function ($: JQueryStatic) {
 
    $.stringFormat = stringFormat;
 
    function stringFormat(format: string, ...args: Array<any>): string {
        return format.replace(/{(\d+)}/g, replaceFormat);
 
        function replaceFormat(match, index) {
            return typeof args[index] != 'undefined'
                ? args[index]
                : match;
        }
    }
 
})(jQuery);

Next week I'll talk about how to get your TypeScript compiling on Team Foundation Server.


Enjoy,
Tom

Saturday, August 24, 2013

XUnit.PhantomQ v1.1

I recently blogged about how to Use XUnit to Run QUnit Tests. The initial v1.0 release of XUnit.PhantomQ did not support error messages, but now in v1.1 it supports the must-have feature of bringing error messages back with failed test results.

XUnit.PhantomQ on NuGet
XUnit.PhantomQ Source on GitHub

Enjoy,
Tom

Tuesday, August 13, 2013

Control Minification per Request with Web Optimizations

The Microsoft ASP.NET Web Optimization Framework is a great bundling and minification solution for your web applications. Simply grab the Microsoft.AspNet.Web.Optimization NuGet package, register your bundles, render them with a single line of code, and your environment will automatically resolve your dependencies based on whether or not the web server is running in debug mode.

But how can you debug minified styles and scripts in production?

Normally that is a difficult proposition, but here is a simple solution: JUST DON'T MINIFY THEM! With the little code snippets below you can add a simple query string parameter to disable minification for specific sessions or requests.

Adding this functionality to your website is extremely easy and requires no additional dependencies. Web Optimizations already has an internal AssetManager class that supports this functionality; we just need to access it via reflection.

Simply apply the following two steps and you will be ready to debug in production:

  1. Create the HtmlHelperExtensions class with the code below.
  2. Add a call to TrySetOptimizationEnabled inside of your ViewStart.

_ViewStart.cshtml

@using System.Web.Optimization
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
    Html.TrySetOptimizationEnabled();
}

HtmlHelperExtensions.cs

public static class HtmlHelperExtensions
{
    public const string Key = "OptimizationEnabled";
 
    public static bool TrySetOptimizationEnabled(this HtmlHelper html)
    {
        var queryString = html.ViewContext.HttpContext.Request.QueryString;
        var session = html.ViewContext.HttpContext.Session;
 
        // Check the query string first, then the session.
        return TryQueryString(queryString, session) || TrySession(session);
    }
 
    private static bool TryQueryString(
        NameValueCollection queryString, 
        HttpSessionStateBase session)
    {
        // Does the query string contain the key?
        if (queryString.AllKeys.Contains(
            Key, 
            StringComparer.InvariantCultureIgnoreCase))
        {
            // Is the value a boolean?
            bool boolValue;
            var stringValue = queryString[Key];
            if (bool.TryParse(stringValue, out boolValue))
            {
                // Set the OptimizationEnabled flag
                // and then store that value in session.
                SetOptimizationEnabled(boolValue);
                session[Key] = boolValue;
                return true;
            }
        }
 
        return false;
    }
 
    private static bool TrySession(HttpSessionStateBase session)
    {
        if (session != null)
        {
            var value = session[Key] as bool?;
            if (value.HasValue)
            {
                // Use the session value to set the OptimizationEnabled flag.
                SetOptimizationEnabled(value.Value);
                return true;
            }
        }
 
        return false;
    }
 
    private static void SetOptimizationEnabled(bool value)
    {
        // Use reflection to set the internal AssetManager.OptimizationEnabled
        // flag for this specific request.
        var instance = ManagerProperty.GetValue(null, null);
        OptimizationEnabledProperty.SetValue(instance, value);
    }
 
    private static readonly PropertyInfo ManagerProperty = typeof(Scripts)
        .GetProperty("Manager", BindingFlags.Static | BindingFlags.NonPublic);
 
    private static readonly PropertyInfo OptimizationEnabledProperty = Assembly
        .GetAssembly(typeof(Scripts))
        .GetType("System.Web.Optimization.AssetManager")
        .GetProperty(
            "OptimizationEnabled",
            BindingFlags.Instance | BindingFlags.NonPublic);
}

Enjoy,
Tom

Wednesday, August 7, 2013

Last in Win Replication for RavenDB

One of my favorite features of RavenDB is how easy it is to customize and extend.

RavenDB offers an extremely easy to use built in replication bundle. To deal with replication conflicts, the RavenDB.Database NuGet Package includes an abstract base class (the AbstractDocumentReplicationConflictResolver) that you can implement with your own conflict resolution rules.

Last In Wins Replication Conflict Resolver

John Bennett wrote a LastInWinsReplicationConflictResolver for RavenDB 1.0, and I have updated it for RavenDB 2.0 and 2.5. As always you can get that code from GitHub!

Download RavenExtensions from GitHub

Once you have built your resolver, you need only drop the assembly into the Plugins folder at the root of your RavenDB server and it will automatically be detected and loaded the next time that your server starts.

public class LastInWinsReplicationConflictResolver
    : AbstractDocumentReplicationConflictResolver
{
    private readonly ILog _log = LogManager.GetCurrentClassLogger();
 
    public override bool TryResolve(
        string id,
        RavenJObject metadata,
        RavenJObject document,
        JsonDocument existingDoc,
        Func<string, JsonDocument> getDocument)
    {
        if (ExistingDocShouldWin(metadata, existingDoc))
        {
            ReplaceValues(metadata, existingDoc.Metadata);
            ReplaceValues(document, existingDoc.DataAsJson);
            _log.Debug(
                "Replication conflict for '{0}' resolved with existing doc",
                id);
        }
        else
        {
            _log.Debug(
                "Replication conflict for '{0}' resolved with inbound doc",
                id);
        }
 
        return true;
    }
 
    private static bool ExistingDocShouldWin(
        RavenJObject newMetadata, 
        JsonDocument existingDoc)
    {
        if (existingDoc == null ||
            ExistingDocHasConflict(existingDoc) ||
            ExistingDocIsOlder(newMetadata, existingDoc))
        {
            return false;
        }
 
        return true;
    }
 
    private static bool ExistingDocHasConflict(JsonDocument existingDoc)
    {
        return existingDoc.Metadata[Constants.RavenReplicationConflict] != null;
    }
 
    private static bool ExistingDocIsOlder(
        RavenJObject newMetadata,
        JsonDocument existingDoc)
    {
        var newLastModified = GetLastModified(newMetadata);
 
        if (!existingDoc.LastModified.HasValue ||
            newLastModified.HasValue &&
            existingDoc.LastModified <= newLastModified)
        {
            return true;
        }
 
        return false;
    }
 
    private static DateTime? GetLastModified(RavenJObject metadata)
    {
        var lastModified = metadata[Constants.LastModified];
 
        return (lastModified == null)
            ? new DateTime?()
            : lastModified.Value<DateTime?>();
    }
 
    private static void ReplaceValues(RavenJObject target, RavenJObject source)
    {
        var targetKeys = target.Keys.ToArray();
        foreach (var key in targetKeys)
        {
            target.Remove(key);
        }
 
        foreach (var key in source.Keys)
        {
            target.Add(key, source[key]);
        }
    }
}

Enjoy,
Tom

Thursday, August 1, 2013

PhantomJS, the Headless Browser for your .NET WebDriver Tests

Did you know that Selenium already supports PhantomJS?

WebDriver is a specification for controlling the behavior of a web browser. PhantomJS is a headless WebKit browser, scriptable with a JavaScript API. Ghost Driver is a WebDriver implementation that uses PhantomJS for its back-end. Selenium is a software testing framework for web applications. Selenium WebDriver is the successor to Selenium RC. The Selenium WebDriver NuGet package is a .NET client for Selenium WebDriver that includes support for PhantomJS via GhostDriver.

NuGet Packages

You need only install two NuGet packages in order to use PhantomJS with WebDriver. You will probably also want whichever unit testing framework you prefer. As always, I suggest xUnit.

  1. Selenium.WebDriver
  2. phantomjs.exe

PhantomJSDriver

After installing those, using the PhantomJSDriver is as easy as any other WebDriver!

const string PhantomDirectory =
    @"..\..\..\packages\phantomjs.exe.1.8.1\tools\phantomjs";
 
[Fact]
public void GoogleTitle()
{
    using (IWebDriver phantomDriver = new PhantomJSDriver(PhantomDirectory))
    {
        phantomDriver.Url = "http://www.google.com/";
        Assert.Contains("Google", phantomDriver.Title);
    }
}

Enjoy,
Tom

Sunday, July 14, 2013

Use XUnit to Run QUnit Tests

I read a great article recently about Unit testing JavaScript in VisualStudio with ReSharper, written by Chris Seroka. As cool as this feature is, it left me with two questions:

  1. What about developers that do not have ReSharper?
  2. How do I run my JavaScript unit tests on a builder server?

It is no secret that I, absolutely, love, xUnit! Thus I decided to extend xUnit theories to be able to run QUnit tests by implementing a new DataAttribute.

Introducing XUnit.PhantomQ

XUnit.PhantomQ is a little NuGet package you can install to get access to the QUnitDataAttribute (see below for an example). This library will allow you to execute your QUnit tests as XUnit tests.

XUnit.PhantomQ supports both library and web projects, and features the ability to easily specify test files and their dependencies by real relative path to the root of your project.

XUnit.PhantomQ on NuGet
XUnit.PhantomQ Source on GitHub

QUnitData Attribute

Here is an example of writing some JavaScript, a file of QUnit tests, and then using an xUnit theory and a QUnitData Attribute to execute all of those tests right inside of Visual Studio.

// Contents of Demo.js
function getFive() {
    return 5;
}
 
// Contents of Tests.js
test('Test Five', function() {
    var actual = getFive();
    equal(actual, 5);
});
test('Test Not Four', function () {
    var actual = getFive();
    notEqual(actual, 4);
});
 
// Contents of QUnitTests.cs
public class QUnitTests
{
    [Theory, QUnitData("Tests.js", "Demo.js")]
    public void ReturnFiveTests(QUnitTest test)
    {
        test.AssertSuccess();
    }
}
 

Integrating XUnit, PhantomJS, and QUnit

So, how does this thing work under the hood? Below is the complete pipeline, step by step, of what happens whenever the tests are executed:

  1. The XUnit test runner identifies your theory tests.
  2. The QUnitDataAttribute is invoked.
  3. A static helper locates PhantomJS.exe and the root folder of your project.
    • It will automatically walk up the folder structure and try to find PhantomJS.exe in your packages folder. However, you can override this and explicitly set the full path by adding an AppSetting to your config file:
      <add key="PhantomQ.PhantomJsExePath" value="C:/PhantomJS.exe" />
    • The same goes for the root of your project, you can override this location with another AppSetting to your config:
      <add key="PhantomQ.WorkingDirectory" value="C:/Code/DemoProject" />
  4. PhantomJS.exe is invoked as a separate process.
  5. The server loads up XUnit.PhantomQ.Server.js.
  6. The server now opens XUnit.PhantomQ.html.
  7. QUnit is set to autoRun false.
  8. Event handlers are added to QUnit for testDone and done.
  9. All of the dependencies are now added to the page as script tags.
  10. The test file itself is loaded.
  11. QUnit.start is invoked.
  12. The server waits for the test to complete.
  13. Upon completion the server reads the results out of the page.
  14. The tests and their results are serialized to a JSON dictionary.
  15. The result dictionary is written to the standard output.
  16. The resulting JSON string is read in from the process output.
  17. The results are deserialized using Newtonsoft.Json.
  18. The results are loaded into QUnitTest objects.
  19. The QUnitTest array is passed back from the DataAttribute.
  20. Each test run finally calls the AssertSuccess and throws on failure.

...and that's (finally) all folks! If you have any questions, comments, or thoughts on how this could be improved, please let me know!


Enjoy,
Tom

Sunday, July 7, 2013

Lucene.Net Analyzer Viewer for RavenDB

To query your data in RavenDB you need to write queries in Lucene.Net.

To know which documents your queries are going to return, you need to know exactly how your query is being parsed by Lucene.Net. Full text analysis is a great baked-in feature of RavenDB, but I have found that the Lucene.Net standard analyzer that parses full text fields can sometimes return surprising results.

This Lucene.Net 3.0 Analyzer Viewer is an update of Andrew Smith's original version for Lucene.Net 2.0. This update now allows you to view the results of text analysis for the same version of Lucene.Net that RavenDB is using. This simple tool can be invaluable for debugging full text searches in RavenDB!

Download Raven.Extensions.AnalyzerViewer from GitHub

This tool also comes with my Alphanumeric Analyzer built in.


Enjoy,
Tom

Friday, July 5, 2013

Microsoft Build 2013 for Web Developers

Build 2013 was a whole lot of fun!

The big reveals at Build 2013 were Windows 8.1 and Visual Studio 2013. Admittedly, Microsoft's big focus was on the Windows 8 store, but that does not mean that they are not delivering great tools for the fastest growing development environment in the world: the web!

Before we discuss web development, let's first take a moment to talk about Windows.

Windows 8.1 actually looks really good! Like all Microsoft products, the first major patch manages to iron out most of the kinks. The new UI updates and the inclusion of the start button make the modern UI and desktop seem much less polarizing. Additionally, I am very optimistic about seeing Windows 8 applications come to PC, tablet, and phone. Microsoft has always had great potential for creating a homogeneous ecosystem of consumer devices, and I am finally getting excited about the product lineup that they are delivering.

New Editors in Visual Studio 2013

The Visual Studio Team has completely rewritten the HTML and JavaScript editors from scratch in Visual Studio 2013. This is great news; not only are the new editors faster, more responsive, and have much better intellisense, but they also come with a set of fun new features.

  • The overall Razor experience has been greatly improved.
  • Intellisense now includes expandable groups to keep drop downs from getting cluttered.
  • CSS intellisense now displays browser compatibility for each rule.
  • New Shift+Alt shortcuts allow you to select content from tree structures.

Microsoft has yet to announce how they will charge for Visual Studio 2013. Will it be a standalone install? Will it be full price? We just do not know yet. Personally, I would love to see Visual Studio move to an incremental update system, but obviously there would be a lot of technical limitations to doing so...but hey, a dev can dream!

HTML Editing Features in Visual Studio 2013 Preview

Browser Link

Visual Studio 2013 comes with a built-in feature called Browser Link. This is where Visual Studio opens a socket connection directly to your browsers (yes, plural) via SignalR, allowing your IDE to send commands directly to the browser itself. The most basic use of this is a simple refresh button that allows you to refresh all of your browser windows on command.

The potential of this feature is fun to think about! Right now the Visual Studio team is intentionally keeping the default feature set for Browser Link very simple, but also open source, as they want to see what we developers come up with. Remember that this is a two-way channel of communication between the browser and the IDE, so sending error messages back to Visual Studio is just the first thing that comes to my mind.

A cool demo that they showed us at Build was keeping multiple browser windows in sync, including form elements and page navigation. The cool part was that these browsers included Chrome, Firefox, and IE running on a virtualized Windows Phone. It was a very compelling demo!

Browser Link feature in Visual Studio Preview 2013

Web API 2.0

Web API was a great addition to Microsoft's web development library, and 2.0 looks like a great incremental update to that framework. Forgive me if it seems that I do not have much to say about Web API 2; it just seems like a very simple but very welcome update to the framework. Don't let that fool you though, I am stoked!

  • Attribute based routing.
  • Integrated OAuth 2 authentication.
  • OWIN web hosting (run your APIs outside of IIS).
  • Cross-Origin Resource Sharing.

ASP.NET Web API 2

TypeScript 0.9

I am SO excited about TypeScript!

TypeScript is a development language that is a superset of JavaScript. It adds type checking, interfaces, and inheritance to JavaScript. The idea is that you develop in TypeScript and then compile that language down to JavaScript for deployment on the web. The key here is that TypeScript is a superset of JavaScript, so all JavaScript code is already valid TypeScript, making it very easy for you to start your migration from one language to the other.

There are essentially two ways that you could choose to use TypeScript:

  1. Use TypeScript for simple type checking and enhanced refactoring.

This is probably the more ideal use of TypeScript, or at least the simplest. The idea is that you can constrain your function parameters by type and interface, and make use of TypeScript's stricter declarations for easier refactoring. You would also get significantly improved intellisense, both for your code and for other frameworks, as a very active open source community has been creating interface wrappers for everything from jQuery to Knockout.

  2. Go full monty, and make your JavaScript completely Object Oriented.

Using TypeScript would allow you to define and extend both classes and interfaces, just as you would in a language like C#. This opens the possibility for polymorphism, dependency injection, mocking, testing, the whole object oriented playbook. While this does go against many principles of a very functional programming language like JavaScript, it also enables a lot of best practices that help simplify large scale application development.

...how far you want to go using TypeScript is up to you, but regardless of which side of the fence you are on, I definitely feel that TypeScript is worth looking into!

www.typescriptlang.org


The future looks bright,
Tom

Monday, May 27, 2013

How to Test RavenDB Indexes

What if you could spin up an entire database in memory for every unit test? You can!

RavenDB offers an EmbeddableDocumentStore NuGet package that allows you to create a complete in-memory instance of RavenDB. This makes writing integration tests for your custom indexes extremely easy.

The Hibernating Rhinos team makes full use of this feature by including a full suite of unit tests in their RavenDB solution. They even encourage people to submit pull requests to their GitHub repository so that they can pull those tests directly into their source. This is a BRILLIANT integration of all these technologies to both encourage testing and provide an extremely stable product.

So then, how do you test your RavenDB indexes? Great question; let's get into the code!

  1. Define your Document
public class Doc
{
    public int InternalId { get; set; }
    public string Text { get; set; }
}
  2. Define your Index
public class Index : AbstractIndexCreationTask<Doc>
{
    public Index()
    {
        Map = docs => from doc in docs
                      select new
                      {
                          doc.InternalId,
                          doc.Text
                      };
        Analyzers.Add(d => d.Text, "Raven.Extensions.AlphanumericAnalyzer");
    }
}
  3. Create your EmbeddableDocumentStore
  4. Insert your Index and Documents

In this example I am creating an abstract base class for my unit tests. The NewDocumentStore method provides an EmbeddableDocumentStore that comes pre-initialized with the default RavenDocumentsByEntityName index, your custom index, and a complete set of documents that have already been inserted. The documents come from an abstract Documents property, which we will see implemented below in step 5.

protected abstract ICollection<TDoc> Documents { get; }
 
private EmbeddableDocumentStore NewDocumentStore()
{
    var documentStore = new EmbeddableDocumentStore
    {
        Configuration =
        {
            RunInUnreliableYetFastModeThatIsNotSuitableForProduction = true,
            RunInMemory = true
        }
    };
 
    documentStore.Initialize();
 
    // Create Default Index
    var defaultIndex = new RavenDocumentsByEntityName();
    defaultIndex.Execute(documentStore);
 
    // Create Custom Index
    var customIndex = new TIndex();
    customIndex.Execute(documentStore);
 
    // Insert Documents from Abstract Property
    using (var bulkInsert = documentStore.BulkInsert())
        foreach (var document in Documents)
            bulkInsert.Store(document);
 
    return documentStore;
}
  5. Write your Tests

These tests are testing a custom Alphanumeric analyzer. They will take in a series of Lucene queries and assert that they match the correct internal Ids. These documents are being defined by our abstract Documents property from step 4.

NOTE: Do not forget to include the WaitForNonStaleResults method on your queries, as your index may not be done building the first time you run your tests.

[Theory]
[InlineData(@"Text:Hello",              new[] {0})]
[InlineData(@"Text:file_name",          new[] {2})]
[InlineData(@"Text:name*",              new[] {2, 3})]
[InlineData(@"Text:my AND Text:txt",    new[] {2, 3})]
public void Query(string query, int[] expectedIds)
{
    int[] actualIds;
 
    using (var documentStore = NewDocumentStore())
    using (var session = documentStore.OpenSession())
    {
        actualIds = session.Advanced
            .LuceneQuery<Doc>("Index")
            .Where(query)
            .SelectFields<int>("InternalId")
            .WaitForNonStaleResults()
            .ToArray();
    }
 
    Assert.Equal(expectedIds, actualIds);
}
 
protected override ICollection<Doc> Documents
{
    get
    {
        return new[]
            {
                "Hello, world!",
                "Goodnight...moon?",
                "my_file_name_01.txt",
                "my_file_name01.txt"
            }
            .Select((t, i) => new Doc
            {
                InternalId = i,
                Text = t
            })
            .ToArray();
    }
}

Enjoy,
Tom

Sunday, May 12, 2013

Alphanumeric Lucene Analyzer for RavenDB

RavenDB's full text indexing uses Lucene.Net

RavenDB is a second generation document database. This means that you can throw typeless documents into a data store, but the only way to query them is by indexes that are built with Lucene.Net. RavenDB is a wonderful product whose primary strength is its simplicity and ease of use. In keeping with that theme, even when you need to customize RavenDB, it makes it relatively easy to do.

So, let's talk about customizing your Lucene.Net analyzer in RavenDB!

Available Analyzers

RavenDB comes equipped with all of the analyzers that are built into Lucene.Net. For the vast majority of use cases, these will do the job! Here are some examples:

  • "The fox jumped over the lazy dogs, Bob@hotmail.com 123432."
  • StandardAnalyzer, which is Lucene's default, will produce the following tokens:
    [fox] [jumped] [over] [lazy] [dog] [bob@hotmail.com] [123432]
  • SimpleAnalyzer will tokenize on all non-alpha characters, and will make all the tokens lowercase:
    [the] [fox] [jumped] [over] [the] [lazy] [dogs] [bob] [hotmail] [com]
  • WhitespaceAnalyzer will just tokenize on white spaces:
    [The] [fox] [jumped] [over] [the] [lazy] [dogs,] [Bob@hotmail.com]
    [123432.]

In order to resolve an issue with indexing file names (details below), I found myself in need of an Alphanumeric analyzer. This analyzer would be similar to the SimpleAnalyzer, but would still respect numeric values.

  • AlphanumericAnalyzer will tokenize on the .NET framework's Char.IsLetterOrDigit:
    [fox] [jumped] [over] [lazy] [dogs] [bob] [hotmail] [com] [123432]

Lucene.Net's base classes made this pretty easy to build...

How to Implement a Custom Analyzer

Grab all the code and more from GitHub:

Raven.Extensions.AlphanumericAnalyzer on GitHub

A Lucene analyzer is made of two basic parts: 1) a tokenizer, and 2) a series of filters. The tokenizer does the lion's share of the work by splitting the input apart, then the filters run in succession, making additional tweaks to the tokenized output.

To create the Alphanumeric Analyzer we need only create two classes: an analyzer and a tokenizer. After that, the analyzer can reuse the existing LowerCaseFilter and StopFilter classes.

AlphanumericAnalyzer

public sealed class AlphanumericAnalyzer : Analyzer
{
    private readonly bool _enableStopPositionIncrements;
    private readonly ISet<string> _stopSet;
 
    public AlphanumericAnalyzer(Version matchVersion, ISet<string> stopWords)
    {
        _enableStopPositionIncrements = StopFilter
            .GetEnablePositionIncrementsVersionDefault(matchVersion);
        _stopSet = stopWords;
    }
 
    public override TokenStream TokenStream(String fieldName, TextReader reader)
    {
        // Tokenize on non-alphanumeric characters, lowercase the tokens,
        // and then strip out any stop words.
        TokenStream tokenStream = new AlphanumericTokenizer(reader);
        tokenStream = new LowerCaseFilter(tokenStream);
        tokenStream = new StopFilter(
            _enableStopPositionIncrements, 
            tokenStream, 
            _stopSet);
 
        return tokenStream;
    }
}

AlphanumericTokenizer

public class AlphanumericTokenizer : CharTokenizer
{
    public AlphanumericTokenizer(TextReader input)
        : base(input)
    {
    }
 
    protected override bool IsTokenChar(char c)
    {
        return Char.IsLetterOrDigit(c);
    }
}

How to Install Plugins in RavenDB

Installing a custom plugin into RavenDB is unbelievably easy. Just compile your assembly, and then drop it into the Plugins folder at the root of your RavenDB server. You may then reference the analyzers in your indexes by using their fully qualified names.
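
For example, mirroring the document and index from the testing post above, an index can reference the analyzer by name once the assembly is in the Plugins folder:

using System.Linq;
using Raven.Client.Indexes;
 
public class Docs_ByText : AbstractIndexCreationTask<Doc>
{
    public Docs_ByText()
    {
        Map = docs => from doc in docs
                      select new { doc.Text };
 
        // RavenDB resolves the analyzer from the Plugins folder by name.
        Analyzers.Add(d => d.Text, "Raven.Extensions.AlphanumericAnalyzer");
    }
}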

Again, you can grab all of the code and more over on GitHub:

Raven.Extensions.AlphanumericAnalyzer on GitHub


Enjoy,
Tom

Friday, April 26, 2013

String Extensions: Split Qualified

Splitting on a string is easy.
Respecting qualified (quoted) strings can be hard.
Identifying escaped characters in qualified strings is very tricky.
Splitting on a qualified string that takes escape characters into account is really difficult!

Unit Tests

[Theory]
[InlineData(null,                   new string[0])]
[InlineData("",                     new string[0])]
[InlineData("hello world",          new[] { "hello", "world" })]
[InlineData("hello   world",        new[] { "hello", "world" })]
[InlineData("\"hello world\"",      new[] { "\"hello world\"" })]
[InlineData("\"hello  world\"",     new[] { "\"hello  world\"" })]
[InlineData("hello \"goodnight moon\" world", new[]
{
    "hello", 
    "\"goodnight moon\"", 
    "world", 
})]
[InlineData("hello \"goodnight \\\" moon\" world", new[]
{
    "hello", 
    "\"goodnight \\\" moon\"", 
    "world", 
})]
[InlineData("hello \"goodnight \\\\\" moon\" world", new[]
{
    "hello", 
    "\"goodnight \\\\\"", 
    "moon\"", 
    "world", 
})]
public void SplitQualified(string input, IList<string> expected)
{
    var actual = input
        .SplitQualified(' ', '"')
        .ToList();
 
    Assert.Equal(expected.Count, actual.Count);
 
    for (var i = 0; i < actual.Count; i++)
        Assert.Equal(expected[i], actual[i]);
}

String Extension Methods

public static IEnumerable<string> SplitQualified(
    this string input, 
    char separator, 
    char qualifier, 
    StringSplitOptions options = StringSplitOptions.RemoveEmptyEntries, 
    char escape = '\\')
{
    if (String.IsNullOrWhiteSpace(input))
        return new string[0];
 
    var results = SplitQualified(input, separator, qualifier, escape);
 
    return options == StringSplitOptions.None
        ? results
        : results.Where(r => !String.IsNullOrWhiteSpace(r));
}
 
private static IEnumerable<string> SplitQualified(
    string input, 
    char separator, 
    char qualifier, 
    char escape)
{
    var separatorIndexes = input
        .IndexesOf(separator)
        .ToList();
 
    var qualifierIndexes = input
        .IndexesOf(qualifier)
        .ToList();
 
    // Remove Escaped Qualifiers
    for (var i = 0; i < qualifierIndexes.Count; i++)
    {
        var qualifierIndex = qualifierIndexes[i];
        if (qualifierIndex == 0)
            continue;
 
        if (input[qualifierIndex - 1] != escape)
            continue;
 
        // Watch out for a series of escaped escape characters.
        var escapeResult = false;
        for (var j = 2; qualifierIndex - j > 0; j++)
        {
            if (input[qualifierIndex - j] == escape)
                continue;
 
            escapeResult = j % 2 == 1;
            break;
        }
 
        if (qualifierIndex > 1 && escapeResult)
            continue;
 
        qualifierIndexes.RemoveAt(i);
        i--;
    }
 
    // Remove Qualified Separators
    if (qualifierIndexes.Count > 1)
        for (var i = 0; i < separatorIndexes.Count; i++)
        {
            var separatorIndex = separatorIndexes[i];
 
            for (var j = 0; j < qualifierIndexes.Count - 1; j += 2)
            {
                if (separatorIndex <= qualifierIndexes[j])
                    continue;
 
                if (separatorIndex >= qualifierIndexes[j + 1])
                    continue;
 
                separatorIndexes.RemoveAt(i);
                i--;
            }
        }
 
    // Split String On Separators
    var previousSeparatorIndex = 0;
    foreach (var separatorIndex in separatorIndexes)
    {
        var startIndex = previousSeparatorIndex == 0
            ? previousSeparatorIndex
            : previousSeparatorIndex + 1;
 
        var endIndex = separatorIndex == input.Length - 1
            || previousSeparatorIndex == 0
            ? separatorIndex - previousSeparatorIndex
            : separatorIndex - previousSeparatorIndex - 1;
 
        yield return input.Substring(startIndex, endIndex);
 
        previousSeparatorIndex = separatorIndex;
    }
 
    if (previousSeparatorIndex == 0)
        yield return input;
    else
        yield return input.Substring(previousSeparatorIndex + 1);
}
 
public static IEnumerable<int> IndexesOf(
    this string input, 
    char value)
{
    if (!String.IsNullOrWhiteSpace(input))
    {
        var index = -1;
        do
        {
            index++;
            index = input.IndexOf(value, index);
 
            if (index > -1)
                yield return index;
            else
                break;
        }
        while (index < input.Length);
    }
}

Enjoy,
Tom

Saturday, April 13, 2013

Report Unhandled Errors from JavaScript

Logging and aggregating error reports is one of the most important things you can do when building software: 80% of customer issues can be solved by fixing 20% of the top-reported bugs.

Almost all websites have at least some form of error logging on their servers, but what about the client side of those websites? People have a tendency to brush over best practices for client side web development because "it's just some scripts." That, is, WRONG! Your JavaScript is your client application; it is how users experience your website, and as such it needs the same attention and maintenance as any other rich desktop application.

So then, how do you actually know when your users are experiencing errors in their browser? If you are like the vast majority of websites out there...

You don't know about JavaScript errors, and it's time to fix that!

window.onerror

Browsers do offer a way to get notified of all unhandled exceptions, that is the window.onerror event handler. You can wire a listener up to this global event handler and get back three parameters: the error message, the URL of the file in which the script broke, and the line number where the exception was thrown.

window.onerror = function myErrorHandler(errorMsg, url, lineNumber) {
  // TODO: Something with this exception!
  // Just let default handler run.
  return false;
}

StackTrace.js

JavaScript can throw exceptions like any other language; browser debugging tools often show you a full stack trace for unhandled exceptions, but gathering that information programmatically is a bit more tricky. To learn a bit more about the JavaScript language and how to gather this information yourself, I suggest taking a look at this article by Helen Emerson. However, in practice I would strongly suggest you use a more robust tool...

StackTrace.js is a very powerful library that will build a fully detailed stack trace from an exception object. It has a simple API and cross browser support, it handles fringe cases, and it is very lightweight and unobtrusive to your other JS libraries.

try {
    // error producing code
} catch(error) {
   // Returns stacktrace from error!
   var stackTrace = printStackTrace({e: error});
}

Two Big Problems

  1. The window.onerror callback does not contain the actual error object.

This is a big problem because without the error object you cannot rebuild the stack trace. The error message is always useful, but file names and line numbers will be completely useless once you have minified your code in production. Currently, the only way you can bring additional information up to the onerror callback is to try catch any exceptions that you can and store the error object in a closure or global variable.

  2. If you globally try catch event handlers, it will be harder to use a debugger.

It would not be ideal to wrap every single piece of code that you write in an individual try catch block, and if you try to wrap your generic event handling methods in try catches then those catch blocks will interrupt your debugger when you are working with code in development.

Currently my suggestion is to go with the latter option, but only deploy those interceptors with your minified or production code.

jQuery Solution

This global error handling implementation for jQuery and ASP.NET MVC is only 91 lines of JavaScript and 62 lines of C#.

Download JavaScriptErrorReporter from GitHub

To get as much information as possible, you need to wire up to three things:
(Again, I suggest that you only include this when your code is minified!)

  1. window.onerror
  2. $.fn.ready
  3. $.event.dispatch

Here is the meat of those wireups:

var lastStackTrace,
    reportUrl = null,
    prevOnError = window.onerror,
    prevReady = $.fn.ready,
    prevDispatch = $.event.dispatch;
 
// Replace the global methods with our wrappers.
window.onerror = onError;
$.fn.ready = readyHook;
$.event.dispatch = dispatchHook;
 
function onError(error, url, line) {
    var result = false;
    try {
        // If there was a previous onError handler, fire it.
        if (typeof prevOnError == 'function') {
            result = prevOnError(error, url, line);
        }
        // If the report URL is not loaded, load it.
        if (reportUrl === null) {
            reportUrl = $(document.body).attr('data-report-url') || false;
        }
        // If there is a report URL, send the stack trace there.
        if (reportUrl !== false) {
            var stackTrace = getStackTrace(error, url, line, lastStackTrace);
            report(error, stackTrace);
        }
    } catch (e) {
        // Something went wrong, log it.
        if (console && console.log) {
            console.log(e);
        }
    } finally {
        // Clear the wrapped stack so it does not get reused.
        lastStackTrace = null;
    }
    return result;
}
 
function readyHook(fn) {
    // Call the original ready method, but with our wrapped interceptor.
    return prevReady.call(this, fnHook);
 
    function fnHook() {
        try {
            fn.apply(this, arguments);
        } catch (e) {
            lastStackTrace = printStackTrace({ e: e });
            throw e;
        }
    }
}
 
function dispatchHook() {
    // Call the original dispatch method.
    try {
        prevDispatch.apply(this, arguments);
    } catch (e) {
        lastStackTrace = printStackTrace({ e: e });
        throw e;
    }
}

Identifying Duplicate Errors

One last thing to mention is that when your stack trace arrives on the server it will contain file names and line numbers. The inconsistency of these numbers will make it difficult to identify duplicate errors. I suggest that you "clean" the stack traces by removing this extra information when trying to create a unique error hash.

private static readonly Regex LineCleaner
    = new Regex(@"\([^\)]+\)$", RegexOptions.Compiled);
 
private int GetUniqueHash(string[] stackTrace)
{
    var sb = new StringBuilder();
 
    foreach (var stackLine in stackTrace)
    {
        var cleanLine = LineCleaner
            .Replace(stackLine, String.Empty)
            .Trim();
 
        if (!String.IsNullOrWhiteSpace(cleanLine))
            sb.AppendLine(cleanLine);
    }
 
    return sb
        .ToString()
        .ToLowerInvariant()
        .GetHashCode();
}

Integration Steps

This article was meant to be more informational than tutorial; but if you are interested in trying to apply this to your site, here are the steps that you would need to take:

  1. Download JavaScriptErrorReporter from GitHub.
  2. Include StackTrace.js as a resource in your website.
  3. Include ErrorReporter.js as a resource in your website.
    • Again, to prevent it from interfering with your JavaScript debugger, I suggest only including this resource when your scripts are being minified.
  4. Add a report error action to an appropriate controller. (Use the ReportError action on the HomeController as an example.)
  5. Add a "data-report-url" attribute with the fully qualified path to your report error action to the body tag of your pages.
  6. Log any errors that your site reports!

Enjoy,
Tom

Thursday, March 28, 2013

Run Multiple Tasks in a Windows Service

ServiceBase.Run(ServiceBase[] services) ...that sure made me think you could run multiple implementations of ServiceBase in a single Windows Service; but that is not how it works.

One Service, Multiple Tasks

I had a very simple use case: I needed to run multiple tasks in a single Windows Service. For example, one task to poll some email notifications, and one task to listen to HTTP requests that might trigger those same notifications.

I like the basic setup of OnStart, Run, and OnStop, but after authoring several Windows Services I got tired of writing the same code over and over again. Thus I created the WindowsServiceTasks library. Now whenever I need to run another task in a service, I just implement the WindowsServiceTaskBase or WindowsServiceLoopBase class, and my code is all ready to go!

WindowsServiceTasks

Run multiple tasks in a single Windows Service, and let the base library do all of the setup and tear down work for you. The WindowsServiceTasks NuGet package is simple, flexible, and extremely lightweight.

Example

public static class Program
{
    public static void Main(params string[] args)
    {
        var logger = new Logger("Demo.log");
        var emailTask = new EmailServiceTask(logger);
        var failTask = new FailServiceTask(logger);
 
        var service = new Service1(logger, emailTask, failTask);
        service.Start(args);
    }
}
public class EmailServiceTask : WindowsServiceLoopBase
{
    private readonly ILogger _logger;
 
    public EmailServiceTask(ILogger logger)
    {
        _logger = logger;
    }
 
    protected override int LoopMilliseconds
    {
        get { return 2000; }
    }
 
    protected override void HandleException(Exception exception)
    {
        _logger.Log(exception);
    }
 
    protected override void RunLoop()
    {
        // TODO Send Email!
        _logger.Log("EmailServiceTask.RunLoop - Sending an email");
    }
 
    public override void OnStart(string[] args)
    {
        _logger.Log("EmailServiceTask.OnStart");
    }
 
    public override void OnStop()
    {
        _logger.Log("EmailServiceTask.OnStop");
    }
 
    protected override void DisposeResources()
    {
    }
}

Enjoy,
Tom
