Saturday, December 31, 2016

2016 Retrospective

.NET

It has been a great year for .NET development! Visual Studio Community is fully featured, .NET Core has arrived, and everything is open source. Regarding .NET Core, I am really enjoying working with it, and I simply cannot wait to get deeper into the Linux world.

Blog

I finally had to downgrade from three posts per month to only two posts per month. Unfortunately, writing quality blog posts takes time, and that was not something that I had in great abundance this year. Fortunately, I do think that the majority of posts this year were very high quality, especially when you look at the most recent ones. I have been working a lot with performance optimization, and have really been enjoying profiling and digging deep into code to see exactly what it is doing and why.

Tact.NET

I am very happy to have launched Tact.NET this year! I have always really enjoyed creating frameworks, so rather than continue to write one off posts on this blog I decided to put all of my extracurricular work together under one repository. I am really enjoying making Tact, and I have every intention of continuing to grow it.

QQ Cast

Wow, the QQ Cast is back! We took a hiatus for the second half of 2015, but in 2016 we recorded 43 podcasts. Next week is actually going to be our 100th episode, so be sure to check it out!

Happy new year,
Tom

Friday, December 30, 2016

Object Pooling and Memory Streams

The theme of this year, which I will talk about in my 2016 retrospective, has been optimization. It's been a fun journey, and I have really enjoyed getting down and dirty with profiling garbage collection, using spin waits, and aggressive inlining.

I want to end this year on a fun note: object pooling.

A great use case for this would be making HTTP requests with serialized objects. When you serialize an object, and then place it in an HttpContent object, you are probably creating several buffers (byte arrays) each time. For example, if you are using Newtonsoft to serialize an object and then adding that to a string content object for your request, then you are probably using more memory than you need. But that is getting ahead of ourselves...

Come back next week for a blog post about efficient JSON Content serialization!

For now, let's focus on building an object pool. Really all that we need is a preallocated array to store unused objects in, and then a super efficient thread safe data structure to pool (get and set) those objects.
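
Here is a minimal sketch of that idea, just to make it concrete. This is not the actual Tact.NET ObjectPool (which is built around the preallocated array described above); it leans on ConcurrentBag for brevity, so treat it as an illustration only.

using System;
using System.Collections.Concurrent;

// A minimal sketch only; the real Tact.NET ObjectPool is built around the
// preallocated array described above, while this version uses ConcurrentBag.
public sealed class SimpleObjectPool<T> : IDisposable where T : class
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;
    private readonly int _maxSize;

    public SimpleObjectPool(int maxSize, Func<T> factory)
    {
        _maxSize = maxSize;
        _factory = factory;
    }

    // Reuse a pooled object when one is available; otherwise create a new one.
    public T Acquire()
    {
        T item;
        return _items.TryTake(out item) ? item : _factory();
    }

    // Return an object for future reuse; anything beyond the max is dropped.
    public void Release(T item)
    {
        if (_items.Count < _maxSize)
            _items.Add(item);
    }

    // Dispose anything left in the pool that is itself disposable.
    public void Dispose()
    {
        T item;
        while (_items.TryTake(out item))
            (item as IDisposable)?.Dispose();
    }
}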

How does pooling memory streams help us?

When you create a MemoryStream, it creates a byte array. As that byte array grows, the memory stream resizes it by allocating a new larger array and then copying your bytes into it. This is inefficient not only because it creates new objects and throws the old ones away, but also because it has to do the leg work of copying the content each time it resizes.

How can we reuse memory streams? Just set the length to zero!

Internally this just resets the stream's length and position while keeping the internal buffer, so the data structures are preserved for future use. Thus, by putting memory streams into an object pool, we can drastically increase our efficiency.

Here is a demo of using the Tact.NET ObjectPool to pool MemoryStreams...

[Fact]
public void MemoryStreamPoolDemo()
{
    using (var pool = new ObjectPool<MemoryStream>(100, () => new MemoryStream()))
    {
        var memoryStream1 = pool.Acquire();
 
        // A brand new MemoryStream has not allocated an internal buffer yet.
        memoryStream1.SetLength(0);
        Assert.Equal(0, memoryStream1.Capacity);
 
        memoryStream1.Write(new byte[] {1, 0, 1, 0, 1}, 0, 5);
 
        var array1 = memoryStream1.ToArray();
        Assert.Equal(5, array1.Length);
        Assert.Equal(1, array1.First());
        Assert.Equal(1, array1.Last());
 
        pool.Release(memoryStream1);
 
        var memoryStream2 = pool.Acquire();
        Assert.Same(memoryStream1, memoryStream2);
 
        // SetLength(0) resets the length, but the 256 byte buffer allocated by
        // the first write is preserved, so the stream can be reused without
        // reallocating.
        memoryStream2.SetLength(0);
        Assert.Equal(256, memoryStream2.Capacity);
 
        memoryStream2.Write(new byte[] { 0, 1, 0 }, 0, 3);
 
        var array2 = memoryStream2.ToArray();
        Assert.Equal(3, array2.Length);
        Assert.Equal(0, array2.First());
        Assert.Equal(0, array2.Last());
    }
}

Enjoy,
Tom

Sunday, November 27, 2016

The Performance Cost of Boxing in .NET

I recently had to do some performance optimizations against a sorted dictionary that yielded some interesting results...

Background: I am used to using Tuples a lot, simply because they are easy to use and normally quite efficient. Please remember that Tuples were changed from structs to classes back in .NET 4.0.

Problem: A struct decreased performance!

I had a SortedDictionary that was using a Tuple as a key, so I thought "hey, I'll just change that tuple to a struct and reduce the memory usage." ...bad news, that made performance WORSE!

Why would using a struct make performance worse? It's actually quite simple and obvious when you think about it: the default comparison was repeatedly boxing the struct keys, thus allocating more memory on the heap and triggering more garbage collections.

Solution: Use a struct with an IComparer.

I then created a custom struct and used that; it was much faster, but it was still causing boxing because of the non-generic IComparable interface. So finally I added a generic IComparer and passed that into my dictionary constructor; my dictionary then ran fast and efficiently, causing a total of ZERO garbage collections!

See for yourself:

The Moral of the Story

Try to be aware of what default implementations are doing, and always remember that boxing to object can add up fast. Also, pay attention to the Visual Studio Diagnostics Tools window; it can be very informative!

Here is how many lines of code it took to achieve a 5x performance increase:

private struct MyStruct
{
    public MyStruct(int i, string s) { I = i; S = s; }
    public readonly int I;
    public readonly string S;
}
 
private class MyStructComparer : IComparer<MyStruct>
{
    public int Compare(MyStruct x, MyStruct y)
    {
        var c = x.I.CompareTo(y.I);
        return c != 0 ? c : StringComparer.Ordinal.Compare(x.S, y.S);
    }
}
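
To wire it up, the comparer just gets passed to the dictionary constructor. Here is a small usage sketch (the string values are placeholders for illustration):

// Passing the generic comparer to the constructor keeps key comparisons on the
// IComparer<MyStruct> path, so the struct keys are never boxed.
var dictionary = new SortedDictionary<MyStruct, string>(new MyStructComparer());
dictionary[new MyStruct(1, "a")] = "first";
dictionary[new MyStruct(2, "b")] = "second";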

Test Program

I have written some detailed comments in the Main function about what each test is doing and how it will affect performance. Let's take a look...

Saturday, November 26, 2016

10x faster than Delegate.DynamicInvoke

This is a follow-up to my previous blog posts, Optimizing Dynamic Method Invokes in .NET, and Dynamically Invoke Methods Quickly, with InvokeHelpers.EfficientInvoke. Basically, I have re-implemented this for Tact.NET in a way that makes it smaller, faster, and compatible with the .NET Standard.

So, how much faster is this new way of doing things? EfficientInvoker.Invoke is over 10x faster than Delegate.DynamicInvoke, and 10x faster than MethodInfo.Invoke.

Check out the source on GitHub:

Simple Explanation

Here is an example of a method and a class that we might want to invoke dynamically...

public class Tester
{
    public bool AreEqual(int a, int b)
    {
        return a == b;
    }
}

...and then here is the code that the EfficientInvoker will generate at runtime to call that method:

public static object GeneratedFunction(object target, object[] args)
{
    return (object)((Tester)target).AreEqual((int)args[0], (int)args[1]);
}

See, it's simple!
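
If you are curious how a delegate like that can be produced at runtime, here is a minimal sketch using expression trees. This is only an illustration of the general technique, not the actual Tact.NET EfficientInvoker code, and it assumes an instance method with a return value.

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public static class InvokerSketch
{
    // Compiles a Func<object, object[], object> that casts the target and each
    // argument and then calls the method, much like the generated function above.
    public static Func<object, object[], object> Create(MethodInfo method)
    {
        var target = Expression.Parameter(typeof(object), "target");
        var args = Expression.Parameter(typeof(object[]), "args");

        var call = Expression.Call(
            Expression.Convert(target, method.DeclaringType),
            method,
            method.GetParameters().Select((p, i) =>
                (Expression)Expression.Convert(
                    Expression.ArrayIndex(args, Expression.Constant(i)),
                    p.ParameterType)));

        // Box the return value so everything comes back as object.
        var body = Expression.Convert(call, typeof(object));
        return Expression.Lambda<Func<object, object[], object>>(body, target, args).Compile();
    }
}

// Usage:
// var invoke = InvokerSketch.Create(typeof(Tester).GetMethod("AreEqual"));
// var result = invoke(new Tester(), new object[] { 1, 1 });   // true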

Monday, October 31, 2016

IOC Container for Tact.NET

Autofac now supports .NET Core, but other IOC frameworks such as Ninject and Unity have yet to port over. While I was looking into IOC frameworks for .NET Core, I had a bad idea... so, just for fun, I wrote my own Container in Tact.NET!

So, why did I do this? Honestly, it was just a fun academic exercise! I do think that this is a pretty good container, and I intend to use it in some of my personal projects. Would I recommend that YOU use this? Probably not yet, but I would invite you to take a look and offer feedback!

Container Design

I have broken the container into two interfaces: IContainer for registration, and IResolver for consumption. There is an abstract class, ContainerBase, that can be inherited to easily create a container that matches other frameworks; for example, I intend to create an IDependencyResolver for ASP.NET.

You may notice that the IContainer does not have any lifetime management methods; that is because ALL of them are implemented as extension methods...
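
To illustrate that pattern (with made-up names, not the real Tact.NET interfaces), a lifetime method built purely as an extension might look something like this:

using System;

// Hypothetical shapes for illustration only; the actual IContainer in Tact.NET
// has a different surface. The point is that the container itself only stores
// factories, and lifetime behavior is layered on with extension methods.
public interface IMiniContainer
{
    void Register(Type type, Func<object> factory);
}

public static class MiniContainerExtensions
{
    // Singleton lifetime: the factory runs once and the instance is cached.
    public static void RegisterSingleton<T>(this IMiniContainer container, Func<T> factory)
        where T : class
    {
        var lazy = new Lazy<T>(factory);
        container.Register(typeof(T), () => lazy.Value);
    }

    // Per-resolve lifetime: a new instance is created every time.
    public static void RegisterPerResolve<T>(this IMiniContainer container, Func<T> factory)
        where T : class
    {
        container.Register(typeof(T), () => factory());
    }
}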

Registrations

To create a new lifetime manager you have to implement the IRegistration interface. There are already implementations for quite a few:

Resolution Handlers

The last part of the container is the IResolutionHandler interface. These resolution handlers are used when an exact registration match is not found during dependency resolution. For example, the EnumerableResolutionHandler will use ResolveAll to get a collection of whatever type is being requested. As a very different example, the ThrowOnFailResolutionHandler will cause an exception to be thrown when no match can be found.
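
As a rough sketch of that idea (again with made-up names, since the real IResolutionHandler signature may differ), a fallback handler boils down to something like this:

using System;

// Hypothetical interface for illustration: each handler gets a chance to
// satisfy a request that had no exact registration match.
public interface IMiniResolutionHandler
{
    bool TryResolve(Type requestedType, out object result);
}

// The end-of-the-line handler: if resolution gets this far, fail loudly.
public sealed class MiniThrowOnFailHandler : IMiniResolutionHandler
{
    public bool TryResolve(Type requestedType, out object result)
    {
        throw new InvalidOperationException(
            "No registration found for " + requestedType.FullName);
    }
}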

I think that is a pretty good start, but I am hoping that this will continue to grow with time.

Enjoy,
Tom

Introducing Tact.NET

I have decided to create a project to consolidate all of the utilities that I have authored over the years on this blog...

Tact.NET - A tactful collection of utilities for .NET development.

I believe that this will give me the opportunity to update old code more often, as well as develop new utilities faster. For example, there is already an ILog interface that optimizes (thank you, Jeremy) the Caller Information Extensions I wrote about back in May.

Everything that I add to this project will be built for the .NET Standard Library, and should support .NET Core on all platforms. Expect to see NuGet packages soon, and hopefully many more blog posts to come.

Enjoy,
Tom

Friday, September 30, 2016

Host HTTP and WebSockets on the Same Port in ASP.NET

How do you support WebSocket connections on the same port as your HTTP server in .NET? It turns out that this is not that hard, and you even have several choices to do so...

Option 1: TCP Proxy

You could use a TCP proxy port to divide traffic between the two targets. For example, all traffic would come in to port 80, but then HTTP traffic would be routed internally to 8001 and WS traffic to 8002. This is useful because it would allow you to use multiple technologies (in the example on their site, both NancyFX and Fleck) to host your web servers.

Option 2: SignalR

SignalR is great because of all the backwards compatibility that it supports for both the server and client. However, it is not the most lightweight framework. If you choose to use it, then it will absolutely support both HTTP and WS.

Option 3: Owin.WebSocket *My Favorite*

Owin supports WebSockets, and that is exactly what SignalR uses to host its WS connections. The awesome Mr. Bryce Godfrey extracted the Owin code from SignalR into a much smaller library called Owin.WebSocket.

The ONLY thing that I did not like about this implementation was that it uses inheritance to define endpoints, whereas I much prefer the ability to use delegates and lambdas. Because of this, I created Owin.WebSocket.Fleck, which allows you to use the Fleck API to map your WebSockets to an Owin context. A pull request is open to merge this into the main repository.
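
For reference, Fleck's delegate-and-lambda style looks roughly like the following. Plain Fleck is shown here only as an illustration of the API shape; it is not the Owin.WebSocket.Fleck mapping itself.

using System;
using Fleck;

// Plain Fleck usage: endpoints are defined with lambdas instead of inheritance.
public static class FleckStyleDemo
{
    public static void Main()
    {
        var server = new WebSocketServer("ws://0.0.0.0:8181");
        server.Start(socket =>
        {
            socket.OnOpen = () => Console.WriteLine("Open!");
            socket.OnClose = () => Console.WriteLine("Close!");
            socket.OnMessage = message => socket.Send(message);
        });

        Console.ReadLine();
    }
}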

Enjoy,
Tom
