Sunday, December 25, 2011

Happy Holidays

My apologies for the shortage of blog posts recently, but between the new job and some unannounced side projects I've been extremely busy. So here is some best of 2011 filler material...

  • Gizmo of the Year: Asus Transformer Prime
  • Console Game of the Year: Super Mario 3D Land
  • PC Game of the Year: Orcs Must Die
  • Movie of the Year: Troll Hunter
  • Programmer of the Year: Markus "Notch" Persson
  • Gift of the Year: FLYING SHARKS!

Happy Holidays,
Tom

Sunday, November 6, 2011

jQuery UI Modal Dialog disables scrolling in Chrome

Is Chrome becoming the new IE?

As much as I love jQuery, I still cannot escape the fact that jQuery UI leaves a lot to be desired. Yesterday I ran across an issue where the jQuery UI modal dialog acted inconsistently in different browsers. Normally opening a modal leaves the background page functionality unaltered, but in WebKit browsers (I ran into this while using Chrome) it disables the page scroll bars.

The Fix

Yes, this bug has already been reported. Yes, it is priority major. No, it won't be fixed anytime soon. For a feature as widely used as the Modal Dialog, I find that kinda sad.

However, thanks to Jesse Beach, there is a tiny little patch to fix this! Here is a slightly updated version of the fix:

(function($) {
  // In WebKit browsers, rebind the dialog overlay to focus/keyboard events only;
  // this keeps the modal overlay from disabling the page scroll bars.
  if ($.ui && $.ui.dialog && $.browser.webkit) {
    $.ui.dialog.overlay.events = $.map(['focus', 'keydown', 'keypress'], function(event) {
      return event + '.dialog-overlay';
    }).join(' ');
  }
}(jQuery));

Additional Resources

Hope that helps!
Tom

Thursday, November 3, 2011

Configuring MVC Routes in Web.config

ASP.NET MVC is even more configurable than you think!

Routes are registered in the Application_Start of an MVC application, but there is no reason that they have to be hard-coded in the Global.asax. By simply reading routes out of the Web.config, you provide a way to control routing without having to redeploy code, allowing you to enable or disable website functionality on the fly.

I can't take credit for this idea; my implementation is an enhancement of Fredrik Normén's MvcRouteHandler that adds a few things that were missing:

  • Optional Parameters
  • Typed Constraints
  • Data Tokens
  • An MVC3 Library

Download MvcRouteConfig.zip for the project and a sample application.

Example Global.asax

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
 
    var routeConfigManager = new RouteManager();
    routeConfigManager.RegisterRoutes(routes);
}

Example Web.config

<configuration>
  <configSections>
    <section name="routeTable" type="MvcRouteConfig.RouteSection" />
  </configSections>
  <routeTable>
    <routes>
      <add name="Default" url="{controller}/{action}/{id}">
        <defaults controller="Home" action="Index" id="Optional" />
        <constraints>
          <add name="custom"
              type="MvcApplication.CustomConstraint, MvcApplication">
            <params value="Hello world!" />
          </add>
        </constraints>
      </add>
    </routes>
  </routeTable>
</configuration>

Thanks Fredrik!

~Tom

Update 2/16/2013 - I have added this source to GitHub and created a NuGet Package.


Monday, October 17, 2011

Using the InternetExplorerDriver for WebDriver

Are you getting this error when trying to use the InternetExplorerDriver for WebDriver (Selenium 2.0)?

System.InvalidOperationException

"Unexpected error launching Internet Explorer. Protected Mode must be set to the same value (enabled or disabled) for all zones. (NoSuchDriver)"

Don't worry, it's easier to fix than it sounds: simply open the Internet Options of Internet Explorer, select the Security tab, and set all four zones to the same Protected Mode value (either all enabled or all disabled). That's it!

Sample Code

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;
 
namespace WebDriverTests
{
    public abstract class WebDriverTestBase
    {
        public IWebDriver WebDriver { get; set; }
 
        [TestFixtureSetUp]
        public void TestFixtureSetUp()
        {
            WebDriver = new InternetExplorerDriver();
        }
 
        [TestFixtureTearDown]
        public void TestFixtureTearDown()
        {
            if (WebDriver != null)
            {
                WebDriver.Close();
                WebDriver.Dispose();
            }
        }
 
        [SetUp]
        public void SetUp()
        {
            WebDriver.Url = "about:blank";
        }
    }
 
    [TestFixture]
    public class GoogleTests : WebDriverTestBase
    {
        [Test]
        public void SearchForTom()
        {
            WebDriver.Url = "http://www.google.com/";
 
            IWebElement searchBox = WebDriver
                .FindElement(By.Id("lst-ib"));
 
            searchBox.SendKeys("Tom DuPont");
            searchBox.Submit();
 
            IWebElement firstResult = WebDriver
                .FindElement(By.CssSelector("#search cite"));
 
            Assert.AreEqual("www.tomdupont.net/", firstResult.Text);
        }
    }
}

Enjoy,
Tom

Tuesday, September 27, 2011

Unity.MVC3 and Disposing Singletons

Recently I was having a conversation with a friend where I was able to articulate my thoughts on dependency injection more eloquently than I ever had before. In fact, I may have phrased this better than any nerd in the history of programming. That's a bold statement, so you be the judge:

"I didn't just swallow the inversion of control kool-aid, I went on the I.V. drip."

Unity.Mvc3

I recently started working on a new project where they wanted to use Microsoft's Unity for their DI framework. It's not the exact flavor of IoC container I would have chosen, but I am all for best practices regardless of specific implementation.

Important side note about MVC3: There is a new way to wire up for dependency injection. MVC now natively exposes a System.Web.Mvc.DependencyResolver that you can set to use your container. This means that you no longer need to create a custom MVC controller factory to inject your dependencies.

While researching how best to implement Unity in my MVC 3 project, I came across Unity.Mvc3 by DevTrends. (Here is a little overview.) It's a great project for all the right reasons:

  1. It's up to date.
  2. It's lightweight.
  3. It solves pre-existing issues.

Note that last point. There are a lot of wrappers out there, and a lot of simple copy-paste code snippets, but I really appreciate when someone goes out of their way to do more than just write their own version of something. To be specific, unlike many of the alternatives, Unity.Mvc3 works with IDisposable dependencies, and that just happens to be a requirement of the project that I am working on.

Also, DevTrends gets additional bonus points for deploying their solution as a NuGet package!

Disposing Singletons with Application_End

I did find two small problems with Unity.Mvc3:

First and foremost, singleton dependencies (anything registered with ContainerControlledLifetimeManager) were not being disposed. This was easy enough to fix, however: I just wired up a dispose call to my Bootstrapper in the MvcApplication's Application_End method.

Second, the NuGet package's initialize method was spelled wrong. I actually view this as a good thing, because it means that I am not the only person who makes spelling errors in code that they open source! HA HA HA!

Bootstrapper Example

public static class Bootstrapper
{
    private static IUnityContainer _container;
 
    public static void Initialize()
    {
        _container = BuildUnityContainer();
        var resolver = new UnityDependencyResolver(_container);
        DependencyResolver.SetResolver(resolver);
    }
 
    public static void Dispose()
    {
        if (_container != null)
            _container.Dispose();
    }
 
    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();
        // TODO Register Types Here
        container.RegisterControllers();
        return container;
    }
}
 
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
 
        Bootstrapper.Initialize();
    }
 
    protected void Application_End()
    {
        Bootstrapper.Dispose();
    }
}

Enjoy,
Tom

Saturday, September 17, 2011

Object Oriented JavaScript Tutorial

Over the past month I spent a lot of time helping teach a good friend of mine how to write advanced JavaScript. Although he had worked with JavaScript many times before, he did not know a lot of the simple things that make JavaScript the crazy dynamic rich development experience that it is. Together we built up this series of examples to help teach everything from language basics to object orientation.

Why do we need yet another JavaScript tutorial?

JavaScript is an interesting language. It is loosely typed, the object orientation has been hacked in over time, and there are at least 10 ways to do anything. The most important thing you can do when writing JavaScript is to choose a set of conventions and stick with them. Unfortunately with so many different ways to do things out there, it's hard to find tutorials that are consistent in their conventions.

Just like developing in any other programming language, understanding the fundamentals is key to building up an advanced application. This tutorial starts out extremely simple, and incrementally gets more advanced. It shows alternative implementations for different tasks, and tries to explain why I prefer the ones that I do. Meanwhile it always follows a consistent set of conventions.

Let's get to the tutorial!

This tutorial comes in 16 lessons. They are designed to be debugged with Firebug for Firefox. If you don't want to actually run the scripts then you can always just read them by downloading the small htm files below and opening them in notepad (or whichever text editor you prefer).

  1. JavaScript Types
  2. Object Properties
  3. Object Declarations
  4. Objects vs Arrays
  5. Object Pointers and Cloning
  6. Equals Operators
  7. Closures
  8. Advanced Closures
  9. Defining Classes with Closures
  10. Defining Classes with Prototype
  11. Function Scope
  12. Creating Delegates
  13. Class Inheritance
  14. Advanced Class Inheritance
  15. More Advanced Class Inheritance
  16. Extending jQuery
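As a small taste of lessons 9 and 10, here is a sketch (with made-up names) contrasting the two most common ways to define a class:

```javascript
// Closure style (lesson 9): count is truly private, but every instance
// carries its own copy of the increment function.
function CounterClosure(start) {
  var count = start;
  this.increment = function () {
    return ++count;
  };
}

// Prototype style (lesson 10): one increment function is shared by all
// instances, but count is a public property.
function CounterProto(start) {
  this.count = start;
}
CounterProto.prototype.increment = function () {
  return ++this.count;
};

var a = new CounterClosure(1);
var b = new CounterProto(1);
a.increment(); // returns 2; a.count stays undefined (private)
b.increment(); // returns 2; b.count is now 2 (public)
```

Neither style is "the" right answer, which is exactly why picking one set of conventions and sticking with them matters so much.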

Additional Resources

If all that wasn't enough or it just got you thirsty for more, here are some additional resources to help start you down the path of becoming a ninja with JavaScript. My recommendation is that whenever you are writing JavaScript you should have w3schools open in a second browser window at all times.

Enjoy,
Tom

9/21 Update - Thanks to Zach Mayer (of System-Exception.com) for providing a great critique of my tutorial. He pointed out several typos and small bugs in lessons 4, 6, 7, 9, 10, 12, 14, 15.

Monday, August 22, 2011

Why best practices?

I recently engaged in a fun discussion: Why should we (developers) follow best practices?

My first knee-jerk reaction was to say "because they are best practices!" Of course someone immediately pointed out to me what a terrible argument that was. To their point, I could call anything a best practice, but that does not necessarily make it a good thing.


Best Practices: All the cool kids (wearing fedoras) are doing it.

So yes, I agree that any particular best practice should be able to stand on its own merit. Of course this immediately brings up the next problem...

How do we train people to follow these best practices without having to prove out each and every one?

This is where we have to put our egos aside and be willing to defer to other people's opinions. When a Microsoft MVP or your team lead comes to you and says that you should do something a particular way, you probably want to consider what they have to say. They have credentials, and they have (hopefully) earned those titles and merit badges through years of experience, which would certainly imply that their opinion has value.

Am I saying that you should just defer to your senior developers / elders? No, absolutely not! Honesty goes both ways, so if something is wrong then you call them out on it! However, you should not feel the need to scrutinize every practice someone else puts forward just because you have not used it before. Again, if something is a good idea it often becomes self-evident once implemented.

If you are trying to promote a best practice, start by providing a simple example or two.

Why should we have a data access layer where all of our database code goes? There are several reasons. When a change gets made to the database (and changes WILL get made to the database), there is only one place where you have to go to update your data access logic. Putting all of your logic in one place ensures that people will not rewrite the same queries over and over again. The list goes on.
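As a tiny illustration of that point (the module name and schema here are made up), a data access layer gives every query exactly one home:

```javascript
// Hypothetical data access layer: every Users query lives in this one module,
// so when the database changes there is only one place to update.
function createUserData(db) {
  // db is whatever object actually executes SQL; injecting it also makes
  // this module easy to test with a stub.
  return {
    getById: function (id) {
      return db.query('SELECT * FROM Users WHERE Id = ?', [id]);
    },
    getByEmail: function (email) {
      return db.query('SELECT * FROM Users WHERE Email = ?', [email]);
    }
  };
}
```

Callers ask the module for data instead of rewriting the same queries over and over again.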

If you are the trainee, please do not fall back on using arguments from ignorance.

If you are a web developer and you are asking why you should use interfaces in your MVC models, then you probably have not worked on a large project where multiple controllers (each with different models) are all rendering the same user controls. Admittedly it may take a little bit more code to implement an interface, but the reuse that it provides more than makes up for that, and when we look at this in action the advantages become self-evident.

On a related note, why do so many developers not believe in training?

Even after making it big, professional sports players still train everyday. All of the best athletes in the world still have coaches.  The world's leading scientists still read new papers published by students. So what makes software engineers any different?

I encourage all developers to go out and read blogs, check out sample projects, research design patterns, attend their local user groups, and do anything to get exposed to new ideas! Remember that such things will often provide an immediate return in your every day work life. Testable code makes hunting bugs easier, it makes your integration phase shorter, it makes releases go smoother, and that makes your stress level go down!

Programming is fun, and adhering to best practices only helps make it more fun for everyone!

~Tom

Monday, August 15, 2011

Gathering Requirements for Open Source Projects

This past weekend was Dallas TechFest 2011. It was a great conference filled with great speakers and great attendees. Despite speaking at 9am both mornings (which I consider to be far too early in the morning for...well, anything), I had amazing turnouts for both presentations. In short, I had a blast, I got to hang out with great people, and I even learned a thing or two. But I digress.

For the abridged version, just stick to the big italic text. ;)

The Setup

With so many community leaders in one place at one time, a Dot Net User Group leader meetup was held on Friday night, and this meeting took the form of a fishbowl. Many questions were raised, most got answers, but of course others only wound up raising more questions. Something particularly fun happened when Devlin Liles raised the question, and I'm paraphrasing: "Why are so many SIG (special interest group) websites so crappy?"

We all kicked this around for a while, and the arguments seemed to boil down to the fact that everyone uses different systems to organize their groups, and there does not exist a universally accepted and easy process to integrate them. As the conversation wound down, I hopped up in one of the chairs and offered to help spearhead a project to write some type of new website that could help solve these problems.

Then something really really cool happened: a show of hands was called, and almost everyone in the room expressed an interest in contributing to such an open source website!

We very quickly decided to hold an open requirements gathering session the next day at Dallas TechFest (coincidentally to be held in the CodeSmith Hack Room). This is just another example of why I love being a member of the .NET community. It shows just how fun, interactive, and giving this particular community of geeks can be.

The Problem

Our story continues Saturday morning in the CodeSmith Hack room. Almost three dozen devs showed up to participate, meaning that our numbers had already grown since the previous night! We started out by making two lists on the room's whiteboard: "things it absolutely must do" and "things it absolutely must not do". The lists were drawn up, votes were cast, and then the drill-down debate began.

It took us an additional 45 minutes to realize that the mistake had already been made: We had all listed off similar requirements, but we all had different visions of what this website would actually do.

It took us a while, but we finally figured out that we were all using the same adjectives and verbs to describe different nouns. Finally we called a vote to decide whether we were talking about 1) a top-down approach where the website collects and aggregates data from other SIG sites, or 2) a bottom-up approach where this is just an open source SIG website that anyone can host themselves.

So, why did it take us so long to identify the problem? Also, how could this miscommunication have been avoided?

The Solution

Countless statistics and case studies have pointed to the requirements phase as the key to determining whether or not a project will succeed or fail. I view this as a modest reminder that no project is immune to this, especially not open source ones. I fear that developers, especially the rock stars who like to engage in open source projects, often forget this.

In my opinion, our particular situation could have been avoided if we had started by defining some basic use cases.

We started out by focusing on functional requirements, but depending on which permutation of these we selected, we would have come out with very different products. If we had started out by defining some use cases, that might have helped us determine how our users would interact with this website, and thus would have helped us better understand our functional requirements.

Examples

I am not saying that we should have broken out our UML Diagram tools. I am saying that by simply defining basic use cases we can help create a context in which to define functional requirements.

Here are some examples to help clarify this.

Objective: Create a website to help people find user groups.
Requirement 1: Must include user group information.
Requirement 2: Make events appear on a searchable calendar.
Requirement 3: Search for events by metroplex or geographical area.
Ambiguity: Is this a portal to other websites, or is it a hosting service for other websites?

Now, let's add two use cases (one for the end user, and one for the administrator) to clarify things.

Objective: Create a website to help people find user groups.
Requirement 1: Must include user group information.
Requirement 2: Make events appear on a searchable calendar.
Requirement 3: Search for events by metroplex or geographical area.
Use Case 1: Users may arrive at the website and see a list of upcoming events in their area.
Use Case 2: Administrators should be able to register their existing website with this portal.
Ambiguity: Not so much.

Anyway, I hope that helps! 

~Tom

Friday, July 29, 2011

Zen and the Art of Dependency Injection in MVC

Let me start off on a modest note: I am not an expert on dependency injection.
I am, however, certain of the value that it provides.

TDD

After having engaged in the same conversation time and time again, I have become convinced that there is indeed "one simple way to get any developer to write better code." That is because every developer, Junior or Senior, C# or Java, can always engage in more Test Driven Development with Dependency Injection. These are best practices that we can all agree on and that will always result in better and more maintainable code.

If you are already doing TDD with Dependency Injection, keep it up and help spread the word! If you are not, it's time to start. On the plus side, thanks to tools like NuGet, it has never been easier to get started with all of these new fun techniques. :)

Dependency Injection

Dependency Injection is a universally accepted best practice for a number of reasons, not the least of which is how easy it makes unit testing. You should be able to test one section of code without having to rely on the other 99% of your application. By injecting dependencies you are able to control what code is being tested with pinpoint precision.

A little ant can't move a lake, nor should it want to. However with a little help from a good surface tension framework, it can easily move a drop of water.

MVC

The MVC architecture provides you with a consistent entry point into your code: controller actions. All requests come in and get processed the same way, always passing through a consistent set of action filters and model binders. Right out of the box the MVC model binders are already injecting models straight into your controllers, so why not take it one more step forward and inject your services too?

My controller actions almost always need two things: 1) a model and 2) a service to process that model. So let the MVC model binder manage the RequestContext and inject your model, and let a DI framework manage the logic by injecting your service.

NuGet

To get started (after having installed NuGet) you need look no further than Tools -> Library Package Manager -> Manage NuGet Packages. If you search for Ninject, an MVC3-specific package will come up that can install itself straight into your MVC3 project and get you going in mere seconds.

Ninject is just one choice of Dependency Injection framework, and it has great documentation. If you would prefer something else then just pick your flavor of choice and keep on moving.

Example

Create your controllers, models and services like normal, and update your controller to take in a service dependency through its constructor.

public class CalculatorController : Controller
{
     public ICalculatorService CalculatorService {get; private set;}
     public CalculatorController(ICalculatorService calculatorService)
     {
         CalculatorService = calculatorService;
     }
}

Then all that you have left to do is create a module that binds the service type to its interface...

public class MvcExampleModule : NinjectModule
{
     public override void Load()
     {
         Bind<ICalculatorService>().To<CalculatorService>();
     }
}

...and load that in the static RegisterServices method.

public static class NinjectMVC3
{
     private static void RegisterServices(IKernel kernel)
     {
          var module = new MvcExampleModule();
          kernel.Load(module);
     }
}

That's it. That is all that you have to do to start using Dependency Injection.
Want proof? Download the sample application.

Enjoy!
Tom

Sunday, July 17, 2011

How JSONP Works

So recently I was having a conversation with someone about my JSONP update for the ExtJS library. We were talking about how I added error handling to their default implementation, and exactly what trick I had used to do that. However, we should probably start at the beginning...

What is JSONP, and how does it work?

JSONP is a standard (a hack really) that allows you to make AJAX requests across different domains. While this is an obvious security risk, there are also times when it is downright necessary.

Your page appends a script block to the document that points to the foreign domain. Because it is loaded as a script, the JSONP request must be a simple GET request that returns raw JavaScript.

So, what's the catch?

The return format for the JSONP request must be in the form of a call to a single global function. Once loaded, the script will execute and immediately call the global handler, which should know how to get the request data back to its caller. Also, depending on your browser, the script block may fire an onload (or, in the hacky IE world, an onreadystatechange) event to help get the data back to its proper location.
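Concretely, the exchange looks something like this (the URL and callback name are just for illustration):

```javascript
// 1) The page appends a script block pointing at the foreign domain:
//      <script src="http://other.example.com/data?callback=handleJsonp"></script>
//
// 2) The server responds with raw JavaScript (a function call), not plain JSON:
//      handleJsonp({ "items": ["a", "b", "c"] });
//
// 3) So the page must define the matching global function before the script loads:
function handleJsonp(data) {
  // Get the data back to whoever made the request; here we just count the items.
  return data.items.length;
}
```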

The even trickier part is error handling, as there are no universal events to report script load failures.

Error Handling with JSONP

Ok, we are finally back on topic! As far as I know there are only three basic solutions to this problem...

Bad) No error handling at all.
Obviously this is a crappy solution, but it is what the default ExtJS implementation does. Somehow I suspect this lack of error handling support is 'justified' by saying that you shouldn't be using JSONP in the first place.

Better) Check for success after your max time out.
This is what my ExtJS implementation does. Before the request is made, I set a timeout to call back 1 second after the max timeout of the request. The global handler calls a function that updates the JSONP request queue, canceling the timeout. If the timeout is not canceled, it is assumed that the request failed and an error handler is called. This is much better than having no error handling, but it is still rather mediocre, as it could cause your end user to wait 30 seconds to find out about an error that happened 29 seconds ago.

Best) Wire up to state change events for the script block.
This is what the jQuery JSONP plugin does. Obviously this is the best solution, as it is using events exactly as they are supposed to be. The problem of course is that it has to support all the different browsers and all their different events.
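The timeout approach can be sketched in a few lines. The names below are illustrative, not the actual ExtJS implementation, and the script loader is injected so the sketch stays self-contained:

```javascript
// Pending JSONP callbacks, keyed by an id that doubles as the callback name.
var pendingRequests = {};
var nextId = 0;

function requestJsonp(loadScript, onSuccess, onError, maxTimeoutMs) {
  var id = 'jsonp' + (nextId++);

  // If the global handler never fires, assume the script failed to load.
  var timer = setTimeout(function () {
    if (pendingRequests[id]) {
      delete pendingRequests[id];
      onError(new Error('JSONP request ' + id + ' timed out'));
    }
  }, maxTimeoutMs + 1000); // call back one second after the request's max timeout

  // The global handler that the returned script will call on success.
  pendingRequests[id] = function (data) {
    clearTimeout(timer); // success cancels the failure timer
    delete pendingRequests[id];
    onSuccess(data);
  };

  // In a browser this would append a script block whose callback name is id.
  loadScript(id);
}
```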

In Summary

JSONP is useful, and is pretty easy to use. Implementing your own client side JSONP solution is kind of tricky, especially when taking error handling into account.

I have been trying to update my JSONP implementation to support additional error handling events, but so far it has proven to be rather difficult. Hopefully we'll see a 3.0 in the not too distant future, but right now I need to go watch the 2011 Women's World Cup Final! :D

~Tom

PS: Hopefully I will get the opportunity to talk about this at SenchaCon 2011 in Austin TX...so if you haven't yet, go sign up!

Saturday, July 2, 2011

Another CodeSmith DNUG Tour Update!

On the road again, I just can't wait to get on the road again! The life I love is making code with my friends, and I can't wait to get on the road again!

Dates

  • New York
    • July 6th - Fairfield DNUG - Code Generation
    • July 7th - Long Island DNUG - Code Generation
  • Texas
    • July 12th - College Station DNUG - PLINQO
    • July 26th - Dallas ASP.NET UG - Attack of the Cloud
    • August 12th - Dallas TechFest - Attack of the Cloud, Embedded QA
    • October 23rd - Austin, SenchaCon - (Pending Approval)
  • Louisiana
    • August 6th - Baton Rouge, SQL Saturday - PLINQO
    • August 8th - Shreveport DNUG - Attack of the Cloud
    • August 9th - New Orleans DNUG - Code Generation
    • August 10th - Baton Rouge DNUG - Code Generation
    • August 11th - Lafayette DNUG - Code Generation
  • Utah
    • September 8th - Salt Lake City, Utah DNUG - TBA
    • September 10th - Salt Lake City, Utah Code Camp - TBA
    • September 10th - Salt Lake City, SQL Saturday Utah - (Pending Approval)

Want me to come speak in your area? Just drop me a line!
tom@codesmithtools.com

See you on the road,
Tom

Friday, June 24, 2011

Dallas TechFest 2011 Registration is OPEN

Are you a developer? Do you live in the DFW area? Then come join me at...

Dallas TechFest 2011
When: Friday, August 12, 2011 - Saturday, August 13, 2011
Where: University of Texas at Dallas (Richardson, TX)

Use discount code TwoDays to get $50 (that's half) off the price of a ticket!

See you there,
Tom

Friday, June 3, 2011

Life is good in Florida

It seems that no matter what state I go to I find myself being pleasantly surprised.

Arkansas was fun, Iowa was cool, and now Florida is very chill. Perhaps I am thinking about this all wrong; maybe it's less about geographic or geopolitical lines and more about people. All these speaking tours I am going on have one thing in common: no matter where I go I am always surrounded by my fellow .NET developers.

...and all of you people rock!

I am a nerdy guy, and birds of a feather flock together, but this is more than that. Every user group has been filled with fun and interesting people, and we all arrived at the same destination by taking different paths. We come together at these meetings because we all "get it", we love what we do, and I can not think of a better crowd of people that I want to spend my time with.

In short: Thank you.

Thank you Microsoft for developing such great technology. Thank you INETA for helping these DNUGs stay organized. Thank you everyone that has come out to my presentations so far. Thank you everyone who has gone out after the meetings to keep talking shop. On that note:


A tavern that serves beer in pink elephant glasses...can anyone top that? I don't know, but I intend to find out with my remaining presentations!

  • June 7th (Tuesday) - Deerfield Beach, Florida - Code Generation
  • June 8th (Wednesday) - Fort Walton Beach, Florida - Code Generation
  • June 9th (Thursday) - Mobile, Alabama - Code Generation

See you out there,

Tom DuPont
Vagabond Programmer

Friday, May 13, 2011

LINQ to SQL vs Entity Framework vs NHibernate

I am working on a new presentation that is essentially a compare and contrast analysis of the three major ORMs out there (LINQ to SQL, Entity Framework, and NHibernate). My problem is not the content, it's choosing a name for the presentation! Here are just a few ideas that I've been considering:

  • Ultimate Fighter
    • Ultimate Fighter: ORM Edition!
    • ORM Showdown: The good, the bad, and the query.
    • ORM Fight Night
    • Object Relational Matchup
  • Misc Movies
    • ORM Club: If it's your first night, your app has to fight.
    • Super-ORM-Man 4: The Quest for Business Logic
    • Dr. Strange DAL: Or how I learned to stop worrying and generated an ORM!
  • Video Games
    • World of ORM Craft
    • pORMtal: This is a triumph!
    • pORMtal 2: We are going to have fun, with computer science!
  • Star Wars
    • ORM Wars, Episode 4: A New DAL
    • ORM Wars, Episode 5: The Profiler Strikes Back
    • ORM Wars, Episode 6: Return of the DAL
  • Indiana Jones
    • ORM and the Raiders of the lost DAL
    • ORM and the DAL of Doom
    • ORM and the Last Cascade
  • Star Trek
    • ORM Trek 1: The Data Access Layer
    • ORM Trek 2: The Wrath of DAL
    • ORM Trek 3: The Search for Business Logic
    • ORM Trek 4: The Request's Roundtrip Voyage Home
    • ORM Trek 5: The Data Access Frontier 
    • ORM Trek 6: The Undiscovered Productivity
    • ORM Trek 7: Code Generations
    • ORM Trek 8: First Contract
    • ORM Trek 9: SQL Injection
    • ORM Trek 10: ADO.Nemesis
  • The Hobbit
    • Lord of the ORM: Fellowship of the Data Layers
    • Lord of the ORMs: The Batch Queries
    • Lord of the ORMs: Return of the DAL
    • There and Back Again: An Entity's Tale, by ORM

Wednesday, May 4, 2011

PLINQO for NHibernate, Beta 1 Released!

PLINQO makes NHibernate "so easy, even a caveman can do it."

PLINQO can generate your HBM files, entity classes, and all NHibernate configuration in seconds. It then allows you to safely regenerate that code at any time, thus synchronizing your mappings with the database while still intelligently preserving custom changes. PLINQO for NHibernate is an enhanced and simplified wrapper around NHibernate that brings the best practices, optimizations, and convenience of the PLINQO frameworks to NHibernate.

Check it out, y'all! Check it check it out!

Friday, April 29, 2011

CodeSmith 2011 User Group Tour Update!

The Arkansas leg of the tour went really well. Hey, rest of the US: Arkansas has an amazing .NET community! I don't know what's better, their top notch developers or their gorgeous scenic byways? Either way, I am officially a huge fan of Arkansas. My thanks to everyone who came out, and a special thanks to all the user group leaders.

Update: We have added a Louisiana leg to the tour!

Calling all Louisiana .NET developers, I will be visiting your city the second week of August! If you live in Shreveport, New Orleans, Baton Rouge, or Lafayette, be sure to come out to your local DNUG and learn about (among other things) code generation!

Dates

  • New Mexico
    • May 5th (Thursday) - New Mexico - Attack of the Cloud
  • Iowa
    • May 9th (Monday) - Cedar Rapids - Code Generation
  • Texas
    • May 11th (Wednesday) - DFW Connected Systems UG - PLINQO: Advanced LINQ to SQL
  • Florida
    • May 26th (Thursday) - Memphis - Code Generation
    • June 1st (Wednesday) - Brevard County  - PLINQO: Advanced LINQ to SQL
    • June 2nd (Thursday) - Tallahassee - Attack of the Cloud
    • June 7th (Tuesday) - Deerfield Beach - Code Generation
    • June 8th (Wednesday) - Fort Walton Beach - Code Generation
  • Alabama
    • June 9th (Thursday) - Mobile - Code Generation
  • New York
    • July 6th (Wednesday) - Fairfield - Code Generation
    • July 7th (Thursday) - Long Island - Code Generation
  • Louisiana
    • August 8th (Monday) - Shreveport - Attack of the Cloud
    • August 9th (Tuesday) - New Orleans - Code Generation
    • August 10th (Wednesday) - Baton Rouge - Code Generation
    • August 11th (Thursday) - Lafayette - Code Generation

Topics

  • Attack of the Cloud
    • Cloud computing is great, but what do we put in the cloud? The web is advancing at an incredible pace and it’s time to start building true Web Applications, not just web sites! Web Apps shake off the constraints of operating-system-specific frameworks and free developers to work in an open, standards-based environment. This session will cover a variety of topics ranging from ASP.NET MVC development, unit testing, REST APIs, JSON, jQuery, ExtJS, tips and tricks, lessons learned, and more. It will conclude with building a sample blog reader Web App, and then deploying that to Windows Azure.
  • Generate Your Code!
    • Code generation is a powerful practice that allows you to produce higher-quality, more consistent code in less time. This helps remove the mundane and repetitive parts of programming, allowing developers to focus their efforts on more important tasks, and saving companies time and money. Code generation enables you to: efficiently reduce repetitive coding, generate code in less time with fewer bugs, and produce consistent code that adheres to your standards.
  • Using Embedded QA to Build Rock-Solid Software
    • Without an automated means to collect errors from deployed applications, how can you know that your software is performing as expected? Embedded QA can be used to augment your own internal QA efforts, greatly increasing both the effectiveness of your testing and overall stability of your applications. As Jeff Atwood phrased it, "If you're waiting around for users to tell you about problems with your website or application, you're only seeing a tiny fraction of all the problems that are actually occurring. The proverbial tip of the iceberg."
  • PLINQO: Advanced LINQ to SQL
    • In the time that LINQ to SQL has been available, we have been identifying ways to make LINQ to SQL better. We have compiled all of those cool tips and tricks, including new features, into a set of CodeSmith templates. PLINQO opens the LINQ to SQL black box, giving you the ability to control your source code while adding many new features and enhancements. It's still LINQ to SQL, but better!

Hope to see you soon!
~ Tom

Tuesday, April 12, 2011

Two Important Questions about PLINQO EF

The .NET community is the best development community ever.
How do I know that? Because they ask the best questions!

Here are two really important questions that we have been asked concerning PLINQO for Entity Framework that I wanted to call some extra attention to:

What is the advantage of using PLINQO EF instead of standard Entity Framework?

In 1.0 the primary goal was to improve the regeneration story of Entity Framework, thus making it easy to update and sync data and code changes. The entities are pretty much equivalent, but the PLINQO query extensions greatly improve and simplify the composition of queries.

With future versions there will be more features brought in from the PLINQO for L2S feature set. This will include built-in caching, auditing, enhanced serialization, possibly WCF and DataServices support, and hopefully batch/future queries!

What are the benefits, if any, of moving to PLINQO EF over PLINQO L2S?

Those benefits are not there yet, but they will be. The primary reason to migrate right now would be to inherit the benefits that standard EF has over L2S, most notably its multiple-database support (so more than just SQL Server).

There will be a simple migration path between the two versions of PLINQO, but the bottom line is that PLINQO EF is not ready for that yet. It is still in beta, and is simply not yet as feature complete as PLINQO L2S. It's going to take one or two releases until we get there, but we will get there! :)

Tuesday, March 29, 2011

Presentation Downloads

Presentation downloads are (finally) up!

The PLINQO presentation will be up some time next week.

Enjoy!
Tom

Friday, March 18, 2011

OAuth 2.0 for MVC, Two Legged Implementation

OAuth 1.0 was one complicated beast. The OAuth 2.0 spec greatly simplified things, but that had the wonderful side effect of rendering all of our old OAuth 1.0 code obsolete. They say that "the only thing a pioneer gets is an arrow in the back." I disagree; I say "the only thing a pioneer gets to have is an adventure."

For example, I got to help write this wonderful, cutting edge, open source, OAuth 2.0 implementation for MVC!

OAuth 2.0 Overview

OAuth is all about tokens. You start by getting a Request Token from the server, and then use that to secure your login request. When you have successfully logged in, you will be given a role/permission-specific Access Token, which you then submit with all of your future requests. You will also receive a Refresh Token along with your Access Token. Once your Access Token has expired, you can submit your Refresh Token to get a new pair of Access and Refresh Tokens.
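To make that lifecycle concrete, here is a hypothetical client-side sketch of the token flow, hitting the demo endpoints described later in this post. The URL, the ParseToken helper, and the response handling are illustrative assumptions, not part of the library:

```csharp
// Hypothetical sketch of the two-legged token lifecycle from a client's
// point of view. Response parsing is stubbed out for brevity.
using System.Collections.Specialized;
using System.Net;

public class OAuthClientSketch
{
    private const string BaseUrl = "http://localhost/oauth"; // assumed host

    public void Authenticate(string userName, string password)
    {
        using (var web = new WebClient())
        {
            // 1. Get a Request Token.
            string requestResponse = web.DownloadString(BaseUrl + "/requesttoken");

            // 2. Trade the Request Token plus credentials for an
            //    Access Token and a Refresh Token.
            var form = new NameValueCollection
            {
                { "oauth_token", ParseToken(requestResponse) },
                { "username", userName },
                { "password", password }
            };
            byte[] accessResponse = web.UploadValues(BaseUrl + "/accesstoken", form);

            // 3. Submit the Access Token with every subsequent request.
            //    When it expires, POST the Refresh Token to /refreshtoken
            //    to receive a new Access/Refresh Token pair.
        }
    }

    // Hypothetical helper; a real client would parse the JSON response.
    private static string ParseToken(string response)
    {
        return response;
    }
}
```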

Two Legged vs Three Legged

A two-legged implementation is rather straightforward: you log directly into the server you are trying to access. A three-legged implementation allows you to gain access to a resource by authenticating with a third-party server. For the time being, this project only supports two-legged authentication.

Implementation

You must implement four classes to use this library:

  1. OAuthIdentityBase
  2. OAuthPrincipalBase
  3. OAuthProviderBase
  4. OAuthServiceBase

The first three are very small classes, requiring only a few short lines of code each. The service is the workhorse where most of your code will go, but even then it only requires the implementation of four methods.

public abstract class OAuthServiceBase : ProviderBase, IOAuthService
{
    public static IOAuthService Instance { get; set; }
    public abstract OAuthResponse RequestToken();
    public abstract OAuthResponse AccessToken(string requestToken,
        string grantType, string userName, string password, bool persistent);
    public abstract OAuthResponse RefreshToken(string refreshToken);
    public abstract bool UnauthorizeToken(string token);
}
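For illustration, here is a hypothetical, in-memory sketch of what a service implementation might look like. The token storage, expiration policy, and credential validation are all assumptions for the sketch; a real service would persist tokens and verify users properly. The OAuthResponse property names mirror the demo API output shown below:

```csharp
using System;
using System.Collections.Generic;

public class DemoService : OAuthServiceBase
{
    // Hypothetical in-memory token store; a real service would persist
    // tokens and enforce the expiration window.
    private static readonly Dictionary<string, string> _refreshTokens
        = new Dictionary<string, string>();

    public override OAuthResponse RequestToken()
    {
        return new OAuthResponse
        {
            Success = true,
            Expires = 300,
            RequestToken = Guid.NewGuid().ToString("N")
        };
    }

    public override OAuthResponse AccessToken(string requestToken,
        string grantType, string userName, string password, bool persistent)
    {
        // Validate the request token and the user's credentials here.
        var accessToken = Guid.NewGuid().ToString("N");
        var refreshToken = Guid.NewGuid().ToString("N");
        _refreshTokens[refreshToken] = accessToken;

        return new OAuthResponse
        {
            Success = true,
            Expires = 300,
            AccessToken = accessToken,
            RefreshToken = refreshToken
        };
    }

    public override OAuthResponse RefreshToken(string refreshToken)
    {
        // An unknown refresh token gets rejected.
        if (!_refreshTokens.Remove(refreshToken))
            return new OAuthResponse { Success = false };

        // Issue a fresh Access/Refresh Token pair.
        return AccessToken(null, "refresh", null, null, false);
    }

    public override bool UnauthorizeToken(string token)
    {
        // Revoke the token so it can no longer be used.
        return _refreshTokens.Remove(token);
    }
}
```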

Then of course you will need to update your Web.config:

<configuration>
  <configSections>
    <section name="oauth" type="OAuth2.Mvc.Configuration.OAuthSection, OAuth2.Mvc, Version=1.0.0.0, Culture=neutral"/>
  </configSections>
  <oauth defaultProvider="DemoProvider" defaultService="DemoService">
    <providers>
      <add name="DemoProvider" type="OAuth2.Demo.OAuth.DemoProvider, OAuth2.Demo" />
    </providers>
    <services>
      <add name="DemoService" type="OAuth2.Demo.OAuth.DemoService, OAuth2.Demo" />
    </services>
  </oauth>
  <system.web>
    <httpModules>
      <add name="OAuthAuthentication" type="OAuth2.Mvc.Module.OAuthAuthenticationModule, OAuth2.Mvc, Version=1.0.0.0, Culture=neutral"/>
    </httpModules>
  </system.web>
</configuration>

Securing Your Pages

That's the easy part: just add the MVC Authorize attribute to any actions or controllers that you want to secure.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
 
    [Authorize]
    public ActionResult Secure()
    {
        return View();
    }
}

The Demo API in Action

  • /oauth/requesttoken
    • Request Params
      • None
    • Result
      • RequestToken = a028f1895cc548af9de744f63d283f6e
      • Expires = 300
      • Success = true
  • /oauth/accesstoken
    • Request Params
      • oauth_token = a028f1895cc548af9de744f63d283f6e
      • username = tom
      • password = c4e5995d4cb8b26970336b956054ac1be9cc50b3
    • Result
      • AccessToken = 3b23ee5f128a45c88e657ecc74c41bbc
      • Expires = 300
      • RefreshToken = 85126a53bca940f1ae7c9d797f63a274
      • Success = true
  • /oauth/refreshtoken
    • Request Params
      • refreshToken = 85126a53bca940f1ae7c9d797f63a274
    • Result
      • AccessToken = 8cfc317af6ed45b2b065a8fa5da3ba81
      • Expires = 300
      • RefreshToken = d0b4a8898d974e939ca83b55cfeabcac
      • Success = true
  • /oauth/unauthorize
    • Request Params
      • oauth_token = 8cfc317af6ed45b2b065a8fa5da3ba81
    • Result
      • Success = true


Happy authenticating!
~ Tom

Friday, March 11, 2011

CodeSmith 2011 User Group Tour

The CodeSmith Tools are hitting the road (and even the skies) to talk about .NET technology all around the US!
Want us to speak at your local user group? Just drop me a line!

We will post again each month with updates and additional details, so stay tuned!

Dates

  • Oklahoma
    • April 4th (Monday) - Oklahoma City DNUG - PLINQO: Advanced LINQ to SQL
  • Arkansas
    • March 25th (Friday) - North West Arkansas TechFest 2011 - Attack of the Cloud
    • April 11th (Monday) - Fort Smith DNUG - PLINQO: Advanced LINQ to SQL
    • April 12th (Tuesday) - North West Arkansas DNUG - TBA
    • April 13th (Wednesday) - North West Arkansas SQL Server UG - PLINQO: Advanced LINQ to SQL
    • April 14th (Thursday) - Little Rock DNUG - Using Embedded QA to Build Rock-Solid Software
  • New Mexico
    • May 5th (Thursday) - New Mexico DNUG - Attack of the Cloud
  • Iowa
    • May 9th (Monday) - Cedar Rapids DNUG - Code Generation
    • August 4th (Thursday) - Des Moines DNUG - Using Embedded QA to Build Rock-Solid Software
  • Texas
    • May 11th (Wednesday) - DFW Connected Systems UG - PLINQO: Advanced LINQ to SQL
  • Florida
    • May 26th (Thursday) - Memphis DNUG - Code Generation
    • June 1st (Wednesday) - Space Coast DNUG - PLINQO: Advanced LINQ to SQL
    • June 2nd (Thursday) - Tallahassee DNUG - Attack of the Cloud
    • June 7th (Tuesday) - Deerfield Beach DNUG - Code Generation
  • New York
    • July 7th (Thursday) - Long Island DNUG - Code Generation
    • TBD - Fairfield DNUG - Code Generation

Topics

  • Attack of the Cloud
    • Cloud computing is great, but what do we put in the cloud? The web is advancing at an incredible pace and it’s time to start building true Web Applications, not just web sites! Web Apps shake off the constraints of operating system specific frameworks and free developers to work in an open standards based environment. This session will cover a variety of topics ranging from ASP.NET MVC development, unit testing, REST APIs, JSON, JQuery, ExtJS, tips and tricks, lessons learned, and more. It will conclude with building a sample blog reader Web App, and then deploying that to Windows Azure.
  • Code Generation
    • Code generation is a powerful practice that allows you produce higher-quality, more consistent code in less time. This helps remove the mundane and repetitive parts of programming, allowing developers to focus their efforts on more important tasks, and saving companies time and money. Code generation enables you to: efficiently reduce repetitive coding, generate code in less time with fewer bugs, and produce consistent code that adheres to your standards.
  • Using Embedded QA to Build Rock-Solid Software
    • Without an automated means to collect errors from deployed applications, how can you know that your software is performing as expected? Embedded QA can be used to augment your own internal QA efforts, greatly increasing both the effectiveness of your testing and overall stability of your applications. As Jeff Atwood phrased it, "If you're waiting around for users to tell you about problems with your website or application, you're only seeing a tiny fraction of all the problems that are actually occurring. The proverbial tip of the iceberg."
  • PLINQO: Advanced LINQ to SQL
    • In the time that LINQ to SQL has been available, we have been identifying ways to make LINQ to SQL better. We have compiled all of those cool tips and tricks including new features into a set of CodeSmith templates. PLINQO opens the LINQ TO SQL black box giving you the ability to control your source code while adding many new features and enhancements. It's still LINQ to SQL, but better!

Hope to see you soon!
~ Tom 

Tuesday, March 8, 2011

PLINQO for NHibernate?

As my Uncle Ben once said, "with great insomnia comes great responsibility." Or maybe that was Spiderman, I don't remember. All I know is that I couldn't go to sleep last night, and when I came to this morning there was a proof of concept PLINQO for NHibernate architecture on my screen.

I am not saying we are working on PLINQO for NHibernate...yet.

NHibernate is a well established ORM that is backed by a great community, and frankly, they have their own way of doing things. NHibernate is built on some great principles: patterns, testability, and openness. Also, things like Fluent NHibernate and Sharp Architecture are examples of superb extensions to the base NHibernate framework, perfectly tailored to fit NHibernate needs.

Originally we had thought that creating PLINQO templates for NHibernate would be going against the grain of the NHibernate community. The architecture of PLINQO, specifically its query extension pattern, is a bit of an anti-pattern. Also, PLINQO is based on LINQ to SQL, and not all of its features are needed in the more mature NHibernate framework.

So if we were to make PLINQO for NHibernate, what value would it provide?

First and foremost, simplicity.
A major goal of PLINQO is to get the end user up and running as fast as possible, while a major complaint about NHibernate is that it can be very complex to set up. Simplifying the startup process would be a major advantage for new users.

This could provide a migration path to NHibernate.
Using LINQ to SQL or PLINQO and want to switch to NHibernate? Maybe you need more DB Providers, maybe you like open source components, or maybe you have converted to the church of Rhino; in any case, this would be a great way to make that transition quick and easy.

PLINQO for NHibernate means more PLINQO.
...and I LOVE PLINQO! I certainly don't think more PLINQO could hurt anyone, heck, I'm pretty sure that it will help someone! Also, on a personal note, I would get to code more PLINQO! If you can't tell from all the exclamation points, I find that prospect to be freak'n exciting!

What would PLINQO for NHibernate look like?

Remember back when you were in grade school and your teacher told you that there were no stupid questions? That was a stupid question.

Ideally the PLINQO NHibernate templates would generate a matching data layer. You would swap out your templates, update your namespaces, and be back in business.

[Test]
public void ByQueries()
{
    // This DataContext is powered by a NHibernate ISession
    using (var db = new PetshopDataContext())
    {
        var x = db.Product
            .ByName(ContainmentOperator.NotEquals, "A")
            .ByDescn("B")
            .FirstOrDefault();

        var y = db.Product
            .ByName(ContainmentOperator.StartsWith, "B")
            .ByDescn("B");
                
        var z = y.ToList();

        Assert.AreEqual(1, z.Count);
        Assert.AreEqual(x.Id, z[0].Id);
    }
}

By the way, that code snippet is real, and that test succeeds.
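For context, the ByName and ByDescn calls in the test above are generated query extension methods. Here is a hypothetical sketch of what the generator might emit for ByName; the exact names, overloads, and ContainmentOperator members are assumptions based on the snippet above, not the actual template output:

```csharp
using System;
using System.Linq;

public static partial class ProductExtensions
{
    // Hypothetical sketch of a generated query extension.
    public static IQueryable<Product> ByName(
        this IQueryable<Product> queryable, string name)
    {
        return queryable.Where(p => p.Name == name);
    }

    public static IQueryable<Product> ByName(
        this IQueryable<Product> queryable,
        ContainmentOperator op, string name)
    {
        switch (op)
        {
            case ContainmentOperator.NotEquals:
                return queryable.Where(p => p.Name != name);
            case ContainmentOperator.StartsWith:
                return queryable.Where(p => p.Name.StartsWith(name));
            default:
                return queryable.Where(p => p.Name == name);
        }
    }
}
```

Because each extension returns an IQueryable&lt;Product&gt;, the calls compose, and the underlying provider can translate the whole chain into a single query.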

Tuesday, March 1, 2011

Error Handling and CustomErrors and MVC3, oh my!

So, what else is new in MVC 3?
MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default FilterAttribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. 

What does that mean?
This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes.

How does that work out?
Think of it as a routing table specifically for your exceptions. It's pretty sweet!

Global Filters

The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilters.Filters. By default this method adds one filter, the HandleErrorAttribute.

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
    }
}

HandleErrorAttributes

The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using Filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale.

The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself.

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute
        {
            ExceptionType = typeof(DbException),
            // DbError.cshtml is a view in the Shared folder.
            View = "DbError",
            Order = 2
        });
        filters.Add(new HandleErrorAttribute());
    }
}

Error Views

All of your views still work like they did in the previous version of MVC (except, of course, that they can now use the Razor engine). However, a view that is used to render an error cannot have a custom model, because it already has one: System.Web.Mvc.HandleErrorInfo.

@model System.Web.Mvc.HandleErrorInfo
           
@{
    ViewBag.Title = "DbError";
}

<h2>A Database Error Has Occurred</h2>

@if (Model != null)
{
    <p>@Model.Exception.GetType().Name<br />
    thrown in @Model.ControllerName @Model.ActionName</p>
}

Errors Outside of the MVC Pipeline

The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc.

It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline.

<system.web>
  <customErrors mode="On" defaultRedirect="~/error">
    <error statusCode="404" redirect="~/error/notfound"></error>
  </customErrors>
</system.web>

Sample Controllers

public class ExampleController : Controller
{
    public ActionResult Exception()
    {
        throw new ArgumentNullException();
    }
    public ActionResult Db()
    {
        // Inherits from DbException
        throw new MyDbException();
    }
}

public class ErrorController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
    public ActionResult NotFound()
    {
        return View();
    }
}

Putting It All Together

If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:

  1. A controller action throws an Exception.
    • You will remain on the current page and the global HandleErrorAttribute will render the Error view.
  2. A controller action throws any type of DbException.
    • You will remain on the current page and the global HandleErrorAttribute will render the DbError view.
  3. Go to a non-existent page.
    • You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.

But don't take my word for it, download the sample project and try it yourself.

Three Important Lessons Learned

For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for:

1) Error views have models, but they must be of type HandleErrorInfo.

It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception!

2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action.

The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it does not pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, and an error already has a model ready to give to the view, so there is no need to pass through a controller.

That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then when you render that error view from your action, you can only use the HandleErrorInfo model or the ViewData dictionary to populate your page.

3) The HandleErrorAttribute obeys if CustomErrors are on or off, but does not use their redirects.

If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other error pages registered with custom errors.

In Summary

The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes. 

Also, if you are going to be handling all these errors, why not report them too?

Tuesday, February 15, 2011

MVC3's GlobalFilters and HandleErrorAttribute

In MVC3 a GlobalFilterCollection has been added to the Application_Start. This allows you to register filters that will be applied to all controller actions in a single location. Also, MVC3 web applications now add an instance of HandleErrorAttribute to these GlobalFilters by default. This means that errors in the MVC pipeline will now be automatically handled by these attributes and never fire the HttpApplication's OnError event.

This is nice because it is another step away from the old ASP.NET way of doing things, and a step toward the newer cleaner MVC way of doing things. However, it did throw us a slight curve ball when updating CodeSmith Insight's HttpModule.

Side Note: The CodeSmith Insight MVC3 client assembly will be released next week (the week of 2/21/11).

Out With the Old

Our old HttpModule wired up to the HttpApplication's OnError event and used that to log unhandled exceptions in web applications. It didn't care if the error happened in or out of the MVC pipeline, either way it was going to bubble up and get caught in the module.

public virtual void Init(HttpApplication context)
{
   InsightManager.Current.Register();
   InsightManager.Current.Configuration.IncludePrivateInformation = true;
   context.Error += OnError;
}

private void OnError(object sender, EventArgs e)
{
   var context = HttpContext.Current;
   if (context == null)
       return;

   Exception exception = context.Server.GetLastError();
   if (exception == null)
       return;

   var abstractContext = new HttpContextWrapper(context);
   InsightManager.Current.SubmitUnhandledException(exception, abstractContext);
}

However, now the MVC HandleErrorAttribute may handle exceptions right inside of the MVC pipeline, meaning that they will never reach the HttpApplication and the OnError will never be fired. What to do, what to do...

In With the New

Now we need to work with both the attributes and the HttpApplication, ensuring that we will catch errors from both inside and outside of the MVC pipeline. This means that we need to find and wrap any instances of HandleErrorAttribute in the GlobalFilters, and still register our module to receive notifications from the HttpApplication's OnError event.

The first thing we had to do was create a new HandleErrorAttribute. Please note that this example is simplified and only overrides the OnException method. If you want to do this "right", you'll have to override and wrap all of the virtual methods in HandleErrorAttribute.

public class HandleErrorAndReportToInsightAttribute : HandleErrorAttribute
{
   public bool HasWrappedHandler
   {
       get { return WrappedHandler != null; }
   }

   public HandleErrorAttribute WrappedHandler { get; set; }

   public override void OnException(ExceptionContext filterContext)
   {
       if (HasWrappedHandler)
           WrappedHandler.OnException(filterContext);
       else
           base.OnException(filterContext);

       if (filterContext.ExceptionHandled)
           InsightManager.Current.SubmitUnhandledException(filterContext.Exception, filterContext.HttpContext);
   }
}

Next we needed to update our HttpModule to find, wrap, and replace any instances of HandleErrorAttribute in the GlobalFilters.

public virtual void Init(HttpApplication context)
{
   InsightManager.Current.Register();
   InsightManager.Current.Configuration.IncludePrivateInformation = true;
   context.Error += OnError;

   ReplaceErrorHandler();
}

private void ReplaceErrorHandler()
{
   var filter = GlobalFilters.Filters.FirstOrDefault(f => f.Instance is HandleErrorAttribute);
   var handler = new HandleErrorAndReportToInsightAttribute();

   if (filter != null)
   {
       GlobalFilters.Filters.Remove(filter.Instance);
       handler.WrappedHandler = (HandleErrorAttribute) filter.Instance;
   }

   GlobalFilters.Filters.Add(handler);
}

In Conclusion

Now when we register the InsightModule in our web.config, we will start capturing all unhandled exceptions again.

<configuration>
 <configSections>
   <section name="codesmith.insight" type="CodeSmith.Insight.Client.Configuration.InsightSection, CodeSmith.Insight.Client.Mvc3" />
 </configSections>
 <codesmith.insight apiKey="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" serverUrl="http://app.codesmithinsight.com/" />
 <system.web>
   <customErrors mode="On" />
   <httpModules>
     <add name="InsightModule" type="CodeSmith.Insight.Client.Web.InsightModule, CodeSmith.Insight.Client.Mvc3"/>
   </httpModules>
 </system.web>
</configuration>

Friday, January 28, 2011

How to Learn ExtJS

Ever since CodeSmith Insight was featured on the Sencha Product Spotlight I have been getting a lot of questions about ExtJS. Specifically how to start learning it, and what tools we recommend using. Well rather than respond to these inquiries one email at a time, I thought it might be a good idea to throw up a blog post about how to learn ExtJS.

I hope this helps you get started, but always feel free to contact me with any additional questions you have.

Start with the Samples

I think the best way to start learning ExtJS is to reverse engineer some of the official ExtJS samples. I like learning by example, so I found that to be a great starting point. After I had a grasp of the fundamentals (their object orientation, their standard configuration properties, etc), I was able to start authoring my own components. I suggest starting with the window examples, taking a look at the form examples, and then really getting a feel for the full application layouts by exploring the feed viewer.
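As a taste of what those samples look like, here is a small, hypothetical ExtJS 3-style component in the vein of the window and form examples. The specific config values are illustrative only:

```javascript
// A minimal, hypothetical ExtJS window with a form inside it.
// Standard config properties (title, width, layout, items, etc.)
// are the heart of the ExtJS component model.
var loginWindow = new Ext.Window({
    title: 'Log In',
    width: 300,
    modal: true,
    layout: 'fit',
    items: new Ext.form.FormPanel({
        frame: true,
        defaultType: 'textfield',
        items: [
            { fieldLabel: 'User Name', name: 'username' },
            { fieldLabel: 'Password', name: 'password', inputType: 'password' }
        ]
    }),
    buttons: [{
        text: 'Submit',
        handler: function () {
            // Submit the form here, then close the window.
            loginWindow.close();
        }
    }]
});

loginWindow.show();
```

Once a config-driven component like this makes sense to you, the rest of the samples become much easier to reverse engineer.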

Also, the single best thing that you can do: keep the ExtJS API documentation open in another window at all times!

Try using the ExtJS Designer

The Ext Designer is a visual designer for creating ExtJS components. I did not have access to it back when I started working with ExtJS, but in retrospect, I wish that I had. Recently I had the opportunity to help another company get started with ExtJS using the designer, and I can confirm that it's a great tool. It allows you to visually drag, drop, and resize components, such as elements on windows and panels, and helps you get a picture of what you are creating without having to render your classes in the browser every step of the way.

The designer is great for two reasons: 1) you can see what you are doing, which makes it much easier to learn what ExtJS configuration properties do, and 2) you don't have to worry about one bad value or syntax error crashing your whole page every time you are trying to learn a new ExtJS component.

Debugging Tools

Firebug for Firefox is still (in my opinion) the best debugging tool around. It is responsive all around, it includes a JSON viewer, its JavaScript console has auto-complete, and you can edit DOM elements inline with ease.

The Google Chrome Developer Tools (not Firebug Lite, but the actual built-in tools themselves) have really grown into a usable set of tools since they were released two years ago. They finally compete with Firebug for responsiveness and available features. While they do not have a JSON viewer, they do offer a very useful and unique local storage viewer.
