Showing posts with label ORM.

Wednesday, December 24, 2014

How to use Entity Framework and SQLite

SQLite is the definition of a lightweight database. Using SQLite you can run an entire database with only a 304 KB executable and a database file. It's fast, stable, and very easy to use. Entity Framework is Microsoft's official ORM, and it has support for SQLite!

SQLite Tools

To run SQLite you only need to download the precompiled SQLite binaries for Windows.

You can easily manipulate the database via command line. However, if you would prefer to use a GUI, there is a wonderful Firefox plugin for managing your SQLite databases.

Entity Framework Setup

To get started using Entity Framework, you will need to add two NuGet packages to your solution:

  1. EntityFramework
  2. System.Data.SQLite (x86/x64)

After that, you will need to make sure that your app.config file properly registers both a System.Data.SQLite provider and a provider factory.
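
For context, here is a minimal sketch of what consuming that configuration from code can look like. The class, property, and connection string names below are illustrative, not from the original post:

// A code-first style DbContext pointed at a SQLite connection string.
// Requires: using System.Data.Entity; using System.Linq;
public class Person
{
    public long Id { get; set; }
    public string Name { get; set; }
}

public class DemoContext : DbContext
{
    // "DemoConnection" is assumed to be an app.config connection string entry
    // that uses the System.Data.SQLite provider (e.g. Data Source=demo.sqlite),
    // typically pointing at an existing database file.
    public DemoContext() : base("name=DemoConnection") { }

    public DbSet<Person> People { get; set; }
}

// Usage:
// using (var db = new DemoContext())
// {
//     var people = db.People.Where(p => p.Name == "Tom").ToList();
// }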

Monday, September 30, 2013

LINQ to SQL DataContext BeginTransaction

LINQ to SQL supports Transactions, but there is no method directly off of the DataContext to initialize one. Fortunately, that functionality is just a simple extension method away!

public static class DataContextExtensions
{
    public static IDbTransaction BeginTransaction(
        this DataContext dataContext, 
        IsolationLevel isolationLevel = IsolationLevel.Unspecified)
    {
        if (dataContext.Connection.State != ConnectionState.Open)
            dataContext.Connection.Open();
 
        return dataContext.Transaction = dataContext.Connection
            .BeginTransaction(isolationLevel);
    }
}
 
public class TransactionTests
{
    [Fact]
    public void Example()
    {
        using (var dataContext = new FutureSimpleDataContext())
        using (dataContext.BeginTransaction())
        {
            // TODO: Stuff!
        }
    }
}
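
A quick usage note (a minimal sketch using the extension above): disposing a transaction without committing it rolls it back, so capture the return value when you intend to commit.

using (var dataContext = new FutureSimpleDataContext())
using (var transaction = dataContext.BeginTransaction())
{
    // ...make your changes...
    dataContext.SubmitChanges();

    // Commit explicitly; otherwise the dispose at the end of the using block rolls back.
    transaction.Commit();
}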

Enjoy!
Tom

Tuesday, September 25, 2012

Func Injection in Unity

Let your container be your factory. :)

If you are using LINQ to SQL and dependency injection, then you have probably created a factory with which to create your DataContexts. But what if you could just let your IoC container do that work for you? You can!

If you are using Unity, then you can inject a Func<T> of any registered type. Unity will automatically bind the injected Func to the container's Resolve method, so the registration's lifetime manager is still respected.

Example Code

public class FuncInjectionTests
{
    [Fact]
    public void TransientLifetimeFuncIsThreadsafe()
    {
        var container = new UnityContainer();
 
        container
            .RegisterType<IUserService, UserService>(
                new ContainerControlledLifetimeManager())
            .RegisterType<IDataContext, DataContext>(
                new TransientLifetimeManager());
 
        var parallelOptions = new ParallelOptions {MaxDegreeOfParallelism = 100};
 
        Parallel.For(0, 1000, parallelOptions, i =>
        {
            var userService = container.Resolve<IUserService>();
 
            Parallel.For(0, 1000, parallelOptions, j =>
            {
                userService.Update();
            });
        });
 
        Assert.Equal(1, UserService.Count);
        Assert.Equal(1000000, DataContext.Count);
    }
}
 
public interface IUserService
{
    void Update();
}
 
public interface IDataContext : IDisposable
{
    void UpdateUser();
}
 
public class UserService : IUserService
{
    public static int Count;
 
    private readonly Func<IDataContext> _dataContextFactory;
 
    public UserService(Func<IDataContext> dataContextFactory)
    {
        _dataContextFactory = dataContextFactory;
        Interlocked.Increment(ref Count);
    }
 
    public void Update()
    {
        using (var dataContext = _dataContextFactory())
            dataContext.UpdateUser();
    }
}
 
public class DataContext : IDataContext
{
    public static int Count;
 
    public DataContext()
    {
        Interlocked.Increment(ref Count);
    }
 
    public void UpdateUser()
    {
        Trace.WriteLine(Thread.CurrentThread.ManagedThreadId + " - " + Count);
    }
 
    public void Dispose()
    {
    }
}

Enjoy,
Tom

Saturday, August 18, 2012

Linq to Sql Batch Future Queries

Batch queries are a crucial feature of any good ORM...including LinqToSql!

Future queries are an unobtrusive and easy-to-use way of batching database queries into lazy-loaded data structures. Future queries were originally a feature of NHibernate and have since been ported to other ORMs. However, I am currently working on a project where I do not have access to those tools.

Thus I created LinqToSql.Futures, a simple extension that adds batched futures queries to LinqToSql.

Using Future Queries

  1. Get the LinqToSqlFutures NuGet package.
  2. Extend your DataContext to implement IFutureDataContext. (This is optional, see below!)
  3. Use future queries!

To batch queries together, simply call the ToFuture extension method on your IQueryables; this returns a Lazy collection of entities. The first time one of the Lazy collections is accessed, all of the pending queries are batched and executed together.

[Fact]
public void ToFuture()
{
    // SimpleDataContext Implements IFutureDataContext 
    using (var dataContext = new SimpleDataContext())
    {
        Lazy<IList<Person>> people = dataContext.Persons
            .Where(p => p.FirstName == "Tom" || p.FirstName == "Cat")
            .ToFuture();
 
        Lazy<IList<Pet>> pets = dataContext.Pets
            .Where(p => p.Name == "Taboo")
            .ToFuture();
 
        // Single database call!
        Assert.Equal(2, people.Value.Count);
        Assert.Equal(1, pets.Value.Count);
    }
}

To see the future queries working, you can use SQL Profiler to capture the batched query.

Extending Your DataContext

To use the code above, you need only extend your generated DataContext to implement the IFutureDataContext interface (which consists of a single property). Additionally, the sample code below overrides the Dispose method so that the future collection is cleaned up when the DataContext is disposed.

partial class SimpleDataContext : IFutureDataContext
{
    protected override void Dispose(bool disposing)
    {
        if (_futureCollection != null)
        {
            _futureCollection.Dispose();
            _futureCollection = null;
        }
 
        base.Dispose(disposing);
    }
 
    private IFutureCollection _futureCollection;
    public IFutureCollection FutureCollection
    {
        get
        {
            if (_futureCollection == null)
                _futureCollection = this.CreateFutureCollection();
 
            return _futureCollection;
        }
    }
}

Do you not have access to extend your DataContext? No problem...

Alternative Consumption

Your DataContext does not need to implement IFutureDataContext to use future queries. You can also create a FutureCollection and pass that into the ToFuture method. This will provide the future queries with a batch context.

[Fact]
public void FutureCollection()
{
    using (var dataContext = new SimpleDataContext())
    using (var futureCollection = dataContext.CreateFutureCollection())
    {
        Lazy<IList<Person>> people = dataContext.Persons
            .Where(p => p.FirstName == "Tom" || p.FirstName == "Cat")
            .ToFuture(futureCollection);
 
        Lazy<IList<Pet>> pets = dataContext.Pets
            .Where(p => p.Name == "Taboo")
            .ToFuture(futureCollection);
 
        // Single database call!
        Assert.Equal(2, people.Value.Count);
        Assert.Equal(1, pets.Value.Count);
    }
}

Links

If you enjoy the future queries or need to extend them, then please feel free to download the source from GitHub.

LinqToSql.Futures on GitHub
LinqToSql.Futures on NuGet


Enjoy,
Tom

Friday, May 13, 2011

LINQ to SQL vs Entity Framework vs NHibernate

I am working on a new presentation that is essentially a compare and contrast analysis of the three major ORMs out there (LINQ to SQL, Entity Framework, and NHibernate). My problem is not the content, it's choosing a name for the presentation! Here are just a few ideas that I've been considering:

  • Ultimate Fighter
    • Ultimate Fighter: ORM Edition!
    • ORM Showdown: The good, the bad, and the query.
    • ORM Fight Night
    • Object Relational Matchup
  • Misc Movies
    • ORM Club: If it's your first night, your app has to fight.
    • Super-ORM-Man 4: The Quest for Business Logic
    • Dr. Strange DAL: Or how I learned to stop worrying and generated an ORM!
  • Video Games
    • World of ORM Craft
    • pORMtal: This is a triumph!
    • pORMtal 2: We are going to have fun, with computer science!
  • Star Wars
    • ORM Wars, Episode 4: A New DAL
    • ORM Wars, Episode 5: The Profiler Strikes Back
    • ORM Wars, Episode 6: Return of the DAL
  • Indiana Jones
    • ORM and the Raiders of the lost DAL
    • ORM and the DAL of Doom
    • ORM and the Last Cascade
  • Star Trek
    • ORM Trek 1: The Data Access Layer
    • ORM Trek 2: The Wrath of DAL
    • ORM Trek 3: The Search for Business Logic
    • ORM Trek 4: The Request's Roundtrip Voyage Home
    • ORM Trek 5: The Data Access Frontier 
    • ORM Trek 6: The Undiscovered Productivity
    • ORM Trek 7: Code Generations
    • ORM Trek 8: First Contract
    • ORM Trek 9: SQL Injection
    • ORM Trek 10: ADO.Nemesis
  • The Hobbit
    • Lord of the ORM: Fellowship of the Data Layers
    • Lord of the ORMs: The Batch Queries
    • Lord of the ORMs: Return of the DAL
    • There and Back Again: An Entity's Tale, by ORM

Wednesday, May 4, 2011

PLINQO for NHibernate, Beta 1 Released!

PLINQO makes NHibernate "so easy, even a caveman can do it."

PLINQO can generate your HBM files, entity classes, and all NHibernate configuration in seconds. It then allows you to safely regenerate that code at any time, synchronizing your mappings with the database while still intelligently preserving custom changes. PLINQO for NHibernate is an enhanced and simplified wrapper around NHibernate that brings the best practices, optimizations, and conveniences of the PLINQO frameworks to NHibernate.

Check it out, ya'll! Check it check it out!

Tuesday, April 12, 2011

Two Important Questions about PLINQO EF

The .NET community is the best development community ever.
How do I know that? Because they ask the best questions!

Here are two really important questions that we have been asked concerning PLINQO for Entity Framework that I wanted to call some extra attention to:

What is the advantage of using PLINQO EF instead of standard Entity Framework?

In 1.0 the primary goal was to improve the regeneration story of Entity Framework, thus making it easy to update and sync data and code changes. The entities are pretty much equivalent, but the PLINQO query extensions greatly improve and simplify the composition of queries.

With future versions there will be more features brought in from the PLINQO for L2S feature set. This will include built-in caching, auditing, enhanced serialization, possibly WCF and Data Services support, and hopefully batch/future queries!

What are the benefits, if any, of moving to PLINQO EF over PLINQO L2S?

Such benefits are not there yet, but will be. The primary reason to migrate right now would be to inherit the benefits that standard EF has over L2S, most notably its multiple-database support (so more than just SQL Server).

There will be a simple migration path between the two versions of PLINQO, but the bottom line is that PLINQO EF is not ready for that yet. It is still in beta, and is simply not yet as feature complete as PLINQO L2S. It's going to take one or two releases until we get there, but we will get there! :)

Tuesday, March 8, 2011

PLINQO for NHibernate?

As my Uncle Ben once said, "with great insomnia comes great responsibility." Or maybe that was Spiderman, I don't remember. All I know is that I couldn't go to sleep last night, and when I came to this morning there was a proof of concept PLINQO for NHibernate architecture on my screen.

I am not saying we are working on PLINQO for NHibernate...yet.

NHibernate is a well established ORM that is backed by a great community, and frankly, they have their own way of doing things. NHibernate is built on some great principles: patterns, testability, and openness. Also, things like Fluent NHibernate and Sharp Architecture are examples of some superb extensions to the base NHibernate framework, and they are perfectly tailored to fit NHibernate's needs.

Originally we had thought that creating PLINQO templates for NHibernate would be going against the grain of the NHibernate community. The architecture of PLINQO, specifically its query extension pattern, is a bit of an anti-pattern. Also, PLINQO is based on LINQ to SQL, and not all of its features are needed in the more mature NHibernate framework.

So if we were to make PLINQO for NHibernate, what value would it provide?

First and foremost, simplicity.
A major goal of PLINQO is to get the end user up and running as fast as possible. A major complaint about NHibernate is that it can be very complex to set up. It seems that simplifying the start-up process would be a major advantage to new users.

This could provide a migration path to NHibernate.
Using LINQ to SQL or PLINQO and want to switch to NHibernate? Maybe you need more DB Providers, maybe you like open source components, or maybe you have converted to the church of Rhino; in any case, this would be a great way to make that transition quick and easy.

PLINQO for NHibernate means more PLINQO.
...and I LOVE PLINQO! I certainly don't think more PLINQO could hurt anyone, heck, I'm pretty sure that it will help someone! Also, on a personal note, I would get to code more PLINQO! If you can't tell from all the exclamation points, I find that prospect to be freak'n exciting!

What would PLINQO for NHibernate look like?

Remember back when you were in grade school and your teacher told you that there were no stupid questions? That was a stupid question.

Ideally the PLINQO NHibernate templates would generate you a matching data layer. You would swap out your templates, update your namespaces, and you're back in business.

[Test]
public void ByQueries()
{
    // This DataContext is powered by a NHibernate ISession
    using (var db = new PetshopDataContext())
    {
        var x = db.Product
            .ByName(ContainmentOperator.NotEquals, "A")
            .ByDescn("B")
            .FirstOrDefault();

        var y = db.Product
            .ByName(ContainmentOperator.StartsWith, "B")
            .ByDescn("B");
                
        var z = y.ToList();

        Assert.AreEqual(1, z.Count);
        Assert.AreEqual(x.Id, z[0].Id);
    }
}

So by the way, that code snippet is real, and that test succeeds.

Wednesday, February 17, 2010

Advanced PLINQO Future Queries, Part 2

The immediate uses of PLINQO Future Queries are readily apparent: Do you have multiple queries that need to be executed back to back? Add future to the back of them and save yourself a few database trips.

What might not be quite as easy to spot are those places where a more advanced design pattern can really enhance your application's performance. Here is an example of one of those patterns.

Part 2: Pipeline Loader Pattern

There are many times when an application needs to process a series of independent actions, each requiring its own set of database objects. If you are using LINQ to SQL (prior to PLINQO), this means you need to either hard-code the loading of all the necessary data in one place or allow each action to load its own data in succession, leaving you to choose between bad abstraction and lack of optimization; neither of which is very desirable.

Using PLINQO Future Queries in conjunction with some simple interfaces, you can create a pipeline system that allows your actions to share a single data-loading session. As usual, I feel that code narrates this best:

Use Case

[Test]
public void Test()
{
   var actions = new List<IPipelineLoader>
   {
       new CaseAction { CaseId = "A", Title = "First New Title" },
       new CaseAction { CaseId = "A", Title = "Second New Title" },
       new UserAction { UserId = 1, Title = "Senior Tester" },
       new UserAction { UserId = 2, Title = "Junior Tester" }
   };

   using (var db = new DemoDataContext())
   {
       actions.ForEach(a => a.Load(db));
       db.ExecuteFutureQueries();
       actions.ForEach(a => a.Process());
       db.SubmitChanges();
   }
}

Interface

public interface IPipelineLoader
{
   void Load(DemoDataContext db);
   void Process();
}

Implementation

public class CaseAction : IPipelineLoader
{
   private FutureValue<Case> _futureCase;

   public string CaseId { get; set; }
   public string Title { get; set; }

   public void Load(DemoDataContext db)
   {
       _futureCase = db.Case.ById(CaseId).FutureFirstOrDefault();
   }

   public void Process()
   {
       _futureCase.Value.Title = Title;
   }
}

public class UserAction : IPipelineLoader
{
   private FutureValue<User> _futureUser;

   public int UserId { get; set; }
   public string Title { get; set; }

   public void Load(DemoDataContext db)
   {
       _futureUser = db.User.ById(UserId).FutureFirstOrDefault();
   }

   public void Process()
   {
       _futureUser.Value.Title = Title;
   }
}

Wednesday, February 10, 2010

Advanced PLINQO Future Queries, Part 1

The immediate uses of PLINQO Future Queries are readily apparent: Do you have multiple queries that need to be executed back to back? Add future to the back of them and save yourself a few database trips. What might not be quite as easy to spot are those places where a more advanced design pattern can really enhance your application's performance. Here is an example of one of those patterns.

Part 1: Entity Loaders

When retrieving entities from the database, you cannot always use DataLoadOptions to populate the complete object model in a single trip to the database. Having to make multiple trips to populate one type of entity can be costly, and this can be compounded by the need to populate a series of different entities at the same time.

One solution is to create a loader class for your entities that uses future queries to get its data, as this is a very reliable way to ensure that no unnecessary database calls are executed. Also, by creating an interface for these loader classes, you can use one generic piece of code to load any number of entities of any number of data types.

Use Cases

[Test]
public static Case Test(int id)
{
    var loader = new CaseTagLoader(id);
    loader.Load();
    return loader.Entity;
}

[Test]
public static List<Case> Test(int[] ids)
{
    var loaders = new List<CaseTagLoader>();
    foreach (var id in ids)
        loaders.Add(new CaseTagLoader(id));

    using (var db = new BlogDataContext())
    {
        loaders.ForEach(l => l.Load(db));
        db.ExecuteFutureQueries();
    }

    return loaders.Select(l => l.Entity).ToList();
}

Interface

public interface ILoader<T>
{
    void Load();
    void Load(BlogDataContext db);
    T Entity { get; }
    bool IsLoaded { get; }
}

Implementation

public class CaseTagLoader : ILoader<Case>
{
    private readonly int _caseId;
    private FutureValue<Case> _futureCase;
    private IEnumerable<CaseTag> _caseTags;
    private IEnumerable<Tag> _tags;
    private Case _case;

    public Case Entity
    {
        get
        {
            if (!IsLoaded)
                return null;

            if (_case == null)
                CreateCase();

            return _case;
        }
    }

    public bool IsLoaded { get; private set; }

    public CaseTagLoader(int caseId)
    {
        _caseId = caseId;
        IsLoaded = false;
    }

    public void Load()
    {
        if (IsLoaded)
            return;

        using (var db = new BlogDataContext())
        {
            db.ObjectTrackingEnabled = false;
            Load(db);
            db.ExecuteFutureQueries();
        }
    }

    public void Load(BlogDataContext db)
    {
        if (IsLoaded)
            return;

        _futureCase = db.Case.ById(_caseId).FutureFirstOrDefault();
        var caseTagQuery = db.CaseTag.ByCaseId(_caseId);
        _caseTags = caseTagQuery.Future();
        _tags = db.Tag
            .Join(caseTagQuery, t => t.Id, ct => ct.TagId, (t, ct) => t)
            .Future();

        IsLoaded = true;
    }

    private void CreateCase()
    {
        _case = _futureCase.Value;
        _case.CaseTags.AddRange(_caseTags);

        foreach (var caseTag in _caseTags)
            caseTag.Tag = _tags.First(t => t.Id == caseTag.TagId);
    }
}

Monday, October 26, 2009

PLINQO Query Extension Updates

The Problem

Unfortunately the previous query extensions, while great at querying by a particular value, did not support simple operations such as getting by a nullable value type (where you know it can be null or just a value).

Example: if you wanted to query by a nullable column, you had to use a lambda expression.

context.Product.Where(p => !p.OwnerId.HasValue || p.OwnerId.Value == 5);

Our Solution

Add an overload for each "By" extension method that accepts a params list of values.

Example: now you can look for null OR a certain value without using a Where statement.

context.Product.ByOwnerId(null, 5);

New PLINQO Feature

Here at CodeSmith we love generic solutions. So forget just supporting ByValueOrNull scenarios; now all "By" query extensions have an overload that supports OR statements!

Example: Ummm, every single "By" query extension method generated by PLINQO?

context.Task.ByPriority(Priority.High, null, Priority.Medium);
context.Person.ByFirstName("Eric", null, "Shannon", String.Empty);
context.Product.ByName("Scribblenauts", "Bowser's Inside Story").ByRating(Rating.Excellent, Rating.AboveAverage);

Implementation Details

To see how the overloads are implemented, you need only take a look at the template, QueryExtension.Generated.cst. If you are curious to see how all of this comes together under the hood, where we build up the parameterized query string and then parse the lambda, that code is located in DynamicQuery.cs.

Fun little FYI facts: unfortunately, LINQ to SQL does not support doing an IEnumerable.Contains against a nullable value type collection to identify null values. Also, when building a string expression, you can't just say that a primitive is equal to null; that will cause a parsing exception. So once we jumped those hurdles, the rest was relatively easy: just check for null params arrays in the overloads, know when to treat them as a null versus an empty param, and you're all set!
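
For the curious, here is a rough conceptual sketch of what such an OR-supporting "By" overload boils down to. This is not the generated PLINQO code (which builds a parameterized query string and parses a lambda, as noted above), and the Product entity here is just a stand-in for illustration:

// Requires: using System.Collections.Generic; using System.Linq;
public static class ProductQueryExtensionsSketch
{
    // Conceptual only: treat a null in the params list as an IS NULL check,
    // and the remaining values as an IN clause.
    public static IQueryable<Product> ByOwnerId(
        this IQueryable<Product> queryable, params int?[] values)
    {
        bool includeNull = values.Any(v => !v.HasValue);
        List<int> ids = values.Where(v => v.HasValue).Select(v => v.Value).ToList();

        return queryable.Where(p =>
            (includeNull && p.OwnerId == null) ||
            (p.OwnerId != null && ids.Contains(p.OwnerId.Value)));
    }
}

// Minimal stand-in entity for the sketch above.
public class Product
{
    public int ProductId { get; set; }
    public int? OwnerId { get; set; }
}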

In Conclusion

Linq rocks. PLINQO just keeps getting better. Check out our new nightly builds, or wait for PLINQO 3.2, coming soon to our Google Code repository near you!

Tuesday, July 28, 2009

Breaking Change in PLINQO 3.2

No, PLINQO 3.2 has not been released yet. However, when it is, there will be a few small but breaking changes introduced with it. As we often recommend that you use PLINQO's latest nightly builds, we wanted to bring this to your attention ASAP.

Breaking Query Extension Changes

We have added the UniqueMethodPrefix property to the Queries.cst template. Now PLINQO will generate two methods for each unique index and primary key. Methods that had previously returned an instance of an entity will now return IQueryable, so updating to the latest version may result in compiler errors.

  • MethodPrefix Property
    • All methods generated return IQueryable.
    • Defaults to "By"
  • UniqueMethodPrefix Property
    • All methods generated return a single instance of an entity.
    • Defaults to "GetBy"

Don't worry! These method names are still configurable! If you don't like this update, you can change the defaults of your Queries.cst to be whatever you prefer! Also, functionality has only been added, not lost! PLINQO now has methods that return both object and IQueryable types for queries with unique results!

Example

Product p;
using (var db = new PetshopDataContext())
{
    /**** PLINQO 3.1 ****/

    // This "ByKey" method used to return a unique result.
    p = db.Product.ByKey("BD-02");

    // To cache a product by Id, a Where statement had to be used in order to build the IQueryable.
    p = db.Product.Where(x => x.ProductId == "BD-02").FromCache().First();

    /**** PLINQO 3.2 ****/

    // There is no longer a "ByKey" method, it has been replaced with a "GetByKey" method.
    p = db.Product.GetByKey("BD-02");

    // Because there are now By methods for all properties (even ones with unique results),
    // you can now use the extension methods for your cache!
    p = db.Product.ByProductId("BD-02").FromCache().First();
}

Why make these changes?

When we first developed the naming conventions for our extension methods, we did not have as many features that extended IQueryable results. At the time, it made sense that getting by a unique set should only return a single entity rather than an IQueryable with a single result.

Times have changed, PLINQO has expanded and gotten better. PLINQO now offers more features, and frankly, we felt that the API needed to keep up. This update offers the following advantages...

  • Unique result query extensions now work with batch queries.
  • Unique result query extensions now work with the query result cache.
  • Query Extensions are now more consistent, all "By" methods now return same type.

Enjoy!

Monday, June 29, 2009

How PLINQO Improves Performance

Performance Q&A

One of the first questions people always ask about PLINQO is "do we have any performance statistics?" Well, to quote a wiser man than myself: "People can come up with statistics to prove anything. 14% of people know that." ~ Homer J. Simpson

PLINQO's primary plan for performance improvement is simple: reduce the number of trips that you need to make to the database. A round trip to the database and back is one of the most costly things an application can do, but it's also one of the most common things an application must do. PLINQO offers several easy to use features that can dramatically reduce database transactions, but it's up to the developer to use them.

Bottom line: PLINQO can and will outperform standard LINQ to SQL. By how much? That is entirely up to you!

Batch Updates and Deletes

Updates with LINQ to SQL

To update records with LINQ to SQL, you must retrieve objects from the database, make your updates, and then commit the changes back to the database. This requires TWO round trips to the database.

1) Build the query, query the database, wait for a response, store the data in memory.
2) Make your updates, commit the changes to the database, wait for the response.
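
In code, that standard two-trip flow looks roughly like this (a sketch only; the DemoDataContext and Task names follow the examples used elsewhere on this blog):

using (var context = new DemoDataContext())
{
    // Trip 1: query the database and load the rows into memory.
    var tasks = context.Task.Where(t => t.StatusId == 1).ToList();

    // Make the changes in memory.
    foreach (var task in tasks)
        task.StatusId = 2;

    // Trip 2: commit the changes back to the database.
    context.SubmitChanges();
}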

Batch Updates with PLINQO

PLINQO offers a batch update method on its table objects. To update as many records as you want, merely create a query, send it to the database, and get back your result. Unlimited rows, ONE trip.

1) Build the query, update the database, get the response.

// Update all Tasks with a StatusId of 1 to have a StatusId of 2,
context.Task.Update(t1 => t1.StatusId == 1, t2 => new Task() { StatusId = 2 });

Batch Deletes

To delete a record in LINQ to SQL you must first retrieve that record. That is TWO complete round trips to the database to delete one little record! PLINQO allows for batch deletes in the same manner as batch updates: as many records as you want, without loading them into memory, and in just ONE trip.

// Delete all tasks where StatusId is 2,
context.Task.Delete(t => t.StatusId == 2);
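
For contrast, here is a rough sketch of the standard LINQ to SQL delete described above, which has to load the rows before it can remove them (again, DemoDataContext and Task are illustrative names):

using (var context = new DemoDataContext())
{
    // Trip 1: load the rows that are about to be deleted.
    var tasks = context.Task.Where(t => t.StatusId == 2).ToList();

    // Trip 2: mark them for deletion and submit.
    context.Task.DeleteAllOnSubmit(tasks);
    context.SubmitChanges();
}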

Stored Procedures

While the LINQ to SQL designer does support stored procedures, it only supports returning a single result set. PLINQO supports stored procedures with multiple result sets, and provides a simple way to handle those results. Again, this is yet another way PLINQO helps get you more data with fewer trips to the database.

// Create Procedure [dbo].[GetUsersWithRoles]
// As
// Select * From [User]
// Select * From UserRole
// GO
var results = context.GetUsersWithRoles();
List<User> users = results.GetResult<User>().ToList();
List<UserRole> roles = results.GetResult<UserRole>().ToList();

Batch Queries

In LINQ to SQL every query is a trip to the database. It doesn't matter if you have five queries in succession that require no logic in between; you must still make each and every query separately. The PLINQO DataContext offers an ExecuteQuery overload that will execute as many queries as you want in a single transaction. This is an extremely simple feature to use, and it can drastically improve performance in everyday development scenarios.

var q1 = from u in context.User select u;
var q2 = from t in context.Task select t;
IMultipleResults results = context.ExecuteQuery(q1, q2);
List<User> users = results.GetResult<User>().ToList();
List<Task> tasks = results.GetResult<Task>().ToList();

Monday, June 22, 2009

PLINQO Cache

Yo dawg! I heard you like cache, so we put a caching mechanism in yo server side cache, so you can cache while you cache!

...but seriously, PLINQO now includes a built-in caching mechanism! :)

  • All IQueryable result sets can now be dynamically cached/retrieved right from the query.
  • The cache is accessible via an IQueryable extension method (FromCache), so the cached objects are not DataContext specific.
  • The cache duration is set at the time of the query; results can be stored for a specific time span or use a sliding expiration.

Example

using (var context = new PetshopDataContext())
{
    // Cache a result set. (A query is made to the DB.)
    var birds = context.Product.GetByCategoryId("BIRDS").FromCache().ToList();
    // Get a single entity from that cache. (No query is made to the DB.)
    var firstBird = context.Product.GetByCategoryId("BIRDS").FromCache().FirstOrDefault();

    // Specify number of seconds to cache. (A query is made to the DB.)
    var penguin = context.Product.GetByName("Penguin").FromCache(60).FirstOrDefault();
    // Get the same result set back as a list. (No query is made to the DB.)
    var penguins = context.Product.GetByName("Penguin").FromCache(60).ToList();
}

Configuration

This feature is not yet available in an official PLINQO release; to use the cache, you will have to download the latest PLINQO nightly build.

To access the FromCache extension method you must...

  1. Include a reference to the following assemblies...
    1. CodeSmith.Data
    2. System.Data.Linq
    3. System.Data.Services
    4. System.Web
  2. Include a using/import statement for the CodeSmith.Data.Linq namespace.

Friday, February 20, 2009

Defining Enum Tables

Back in January I posted about "Defining Many To Many" tables; and now (to take a note from the Colbert Report), here is Part 2 in my infinite part series...

Better Know an ORM Programmatic Definition: Defining Enum Tables!

The idea here is that we want to generate Enums from database tables; so for each table we specify as an Enum table, we want to create an Enum and populate its values from the rows in that table. As always, the goal is to make this solution as generic as possible. We want it to work on pretty much any database we throw at it, we want it to check for the usual pitfalls, and of course we want what we generate to be as useful as possible! Let's begin with an example...

Example Table (Input)

Table Name: StarTrek
Columns: Id (int), Name (string), Captain (string)
Row 1: "Original", "James Tiberius Kirk"
Row 2: "Animated", "James Tiberius Kirk"
Row 3: "TNG", "Jean-Luc Picard"
Row 4: "DS9", "Benjamin Sisko"
Row 5: "Voyager", "Kathryn Janeway"
Row 6: "Enterprise", "Scott Bakula? Seriously?"

Example Enum (Output)

public enum StarTrek : int
{
    /// <summary>James Tiberius Kirk</summary>
    Original = 1,
...etc...
    /// <summary>Scott Bakula? Seriously?</summary>
    Enterprise = 6
}

So, what is the logic?

1) Explicitly select the table.

While associations can be determined by examining keys and other qualities, Enum tables just don't have enough unique qualities to identify in that manner; thus we will want to explicitly choose our Enum tables. For this task I recommend a Regex for Enum table names; however, you could always use a good old-fashioned table list. Now that we have identified that the table SHOULD be an Enum table, we need to determine if it CAN be one...

2) The table must have a primary key.

Enum values can be assigned a numeric value, so to allow for association mapping and logical comparisons it's a good idea that our generated Enums are assigned meaningful values at generation time. NOTE: While assigning numeric values to Enum members can serve many different purposes, the following was specifically chosen because it allows Enums to act as database associations for business entity objects.

2.A) The table must have a primary key composed of a single column. (It's hard to have a composite key that evaluates to a single numerical value.)

2.B) The primary key column must be of a number type. (This is so that the Enum values can be assigned to the key value.)

3) The table must have a column to specify the values.

Well, if the table is the Enum itself, where are the values going to come from? You have to choose which column the value is going to come out of! Again I recommend using a Regex to find this column by name, but if that fails (or if you are feeling lazy) you could default to taking the first column of a string type.

4) The table must have at least one row.

Firstly, this is because there's not a lot of use for an empty Enum; but also, some languages (such as VB) don't support it.

Additionally

When generating the enums, it might come in handy to generate the description for each value as well (as we did in the example above); so the code below includes an extra function for finding that description column with (surprise, surprise) a Regex.

And finally, here is what your code might look like...

public static Regex EnumTableNameRegex = new Regex("(E|e)num$", RegexOptions.Compiled);
public static Regex EnumValueColumnRegex = new Regex("((V|v)alue)|((N|n)ame)|((T|t)ype(C|c)ode)", RegexOptions.Compiled);
public static Regex EnumDescriptionColumnRegex = new Regex("(D|d)esc", RegexOptions.Compiled);

public bool IsEnum(TableSchema table)
{
    return EnumTableNameRegex.IsMatch(table.Name)                  // 1) Matches the enum table name regex.
        && table.PrimaryKey != null                                // 2) Has a primary key...
        && table.PrimaryKey.MemberColumns.Count == 1               //    a) ...that is a single column...
        && IsNumeric(table.PrimaryKey.MemberColumns[0].SystemType) //    b) ...of a number type.
        && !string.IsNullOrEmpty(GetEnumValueColumn(table))        // 3) Contains a value column.
        && table.GetTableData().Rows.Count > 0;                    // 4) Has at least one row.
}

private bool IsNumeric(Type t)
{
    return t == typeof(byte)
           || t == typeof(sbyte)
           || t == typeof(short)
           || t == typeof(ushort)
           || t == typeof(int)
           || t == typeof(uint)
           || t == typeof(long)
           || t == typeof(ulong);
}

public string GetEnumValueColumn(TableSchema table)
{
    string result = GetEnumColumn(table, EnumValueColumnRegex);

    // If no Regex match found, use first column of type string.
    if (string.IsNullOrEmpty(result))
        foreach (ColumnSchema column in table.Columns)
            if (column.SystemType == typeof(string))
            {
                result = column.Name;
                break;
            }

    return result;
}

private string GetEnumColumn(TableSchema table, Regex regex)
{
    string result = string.Empty;

    foreach (ColumnSchema column in table.Columns)
        if (regex.IsMatch(column.Name))
        {
            result = column.Name;
            break;
        }

    return result;
}

public string GetEnumDescriptionColumn(TableSchema table)
{
    return GetEnumColumn(table, EnumDescriptionColumnRegex);
}
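
To round out the example, here is a hedged sketch of the final generation step: emitting C# enum source from the identified columns and rows. It assumes GetTableData() returns a DataTable (as the Rows check above implies) and that the values in the name column are already valid C# identifiers; GenerateEnum is an illustrative helper, not part of the original template.

// Uses System, System.Data, and System.Text in addition to the schema API above.
public string GenerateEnum(TableSchema table)
{
    string valueColumn = GetEnumValueColumn(table);
    string descriptionColumn = GetEnumDescriptionColumn(table);
    string keyColumn = table.PrimaryKey.MemberColumns[0].Name;

    var sb = new StringBuilder();
    sb.AppendFormat("public enum {0} : int{1}{{{1}", table.Name, Environment.NewLine);

    foreach (DataRow row in table.GetTableData().Rows)
    {
        // The description becomes the XML doc comment, when a description column was found.
        if (!string.IsNullOrEmpty(descriptionColumn))
            sb.AppendFormat("    /// <summary>{0}</summary>{1}", row[descriptionColumn], Environment.NewLine);

        // The member name comes from the value column, its numeric value from the primary key.
        sb.AppendFormat("    {0} = {1},{2}", row[valueColumn], row[keyColumn], Environment.NewLine);
    }

    sb.Append("}");
    return sb.ToString();
}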

Thursday, January 8, 2009

Defining Many To Many

There are a lot of simple tasks that we humans can do with little effort but that, when put into logic, become quite difficult to define. A great example of this is many-to-many table associations.

So, how do you programmatically take a table and tell if it is a many-to-many association? My train of thought started off simple: the table has two foreign key columns. Then the flaws started rolling in...

  1. Regular tables could have two foreign keys.
  2. It can't just be two columns, there could be time stamps or version numbers.
  3. It can't just be two key columns, because there could be composite keys.
  4. It may or may not have a primary key.
  5. The primary key could be a composite key on the foreign keys.

...it seems that I had taken my human brain (specifically its mad pattern recognition skillz) for granted! :) So after a brief discussion in the office (and on our forums), we came up with the following logic:

  1. Table must have Two ForeignKeys.
  2. All columns must be either...
    1. Member of the Primary Key.
    2. Member of a Foreign Key.
    3. A DateTime stamp (CreateDate, EditDate, etc).
    4. Match our Row Version regular expression.

Of course, there could always be other things out there that we didn't think of. In this world there are many technologies, with many conventions, used by many programmers, all unique in their own way. So, unfortunately, there is no truly simple answer, nor is there a perfect solution...however, that is why we here at CodeSmith always try to be as generic and flexible as possible in our designs! Also, it's why we love to use things like Extended Properties, and how we get our last criterion:

0) Bypass logic if table contains Extended Property for ManyToMany

So, finally, here is what the code might look like...

Note: This is (a slightly modified version of) what is in our NHibernate templates.

public static bool IsManyToMany(TableSchema table)
{
    // 0) Bypass logic if table contains Extended Property for ManyToMany
    if (table.ExtendedProperties.Contains("CS_ManyToMany"))
        return true;

    // 1) Table must have Two ForeignKeys.
    // 2) All columns must be either...
    //    a) Member of the Primary Key.
    //    b) Member of a Foreign Key.
    //    c) A DateTime stamp (CreateDate, EditDate, etc).
    //    d) Name matches Version Regex.

    if(table.ForeignKeys.Count != 2)
        return false;

    bool result = true;
    var versionRegex = new Regex("(V|v)ersion");

    foreach (ColumnSchema column in table.Columns)
    {
        if (!( column.IsForeignKeyMember
            || column.IsPrimaryKeyMember
            || column.SystemType.Equals(typeof(DateTime))
            || versionRegex.IsMatch(column.Name)))
        {
            result = false;
            break;
        }
    }

    return result;
}
