[.NetWorld] CQRS: Command Query Responsibility Segregation Design Pattern

I was recently turned onto the Command Query Responsibility Segregation (CQRS) design pattern by a co-worker. One of the biggest benefits of CQRS is that it aids in implementing distributed, highly scalable systems. This notion can be intimidating, but at the heart of CQRS there are rather simple guidelines to follow. Now let’s dive in and explore what this pattern is and some ways of implementing it.

Purpose of Command Query Responsibility Segregation (CQRS)

The main purpose of CQRS is to assist in building high performance, scalable systems with large amounts of data.

Derives from Command Query Separation (CQS)

The basis for the CQRS pattern is the Command Query Separation (CQS) design pattern devised by Bertrand Meyer. The pattern states there should be complete separation between “command” methods that perform actions and “query” methods that return data.

Here’s a really simplistic object-oriented example of Command Query Separation in C#:

// Simple CQS Example
using System.Collections.Generic;

public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DataStore {
    private readonly Dictionary<int, Person> _people = new Dictionary<int, Person>();

    // Query Method: returns data and causes no side effects
    public Person GetPerson(int id) {
        // query data storage for a specific Person by Id and return it
        return _people[id];
    }

    // Command Methods: change state and return nothing
    public void Insert(Person p) {
        // insert the Person into data storage
        _people[p.Id] = p;
    }

    public void UpdateName(int id, string name) {
        // find the Person in data storage by Id and update its name
        _people[id].Name = name;
    }
}

The above example has clear separation between the Query method “GetPerson” that retrieves data, and the Command methods that insert or update data.

Adding Responsibility Segregation

Next, CQRS takes “Separation” from CQS and turns it into “Segregation” to completely pull apart the Responsibilities of Command and Query methods to place them in separate contexts.

Here’s a simple example in C# of taking the above CQS example and adding the “Responsibility Segregation” concept to it:

// Responsibility Segregation Example
using System.Collections.Generic;

public class QueryDataStore {
    private readonly IDictionary<int, Person> _people;
    public QueryDataStore(IDictionary<int, Person> people) { _people = people; }

    // Query side: read-only access to the data
    public Person GetPerson(int id) {
        // query data storage for a specific Person by Id and return it
        return _people[id];
    }
}

public class CommandDataStore {
    private readonly IDictionary<int, Person> _people;
    public CommandDataStore(IDictionary<int, Person> people) { _people = people; }

    // Command side: state changes only
    public void Insert(Person p) {
        // insert the Person into data storage
        _people[p.Id] = p;
    }

    public void UpdateName(int id, string name) {
        // find the Person in data storage by Id and update its name
        _people[id].Name = name;
    }
}

The seemingly simple change of completely separating Command and Query methods has some fairly big implications for the way you implement the Command and Query methods themselves. By breaking them apart into completely separate contexts, they must be able to function in isolation, completely independent of each other. This means that the Command object in the above example must not have a hard dependency on the Query object. If they depended on each other, the design would still be CQS rather than CQRS.

Here’s a simple diagram to help clarify the separation of Command and Query as it pertains to CQRS:

Separate Models for Command and Query

The way CQRS enforces Responsibility Segregation is by requiring separate models for Command methods and for Query methods. The above responsibility-segregation example would then be built out so that the Query class and Command class can operate completely independently, without either one having dependencies on the other. One of the key principles here is that CQRS is really meant to allow for multiple Query and/or Command classes, each with its own methods, used when its unique circumstances require. For example, there may be one Query class for simple data retrieval and a separate Query class for a more complex, power-user search.
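To make this concrete, here’s a minimal C# sketch of separate models (the class and property names are illustrative, not part of the original example). The query side returns a flat read model shaped for display, the command side accepts an explicit command object, and neither type references the other:

// Query side: a read model shaped purely for display, possibly denormalized
public class PersonReadModel {
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

// Command side: an explicit command object describing one state change
public class ChangePersonNameCommand {
    public int PersonId { get; set; }
    public string NewName { get; set; }
}

public interface IPersonQueries {
    PersonReadModel GetPerson(int id);
}

public interface IPersonCommandHandler {
    void Handle(ChangePersonNameCommand command);
}

A search screen would depend only on IPersonQueries, and an edit screen only on IPersonCommandHandler, so each side can evolve, scale, or be replaced independently.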

CQRS + Event Sourcing

Event Sourcing is a separate design pattern in its own right, but it is necessary to integrate it with CQRS when building systems that have distributed data along with requirements for high performance and scalability, as these are the systems CQRS was really developed for. Event Sourcing allows the CQRS implementation to have a separate data store (or database) for consistent, stable data with change tracking, while easily maintaining separate data stores (or databases) that have eventual consistency.

Here are some examples of systems that would benefit from a CQRS-plus-Event-Sourcing architecture:

  1. A system that has the main database used for editing data, and a separate Reporting database that is synced with eventual consistency.
  2. A system that has the main database used for editing, with a separate database used for extremely specific query operations where searching and reporting have their own data models to facilitate higher query performance.

Here’s a simple diagram to help clarify how CQRS can be combined with Event Sourcing:

It is important to remember that Event Sourcing is not a requirement of CQRS unless the system architecture utilizes distributed data. Event Sourcing gives the system the ability to maintain eventual consistency of the Query models while keeping the Command model as the source of that consistency. Without Event Sourcing there really isn’t any effective way to build a system using CQRS and distributed data.
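As a rough sketch of that flow (all names here are illustrative, not a production event store), the command side appends change events to an append-only store, and a query-side projection applies each event to its own read store, which becomes consistent eventually rather than immediately:

using System;
using System.Collections.Generic;

// An event recorded by the command side
public class PersonNameChanged {
    public int PersonId { get; set; }
    public string NewName { get; set; }
}

// Append-only event store: the Command model's source of truth, with change tracking built in
public class EventStore {
    private readonly List<object> _events = new List<object>();
    public event Action<object> EventAppended;

    public void Append(object evt) {
        _events.Add(evt);
        if (EventAppended != null) EventAppended(evt);
    }
}

// Query-side projection: maintains its own read store from the event stream
public class PersonNameProjection {
    private readonly Dictionary<int, string> _namesById = new Dictionary<int, string>();

    public void Subscribe(EventStore store) {
        store.EventAppended += evt => {
            var changed = evt as PersonNameChanged;
            if (changed != null) _namesById[changed.PersonId] = changed.NewName;
        };
    }

    public string GetName(int id) { return _namesById[id]; }
}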

Single Database for both Command and Query

Just because you have separate models for Command methods and Query methods doesn’t mean the data can’t be stored and queried in the same place. This is an implementation detail for you to decide when using the pattern; however, the biggest benefits of CQRS come when using it to maintain completely separate data stores for Writing data (Command) and for Reading data (Query). CQRS is really meant for building systems with distributed data, high performance, and scalability.

New Paradigm in System Design

There’s a bit of mysticism in the industry as to what CQRS is. Some say it’s a philosophy and not even an architectural pattern. Before writing this article, I read through various opinions on what CQRS is and how it relates to Event Sourcing. I have come to the conclusion that CQRS is an extremely simple design pattern, just as all design patterns should be. However, the level of abstraction that it facilitates creates a huge shift in the way that software systems work with data. Instead of designing a single data access layer that utilizes a single, traditional data store, CQRS opens software architecture up to a new paradigm. This new paradigm breaks the software free from the restrictiveness of the vertical, monolithic data store, allowing the data store to be built in the same modular fashion as clean code. CQRS facilitates the storage, retrieval, and editing of distributed data.

While the paradigm shift toward distributed data is still fairly new to the software industry, it is most definitely the direction that things are moving. The “NoSQL” movement of recent years is a testament to the fact that developers and architects everywhere are discovering the need to handle large amounts of data more effectively, in a distributed fashion that allows for much greater flexibility and scalability.

ref: http://pietschsoft.com/post/2014/06/15/CQRS-Command-Query-Responsibility-Segregation-Design-Pattern

[.NetWorld] SharePoint Architecture – What are we talking about here?

As I announced in my previous post, I will start a series of architecture-related SharePoint articles on this blog. This is prompted by the lack of proper architecture in a huge number of the SharePoint applications I have seen. That, in turn, has numerous causes, but nevertheless: we have somehow come to the point where it has become acceptable that the front end talks directly to the back end. Say, a SharePoint web part communicating directly with the data in a SharePoint list.

Well, it is not acceptable, not in any other technology, and not in SharePoint.

As I have written before, this series of articles will be accompanied by a CodePlex project, where all of the stuff I talk about in these articles will be backed by real, living code. The test solution will be about conference organization: if you have ever been to a SharePoint conference, or any other development conference for that matter, you know that there is a lot of stuff to be done – from the speakers’ perspective (speaker bios, session abstracts, session schedule…), but mostly from the visitors’ perspective (applying for conference participation, registering for a single session, rating a session…).

Of course, we will want to have a nice, modern way of doing things – we want visitors to register for a session simply by scanning a QR code on the door. We want them to be able to rate a session in a nice web interface on the conference web page. Even better, with a Windows Phone 7 app. OK, maybe even with a mobile-optimized HTML5 page (there are still some folks out there who are not using Windows Phone, for whatever reason that might be).

Conference administrators, on the other hand, will mainly use the standard SharePoint interface for managing the conference data – visitors, sessions, schedules, etc. But we want to make their lives a bit easier – we want them to know the session ratings immediately after the visitors have given them. We want them to know the session visitors immediately after the session ends. And we would like to give them a nice geographical distribution of visitors, both as an overview for the whole conference and for each single session.

This will be our project. As you can see, a lot of work is waiting there, but we have to start somehow.

It is obvious, even now at the beginning, that a solution without architecture would not give us any benefits here. Just consider the task of rating a single session. We have said we want it to be possible through the web interface – let’s say we need a web part for it. Then we have said that we want to make it possible through the WP7 app. And, in the end, we want a sleek app for the non-Windows Phone mobile devices. Should we then develop this logic three times? First we talk directly to SharePoint from the web part. Then we develop the same thing for the Windows Phone. Then we develop a special page optimized for the other mobile devices. Now, that does not make any sense, does it? We need to have one piece of code for rating the presentations. Our web part, our mobile-optimized web page, our WP7 app – they all need to talk to that piece of code. So when we change the rating logic, it’s changed everywhere.
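In code terms, that one piece of rating code boils down to a single component behind an interface. A tiny hedged sketch (the names are mine; the real solution comes in the later articles):

// One shared rating component; the web part, the WP7 app, and the
// mobile HTML5 page all call this instead of talking to SharePoint directly
public interface ISessionRatingService {
    void RateSession(int sessionId, int visitorId, int rating);
    double GetAverageRating(int sessionId);
}

Change the rating rules inside the one class that implements this interface, and all three clients pick up the change without a single line of theirs being touched.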

Not to mention how much easier testing of that code will be when we write it only once. And testing is also kind of important. As we see, it’s all about architecture.

So, what does a generic architecture for a complex SharePoint solution look like? Well, there is no uniform answer. It all depends on the needs of the application. But the next picture gives an overview of some standard cornerstones which should be found in the solution.

APArchitecture – Simplified

We see quite a number of boxes here, even if it is a simplified model. It would be too much to explain it all in one article, so let’s just identify the architecture layers we have used in this example:

SharePoint UI Layer

This is all about SharePoint, so let’s start with the stuff which should be well known for most of the SharePoint developers – SharePoint UI Layer. Web parts, ASPX pages, workflows, event receivers…

Wait a moment. Workflows and event receivers are in the UI Layer?! Well, kind of. Of course they are not really UI, but when you think about it, they actually do typical UI tasks: they trigger business processes. Of course we can make them actually EXECUTE business processes, but we can do that in a web part as well, can’t we?

You get the idea – a web part, an ASPX page – it is all UI. They don’t do any business logic: they interact with the user, they collect data, and they hand that data to someone else to do the dirty work. If we think about our example solution – conference organization – this is where the visitors will give ratings to the sessions. They will see a form, they will enter ratings, and they will click the OK button. That’s it.

Business Layer

This is where the dirty work gets done: we actually need to DO something with the input data that arrives here. And yes, we also define our data model here (data entities). If you think from the SharePoint perspective, you don’t want to write a series of strings and numbers (which represent our rating) into the fields of an SPListItem. Of course you can do that, but then good luck with validation, testing, debugging, and, oh yes, with maintaining that code. That is the reason why we will have a Rating entity, where we will store our ratings. We will have a number of supporting methods for that rating – to validate the rating, to calculate the average rating (not as simple as you think!), well, a number of different things to do. You can’t do that with the values of the fields in an SPListItem object.

And yes, this is theoretically the only layer of the solution which you will never want to replace. You might improve it, you might add some new functionality, but this is the ONLY part of the solution which you won’t swap out. You can replace the front end – you can develop new web parts, you can develop new interfaces for new devices, but they will all talk to your business logic. You can also replace the back end – your customer might tell you that she has, after a long and painful thinking process, decided that she does not want SharePoint Server on the back end. She wants SQL Server. You might curse, but you will make a new Data Access Layer, with new persistence methods. Your business logic stays the same. It didn’t change, did it?

Data Access Layer

And this is where you write your data to the SharePoint, and where you read it from there. You don’t do any calculations here (like calculation of the average rating), you simply read and write your data. It is more than enough work for this piece of code.

And you will want to have more than one Data Access Layer implementation in your solution. You will at least want a mock implementation, so you can run isolated tests without bothering about SharePoint. Or, from the example above, you might want to implement an alternative, SQL Server Data Access Layer. All this can happen. So, this is why you need an interface. Your interface basically states what the Data Access Layer has to do, and the different implementations of this interface (SharePoint, SQL, Mock…) will do it. Since this interface is closely related to our business entities, it is stored in the Business Layer, but all of the different Data Access Layer implementations will implement that interface. It might seem odd at first, but it is actually quite handy.
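Here is a minimal sketch of that arrangement (type and member names are hypothetical, including the pared-down Rating entity): the interface sits in the Business Layer next to the entities, and a mock implementation lives in its own assembly for testing.

using System.Collections.Generic;
using System.Linq;

// A pared-down stand-in for the Rating business entity discussed above
public class Rating {
    public int SessionId { get; set; }
    public int VisitorId { get; set; }
    public int Value { get; set; }
}

// Defined in the Business Layer, next to the entities it works with
public interface IRatingRepository {
    void Save(Rating rating);
    IEnumerable<Rating> GetForSession(int sessionId);
}

// Mock implementation for isolated tests: no SharePoint anywhere in sight
public class InMemoryRatingRepository : IRatingRepository {
    private readonly List<Rating> _ratings = new List<Rating>();

    public void Save(Rating rating) { _ratings.Add(rating); }

    public IEnumerable<Rating> GetForSession(int sessionId) {
        return _ratings.Where(r => r.SessionId == sessionId);
    }
}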

There is another challenge with the SharePoint implementation of the Data Access Layer: it needs to be aware of the context. Well, you might think, don’t we know the context from the front end – the SharePoint UI elements? Yes, we do, but we have the Business Layer in between, and it has no idea about the SharePoint context. Why should it? It shouldn’t even know where we are storing the data that it is manipulating. And, if you think about it, our WP7 and HTML clients will also not be aware of the context. These are the challenges we will deal with in the following articles.

Data layer

The Data layer is pure SharePoint Server, SQL Server, or whatever we want it to be. In the case of SharePoint, we need to configure it: create lists and libraries, create workflows, change different settings.

Infrastructure Layer

This is where we do our common stuff: logging, application configuration, exception handling, dealing with localization (a huge issue in SharePoint), and similar things. Where are we going to log? ULS is the natural answer for SharePoint, but what if we want to switch the logging to, say, the Event Log, or to a local text file? Do we need to refactor our solution to change that in every piece of code where we have used logging? Do we have different logging implementations in the front end and the business layer (the front end might not be SharePoint-aware)? And how do we configure that logging, and the application in general?

All those questions will be dealt with in the Infrastructure Layer – I can already tell you that it will contain a number of interfaces, and a number of implementations of those interfaces. A huge portion of this series of articles will be devoted to the infrastructure layer.
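As a small taste of what is coming, here is a hedged sketch of the logging abstraction (the interface and enum are mine, not the final design):

public enum LogLevel { Verbose, Information, Warning, Error }

// The rest of the solution only ever sees this interface; one implementation
// writes to ULS, another to the Event Log, another to a text file, so switching
// the logging target never touches the calling code
public interface ILogger {
    void Log(string message, LogLevel level);
}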

Service Layer

We need to expose our business logic to the outer world – we might have some quite different client implementations. Some people are still using iPhones. Scary, I know, but that’s a fact. And different clients can have different ways of communicating. No problem, we can cover them all – WCF, REST, even the good ol’ ASMX is not quite dead yet (and it is aware of the SharePoint context). This stuff will be our interface to the world. Whatever might be out there.

UI Layer

This is all the UI which is not SharePoint UI. And as we have said, it can be a lot of things: a Silverlight application, different .NET managed clients, web stuff, interfaces to other applications, whatever you might think of. Very often you will not even control it – there will be other people and applications who will want to connect to you. It’s all UI to you.

Wait, Silverlight? And managed .NET applications? Don’t we have the CSOM (SharePoint Client Object Model) now? Aren’t we reinventing the wheel with this? No, we are not. Or do you want to develop the session-rating logic again in your Silverlight client? And when the rating logic changes, you need to change it in two different pieces of code? We don’t want that.

Is the CSOM obsolete then? Useless? Not at all. CSOM is a great thing if you have a LOB application which needs to exchange data with SharePoint, or to use SharePoint’s collaboration features – document storage, collaboration, versioning… This is where the CSOM is your (very) good friend. When your business logic stays in the external LOB application, and you just need a way to persist the data in SharePoint, that is the playground for CSOM. But you shouldn’t implement the business logic with CSOM, for the numerous reasons which were all stated above at least once.

But that is enough for now. In the next article, I will describe our Conference Organization solution and its parts, and finally start coding. Until then, cheers.

(Ref: http://blog.sharedove.com/adisjugo/index.php/2011/09/03/sharepoint-architecture-what-are-we-talking-about-here)

[.NETWorld] Database Initialization Strategies in Code-First:

You already created the database after running your Code First application for the first time, but what about the second time onwards? Will it create a new database every time you run the application? What about the production environment? How do you alter the database when you change your domain model? To handle these scenarios, you have to use one of the database initialization strategies.

There are four different database initialization strategies:

  1. CreateDatabaseIfNotExists: This is the default initializer. As the name suggests, it will create the database if it does not exist, as per the configuration. However, if you change the model classes and then run the application with this initializer, it will throw an exception.
  2. DropCreateDatabaseIfModelChanges: This initializer drops the existing database and creates a new one if your model classes (entity classes) have changed. So you don’t have to worry about maintaining your database schema when your model classes change.
  3. DropCreateDatabaseAlways: As the name suggests, this initializer drops the existing database every time you run the application, irrespective of whether your model classes have changed or not. This is useful when you want a fresh database every time you run the application while developing.
  4. Custom DB Initializer: You can also create your own custom initializer if none of the above satisfies your requirements, or if you want to run some other process when the database is initialized.

To use one of the above DB initialization strategies, you have to set the DB initializer using the Database class in your context class, as follows:

     
    public class SchoolDBContext: DbContext 
    {

        public SchoolDBContext(): base("SchoolDBConnectionString") 
        {
            Database.SetInitializer<SchoolDBContext>(new CreateDatabaseIfNotExists<SchoolDBContext>());

            //Database.SetInitializer<SchoolDBContext>(new DropCreateDatabaseIfModelChanges<SchoolDBContext>());
            //Database.SetInitializer<SchoolDBContext>(new DropCreateDatabaseAlways<SchoolDBContext>());
            //Database.SetInitializer<SchoolDBContext>(new SchoolDBInitializer());
        }
        public DbSet<Student> Students { get; set; }
        public DbSet<Standard> Standards { get; set; }
    }

You can also create your own custom DB initializer by inheriting from one of the above initializers, as shown below:

    
    public class SchoolDBInitializer :  DropCreateDatabaseAlways<SchoolDBContext>
    {
        protected override void Seed(SchoolDBContext context)
        {
            base.Seed(context);
        }
    }

As you can see in the above code, we have created a new class, SchoolDBInitializer, which is derived from the DropCreateDatabaseAlways initializer.
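The Seed override is where you would insert any startup data. A small sketch, assuming for illustration that the Student entity exposes a StudentName property:

    public class SchoolDBInitializer : DropCreateDatabaseAlways<SchoolDBContext>
    {
        protected override void Seed(SchoolDBContext context)
        {
            // Insert default rows every time the database is (re)created;
            // StudentName is an assumed property on the Student entity
            context.Students.Add(new Student { StudentName = "Student 1" });
            context.Students.Add(new Student { StudentName = "Student 2" });
            context.SaveChanges();

            base.Seed(context);
        }
    }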

 

Set db initializer in the configuration file:

 

You can also set the DB initializer in the configuration file. For example, to set the default initializer in app.config:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <appSettings>
        <add key="DatabaseInitializerForType SchoolDataLayer.SchoolDBContext, SchoolDataLayer"         
            value="System.Data.Entity.DropCreateDatabaseAlways`1[[SchoolDataLayer.SchoolDBContext, SchoolDataLayer]], EntityFramework" />
      </appSettings>
    </configuration>

You can set a custom DB initializer as follows:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <appSettings>    
        <add key="DatabaseInitializerForType SchoolDataLayer.SchoolDBContext, SchoolDataLayer"
             value="SchoolDataLayer.SchoolDBInitializer, SchoolDataLayer" />
      </appSettings>
    </configuration>

In this way, you can choose the DB initialization strategy that fits your application.

[.NETWorld] Code First: Inside DbContext Initialization

A lot of stuff happens when you use a DbContext instance for the first time. Most of the time you don’t worry about this stuff, but sometimes it’s useful to know what’s happening under the hood. And even if it’s not useful, it’s hopefully interesting for its geek value alone.

 

Note that even though there is a lot of detail below I’ve actually simplified things quite a lot to avoid getting totally bogged down in code-like details. Also, I’m writing this from memory without looking at the code so forgive me if I forget something. 🙂

Creating a DbContext instance

Not very much happens when the context instance is created. The initialization is mostly lazy so that if you never use the instance, then you pay very little cost for creating the instance.

It’s worth noting that SaveChanges on an un-initialized context will also not cause the context to be initialized. This allows patterns that use auto-saving to be implemented very cheaply when the context has not been used and there is therefore nothing to save.

One thing that does happen at this stage is that the context is examined for DbSet properties and these are initialized to DbSet instances if they have public setters. This stops you getting null ref exceptions when you use the sets but still allows the sets to be defined as simple automatic properties. The delegates used to do this are cached in a mechanism similar to the one described here.
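For example, a context declared like this (the context and entity names are illustrative) gets non-null set properties even though nothing in the class ever assigns them:

    using System.Data.Entity;

    public class BlogContext : DbContext
    {
        // Simple automatic properties with public setters: DbContext discovers
        // these on construction and assigns DbSet instances to them, so using
        // Blogs or Posts never throws a null reference exception
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }

    public class Blog { public int Id { get; set; } }
    public class Post { public int Id { get; set; } }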

DbContext initialization

The context is initialized when the context instance is used for the first time. “Use” means any operation on the context that requires database access or use of the underlying Entity Data Model (EDM). The initialization steps are:

  1. The context tries to find a connection or connection string:
    1. If a DbConnection was passed to the constructor, then this is used.
    2. Else, if a full connection string was passed, then this is used.
    3. Else, if the name of a connection string was passed and a matching connection string is found in the config file, then this is used.
    4. Else, the database name is determined from the name passed to the constructor or from the context class name and the registered IConnectionFactory instance is used to create a connection by convention.
  2. The connection string is examined to see if it is an Entity Framework connection string containing details of an EDM to use or if it is a raw database connection string containing no model details.
    1. If it is an EF connection string, then an underlying ObjectContext is created in Model First/Database First mode using the EDM (the CSDL, MSL, and SSDL from the EDMX) in the connection string.
    2. If it is a database connection string, then the context enters Code First mode and attempts to build the Code First model as described in the next section.

I made a post on the EF Team blog that describes some of the connection handling in more detail.
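As a quick illustration of those rules, here is a sketch of the three constructor shapes and the branch each one hits (the context name is mine; the base-constructor overloads are standard DbContext API):

    using System.Data.Common;
    using System.Data.Entity;

    public class BlogContext : DbContext
    {
        // Step 1.4: no argument, so the database name comes from the context class
        // name and the registered IConnectionFactory builds the connection
        public BlogContext() : base() { }

        // Steps 1.2/1.3: a full connection string, or the name of a connection
        // string found in the config file
        public BlogContext(string nameOrConnectionString) : base(nameOrConnectionString) { }

        // Step 1.1: an existing DbConnection is used as-is
        public BlogContext(DbConnection existingConnection) : base(existingConnection, contextOwnsConnection: false) { }
    }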

Building the Code First model

The EDM used by Code First for a particular context type is cached in the app-domain as an instance of DbCompiledModel. This caching ensures that the full Code First pipeline for building a model only happens once when the context is used for the first time. Therefore, when in Code First mode:

  1. DbContext checks to see if an instance of DbCompiledModel has been cached for the context type. If the model is not found in the cache, then:
    1. DbContext creates a DbModelBuilder instance.
      1. By default, the model builder convention set used is Latest. A specific convention set can be used by setting the DbModelBuilderVersionAttribute on your context.
    2. The model builder is configured with each entity type for which a DbSet property was discovered.
      1. The property names are used as the entity set names, which is useful when you’re creating something like an OData feed over the model.
    3. The IncludeMetadataConvention convention is applied to the builder. This will include the EdmMetadata entity in the model unless the convention is later removed.
    4. The ModelContainerConvention and ModelNamespaceConvention are applied to the builder. These will use the context name as the EDM container name and the context namespace as the EDM namespace. Again, this is useful for services (like OData) that are based on the underlying EDM.
    5. OnModelCreating is called to allow additional configuration of the model.
    6. Build is called on the model builder.
      1. The model builder builds an internal EDM model representation based on configured types and reachability from those types and runs all the Code First conventions which further modify the model/configuration.
        1. The connection is used in this process since the SSDL part of the model depends on the target database, as represented by the provider manifest token.
    7. Compile is called on the DbModel to create a DbCompiledModel. DbCompiledModel is currently a wrapper around the MetadataWorkspace.
      1. The model hash is also created by the call to compile.
    8. The DbCompiledModel is cached.
  2. The DbCompiledModel is used to create the underlying ObjectContext instance.

Database initialization

At this point we have an underlying ObjectContext, created either through Code First or using the EDM in the connection string.

DbContext now checks whether or not database initialization has already happened in the app-domain for the type of the derived DbContext in use and for the database connection specified. If initialization has not yet happened, then:

  1. DbContext checks whether or not an IDatabaseInitializer instance has been registered for the context type.
    1. If no initializer (including null) has been explicitly registered then a default initializer will be automatically registered.
      1. In Code First mode, the default initializer is CreateDatabaseIfNotExists.
      2. In Database/Model First mode, the default initializer is null, meaning that no database initialization will happen by default. (Because your database almost always already exists in Database/Model First mode.)
  2. If a non-null initializer has been found, then:
    1. A temporary ObjectContext instance is created that is backed by the same EDM as the real ObjectContext. This temp is used by the DbContext instance for all work done by the initializer and then thrown away. This ensures that work done in the initializer does not leak into the context later used by the application.
    2. The initializer is run. Using the Code First default CreateDatabaseIfNotExists as an example, this does the following:
      1. A check is made to see whether or not the database already exists.
      2. If the database does not exist, then it is created:
        1. This happens through the CreateDatabase functionality of the EF provider. Essentially, the SSDL of the model is the specification used to create DDL for the database schema which is then executed.
          1. If the EdmMetadata entity was included in the model, then the table for this is automatically created at the same time since it is part of the SSDL just like any other entity.
        2. If the EdmMetadata entity was included in the model, then the model hash generated by Code First is written to the database by saving an instance of EdmMetadata.
        3. The Seed method of the initializer is called.
        4. SaveChanges is called to save changes made in the Seed method.
      3. If the database does exist, then a check is made to see if the EdmMetadata entity was included in the model and, if so, whether there is also a table with a model hash in the database.
        1. If EdmMetadata is not mapped or the database doesn’t contain the table, then it is assumed that the database matches the model. This is what happens when you map to an existing database, and in this case it is up to you to ensure that the model matches the database. (Note DropCreateDatabaseIfModelChanges would throw in this situation.)
        2. Otherwise, the model hash in the database is compared to the one generated by Code First. If they don’t match, then an exception is thrown. (DropCreateDatabaseIfModelChanges would drop, recreate, and re-seed the database in this situation.)
    3. The temporary ObjectContext is disposed.
  3. Control returns to whatever operation it was that caused initialization to run.

That’s the basics. Like I mentioned above, I missed some details intentionally, and I probably missed some more by mistake. Hopefully it was somewhat useful/interesting anyway.

Thanks for reading!
Arthur

P.S. There is an alternate theory of how DbContext works that suggests nuget introduces a herd of unicorns into your machine which then run on treadmills to create magic entity juice that in turn magically connects your objects to your database. I cannot comment on this theory without breaking confidentiality agreements I have signed with the unicorn king. Or something.

[.NETWorld] EF 6.1: Creating indexes with IndexAttribute

Since EF 4.3 it has been possible to use CreateIndex and DropIndex in Code First Migrations to create and drop indexes. However, this had to be done manually by editing the migration, because the index was not included anywhere in the Code First model. Now with EF 6.1 it is possible to add index specifications to the model so that creating and dropping indexes can be handled automatically by Migrations.

Single column indexes

Consider a simple Blog entity:

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int Rating { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

Let’s assume this entity is already in our model and migrations have been created and applied so the model and database are both up-to-date. The easiest way to add an index is to place IndexAttribute onto a property. For example, let’s add an index to the column mapped to by the Rating property:

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    [Index]
    public int Rating { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

After doing this, using Add-Migration will scaffold a migration something like this:

public partial class Two : DbMigration
{
    public override void Up()
    {
        CreateIndex("dbo.Blogs", "Rating");
    }
    public override void Down()
    {
        DropIndex("dbo.Blogs", new[] { "Rating" });
    }
}

The index is being created with a default name and default options. The defaults are as follows:

  • Name: IX_[column_name]
  • Not unique
  • Not clustered

You can also use IndexAttribute to give the index a specific name and options. For example, let’s add a name to the index for the Rating column:

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    [Index("RatingIndex")]
    public int Rating { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

Scaffolding another migration for this change results in:

public partial class Three : DbMigration
{
    public override void Up()
    {
        RenameIndex(table: "dbo.Blogs", name: "IX_Rating", newName: "RatingIndex");
    }
    public override void Down()
    {
        RenameIndex(table: "dbo.Blogs", name: "RatingIndex", newName: "IX_Rating");
    }
}

Notice that Migrations has scaffolded a rename for the index from the default name to the new name.

Multiple column indexes

Indexes that span multiple columns can also be scaffolded by using the same index name on multiple properties. For example:

public class Blog
{
    [Index("IdAndRating", 1)]
    public int Id { get; set; }
    public string Title { get; set; }
    [Index("RatingIndex")]
    [Index("IdAndRating", 2, IsUnique = true)]
    public int Rating { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

Notice that the order of columns in the index is also specified. The unique and clustered options can be specified in one or all IndexAttributes. If these options are specified on more than one attribute with a given name then they must match.

Scaffolding a migration for this change results in:

public partial class Four : DbMigration
{
    public override void Up()
    {
        CreateIndex("dbo.Blogs", new[] { "Id", "Rating" }, unique: true, name: "IdAndRating");
    }
    public override void Down()
    {
        DropIndex("dbo.Blogs", "IdAndRating");
    }
}

Index conventions

The ForeignKeyIndexConvention Code First convention causes indexes to be created for the columns of any foreign key in the model unless these columns already have an index specified using IndexAttribute. If you don’t want indexes for your FKs you can remove this convention:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Conventions.Remove<ForeignKeyIndexConvention>();
}

What IndexAttribute doesn’t do

IndexAttribute can be used to create a unique index in the database. However, this does not mean that EF will be able to reason about the uniqueness of the column when dealing with relationships, etc. This feature is usually referred to as support for “unique constraints”, which can be voted for as a feature suggestion and on the CodePlex work item.

[.NETWorld] YouTube Downloader Using C# .NET

Attached new version. :)

Introduction

This article shows how to download YouTube videos using C# code only. The code is very simple to understand, and anyone can easily integrate it into their solution or project.

I didn’t use any third-party library for this task. All you need is to take two .cs files and integrate them into your project.

Using the code

There are two main classes in this project:

YouTubeVideoQuality Class

This is the entity that describes the video.

public class YouTubeVideoQuality 
{
    /// <summary>
    /// Gets or Sets the video title
    /// </summary>
    public string VideoTitle { get; set; }
    /// <summary>
    /// Gets or Sets the file extension
    /// </summary>
    public string Extention { get; set; }
    /// <summary>
    /// Gets or Sets the direct download url
    /// </summary>
    public string DownloadUrl { get; set; }
    /// <summary>
    /// Gets or Sets the youtube video url
    /// </summary>
    public string VideoUrl { get; set; }
    /// <summary>
    /// Gets or Sets the video dimensions (width x height)
    /// </summary>
    public Size Dimension { get; set; }

    public override string ToString()
    {
        return Extention + " File " + Dimension.Width + "x" + Dimension.Height;
    }

    public void SetQuality(string Extention, Size Dimension)
    {
        this.Extention = Extention;
        this.Dimension = Dimension;
    }
}

YouTubeDownloader Class

This class downloads YouTube videos:

public class YouTubeDownloader
{
    public static List<YouTubeVideoQuality> GetYouTubeVideoUrls(params string[] VideoUrls)
    {
        List<YouTubeVideoQuality> urls = new List<YouTubeVideoQuality>();
        foreach (var VideoUrl in VideoUrls)
        {
            string html = Helper.DownloadWebPage(VideoUrl);
            string title = GetTitle(html);
            foreach (var videoLink in ExtractUrls(html))
            {
                YouTubeVideoQuality q = new YouTubeVideoQuality();
                q.VideoUrl = VideoUrl;
                q.VideoTitle = title;
                q.DownloadUrl = videoLink + "&title=" + title;
                if (getQuality(q))
                    urls.Add(q);
            }
        }
        return urls;
    }

    private static string GetTitle(string RssDoc)
    {
        string str14 = Helper.GetTxtBtwn(RssDoc, "'VIDEO_TITLE': '", "'", 0);
        if (str14 == "") str14 = Helper.GetTxtBtwn(RssDoc, "\"title\" content=\"", "\"", 0);
        if (str14 == "") str14 = Helper.GetTxtBtwn(RssDoc, "&title=", "&", 0);
    // encode characters that would otherwise break the title when appended to the download URL
    str14 = str14.Replace(@"\", "").Replace("'", "&#39;").Replace(
            "\"", "&quot;").Replace("<", "&lt;").Replace(
            ">", "&gt;").Replace("+", " ");
        return str14;
    }

    private static List<string> ExtractUrls(string html)
    {
        html = Uri.UnescapeDataString(Regex.Match(html, "url_encoded_fmt_stream_map=(.+?)&", 
                                      RegexOptions.Singleline).Groups[1].ToString());
        MatchCollection matchs = Regex.Matches(html, 
          "url=(.+?)&quality=(.+?)&fallback_host=(.+?)&type=(.+?)&itag=(.+?),", 
          RegexOptions.Singleline);
        bool firstTry = matchs.Count > 0;
        if (!firstTry)
            matchs = Regex.Matches(html, 
                     "itag=(.+?)&url=(.+?)&type=(.+?)&fallback_host=(.+?)&sig=(.+?)&quality=(.+?),{0,1}", 
                     RegexOptions.Singleline);
        List<string> urls = new List<string>();
        foreach (Match match in matchs)
        {
            if (firstTry)
                urls.Add(Uri.UnescapeDataString(match.Groups[1] + ""));
            else urls.Add(Uri.UnescapeDataString(match.Groups[2] + "") + "&signature=" + match.Groups[5]);
        }
        return urls;
    }

    private static bool getQuality(YouTubeVideoQuality q)
    {
        if (q.DownloadUrl.Contains("itag=5"))
            q.SetQuality("flv", new Size(320, 240));
        else if (q.DownloadUrl.Contains("itag=34"))
            q.SetQuality("flv", new Size(400, 226));
        else if (q.DownloadUrl.Contains("itag=6"))
            q.SetQuality("flv", new Size(480, 360));
        else if (q.DownloadUrl.Contains("itag=35"))
            q.SetQuality("flv", new Size(640, 380));
        else if (q.DownloadUrl.Contains("itag=18"))
            q.SetQuality("mp4", new Size(480, 360));
        else if (q.DownloadUrl.Contains("itag=22"))
            q.SetQuality("mp4", new Size(1280, 720));
        else if (q.DownloadUrl.Contains("itag=37"))
            q.SetQuality("mp4", new Size(1920, 1080));
        else if (q.DownloadUrl.Contains("itag=38"))
            q.SetQuality("mp4", new Size(4096, 3072));
        else return false;
        return true;
    }
}

Points of Interest

Using this code, you can select the video quality to download depending on your internet connection speed.
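For example, here is a hedged usage sketch (the video URL is a placeholder; WebClient stands in for whatever download mechanism you prefer; it assumes using System.Collections.Generic, System.Linq, and System.Net):

List<YouTubeVideoQuality> qualities = YouTubeDownloader.GetYouTubeVideoUrls(
    "http://www.youtube.com/watch?v=VIDEO_ID_HERE");

// Pick the widest variant your connection can comfortably handle
YouTubeVideoQuality chosen = qualities.OrderByDescending(q => q.Dimension.Width).First();

// Save the stream to disk, naming the file after the video title
using (WebClient client = new WebClient())
{
    client.DownloadFile(chosen.DownloadUrl, chosen.VideoTitle + "." + chosen.Extention);
}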

Many people have slow internet connections and cannot watch videos on YouTube, so I made this code to help those people download YouTube videos to their PCs so they can watch them offline.

Updates

Thanks to Motaz Alnuweiri, the Downloader works again. Added self download videos.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

[.NETWorld] Build End-to-End Apps with TypeScript

Over the past few years, I’ve been working a lot with the JavaScript language and building large-scale web applications. Although I have “Stockholm Syndrome” (JavaScript is one of my favorite programming languages), other developers don’t share my affection for the language. Developers don’t like JavaScript for many reasons. One of the main reasons is that JavaScript is difficult to maintain in a large-scale code base. In addition, JavaScript lacks certain language features, such as modules, classes, and interfaces.

Developers can use two main approaches to avoid JavaScript pitfalls. The first approach is to use plain JavaScript with JavaScript design patterns to mimic the behavior of modules, classes, and other missing JavaScript language features. I use this approach most of the time, but it can be very difficult for junior JavaScript developers because you need to know JavaScript quite well to avoid its pitfalls.


The second approach is to use JavaScript preprocessors. JavaScript preprocessors are design-time tools that use custom languages or known languages that later compile into JavaScript. Using this approach helps you create a more object-oriented code base and improves maintainability. JavaScript preprocessors such as CoffeeScript, Dart, or GWT are very popular, but they force you to learn a new language or use languages such as Java or C# that are later compiled into JavaScript. What if you could use JavaScript instead, or a variation of JavaScript?

You’re probably asking yourself why you would use JavaScript or a JavaScript variant to compile into JavaScript. One reason is that ECMAScript 6, the latest JavaScript specification, introduces a lot of the features missing from the JavaScript language. Also, because at the end of the process you get JavaScript code anyway, why not write the code in plain JavaScript from the start?

This is where TypeScript becomes very useful.

TypeScript to the Rescue

A year ago Microsoft released the first preview of a new language, TypeScript, written by a team led by Anders Hejlsberg, creator of the C# language. TypeScript is a JavaScript preprocessor that compiles into plain JavaScript. Unlike other JavaScript preprocessors, TypeScript was built as a typed superset of JavaScript, and it adds support for missing JavaScript features that aren’t included in the current version of ECMAScript. TypeScript aligns to the new JavaScript keywords that will be available when ECMAScript 6, the next version of JavaScript specifications, becomes the JavaScript standard. This means that you can use language constructs such as modules and classes in TypeScript now, and later on when ECMAScript 6 becomes the standard, your code will already be regular JavaScript code.

TypeScript is cross-platform and can run on any OS because it can run wherever JavaScript can run. You can use the language to generate server-side code written in JavaScript along with client-side code also written in JavaScript. This option can help you write an end-to-end application with only one language—TypeScript.

To install TypeScript, go to the TypeScript website. On the website you’ll find download links and an online playground that you can use to test the language. You can also view TypeScript demos in the website’s “run it” section. The website can be very helpful for new TypeScript developers.

I don’t go into great detail about TypeScript’s features in this article; for a more detailed explanation of the language, see Dan Wahlin’s “Build Enterprise-Scale JavaScript Applications with TypeScript.” I recommend that you read Wahlin’s article before you proceed any further with this article. You’ll need a good understanding of what TypeScript is before you jump into writing a simple end-to-end application using the language.

Creating the Server Side with Node.js

To demonstrate the ease of using TypeScript to write an application, let’s create a simple gallery of DevConnections conference photos. First, you need to create the server side. The application will use the node.js runtime to run the application back end.

Node.js is a platform to build web servers using the JavaScript language. It runs inside a Google V8 engine environment. V8 is the Chrome browser’s JavaScript engine. Node.js uses an event-driven model that helps create an efficient I/O-intensive back end. This article assumes that you know a little bit about node.js and Node Packaged Modules (npm). If you aren’t familiar with node.js, you should stop reading and go to the node.js website first.

Our application will also use the Express framework, which is a node.js web application framework. Express helps organize the web application server side into MVC architecture. Express lets you use view engines such as EJS and Jade to create the HTML that’s sent to the client. In addition, Express includes a routes module that you can use to create application routes and to access other features that help speed up the creation of a node.js server. For further details about Express, go to the Express website.

Creating the project. To create the application, you need to install node.js Tools for Visual Studio (NTVS). (As I write this article, NTVS is currently in first alpha and might be unstable.) NTVS includes project templates for node.js projects, IntelliSense for node.js code, debugging tools, and many other features that can help you with node.js development inside Visual Studio IDE.

After you install NTVS, create a blank Express application and call it DevConnectionsPhotos. Figure 1 shows the New Project dialog box, which includes all the installed NTVS project templates.

Figure 1: Creating a Blank Express Application

When NTVS asks you whether to run npm to install the missing dependencies for the project, you should select the option to run npm and allow it to retrieve all the Express packages.

Creating the views. In the Views folder, you should replace the layout.jade file with the code in Listing 1. This code is written in Jade view engine style, and it will render the HTML layout of the main application page.

Listing 1: Rendering the HTML Layout of the Main Application Page
doctype html
html
  head
    title='DevConnections Photo Gallery'
    link(rel='stylesheet', href='/Content/app.css')
    link(rel='stylesheet', href='/Content/themes/classic/galleria.classic.css')
    script(src='/Scripts/lib/jquery-1.9.0.js')
    script(src='/Scripts/lib/galleria-1.2.9.js')
    script(src='/Content/themes/classic/galleria.classic.js')
    script(src='/Scripts/app/datastructures.js')
    script(src='/Scripts/app/dataservice.js')
    script(src='/Scripts/app/bootstrapper.js')
    script(src='/Scripts/app/app.js')
  body
    block content

You should also replace the index.jade file, which includes the content block that will be injected into the layout.jade file during runtime. The new code for the index.jade file should look like that in Listing 2.

Listing 2: The index.jade File
extends layout

block content
  div#container
    header
      img#DevConnectionsLogo(src='/Content/Images/DevConnctionsLogo.png', alt='DevConnections Logo')
      h1='DevConnections Photo Gallery'
    section.main
      div#galleria
        img#light(src='/Content/Images/Light.png')

The index.jade file includes a declaration of a DIV element with a Galleria ID. You’ll use that DIV later on the client side to show the photo gallery that you’re implementing.

Implementing the server side. Before you use TypeScript, you should import the TypeScript runtime to the NTVS project. To do so, add the following line of code to the DevConnectionsPhotos.njsproj file:

<Import Project="$(VSToolsPath)\TypeScript\Microsoft.TypeScript.targets" />

This line of code imports TypeScript to the project and allows you to use it to compile TypeScript files. (Note that the TypeScript Visual Studio runtime wasn’t a part of NTVS projects at the time I wrote this article.)

Now that the environment is ready and you’ve created the main web page, you should rename the app.js file, which exists in the root of the project, to app.ts by changing its postfix to .ts. Performing this action forces the code to run as TypeScript code rather than JavaScript code. Because TypeScript is a JavaScript superset, you can transform the app.js file, which is a simple Express template, to app.ts without any problems.

In the app.ts file, you should add a module dependency on the node.js file system module. This module exists under the name fs. To use this module, you should create a new variable called fs under the Module dependencies comment, as Listing 3 shows.

Listing 3: Creating the fs Variable
/**
 * Module dependencies.
 */
var express = require('express');
var routes = require('./routes');
var user = require('./routes/user');
var http = require('http');
var path = require('path');
var fs = require('fs');

You should use a function called getAllFileURIs, as in Listing 4, that receives a folder name and a callback function. The getAllFileURIs function will use the folder name to open that folder; later, it will return all the file URIs in that folder.

Listing 4: The getAllFileURIs Function
var getAllFileURIs = function(folder: string, callback: Function): void {
    var results = [],
        relativePath = folder.substr(8);
    fs.readdir(folder, (err, files) => {
        if (err) {
            callback([]);
        };
        files.forEach(function(file) {
            file = relativePath + '/' + file;
            results.push(file);
        });
        callback(results);
    });
};

You can see that I used lambdas in the code and types for the arguments that the function receives. These features come from TypeScript and aren’t currently part of JavaScript.

After you write the getAllFileURIs function, you should add an endpoint called getAllImages on your server. This endpoint uses the getAllFileURIs function to fetch all the URIs for files that exist in the /public/Content/Photos folder. Listing 5 shows what the implementation of this endpoint should look like. In Listing 5, whenever a request arrives to the getAllImages endpoint, an array of image URIs is serialized to JSON format and is written to the response.

Listing 5: Implementing the getAllImages Endpoint
app.get('/getAllImages', (req, res) => {
    getAllFileURIs('./public/Content/Photos', (imageUris) => {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.write(JSON.stringify(imageUris));
        res.end();
    });
});

Your server code is now ready to run. Be sure to set the generated app.js file as the startup file for the node.js application. Figure 2 shows a running DevConnections photo gallery with only the server implementation. (Notice that there are no photos in the gallery yet.) Now that you have a working server side, you need to create the client side.

Figure 2: DevConnections Photo Gallery with only the Server Implementation

Creating the Client Side Using JavaScript Libraries

You’ll use two libraries on the client side: jQuery and Galleria. You’re probably already familiar with jQuery; Galleria is a JavaScript library that’s used to create a gallery from an image array. You can download Galleria, or you can use your own gallery library as long as you adjust the code, as I show you later in this article.

Setting up the folders. In the public folder that was created by the Express project template, create a Scripts folder that you’ll use to hold all the scripts that the application uses. Under the Scripts folder, add two new folders named app and lib. Put all the application TypeScript files and the generated JavaScript files in the app folder. Put the jQuery and Galleria JavaScript files in the lib folder.

To use JavaScript libraries as though they were created with TypeScript, you need to import the libraries’ declaration files to the project. A declaration file is a file that ends with the .d.ts postfix; it describes the interfaces of the library. Having a declaration file can help the TypeScript environment understand the types included in a typed library. However, not all libraries have a declaration file. A known GitHub repository called DefinitelyTyped includes most of the major libraries’ declaration files. You should download the jquery.d.ts declaration file and put it under the lib folder. Unfortunately, Galleria isn’t included in DefinitelyTyped. Now you’re ready to create the TypeScript files and use the libraries.

Creating the client-side implementation. The first step in configuring the client side is to create the data structures for Galleria image information and for Galleria configuration options. Create a new TypeScript file in the app folder that exists in the Scripts folder. Call the new file datastructures.ts. Both of the classes you’ll create will be a part of the app.data.structures module. The code in Listing 6 implements the data structures.

The data structure implementation is very simple. The data structures include properties that the application will later use to configure the image gallery.

After you’ve created the data structures, you need to configure the interaction with the server, and you need to fetch the images for the gallery. To accomplish these tasks, you need to implement a data service class. Create a new TypeScript file in the app folder that exists in the Scripts folder. Call the new file dataservice.ts. The data service’s responsibility will be to call the getAllImages endpoint and use the array of image URIs to create Galleria images, as Listing 7 shows.

Listing 7: Implementing a Data Service Class
/// <reference path="../lib/jquery.d.ts" />
/// <reference path="datastructures.ts" />
module app.data {
    import structures = app.data.structures;
    export interface IDataService {
        getImages: () => JQueryPromise;
    }
    export class DataService implements IDataService {
        getImages(): JQueryPromise {
            var deferred = $.Deferred();
            var result: structures.GalleriaImage[] = [];
            $.getJSON("/getAllImages", {}, (imageUris) => {
                $.each(imageUris, (index, item) => {
                    result.push(new structures.GalleriaImage(
                        new structures.GalleriaImageConfigOptions(item, "", "", "My title" + index, "My description" + index, "")));
                });
                deferred.resolve(result);
            });
            return deferred.promise();
        }
    }
}

As you can see in Listing 7, one of the first steps is to import the app.data.structures module. Later on, you declare an interface that exposes a getImages function. The getImages function returns a JQueryPromise object that will help defer the execution of the getImages operation until the getJSON function returns and runs its success callback. When the getJSON success callback runs, it creates a GalleriaImage object for each image URI that’s part of the array that was returned by the server.

Now that you have data structures and a data service, you need to configure the Galleria object. Create a new TypeScript file in the app folder that exists in the Scripts folder. Call the new file bootstrapper.ts. In the bootstrapper.ts file, create a Bootstrapper class that’s responsible for running the Galleria object, as Listing 8 shows.

Listing 8: Configuring the Galleria Object
/// <reference path="../lib/jquery.d.ts" />
/// <reference path="dataservice.ts" />
declare var Galleria;
module app {
    import data = app.data;
    export interface IBootstrapper {
        run: () => void;
    }
    export class Bootstrapper implements IBootstrapper {
        run() {
            this.getData().then((images) => {
                Galleria.configure({
                    imageCrop: true,
                    transition: 'fade',
                    dataSource: images,
                    autoplay: true
                });
                Galleria.run('#galleria');
            });
        }
        getData(): JQueryPromise {
            var dataService = new data.DataService();
            return dataService.getImages();
        }
    }
}

One of the interesting things in the implementation is the call to declare var Galleria. Because Galleria doesn’t have a declaration file, you need to declare its object. This is where the declare keyword becomes very handy. You use the declare keyword to inform the TypeScript runtime that the declared variable is dynamic and should be used with the any type.

The last part of the puzzle is the app.ts file. You need to create this file in the app folder that exists in the Scripts folder. Don’t confuse the node.js app.ts file with this app.ts file, which is used to run the client-side application. The code in Listing 9 implements the client-side app.ts file.

Listing 9: Implementing the Client-Side app.ts File
/// <reference path="bootstrapper.ts" />
module app {
    // start the app on load
    window.addEventListener("DOMContentLoaded", (evt) => {
        var bootstrapper = new app.Bootstrapper();
        bootstrapper.run();
    }, false);
}

Now that all the parts are in place, the final result is an image gallery (see Figure 3). You can put images in the Images folder, which exists under the Content folder. You can download the complete application that I used to create the DevConnections photo gallery.

Figure 3: Final DevConnections Photo Gallery

The TypeScript Advantage

In this article you learned how to use TypeScript to build a simple application. TypeScript can help you bridge the missing JavaScript features that will eventually be available in ECMAScript 6. TypeScript allows you to write large-scale JavaScript code and maintain it more easily. You can use TypeScript both in the application front end and in the application back end with frameworks and environments that run JavaScript.