[.NETWorld] Looking at ASP.NET MVC 5.1 and Web API 2.1 – Part 4 – Web API Help Pages, BSON, and Global Error Handling

This is part 4 of a series covering some of the new features in the ASP.NET MVC 5.1 and Web API 2.1 releases. The last one! If you’ve read them all, you have earned twelve blog readership points… after you finish this one, of course. Here are the previous posts:

The sample project covering the posts in this series is here; other referenced samples are in the ASP.NET sample repository.

As a reminder, Part 1 explained that ASP.NET MVC 5.1 / Web API 2.1 is a NuGet update for the MVC 5 / Web API 2 releases that shipped with Visual Studio 2013. There will be a Visual Studio update that will make them the defaults when you create new projects.

In this post, we’ll look at new features in ASP.NET Web API 2.1.

Attribute Routing

We already looked at the Attribute Routing improvements for both ASP.NET Web API and MVC in the second post in this series. I just want to call it out again, since this post is an overview of all of the other new features in ASP.NET Web API 2.1, and the Attribute Routing support for custom constraints is one of the top features in the release.

As a reminder, custom route constraints make it really easy to wrap route matching logic in a constraint which can then be placed on ApiControllers or actions like this:

[VersionedRoute("api/Customer", 1)]
public class CustomerVersion1Controller : ApiController
{
    // controller code goes here
}
[VersionedRoute("api/Customer", 2)]
public class CustomerVersion2Controller : ApiController
{
    // controller code goes here
}

In that example, the custom VersionedRoute constraint looks for an api-version header and forwards the request to the correct controller. See the post for more information, including a link to the sample application.

Help Page improvements

Okay, let’s dig into some of the cool new features we haven’t seen yet. To start with, I’m going to scaffold out a new PersonApiController using the same Person class I’ve used earlier in this series, shown below:

public class Person
{
    [ScaffoldColumn(false)]
    public int Id { get; set; }
    [UIHint("Enum-radio")]
    public Salutation Salutation { get; set; }
    [Display(Name = "First Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string FirstName { get; set; }
    [Display(Name = "Last Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string LastName { get; set; }
    public int Age { get; set; }
}
//I guess technically these are called honorifics
public enum Salutation : byte
{
    [Display(Name = "Mr.")]   Mr,
    [Display(Name = "Mrs.")]  Mrs,
    [Display(Name = "Ms.")]   Ms,
    [Display(Name = "Dr.")]   Doctor,
    [Display(Name = "Prof.")] Professor,
    Sir,
    Lady,
    Lord
}

And we’re using the standard Web API scaffolding:

[Screenshot: the Web API scaffolding dialog]

Nothing has really changed for the top level ASP.NET Web API Help Page – you get a generated list of API calls for each API Controller.

[Screenshot: the Help Page index listing the API calls for each controller]

What has changed is what you see when you click through on one of the API calls, e.g. the PersonApi GET method. Here’s how that looked in ASP.NET Web API 2:

[Screenshot: the old API detail page, showing only JSON and XML sample data]

It shows sample data in JSON and XML, and you can kind of guess what they are if you’ve named your model properties well, but there’s no information on type, model attributes, validation rules, etc.

Here’s how it looks in ASP.NET Web API 2.1:

[Screenshot: the new API detail page, with a Resource Description section above the Response Formats section]

The Response Formats section hasn’t changed, but now we have a Resource Description area at the top. Let’s take a closer look at that:

[Screenshot: the Resource Description area, listing each property’s type and validation rules]

Here we’re clearly displaying both the type and validation rules.

Note that the Salutation type is hyperlinked, since it’s using our custom Salutation enum. Clicking through shows the possible values for that enum:

[Screenshot: the enum documentation page listing the possible Salutation values]

If you’ve done any work integrating with APIs that had minimal or out-of-date documentation, hopefully the value of the above is really clear. What’s great is that this is generated for me at runtime, so it’s always up to date with the latest code. If my Web API is in production and I add a new enum value or change a validation rule, the live documentation on the site is updated as soon as I deploy the code, without any work or extra thought on my part.

Short detour: Filling in Descriptions using C# /// Comments

Now that we’ve got documentation for our model types, it’s clear that we could improve it a bit. The most obvious thing is that there’s no provided description. That’s easy to add using C# /// comments (aka XML comments). ASP.NET Web API Help Pages have had support for /// comment documentation for a while; it just hasn’t been this obvious.

The ASP.NET Web API Help Pages are implemented in a really clear, open model: it’s all implemented in an ASP.NET MVC Area within your existing site. If you’re not familiar with ASP.NET MVC Areas, they’re a way to segment your application into sections with separate routes, models, views, and controllers, while keeping them in the same project so it’s easier to manage, share resources, etc.

Here’s the Help Page Area within the sample project we’re working on:

[Screenshot: the HelpPage Area in Solution Explorer, with \App_Start\HelpPageConfig.cs highlighted]

1. In the above screenshot, I’ve highlighted the \App_Start\HelpPageConfig.cs file because that’s where we’re going to set up the XML comments. There’s a Register method right at the top with the following two lines:

//// Uncomment the following to use the documentation from XML documentation file.
//config.SetDocumentationProvider(new XmlDocumentationProvider(HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));

So to use that, we’ll uncomment the second line, just as the instructions say.
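After uncommenting, the relevant part of Register looks like this (just the line from above, now active):

public static void Register(HttpConfiguration config)
{
    // Use the documentation from the XML documentation file.
    config.SetDocumentationProvider(new XmlDocumentationProvider(
        HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));
    // ... the rest of the generated configuration
}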

2. Note that the comments are pointing to an XmlDocument.xml file. We need to check a box in the project settings to generate that XML file as shown below.

[Screenshot: the project’s Build properties page with the "XML documentation file" checkbox checked]

That’s it!

Once that’s done, I’m going to throw the /// comments on the controller methods and model properties and generate XML comments. I used GhostDoc to generate the comments, then cleaned them up and editorialized a bit.

/// <summary>
/// This is an example person class. It was artisanally crafted by a
/// bearded, bespectacled craftsman after being lovingly sketched
/// in a leather-bound notebook with charcoal pencils.
/// </summary>
public class Person
{
    [ScaffoldColumn(false)]
    public int Id { get; set; }
    /// <summary>
    /// This uses a custom salutation enum since there's apparently no ISO standard.
    /// </summary>
    /// <value>
    /// The person's requested salutation.
    /// </value>
    [UIHint("Enum-radio")]
    public Salutation Salutation { get; set; }
    [Display(Name = "First Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string FirstName { get; set; }
    [Display(Name = "Last Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string LastName { get; set; }
    /// <summary>
    /// This is the person's actual or desired age.
    /// </summary>
    /// <value>
    /// The age in years, represented in an integer.
    /// </value>
    public int Age { get; set; }
}

And here’s the updated help page with the descriptions:

[Screenshot: the updated help page showing the descriptions from the XML comments]

There are a ton of other features in the HelpPageConfig – you could pull your documentation from a database or CMS, for example. And since it’s all implemented in standard ASP.NET MVC, you can modify the views or do whatever else you want. But it’s nice to have these new features available out of the box.
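As a rough illustration of the database/CMS idea, here’s a minimal sketch of a custom provider. The class and its backing dictionary are mine, and I’m assuming the member set of the 2.1 version of the IDocumentationProvider interface; a real implementation would query your actual documentation store:

using System.Collections.Generic;
using System.Web.Http.Controllers;
using System.Web.Http.Description;

public class DictionaryDocumentationProvider : IDocumentationProvider
{
    // Stand-in for a database or CMS lookup.
    private static readonly Dictionary<string, string> Docs = new Dictionary<string, string>
    {
        { "PersonApi.GetPeople", "Returns all people in the system." }
    };

    public string GetDocumentation(HttpActionDescriptor actionDescriptor)
    {
        string key = actionDescriptor.ControllerDescriptor.ControllerName + "." + actionDescriptor.ActionName;
        string doc;
        return Docs.TryGetValue(key, out doc) ? doc : "No documentation available.";
    }

    public string GetDocumentation(HttpParameterDescriptor parameterDescriptor)
    {
        return null; // parameter docs could be looked up the same way
    }

    public string GetDocumentation(HttpControllerDescriptor controllerDescriptor)
    {
        return null;
    }

    public string GetResponseDocumentation(HttpActionDescriptor actionDescriptor)
    {
        return null;
    }
}

You’d then register it in HelpPageConfig with config.SetDocumentationProvider(new DictionaryDocumentationProvider());.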

BSON

BSON is a binary serialization format that’s similar to JSON in that they both store name-value pairs, but it’s quite different in how the data is actually stored: BSON serializes data in a binary format, which can offer performance benefits for encoding, decoding, and traversal. It’s been possible to hook up a custom BSON formatter in ASP.NET Web API before; Filip and others have written comprehensive blog posts describing how to do just that. It’s even easier now – both for clients and servers – since the BSON formatter is included with ASP.NET Web API.

Important note: BSON isn’t designed to be more compact than JSON; in fact, it’s often bigger (depending on your data structure and content). That’s because, unlike JSON, BSON embeds type and length information in the document. That makes for fast scanning and reading, but it means a BSON document holds more data than the equivalent JSON document. So BSON will be faster in some cases, but it may be slower in others where the larger messages dominate. This shows the value of content negotiation and flexible formatters in ASP.NET Web API – you can easily try out different formatters, both on the client and server side, and use the best one for the job.
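You can see the size difference for yourself outside of Web API by serializing the same object both ways with Json.NET, which is the library the BSON formatter builds on; here’s a quick console sketch:

using System;
using System.IO;
using System.Text;
using Newtonsoft.Json;
using Newtonsoft.Json.Bson;

class SizeComparison
{
    static void Main()
    {
        var person = new { Id = 1, FirstName = "Bob", LastName = "Smith", Age = 42 };

        // JSON: plain UTF-8 text
        int jsonBytes = Encoding.UTF8.GetByteCount(JsonConvert.SerializeObject(person));

        // BSON: binary, with embedded type and length information
        using (var stream = new MemoryStream())
        using (var writer = new BsonWriter(stream))
        {
            new JsonSerializer().Serialize(writer, person);
            Console.WriteLine("JSON: {0} bytes, BSON: {1} bytes", jsonBytes, stream.Length);
        }
    }
}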

I’ll look at two examples here.

Testing BSON with a text-heavy model

For the first example, I’ll use the Person class we’ve been using for our previous examples. I’d like to have a lot more people in my database. I grabbed some absolutely silly code I wrote 7 years ago that generates fake surnames (Generate random fake surnames) and added a controller action to slam 500 new people with a first name of Bob but a random last name and age into the database. Then I clicked on it a few times.
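The seeding action looks something like this (a sketch from memory; GetRandomSurname() stands in for the fake-surname generator from that old post, and db is the scaffolded Entity Framework context):

public ActionResult SeedPeople()
{
    var rng = new Random();
    for (int i = 0; i < 500; i++)
    {
        db.People.Add(new Person
        {
            FirstName = "Bob",
            LastName = GetRandomSurname(), // hypothetical helper from the linked post
            Age = rng.Next(1, 100)
        });
    }
    db.SaveChanges();
    return RedirectToAction("Index");
}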

[Screenshot: the person list filled with generated Bobs]

Turning on the BSON formatter is just a one line code change:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Formatters.Add(new BsonMediaTypeFormatter());
        // ...
    }
}

Now whenever a client sends an Accept header for application/bson, they’ll get the data in BSON format. For comparison, I’m making two requests in Fiddler. Here’s a request with no Accept header specified, so we get JSON:

[Screenshot: Fiddler showing the JSON response]

The content-length there is 118,353 bytes.

Now I’m setting an Accept header with application/bson:

[Screenshot: Fiddler showing the BSON response, with some type identifiers highlighted]

Notice that this BSON response is 134,395 bytes, or about 13% larger. I’ve marked some of the type identifiers in there, but you can see there are a lot more of them, since they’re lined up in columns.

Place your bets: do you think the faster BSON serializer will win, despite the larger payload size? Before we answer that, we’ll add in a second scenario that replaces our text-heavy Person class with a quite exciting BoringData class that’s mostly numeric and binary data:

public class BoringData
{
    public int Id { get; set; }
    public long DataLong { get; set; }
    public byte[] DataBytes { get; set; }
    public DateTime DataDate { get; set; }
}

And here’s the test we’ll throw at both of these:

static void Main(string[] args)
{
    try
    {
        Console.WriteLine("Hit ENTER to begin...");
        Console.ReadLine();
        RunAsync().Wait();
    }
    finally
    {
        Console.WriteLine("Hit ENTER to exit...");
        Console.ReadLine();
    }
}
private async static Task RunAsync()
{
    using (HttpClient client = new HttpClient())
    {
        await RunTimedTest<BoringData>(client, new JsonMediaTypeFormatter(), "http://localhost:29108/api/BoringDataApi", "application/json");
        await RunTimedTest<BoringData>(client, new BsonMediaTypeFormatter(), "http://localhost:29108/api/BoringDataApi", "application/bson");
        await RunTimedTest<Person>(client, new JsonMediaTypeFormatter(), "http://localhost:29108/api/PersonApi", "application/json");
        await RunTimedTest<Person>(client, new BsonMediaTypeFormatter(), "http://localhost:29108/api/PersonApi", "application/bson");
    }
}
public static async Task RunTimedTest<T>(HttpClient client, MediaTypeFormatter formatter, string uri, string mediaHeader)
{
    int iterations = 500;
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue(mediaHeader));
    MediaTypeFormatter[] formatters = new MediaTypeFormatter[] { formatter };
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        var result = await client.GetAsync(uri);
        var value = await result.Content.ReadAsAsync<T[]>(formatters);
    }
    Console.WriteLine("Format: {0,-20} Type: {1,-15}\t Time (s):{2,10:ss\\.fff}", mediaHeader, typeof(T).Name, watch.Elapsed);
}

The BoringDataApi controller’s GET method returns lots of data, as you’d expect:

public class BoringDataApiController : ApiController
{
    static Random rng = new Random(Guid.NewGuid().GetHashCode());
    public IEnumerable<BoringData> GetBoringData()
    {
        return GetLotsOfBoringData(100);
    }
    private IEnumerable<BoringData> GetLotsOfBoringData(int quantity)
    {
        byte[] buf1 = new byte[10000];
        byte[] buf2 = new byte[64];
        for (int i = 1; i <= quantity; i++)
        {
            rng.NextBytes(buf1);
            rng.NextBytes(buf2);
            yield return new BoringData
            {
                Id  = i,
                DataBytes = buf1,
                DataDate = DateTime.UtcNow,
                DataLong = BitConverter.ToInt64(buf2,0)
            };
        }
    }
}

So, big picture, the test harness will run 500 end-to-end tests against both controllers, requesting both Person and BoringData as both JSON and BSON. We’re not comparing the Person and BoringData responses to each other; we’re just looking for some general trends to see whether BSON is faster than JSON for a mostly-textual and a mostly-binary model. Yes, Kelly Sommers will beat me up for this, and I’m okay with that. My goal is to get some basic guidelines on when BSON works better than JSON.

The real point here is that you won’t know how your API or a specific content type will perform until you test it.

So how’d we do?

[Screenshot: console output showing the timings for each format and type]

In this case (and I ran this test many times with the same general result) BSON was a lot faster for mostly binary/numeric data, and a little slower for mostly textual data. In this fairly random example, BSON was 140% faster for the mostly-binary case and 21% slower for the mostly-textual case. That’s because both formats serialize textual data to UTF-8, but BSON includes some additional metadata.

So, very generally speaking, if your service returns a lot of binary / numeric / non-textual data, you should really look at BSON. If you’re looking for a silver bullet, you may have to pony up for some silver.

Easier implementations due to BaseJsonMediaTypeFormatter

Yes, that’s the most boring heading you’ll ever see. But it’s hopefully true. The internal JSON formatters have been redesigned around a new BaseJsonMediaTypeFormatter, which makes it easier to implement new serialization formats. I asked Doug, the dev who did most of the work for this BSON update, about his commit message saying recent changes will make it easier to make other formatters like MessagePack happen, and he said:

Yes.  BaseJsonMediaTypeFormatter introduces a few Json.Net types and concepts.  But it also provides solid and reusable async wrappers around easier-to-implement sync methods.

The main thing I’ve noticed there is the common BaseJsonMediaTypeFormatter. There’s not a whole lot of code in the BsonMediaTypeFormatter, since a lot of it is in the common base and in other support classes.

And while I’m mentioning MessagePack, I think it’s another great option that’s really worth looking at, since (unlike BSON) MessagePack is designed for small message size. There’s a MsgPack formatter available now in the WebApiContrib formatters collection, and Filip Woj. wrote a nice blog post overview here: Boost up your ASP.NET Web API with MessagePack.

Global Error Handling

The last feature we’ll look at is Global Error Handling. The name’s pretty self-descriptive: it lets you register handlers and loggers which will respond to any unhandled exceptions across your entire Web API application.

Global error handling is especially useful in Web API because of the way the parts are so loosely coupled and composable – you’ve got all kinds of different handlers and filters, wired together with a very configurable system that encourages dependency injection… There’s a lot going on.

[Image: the ASP.NET Web API lifecycle poster]

Note: You can download the PDF of this poster from the ASP.NET site.

That provides you tons of flexibility when you’re building HTTP services, but it can make it hard to find out what’s wrong when there’s a problem. Exception filters help, but as David Matson notes, they don’t handle:

  1. Exceptions thrown from controller constructors
  2. Exceptions thrown from message handlers
  3. Exceptions thrown during routing
  4. Exceptions thrown during response content serialization

I recommend David Matson’s Web API Global Error Handling wiki article in the ASP.NET repository for more information on the design and technical implementation. The short version is that you can register one IExceptionHandler and multiple IExceptionLogger instances in your application, and they’ll respond to all Web API exceptions.

There’s already a pretty clear sample in the Web API samples which shows a GenericTextExceptionHandler (which returns a generic exception message for unhandled exceptions) and an ElmahExceptionLogger (which implements logging using the popular ELMAH logging system). I’ve been trying to come up with some other use cases, but I think they captured the main ones here – usually if you have an unhandled exception, you want to log it and make sure you return some sort of useful message to your client.

Both of these interfaces are really simple: IExceptionLogger has a single async LogAsync method and IExceptionHandler a single async HandleAsync method, each taking a context object and a CancellationToken.

public interface IExceptionLogger
{
    Task LogAsync(ExceptionLoggerContext context, CancellationToken cancellationToken);
}
public interface IExceptionHandler
{
    Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken);
}

The ExceptionContext includes the exception as well as a lot of other useful context information:

  • Exception (Exception)
  • Request (HttpRequestMessage)
  • RequestContext (HttpRequestContext)
  • ControllerContext (HttpControllerContext)
  • ActionContext (HttpActionContext)
  • Response (HttpResponseMessage)
  • CatchBlock (string)
  • IsTopLevelCatchBlock (bool)

They’re registered in your WebApiConfig like this:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();
        // There can be multiple exception loggers. (By default, no exception loggers are registered.)
        config.Services.Add(typeof(IExceptionLogger), new ElmahExceptionLogger());
        // There must be exactly one exception handler. (There is a default one that may be replaced.)
        // To make this sample easier to run in a browser, replace the default exception handler with one that sends
        // back text/plain content for all errors.
        config.Services.Replace(typeof(IExceptionHandler), new GenericTextExceptionHandler());
    }
}

There are a few areas to possibly add to, but I’m going to pass on actually implementing them so I can get this series wrapped before ASP.NET Web API 2.2 ships. Maybe an exercise for the reader?

Damien (damienbod) has a nice overview of Web API Exception handling, complete with a lot of references: Exploring Web API Exception Handling

More features to read about

We’ve looked at several of the top features in this release, but there are a lot more. Here’s a list with links to the documentation:

ASP.NET MVC 5.1

ASP.NET Web API 2.1

ASP.NET Web Pages 3.1

Hope you enjoyed the series. As a reminder, you can grab the source for my samples here and the official ASP.NET / Web API samples in the ASP.NET sample repository.

[.NETWorld] Looking at ASP.NET MVC 5.1 and Web API 2.1 – Part 3 – Bootstrap and JavaScript enhancements

This is part 3 of a 4 part series covering some of the new features in the ASP.NET MVC 5.1 and Web API 2.1 releases.

In this post, we’ll be focusing on some client-side improvements to ASP.NET MVC 5.1.

As a reminder if you haven’t read the first post, these updates are currently delivered via a NuGet update to your existing ASP.NET MVC 5 / Web API 2 applications. They’ll be part of the File / New Project templates included in an upcoming Visual Studio update.

EditorFor now supports passing HTML attributes – Great for Bootstrap

The new ASP.NET project templates all include Bootstrap themes. Bootstrap uses custom class names for everything – styling, components, layout, behavior. That made it frustrating that you couldn’t pass classes down to the Html.EditorFor HTML helper: you either had to use specific HTML helpers like Html.TextBoxFor (which do allow you to pass HTML attributes, but don’t benefit from some of the other nice features in Html.EditorFor, like data attribute support for display and input validation) or give up on using the Bootstrap classes and style things yourself.

In the 5.1 release, you can now pass HTML attributes as an additional parameter to Html.EditorFor, allowing you to get the best of both. Here’s an example of why that’s useful.

In the first post in the series, we scaffolded a simple create controller and associated views. The Create view ended up looking like this:

[Screenshot: the scaffolded Create view with default styling]

That’s okay, but it’s not taking advantage of any of the Bootstrap form styling (e.g. focus indication, element sizing, groups, etc.) and it won’t do anything special with custom Bootstrap themes. A great start would be just to add the “form-control” class to the form elements. That just involves changing from this:

@Html.EditorFor(model => model.FirstName)

to this:

@Html.EditorFor(model => model.FirstName, new { htmlAttributes = new { @class = "form-control" }, })

When I make that update to the textboxes, I get this view:

[Screenshot: the Create view with Bootstrap form-control styling applied]

You’ll notice some subtle improvements, like the focus highlight on the FirstName field, nicer textbox sizing and validation layout for Age, etc. These are just really simple things with a very basic model, but they give a quick idea of the improvement here.

Also nice is that I can pass the attributes on Html.EditorFor when displaying the entire model. Here I’ve updated the entire form section to just use one EditorFor call, passing in the model:

@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()
    
    <div class="form-horizontal">
        <h4>Person</h4>
        <hr />
        @Html.ValidationSummary(true)
        @Html.EditorFor(model => model, new { htmlAttributes = new { @class = "form-control" }, })
        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Create" class="btn btn-default" />
            </div>
        </div>
    </div>
}

Note that to make sure the Id property didn’t display and to use the custom radio enum display template (as explained in the first post in the series), I added two annotations to my model. Here’s how the model and associated Enum look:

public class Person
{
    [ScaffoldColumn(false)]
    public int Id { get; set; }
    [UIHint("Enum-radio")]
    public Salutation Salutation { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}
//I guess technically these are called honorifics
public enum Salutation : byte
{
    [Display(Name = "Mr.")]   Mr,
    [Display(Name = "Mrs.")]  Mrs,
    [Display(Name = "Ms.")]   Ms,
    [Display(Name = "Dr.")]   Doctor,
    [Display(Name = "Prof.")] Professor,
    Sir,
    Lady,
    Lord
}

That gives me the exact same output as shown above (the second, nicer screenshot). What’s cool there is that the EditorFor method passed the form-control class down to each element in the form, so every input tag got it. That means I could apply additional Bootstrap classes, as well as my own custom classes, in that same call:

@Html.EditorFor(model => model, new { htmlAttributes = new { @class = "form-control input-sm my-custom-class" }, })

You can see the code changes and associated tests for this EditorFor change on this commit on CodePlex.

Client-side validation for MinLength and MaxLength

This is a pretty straightforward one: we had client-side validation for StringLength before, but not for MinLength and MaxLength. Personally, I feel like it’s a tossup on which to use – StringLength lets you set both min and max and is more widely supported, but MinLength and MaxLength allow you to specify them separately and give different validation messages for each. Regardless, the good news is that whichever you use, they’re both supported on both server and client.

To test that out, we’ll add some MinLength and MaxLength attributes to our Person class.

public class Person
{
    [ScaffoldColumn(false)]
    public int Id { get; set; }
    [UIHint("Enum-radio")]
    public Salutation Salutation { get; set; }
    [Display(Name = "First Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string FirstName { get; set; }
    [Display(Name = "Last Name")]
    [MinLength(3, ErrorMessage = "Your {0} must be at least {1} characters long")]
    [MaxLength(100, ErrorMessage = "Your {0} must be no more than {1} characters")]
    public string LastName { get; set; }
    public int Age { get; set; }
}

I get immediate feedback on what the website thinks of a potential stage name I’ve been considering:

[Screenshot: client-side validation messages for the MinLength rule]

Here’s the link to the work item, and here’s the code diff for the commit.

Three small but useful fixes to Unobtrusive Ajax

There are a few fixes to Unobtrusive Ajax:

I thought the first fix was pretty interesting: a question came up on StackOverflow, someone posted a suggested one line fix on a CodePlex issue, and it got fixed in this commit.

This fix allows callbacks from Unobtrusive Ajax to have access to the initiating element. That’s pretty handy when you’ve got multiple potential callers, e.g. a list of items which contain Ajax.ActionLink calls. In the past, I’ve had to write unnecessarily complicated JavaScript to wire things up manually because I couldn’t take advantage of the OnBegin, OnComplete, OnFailure and OnSuccess options, e.g.

<script type="text/javascript">
    $(function () {
        // Document.ready -> link up remove event handler
        $(".RemoveLink").click(function () {
            // Get the id from the link
            var recordToDelete = $(this).attr("data-id");
            if (recordToDelete != '') {
                // Perform the ajax post
                $.post("/ShoppingCart/RemoveFromCart", {"id": recordToDelete },
                    function (data) {
                        // Successful requests get here
                        // Update the page elements
                        if (data.ItemCount == 0) {
                            $('#row-' + data.DeleteId).fadeOut('slow');
                        } else {
                            $('#item-count-' + data.DeleteId).text(data.ItemCount);
                        }
                        $('#cart-total').text(data.CartTotal);
                        $('#update-message').text(data.Message);
                        $('#cart-status').text('Cart (' + data.CartCount + ')');
                    });
            }
        });
    });
</script>

Now with this change, I’d have the option of wiring up the Ajax call and success callbacks separately and tersely, since they’d have access to the calling element for the ID.

That’s it for this post, in the next (and definitely last) post of this series I’ll look at some ASP.NET Web API 2.1 improvements.

[.NETWorld] Looking at ASP.NET MVC 5.1 and Web API 2.1 – Part 2 – Attribute Routing with Custom Constraints

I’m continuing a series looking at some of the new features in ASP.NET MVC 5.1 and Web API 2.1. Part 1 (Overview and Enums) explained how to update your NuGet packages in an ASP.NET MVC application, so I won’t rehash that here.

The sample project covering the posts in this series is here; other referenced samples are in the ASP.NET sample repository.

In this post, we’ll look at improvements to attribute routing for both ASP.NET MVC and ASP.NET Web API. First, a quick review of what routing constraints are used for.

Intro to Routing Constraints

ASP.NET MVC and Web API have offered both simple and custom route constraints since they first came out. A simple constraint would be something like this:

routes.MapRoute("blog", "{year}/{month}/{day}",
    new { controller = "blog", action = "index" },
    new { year = @"\d{4}", month = @"\d{2}", day = @"\d{2}" });

In the above case, “/2014/01/01” would match but “/does/this/work” would not since the values don’t match the required pattern.  If you needed something more complex than a simple pattern match, you’d use a custom constraint by implementing IRouteConstraint and defining the custom logic in the Match method – if it returns true, the route is a match.

public interface IRouteConstraint
{
    bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection);
}

Route Constraints in Attribute Routing

One of the top new features in ASP.NET MVC 5 and Web API 2 was the addition of Attribute Routing. Rather than defining all your routes in /App_Start/RouteConfig.cs using a series of routes.MapRoute() calls, you can define routes using attributes on your controller actions and controller classes. You can take your pick of whichever works better for you: continue to use traditional routing, use attribute routing instead, or use them both.

Attribute routing previously offered custom inline constraints, like this:

[Route("temp/{scale:values(celsius|fahrenheit)}")]

Here, the scale segment has a custom inline Values constraint which will only match if the scale value is in the pipe-delimited list, e.g. this will match /temp/celsius and /temp/fahrenheit but not /temp/foo. You can read more about the Attribute Routing features that shipped with ASP.NET MVC 5, including inline constraints like the above, in this post by Ken Egozi: Attribute Routing in ASP.NET MVC 5.

While inline constraints allow you to restrict values for a particular segment, they’re a little limited: they can’t operate over the entire URL, and some more complex checks aren’t possible at that scope. To see more about what changed and why, see the issue report and changed code for this commit.

Now with ASP.NET MVC 5.1, we can create a new attribute that implements a custom route constraint. Here’s an example.

ASP.NET MVC 5.1 Example: Adding a custom LocaleRoute

Here’s a simple custom route attribute that matches based on a list of supported locales.

First, we’ll create a custom LocaleRouteConstraint that implements IRouteConstraint:

public class LocaleRouteConstraint : IRouteConstraint
{
    public string Locale { get; private set; }
    public LocaleRouteConstraint(string locale)
    {
        Locale = locale;
    }
    public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection)
    {
        object value;
        if (values.TryGetValue("locale", out value) && !string.IsNullOrWhiteSpace(value as string))
        {
            string locale = value as string;
            if (isValid(locale))
            {
                return string.Equals(Locale, locale, StringComparison.OrdinalIgnoreCase);
            }
        }
        return false;
    }
    private bool isValid(string locale)
    {
        string[] validOptions = "EN-US|EN-GB|FR-FR".Split('|');
        return validOptions.Contains(locale.ToUpper());
    }
}

IRouteConstraint has one method, Match. That’s where you write your custom logic which determines if a set of incoming route values, context, etc., match your custom route. If you return true, routes with this constraint are eligible to respond to the request; if you return false the request will not be mapped to routes with this constraint.

In this case, we’ve got a simple isValid matcher which takes a locale string (e.g. fr-fr) and validates it against a list of supported locales. In more advanced use, this may be querying against a database backed cache of locales your site supports or using some other more advanced method. If you are working with a more advanced constraint, especially a locale constraint, I recommend Ben Foster’s article Improving ASP.NET MVC Routing Configuration.

It’s important to see that the real value in this case is running more advanced logic than a simple pattern match – if that’s all you’re doing, you could use a regex inline route constraint (e.g. {x:regex(^\d{3}-\d{3}-\d{4}$)}).
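For instance, a hypothetical action using that inline regex constraint would look like this; note that it can only pattern-match the one segment, which is exactly the limitation we’re working around:

public class PhoneController : Controller
{
    // Matches /phone/555-123-4567 but not /phone/foo.
    [Route(@"phone/{number:regex(^\d{3}-\d{3}-\d{4}$)}")]
    public ActionResult Show(string number)
    {
        return Content("Matched: " + number);
    }
}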

Now we’ve got a constraint, but we need to map it to an attribute to use in attribute routing. Note that separating constraints from attributes gives a lot more flexibility – for instance, we could use this constraint on multiple attributes.

Here’s a simple one:

public class LocaleRouteAttribute : RouteFactoryAttribute
{
    public LocaleRouteAttribute(string template, string locale)
        : base(template)
    {
        Locale = locale;
    }
    public string Locale
    {
        get;
        private set;
    }
    public override RouteValueDictionary Constraints
    {
        get
        {
            var constraints = new RouteValueDictionary();
            constraints.Add("locale", new LocaleRouteConstraint(Locale));
            return constraints;
        }
    }
    public override RouteValueDictionary Defaults
    {
        get
        {
            var defaults = new RouteValueDictionary();
            defaults.Add("locale", "en-us");
            return defaults;
        }
    }
}

Now we’ve got a complete route attribute we can place on a controller or action:

using System.Web.Mvc;
namespace StarDotOne.Controllers
{
    [LocaleRoute("hello/{locale}/{action=Index}", "EN-GB")]
    public class ENGBHomeController : Controller
    {
        // GET: /hello/en-gb/
        public ActionResult Index()
        {
            return Content("I am the EN-GB controller.");
        }
    }
}

And here’s our FR-FR controller:

using System.Web.Mvc;
namespace StarDotOne.Controllers
{
    [LocaleRoute("hello/{locale}/{action=Index}", "FR-FR")]
    public class FRFRHomeController : Controller
    {
        // GET: /hello/fr-fr/
        public ActionResult Index()
        {
            return Content("Je suis le contrôleur FR-FR.");
        }
    }
}

Before running this, we need to verify that we’ve got Attribute Routes enabled in our RouteConfig:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapMvcAttributeRoutes();
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

Now a request to /hello/en-gb/ goes to our ENGBHomeController and a request to /hello/fr-fr/ goes to the FRFRHomeController:

[Screenshot: browser responses from the EN-GB and FR-FR controllers]

Because we’ve set the default locale in the LocaleRouteAttribute to en-us, we can browse to it using either /hello/en-us/ or just /hello:

[Screenshot: the default en-us response at /hello]

If you’ve been paying close attention, you may be thinking that we could have accomplished the same thing using an inline route constraint. I think the real benefit over a custom inline constraint comes when you’re doing more than operating on one segment of the URL: performing logic on the entire route or context. One great example would be a custom attribute based on a user’s locale selection (set in a cookie, perhaps) or a header.
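As a hypothetical sketch of that idea, the constraint’s Match method could prefer a locale cookie over the URL segment (the cookie name and fallback logic here are mine):

public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                  RouteValueDictionary values, RouteDirection routeDirection)
{
    // Prefer an explicit locale cookie if the user has chosen one.
    var cookie = httpContext.Request.Cookies["locale"];
    if (cookie != null && !string.IsNullOrWhiteSpace(cookie.Value))
    {
        return string.Equals(Locale, cookie.Value, StringComparison.OrdinalIgnoreCase);
    }
    // Otherwise fall back to the {locale} segment as before.
    object value;
    return values.TryGetValue("locale", out value)
        && string.Equals(Locale, value as string, StringComparison.OrdinalIgnoreCase);
}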

So, to recap:

  • You could write custom route constraints before in “Traditional” code-based routing, but not in attribute routing
  • You could write custom inline constraints, but they mapped just to a segment in the URL
  • Custom route constraints now can operate at a higher level than just a segment on the URL path, e.g. headers or other request context

A very common use case for using headers in routing is versioning by header. We’ll look at that with ASP.NET Web API 2.1 next. Keep in mind that, while the general recommendation is to use ASP.NET Web API for your HTTP APIs, many APIs are still running on ASP.NET MVC for a variety of reasons (existing or legacy APIs built on ASP.NET MVC, familiarity with MVC, mostly-MVC applications with relatively few APIs that want to stay simple, developer preference, etc.), so versioning ASP.NET MVC HTTP APIs by header is probably one of the top use cases of custom route attribute constraints for ASP.NET MVC as well.

ASP.NET Web API 2.1 Custom Route Attributes example: Versioning By Header

Note: The example I’m showing here is in the official samples list on CodePlex. There’s a lot of great examples there, including some samples showing off some of the more complex features you don’t hear about all that often. Since the methodology is almost exactly the same as what we looked at in ASP.NET MVC 5.1 and the sample’s available, I’ll go through this one a lot faster.

First, the custom constraint:

internal class VersionConstraint : IHttpRouteConstraint
{
    public const string VersionHeaderName = "api-version";
    private const int DefaultVersion = 1;
    public VersionConstraint(int allowedVersion)
    {
        AllowedVersion = allowedVersion;
    }
    public int AllowedVersion
    {
        get;
        private set;
    }
    public bool Match(HttpRequestMessage request, IHttpRoute route, string parameterName, IDictionary<string, object> values, HttpRouteDirection routeDirection)
    {
        if (routeDirection == HttpRouteDirection.UriResolution)
        {
            int version = GetVersionHeader(request) ?? DefaultVersion;
            if (version == AllowedVersion)
            {
                return true;
            }
        }
        return false;
    }
    private int? GetVersionHeader(HttpRequestMessage request)
    {
        string versionAsString;
        IEnumerable<string> headerValues;
        if (request.Headers.TryGetValues(VersionHeaderName, out headerValues) && headerValues.Count() == 1)
        {
            versionAsString = headerValues.First();
        }
        else
        {
            return null;
        }
        int version;
        if (versionAsString != null && Int32.TryParse(versionAsString, out version))
        {
            return version;
        }
        return null;
    }
}

This is similar to the simpler LocaleRouteConstraint we looked at earlier, but it parses an integer version number from a header. Now, like before, we create an attribute to put this constraint to work:

internal class VersionedRoute : RouteFactoryAttribute
{
    public VersionedRoute(string template, int allowedVersion)
        : base(template)
    {
        AllowedVersion = allowedVersion;
    }
    public int AllowedVersion
    {
        get;
        private set;
    }
    public override IDictionary<string, object> Constraints
    {
        get
        {
            var constraints = new HttpRouteValueDictionary();
            constraints.Add("version", new VersionConstraint(AllowedVersion));
            return constraints;
        }
    }
}

And with that set up, we can just slap the attribute on a couple of different ApiControllers:

[VersionedRoute("api/Customer", 1)]
public class CustomerVersion1Controller : ApiController
{
    // controller code goes here
}
[VersionedRoute("api/Customer", 2)]
public class CustomerVersion2Controller : ApiController
{
    // controller code goes here
}

That’s it – now requests to /api/Customer with the api-version header set to 1 (or empty, since it’s the default) go to the first controller, and with api-version set to 2 go to the second controller. The sample includes a handy test client console app that does just that:

[Screenshot: the test client console output, showing each request routed by version]
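In essence, the test client does something like this (a rough sketch; the port number is hypothetical):

static async Task CallVersionedApi()
{
    using (var client = new HttpClient())
    {
        // With the header set to 2, routing selects CustomerVersion2Controller;
        // omitting the header falls back to version 1.
        client.DefaultRequestHeaders.Add("api-version", "2");
        var response = await client.GetAsync("http://localhost:12345/api/Customer");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}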

Okay, let’s wrap up there for now. In the next (probably final) post, we’ll take a quick high level look at some of the other features in this release.

Recap:

  • Custom route constraints let you run custom logic to determine whether a route matches, and can also do things like compute values that are available to the matching controllers
  • The previous release allowed for custom inline route constraints, but they only operated on a segment
  • This *.1 release includes support for full custom route constraints

[.NETWorld] Looking at ASP.NET MVC 5.1 and Web API 2.1 – Part 1 – Overview and Enums

This is the first in a four part series covering ASP.NET MVC 5.1 and Web API 2.1

The sample project covering the posts in this series is here; other referenced samples are in the ASP.NET sample repository.

ASP.NET MVC 5.1, Web API 2.1 and Web Pages 3.1 were released on January 20. I call it the star-dot-one release, not sure if that one’s going to stick. Here are the top links to find out more:

Release notes

Let’s run through what’s involved in getting them and trying some of the new features.

Nothing to Install, just NuGet package updates

As I mentioned in my last post, ASP.NET has moved away from being a “big thing” that you install every few years. The ASP.NET project templates are now mostly a collection of composable NuGet packages, which can be updated more frequently and used without needing to install anything that will affect your dev environment, other projects you’re working on, your server environment, or other applications on your server.

You don’t need to wait for your hosting provider to support ASP.NET MVC 5.1, ASP.NET Web API 2.1 or ASP.NET Web Pages 3.1 – if they supported 5/2/3, they support 5.1/2.1/3.1. Put more simply: if your server supports ASP.NET 4.5, you’re set.

However, there are some new features for ASP.NET MVC 5.1 views that require you to be running the most recent Visual Studio update to get editing support. You’re installing the Visual Studio updates when they come out so that’s not a problem, right?

Okay, Let’s Have a Look Then

Game plan: I’m going to take an ASP.NET MVC 5 + Web API 2 project, update the NuGet packages, and then throw some of my favorite features in there.

In this case, I’m opting for the “mostly Web API template” since it includes both MVC and Web API, and it includes help pages right out of the box. I could go with “mostly MVC” + Web API, but then I’d need to install the Web API Help Page NuGet package and I might strain a muscle.

[Screenshot: the New ASP.NET Project dialog with the Web API template selected]

Now I’ll open the Manage NuGet Packages dialog and check for updates. Yup, there they are.

[Screenshot: the Manage NuGet Packages dialog showing the available updates]

Since this is a throw-away project I’ll throw caution to the wind and click Update All. If this were a real project, I might just update the three new releases so as not to pick an unnecessary fight with JavaScript libraries. But I’m feeling lucky today so Update All it is.

[Screenshot: the packages updating]

Wow, look at them go! jQuery 2.0.3 even. It’s a party. (anti-party disclaimer for those who might be getting carsick: I didn’t have to update to jQuery 2.0.3 or any of that other stuff to use the 5.1/2.1 stuff).

Enum Support in ASP.NET MVC Views

Okay, I’ll start by creating a Person model class with a Salutation enum:

using System.ComponentModel.DataAnnotations;
namespace StarDotOne.Models
{
    public class Person
    {
        public int Id { get; set; }
        public Salutation Salutation { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }
    }
    //I guess technically these are called honorifics
    public enum Salutation
    {
        [Display(Name = "Mr.")]
        Mr,
        [Display(Name = "Mrs.")]
        Mrs,
        [Display(Name = "Ms.")]
        Ms,
        [Display(Name = "Dr.")]
        Doctor,
        [Display(Name = "Prof.")]
        Professor,
        Sir,
        Lady,
        Lord
    }
}

Note that I’m using the Display attribute on a few that I want to abbreviate.

Next, I delete my HomeController and views and scaffold a new HomeController using the Person class. Caution to the wind being our theme, I’ll run it.

[Screenshot: the scaffolded Create view, with Salutation rendered as a plain textbox]

Oh no! No dropdown on Salutation!

Just kidding. That’s to be expected. To get the dropdown, we need to change the scaffolded view code for the Salutation from the generic Html.EditorFor to use the new Html.EnumDropDownListFor helper.

So in my Create.cshtml, I need to change this line:

@Html.EditorFor(model => model.Salutation)

to this:

@Html.EnumDropDownListFor(model => model.Salutation)

Okay, with that done I’ll refresh the page:

[Screenshot: the Create view with a Salutation dropdown]

And there it is.

“Now, Jon,” you say, “That’s really nice, but it would have been absolutely perfect if the scaffolder or EditorFor or something had seen the Enum property and just done the right thing.”

You’re right. I’m told that will all magically work in an update on the way soon. For now, though, it’s easy to get that behavior using some simple EditorTemplates and DisplayTemplates. You can find examples of them in this EnumSample on CodePlex. So I grabbed those templates and copied them into the /Views/Shared directory in my project:

[Screenshot: the EditorTemplates and DisplayTemplates folders copied into /Views/Shared]

And I’ll change my Create.cshtml view back to how it was originally scaffolded, using Html.EditorFor. That way the view engine will look for a matching EditorTemplate for the object type, find Enum.cshtml, and use that to render all enum model properties.

[Screenshot: the Create view rendered via the Enum editor template]

Blam!

Okay, one more fun thing in that EnumSample. There’s an overload of Html.EditorFor that lets you specify the EditorTemplate you’d like to be used. So I’ll change that line to this:

@Html.EditorFor(model => model.Salutation, templateName: "Enum-radio")

And now we are truly dropping science like Galileo dropped the orange:

[Screenshot: the Salutation field rendered as radio buttons]

Recap so far:

  • We updated to the new NuGet packages
  • We saw that we can now use a new helper to render dropdowns for enums: Html.EnumDropDownListFor
  • We saw that we can use EditorTemplates (and, trust me, DisplayTemplates as well) to encapsulate that so any call to Html.EditorFor will intelligently display enum properties

[.NETWorld] How does locking work in C#?

A question I’ve gotten from several readers recently is “how does the lock statement actually work in C#?” It appears to be somewhat magical; if you didn’t already have a lock statement, how would you implement it?

Before I go on, I should make one thing very clear: this article is going to be chock full of lies. The mechanisms that the CLR actually uses to implement locks are quite a bit more complicated than the oversimplified sketch I’m presenting here. The intended takeaway here is not that you understand precisely what the CLR does, but rather that you understand how a very simple lock could be built out of even simpler parts.

Let’s start by clearly describing what a lock statement does. It is documented by the specification as having two main properties. First, a lock statement takes a reference to an object which may be “acquired” by a thread; such an object is called a “monitor” for historical reasons. Only one thread may acquire a particular monitor at one time. If a second thread attempts to acquire a particular monitor while a first thread is holding it, the second thread blocks until such time as the first thread releases the lock and the second thread can acquire the monitor. The question then is how this behaviour can be implemented without locking, since that’s what we’re trying to implement. Second, the C# specification states that certain special side effects in multithreaded programs are always observed to be ordered in a particular way with respect to locks; we won’t discuss this aspect of locking in this article.

Once more, before I go on I want to clarify a few other differences between what I’m presenting today and reality. In the real C# language you can lock on any instance of a reference type; we won’t consider that. In the real C# the same thread can acquire the same monitor twice:

void M()
{
  lock(this.someMonitor) { N(); }
}
void N()
{
  lock(this.someMonitor) { whatever }
}

We won’t consider that either. And in the real C# language, there are more operations you can perform on monitors than just acquiring and releasing them, such as waiting and pulsing. I’m not going to discuss any of these today; remember, this is a pile of lies intended to get across the idea that locks are built out of more fundamental parts. The real implementation is far more complex than what I’ll present here.

OK, now that we’ve got that out of the way, the next thing to discuss is what the lock statement actually means in C#. When you say

void N()
{
  lock(this.someMonitor) { whatever }
}

the C# compiler does a very simple transformation of that code into:

void N()
{
  object monitor = this.someMonitor;
  System.Threading.Monitor.Enter(monitor);
  try
  {
    whatever
  }
  finally
  {
    System.Threading.Monitor.Exit(monitor);
  }
}

As you can see, Enter and Exit are the names in the .NET framework for the “acquire” and “release” operations on a monitor.

Here we have our first major lie. That was the code generated before C# 4, and it has some reliability problems. See my 2009 article on this subject for the real codegen.
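For reference, here’s a sketch of the pattern the compiler has used since C# 4 (see that article for the details); the flag tells the finally block whether the lock was actually taken, so it isn’t released if Enter never succeeded:

void N()
{
  bool lockWasTaken = false;
  object monitor = this.someMonitor;
  try
  {
    System.Threading.Monitor.Enter(monitor, ref lockWasTaken);
    whatever
  }
  finally
  {
    if (lockWasTaken) System.Threading.Monitor.Exit(monitor);
  }
}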

To illustrate how a lock is not magical, I need to show how to implement those enter and exit methods without using locks. So again, let’s simplify. Instead of any object, let’s make a specific class of monitor objects to illustrate how it might work. Here’s a naive and broken implementation:

sealed class MySimpleMonitor // Broken!
{
  private bool acquired;
  public MySimpleMonitor()
  {
    acquired = false;
  }
  public void Enter()
  {
    // Yield until acquired is false
    while(acquired) { Thread.Sleep(1); }
    acquired = true;
  }
  public void Exit()
  {
    if (!acquired) throw new Exception("Bogus exit dude!");
    acquired = false;
  }
}

It should be clear why this implementation is unacceptable. Suppose we have:

class C
{
  private MySimpleMonitor monitor = new MySimpleMonitor();
  public void Foo()
  {
    monitor.Enter();
    try
    {
      whatever
    }
    finally
    {
      monitor.Exit();
    }
  }
}

Threads A and B both call Foo and both enter the Enter method. Both discover that acquired is false, skip the loop, both set acquired to true, and both enter the body of the lock. How do we fix this problem?

The method we need is int Interlocked.CompareExchange(ref int variable, int newValue, int oldValue). This method takes an integer variable (by reference) and two values. If the variable is equal to the second value then it is set to the first value; otherwise, it stays the same. Regardless of whether the variable is changed or not, the original value of the variable is returned. All of this is done atomically. This is the magic building block that we need to build a monitor:

sealed class MySimpleMonitor
{
  private const int Available = 0;
  private const int Taken = 1;
  private int state;
  public MySimpleMonitor()
  {
    state = Available;
  }
  public void Enter()
  {
    while(true)
    {
      // If the state is Available then set it to Taken.
      int original = Interlocked.CompareExchange(ref state, Taken, Available);
      // Was it originally Available? Then we took it!
      if (original == Available)
        return;
      // It was not Available so it must have been Taken. We need to block.
      // This call means "yield the rest of my time to any other thread";
      // hopefully the thread that has the lock will call Exit.
      Thread.Sleep(1);
    }
  }
  public void Exit()
  {
    // If we're exiting, we'd better be in the Taken state.
    int original = Interlocked.CompareExchange(ref state, Available, Taken);
    if (original == Available)
      throw new Exception("Bogus exit dude!");
    // We must have been in the Taken state, so we're now in the Available state.
  }
}

Now if you’re reading carefully you should at this point be protesting that the question has been thoroughly begged. We have implemented an exceedingly simple monitor, yes, but we’ve just moved the magic atomicity from Enter into Interlocked.CompareExchange! How then is this implemented atomically?

By the hardware! The Intel chip has an instruction CMPXCHG which does an atomic-compare-and-exchange. Interlocked.CompareExchange can be thought of as a thin wrapper around a machine code routine that uses this instruction. How the hardware achieves an atomic compare and exchange is up to Intel. Ultimately all the magic in any computer program comes down to the hardware eventually. (Of course that instruction has not existed in every chipset since the invention of monitors in the 1970s. Implementing atomic-compare-and-exchange on hardware that does not have this instruction is an interesting challenge but well beyond the scope of this article. On modern hardware we rely on these sorts of instructions.)

The monitor implementation I’ve presented here would work, but it is extremely inefficient compared to real monitors; the real implementation does not simply sit in a loop calling CompareExchange and Thread.Sleep(1). Suppose we have a hyperthreaded processor — that is, we have two threads of execution going in one physical processor. Suppose further one thread has acquired the monitor and its lock body is extremely short: on the order of nanoseconds. If the second thread has the bad luck to ask for the monitor a couple nanoseconds before the monitor is going to be released by the first thread, then the second thread ends up ceding its time to any other thread in the system. What would be better is for it to burn those couple nanoseconds doing no real work and try again; this avoids the cost of the context switch to another thread.

The .NET Framework gives you multiple tools you could use to build a more sophisticated waiting strategy: Thread.SpinWait puts the processor into a tight loop allowing you to wait a few nanoseconds or microseconds without ceding control to another thread. Thread.Sleep(0) cedes control to any ready thread of equal priority or keeps going on the current thread if there is none. Thread.Yield cedes control to any ready thread associated with the current processor. And as we’ve seen, Thread.Sleep(1) cedes control to any ready thread of the operating system’s choice. By carefully choosing a mix of these calls and doing performance testing in realistic conditions you could build a high-performance implementation, and of course this is what the CLR team has actually done.
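As a rough sketch of such a mix (my own, not the CLR’s actual strategy), Enter could spin briefly before falling back to sleeping:

public void Enter()
{
  int spins = 0;
  while (Interlocked.CompareExchange(ref state, Taken, Available) != Available)
  {
    if (spins++ < 20)
      Thread.SpinWait(100); // burn a few cycles; the holder may release very soon
    else
      Thread.Sleep(1);      // we've waited a while; give up our time slice
  }
}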

So that’s it; an extremely simple monitor can be built out of an atomic test-and-set of a flag that indicates whether the monitor is taken or available, and a strategy for waiting that gives good performance in the (hopefully unlikely!) event that the monitor cannot be acquired immediately. We’d need to add a small amount of extra gear to allow the same monitor to be taken in “nested” scenarios, but perhaps you can see how that could be done from this sketch. In order to make a monitor that is robust in the face of thread abort exceptions we’d need even more gear, but again, it could all be built out of judicious uses of CompareExchange. You might have also noticed that our oversimplified implementation is in no way “fair”: if there are ten threads waiting for a monitor there is no guarantee that the one that has been waiting the longest gets it; this could cause “thread starvation” in practice. And finally, in the real CLR any object of reference type can be used as a monitor. The exact details of how the CLR does so efficiently are beyond the scope of this article; suffice to say that the CLR’s implementation is heavily optimized for the case that a monitor is (1) used only for locking, and (2) is never contended.

[.NETWorld] Does Garbage Collection Hurt?

When dealing with application responsiveness issues you often need data from the actual users to be sure what is going on. There are plenty of reasons why your application might not respond. The most prominent examples are:

  • Network
    • A bad network connection can totally ruin the user experience. Wireshark is your best friend to check if you have a network issue at all.
  • Paging
    • If your application has a high memory footprint you will experience sudden slowdowns because the OS has silently paged out unused memory. When you touch that memory again it must be read back in from the page file, which is about 1000 times slower than plain memory access. An SSD can greatly improve the user experience. (Check the performance counter Memory\Pages Input/sec.)
  • Disk IO
    • If your app hangs because it is busy writing 500 MB to disk: never do such work on the UI thread, even though it is much easier to program that way.
  • Garbage Collection

After you have looked at network (we need that data), paging (we need that much memory), and disk IO (we need to read/write that much data), you should also check whether garbage collection is an issue. GC times can become quite high if you create a lot of temporary objects; de/serialization in particular is a source of pain for the GC. How do you find out if you have a GC issue? First check the usual performance counters, which tell you how often the GC kicks in and how much CPU is burned cleaning up your mess. If the number stays low, e.g. 10-20%, you usually do not need to worry much about it, unless your application hangs too often during UI interactions. In that case you need to dig deeper.
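
If you want to sanity-check those counters from code rather than perfmon, a small sketch like the following works (the category and counter names are the standard “.NET CLR Memory” ones; the rest is illustrative, and note that duplicate process names get “#1”-style instance suffixes):

using System;
using System.Diagnostics;
using System.Threading;

class GcCounterCheck
{
    static void Main()
    {
        // counter instance names follow the process name, e.g. "MyApp"
        string instance = Process.GetCurrentProcess().ProcessName;
        using (var timeInGc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance))
        using (var gen2 = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instance))
        {
            timeInGc.NextValue();        // first sample primes the counter
            Thread.Sleep(1000);
            Console.WriteLine("% Time in GC:      {0:F1}", timeInGc.NextValue());
            Console.WriteLine("Gen 2 collections: {0}", gen2.NextValue());
        }
    }
}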

The best tool to check managed memory issues, including GC-induced latency, is PerfView from Vance Morrison. When you uncheck everything except “GC Collect Only” you can let it run for a long time on any machine to check how many GCs you have.


When you have pressed Start/Stop Collection you get an etl file that contains a GCStats section:


To complete your analysis, double-click GCStats to get a nice per-process overview of how much memory each process allocated and how much GC pause time your application experienced.


If you want to drill deeper you can export the data to Excel and check every single GC event if you wish. That is all nice, but what if the customer does not want to install unfamiliar tools on their machines? In that case you can still get the data and analyze it on another machine. PerfView relies on the CLR ETW providers, which can also be enabled with the command-line tool logman, which has shipped with Windows since XP. To check which data the CLR ETW provider can give you, execute:

logman query providers ".NET Common Language Runtime"

The most interesting keywords are

Keyword (Hex)        Name              Description
0x0000000000000001   GCKeyword         GC
0x0000000000000004   FusionKeyword     Binder (log assembly loading attempts from various locations)
0x0000000000000008   LoaderKeyword     Loader (assembly load events)
0x0000000000008000   ExceptionKeyword  Exception

The most prominent keyword is 0x1, which is GC. The other very interesting keyword is Exception (0x8000), which logs every thrown exception in all managed applications. To check for GC latency issues and all exceptions, I have created a little script to automate the task:

gcEvents.cmd

@echo off
REM enable GC Events 0x1
REM Exception tracing is 0x8000
logman start clr -p ".NET Common Language Runtime" 0x1,0x8000 0x5 -ets -o "%temp%\logmanclr.etl" -ct perf -bs 1024 -nb 50 500 -max 500
REM Needed to decode clr events later
logman start "NT Kernel Logger" -p "Windows Kernel Trace" (process,thread) -ets -o "%temp%\logmanKernel.etl" -ct perf -bs 1024 -nb 50 500 -max 500
pause
logman -stop clr -ets
logman -stop "NT Kernel Logger" -ets
if "%1" EQU "-Merge" (
    xperf -merge "%temp%\logmanclr.etl" "%temp%\logmanKernel.etl" "%temp%\merged.etl"
)

You can send your customer this script and ask them to start it, execute the problematic use case, and then press any key to stop collecting data. The result is two files named logmanclr.etl and logmanKernel.etl, which can be zipped and mailed back to you. On your machine you then need to merge the two etl files to be able to load the data into PerfView; this is what the script does with the -Merge option if you have the Windows Performance Toolkit installed. The other logman options are there to prevent losing events even when very many of them come in, and to keep from filling up the hard disk by capping each file at 500 MB.

For GC-relevant data, PerfView is the best tool to analyze the captured trace. If you are after exceptions, you can also use WPA to check which exceptions were thrown while your application and all others were running on the machine. This gives you only the exception type and message but no stacks, because only a very limited set of ETW providers is enabled, a setup well suited for long-running tests.


Since each garbage collection has a start and a stop event, which roughly correlate with the pause times of your application, you can also create a regions file to visualize the GC events. This makes it extremely simple to check whether your app was hanging because a long GC was running which (probably) suspended your thread while it tried to allocate yet more memory. When concurrent GC is enabled you can still allocate data and even new GC segments (for Gen 0/1), but not large objects or Gen 2. Most of the time, GC time does correlate with app pause times.


Here is the regions file I came up with to make the graph above.

gcRegions.xml

<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<?Copyright (c) Microsoft Corporation. All rights reserved.?>
<InstrumentationManifest>
  <Instrumentation>
    <Regions>
      <RegionRoot Guid="{d8d639a0-cf4c-45fb-976a-0000DEADBEEF}" Name="GC" FriendlyName="GC Times">
        <Region Guid="{d8d639a0-cf4c-45fb-976a-000000000001}" Name="GCStart" FriendlyName="GC Start">
          <Start>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="1" Version="0"/>
          </Start>
          <Stop>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="2" Version="0"/>
          </Stop>
        </Region>

        <Region Guid="{d8d639a0-cf4d-45fb-976a-000000000002}" Name="GCStart_V1" FriendlyName="GC">
          <Start>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="1" Version="1"/>
          </Start>
          <Stop>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="2" Version="1"/>
          </Stop>
        </Region>

        <Region Guid="{d8d639a0-cf4d-45fb-976a-000000000003}" Name="GCStart_V2" FriendlyName="GC">
          <Start>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="1" Version="2"/>
          </Start>
          <Stop>
            <Event Provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}" Id="2" Version="1"/>
          </Stop>
        </Region>
      </RegionRoot>
    </Regions>
  </Instrumentation>
</InstrumentationManifest>

Not bad for such a simple file to make WPA GC-aware. To activate it, select Trace – Trace Properties in the menu and add the file there. A new Regions of Interest graph is then added to your existing graphs, which you can add to your analysis pane as usual.


This overview is really nice for checking when a GC started and stopped. If you need a deeper analysis there is no way around PerfView, which gives you all the numbers pre-analyzed in a per-process view. When you have managed memory leaks you should check out the Memory menu of PerfView to take GC heap dumps from running processes (nice) and also from memory dumps (even better). The latter functionality is essential when you only have a dump from a crashed process and it is not obvious who is holding most of the objects. I know .NET Memory Profiler from SciTech, which can also import dump files, but I got lost in its fancy UI. PerfView, on the other hand, looks old fashioned, but it does its job much better than the UI might suggest:
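
If you want something to practice heap-dump analysis on, a scaled-down version of the kind of leak discussed below is trivial to produce (all names here are hypothetical, not my actual test app):

using System;
using System.Collections.Generic;

static class Cache
{
    // A static field is a GC root: everything reachable from it stays alive
    // for the lifetime of the AppDomain.
    public static readonly List<byte[]> Data = new List<byte[]>();
}

class Program
{
    static void Main()
    {
        // scaled-down stand-in for the ~10 GB / 10003-instance case shown below
        for (int i = 0; i < 1000; i++)
            Cache.Data.Add(new byte[1000 * 1000]);
        Console.WriteLine("Retained instances: {0}", Cache.Data.Count);
        Console.ReadLine(); // keep the process alive so you can take a heap dump
    }
}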


Here I can see that I keep 100% of my data (10 GB in a static variable across 10003 instances), which is very easy to see. Other profilers can do this very well too. But if you double-click on the type, you get the shortest roots to all objects holding your data, nicely printed in a tree view. Cyclic references are broken up and the relevant nodes are marked bold. Other commercial tools had a hard time telling me who was holding my data when graph cycles were involved; PerfView showed it to me directly, without much navigation in a complex, interdependent object graph.

To check who is allocating too much data you can also use PerfView: enable “.NET SamplAlloc” to sample your allocations with minimal performance impact, and you get the allocation stacks nicely lined up by object type. If you can afford it performance-wise, you can enable “.NET Alloc” instead to track every managed object allocation, which gives you exact numbers.

That’s all for today. I am constantly amazed how much useful data you can get out of ETW, and shocked at the same time that so few people know about it. If your New Year’s resolution is already history, how about a new goal? Learn how to get the most out of ETW!

[.NETWorld] Who Said Building Visual Studio Extensions Was Hard?

In years past, building Visual Studio extensions has often been considered the realm of the big boys: staff working at JetBrains, or the Microsoft employees of the world. Last year I saw a talk given by Mads Kristensen aimed at taking away some of this stigma and showing how easy the folks at Microsoft have tried to make it for developers like you and me to just up and write extensions. I’ve been wanting to build one ever since, but haven’t had a good enough excuse to jump right in – until now. Here follows the creation of “OnCheckin Web.config Transformer”.

My little project’s requirements

Last year I launched my own SaaS startup OnCheckin to bring the time- and money-saving gift of deployment automation to the masses.

A recent release has added support for multiple environments for each deployment project. With this comes the addition of environment-based config transforms on top of the already supported “web.oncheckin.config” transform applied to all builds and deploys done through OnCheckin.

The way this works is a tiered transformation of your web.config.

If you have an environment in your deployment workflow called Production, and you want to store database connection strings and other environment-specific settings, then you’ll need to add a config transform named “web.production.config” to your project (a sample transform is sketched after the list below).

Web.config transforms are then applied in the following order.

  1. web.release.config
  2. web.oncheckin.config
  3. web.production.config
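
For illustration, a minimal web.production.config using the standard XDT transform syntax might look like this (the connection string name and values are hypothetical):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- xdt:Transform / xdt:Locator are the standard web.config transform attributes -->
    <add name="DefaultConnection"
         connectionString="Server=prod-sql;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>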

This is great, but unless you have a publishing profile in your website called “production”, creating the above transform is actually a little more difficult, and involves a bit of fiddling with the actual XML in your web application’s project file.
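
For a standard web application project, that fiddling amounts to roughly this kind of entry (a sketch; your file names may differ):

<ItemGroup>
  <None Include="Web.production.config">
    <!-- nests the transform under Web.config in Solution Explorer -->
    <DependentUpon>Web.config</DependentUpon>
  </None>
</ItemGroup>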

As with a lot of learning projects, having your own itch to scratch is often the best way to start.

What you’ll need to get started

First you’ll need the following: a copy of Visual Studio and the matching Visual Studio SDK (which provides the Extensibility project templates).

Once you’ve got these installed you can jump right in.

Create a new project under Visual C# > Extensibility > Visual Studio Package.


Click through the opening wizard.


Select a language for your extension and either generate or supply a signing key.


Enter some basic information about your plugin and provide an icon.


Then select “Menu Command” from the next window – this will create the boilerplate code to get us started.


Then enter the text for your first command option and give it a command id (you’ll understand this later).


Select whether you want Microsoft unit test and integration test projects to get you started (yes, please!).


Then click “Finish”.

This has actually created a working menu-item VSIX project for you.

If you “Run” the project, a new sandboxed “Visual Studio Experimental Instance” will start with your menu plugin installed. Open a project and then open the “Tools” menu to see your plugin.


If I click this I get the default method created by the template firing.


You can find this code inside the file “OnCheckinTransforms.VisualStudioPackage.cs”, created automatically by the project setup.

private void MenuItemCallback(object sender, EventArgs e)
{
    // Show a Message Box to prove we were here
    IVsUIShell uiShell = (IVsUIShell)GetService(typeof(SVsUIShell));
    Guid clsid = Guid.Empty;
    int result;
    Microsoft.VisualStudio.ErrorHandler.ThrowOnFailure(uiShell.ShowMessageBox(
        0,
        ref clsid,
        "OnCheckin Transforms",
        string.Format(CultureInfo.CurrentCulture, "Inside {0}.MenuItemCallback()", this.ToString()),
        string.Empty,
        0,
        OLEMSGBUTTON.OLEMSGBUTTON_OK,
        OLEMSGDEFBUTTON.OLEMSGDEFBUTTON_FIRST,
        OLEMSGICON.OLEMSGICON_INFO,
        0, // false
        out result));
}

Moving along – what we want to do is change this from a menu item to a context-menu item for files in your solution, so that when you right-click a file (our final goal is just web.config files) you see our menu. Let’s change this.

Open “OnCheckinTransforms.VisualStudio.vsct” and modify the menu group created for your action to make it an “Item node menu” command instead of a “Visual Studio Menu” command.

<Group guid="guidOnCheckinTransforms_VisualStudioCmdSet" id="MyMenuGroup" priority="0x0600">
  <Parent guid="guidSHLMainMenu" id="IDM_VS_CTXT_ITEMNODE"/>
  <!--<Parent guid="guidSHLMainMenu" id="IDM_VS_MENU_TOOLS"/>-->
</Group>

Then, immediately upon clicking “Start” on the project, another instance of Visual Studio will launch with your plugin installed.

You’ll notice now that if you right-click on a project item (any item), you’ll see your command option.


We’re moving along pretty quickly, but what we really want now is:

  • Only show our menu if a project item is selected (not a folder, or the project itself).
  • Disable our menu if the selected file isn’t a web.config, or is a child of a web.config.

To do the above we can hook up an event that fires before the context menu shows on the screen. This means we can hide or disable our menu through code based on the file selected.

The first thing we’ll need to do is turn on the features to disable and hide our menu item by default. To do this open up the “OnCheckinTransforms.VisualStudio.vsct” file again and add a few lines to our menu button.

<Button guid="guidOnCheckinTransforms_VisualStudioCmdSet" id="oncheckinEnvTransform" priority="0x0100" type="Button">
  <Parent guid="guidOnCheckinTransforms_VisualStudioCmdSet" id="MyMenuGroup"/>
  <Icon guid="guidImages" id="bmpPic1" />
  <!-- the 2 lines below set the default visibility -->
  <CommandFlag>DefaultInvisible</CommandFlag>
  <CommandFlag>DynamicVisibility</CommandFlag>
  <Strings>
    <ButtonText>Add EnvironmentTransforms</ButtonText>
  </Strings>
</Button>

Then we open our ‘OnCheckinTransforms.VisualStudioPackage.cs’ file again and replace a few lines in our Initialize method. We change our menu command’s type, and then hook into a BeforeQueryStatus event handler.

OleMenuCommandService mcs = GetService(typeof(IMenuCommandService)) as OleMenuCommandService;
if (null != mcs)
{
    // Create the command for the menu item.
    CommandID menuCommandID = new CommandID(GuidList.guidOnCheckinTransforms_VisualStudioCmdSet, (int)PkgCmdIDList.oncheckinEnvTransform);
    // WE COMMENT OUT THE LINE BELOW
    // MenuCommand menuItem = new MenuCommand(MenuItemCallback, menuCommandID);
    // AND REPLACE IT WITH A DIFFERENT TYPE
    var menuItem = new OleMenuCommand(MenuItemCallback, menuCommandID);
    menuItem.BeforeQueryStatus += menuCommand_BeforeQueryStatus;
    mcs.AddCommand(menuItem);
}

Then we add a new method to handle changing the status of our menu item and check if the filename is ‘web.config’ before showing.

void menuCommand_BeforeQueryStatus(object sender, EventArgs e)
{
    // get the menu that fired the event
    var menuCommand = sender as OleMenuCommand;
    if (menuCommand != null)
    {
        // start by assuming that the menu will not be shown
        menuCommand.Visible = false;
        menuCommand.Enabled = false;

        IVsHierarchy hierarchy = null;
        uint itemid = VSConstants.VSITEMID_NIL;
        if (!IsSingleProjectItemSelection(out hierarchy, out itemid)) return;

        // Get the file path
        string itemFullPath = null;
        ((IVsProject)hierarchy).GetMkDocument(itemid, out itemFullPath);
        var transformFileInfo = new FileInfo(itemFullPath);

        // then check if the file is named 'web.config'
        bool isWebConfig = string.Compare("web.config", transformFileInfo.Name, StringComparison.OrdinalIgnoreCase) == 0;

        // if not, leave the menu hidden
        if (!isWebConfig) return;

        menuCommand.Visible = true;
        menuCommand.Enabled = true;
    }
}

public static bool IsSingleProjectItemSelection(out IVsHierarchy hierarchy, out uint itemid)
{
    hierarchy = null;
    itemid = VSConstants.VSITEMID_NIL;
    int hr = VSConstants.S_OK;

    var monitorSelection = Package.GetGlobalService(typeof(SVsShellMonitorSelection)) as IVsMonitorSelection;
    var solution = Package.GetGlobalService(typeof(SVsSolution)) as IVsSolution;
    if (monitorSelection == null || solution == null)
    {
        return false;
    }

    IVsMultiItemSelect multiItemSelect = null;
    IntPtr hierarchyPtr = IntPtr.Zero;
    IntPtr selectionContainerPtr = IntPtr.Zero;
    try
    {
        hr = monitorSelection.GetCurrentSelection(out hierarchyPtr, out itemid, out multiItemSelect, out selectionContainerPtr);
        if (ErrorHandler.Failed(hr) || hierarchyPtr == IntPtr.Zero || itemid == VSConstants.VSITEMID_NIL)
        {
            // there is no selection
            return false;
        }

        // multiple items are selected
        if (multiItemSelect != null) return false;

        // there is a hierarchy root node selected, thus it is not a single item inside a project
        if (itemid == VSConstants.VSITEMID_ROOT) return false;

        hierarchy = Marshal.GetObjectForIUnknown(hierarchyPtr) as IVsHierarchy;
        if (hierarchy == null) return false;

        Guid guidProjectID = Guid.Empty;
        if (ErrorHandler.Failed(solution.GetGuidOfProject(hierarchy, out guidProjectID)))
        {
            return false; // hierarchy is not a project inside the Solution if it does not have a ProjectID Guid
        }

        // if we got this far then there is a single project item selected
        return true;
    }
    finally
    {
        if (selectionContainerPtr != IntPtr.Zero)
        {
            Marshal.Release(selectionContainerPtr);
        }
        if (hierarchyPtr != IntPtr.Zero)
        {
            Marshal.Release(hierarchyPtr);
        }
    }
}

Now we have our extension showing up only when we right-click a web.config file; all other files will hide/disable the extension menu option.

The rest of the code required to replace the click handler with code that adds a web.config transform is included in the GitHub repository at the end – it gets a bit tedious to paste inside a post.
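
As a flavor of that handler, a stripped-down sketch (a hypothetical helper of my own, not the repository’s actual code) might look like this:

// Assumes this method lives inside the package class (so GetService is available)
// and that webConfigPath was resolved as in menuCommand_BeforeQueryStatus above.
// using System.IO; using EnvDTE;
private void AddTransformForEnvironment(string webConfigPath, string environmentName)
{
    // e.g. web.production.config next to the selected web.config
    string transformPath = Path.Combine(
        Path.GetDirectoryName(webConfigPath),
        "web." + environmentName.ToLowerInvariant() + ".config");

    if (!File.Exists(transformPath))
    {
        // minimal empty XDT transform document
        File.WriteAllText(transformPath,
            "<?xml version=\"1.0\"?>\r\n" +
            "<configuration xmlns:xdt=\"http://schemas.microsoft.com/XML-Document-Transform\">\r\n" +
            "</configuration>");
    }

    // Add the file to the project so it shows up in Solution Explorer.
    var dte = (DTE)GetService(typeof(DTE));
    ProjectItem webConfigItem = dte.Solution.FindProjectItem(webConfigPath);
    if (webConfigItem != null)
    {
        // AddFromFile nests it under web.config in most web projects;
        // full DependentUpon wiring may still need project-file edits.
        webConfigItem.ProjectItems.AddFromFile(transformPath);
    }
}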

To continue your journey you can take a look at the VSIX documentation over on MSDN.

Publishing your VSIX

Once you’ve got your extension to a place where you’re happy, it’s time to get it out there for other developers to use. You want to publish your extension in the Visual Studio Extension Gallery.

First, build your extension in release mode, then head on over to http://visualstudiogallery.msdn.microsoft.com/

Login with the Microsoft account you want to publish using.

Then click on the big “Upload” button on the home page.


On the second page, select “Tool” as the extension type you’re uploading.


Then browse to your ‘/bin/release’ directory and select your VSIX for upload.


Then on the next page enter a description, select some categories and select “Publish” and you’re done!


But wait, there’s more…


My final VSIX was a little more involved than the above shows, as I extended mine to contain a WPF window and some more logic to actually add a web.config transform. As I also reused some of the great codebase from Sayed Hashimi’s project Slow Cheetah, and Sayed’s project is open sourced under the Apache 2.0 license, I’ve decided to open source my project as well.

You can find all the source code for it over here on GitHub – also licensed as Apache 2.0, so you can reuse and learn forevermore.

My final Visual Studio plugin is also now online for you to download and use, and it can be found here.

If you’d like to give the new release of OnCheckin.com a try feel free to head on over and signup today!

Published at DZone with permission of Douglas Rathbone, author and DZone MVB. (source)