[Dev Tip] Understanding Windows Identity Foundation (WIF) 4.5

First Things First

If you are looking for an article that shows a lot of code and dissects the new WIF 4.5 APIs, then you won’t find what you need here.

My aim in this article is to explain the “why” and the “what” rather than the “how”. Once you understand those, the “how” becomes really simple. This article does not assume prior knowledge of federation, claims, or WIF, so it’s suited for beginners. However, I think readers with mid-level knowledge will also benefit from it. And if you’re a super-expert, well, please contact me to help with my current project.

What Exactly is the Problem?

Authentication (and authorization) is an ever-present challenge for most applications. The challenge these applications face is the same: authentication logic creeps into the application code and becomes coupled with it; any change to the authentication requirements results in a change to the application itself.

Say, for example, that your user store is SQL Server and a new business requirement mandates adding an existing Oracle-based user store to your list of users; or maybe you want to mix authentication modes to support both custom user stores (SQL Server or Oracle) and an Active Directory user store. And what about social media? It’s increasingly popular nowadays for applications to allow authentication via services such as Google and Facebook.

Here is another tough case: assume you used to ask your users for a username/password combination to log in. Now, based on new requirements, you want them to also supply additional information, such as a one-time code. This will certainly lead to a UI change for your login page as well as a code change.

In all these cases, something has to change in your code. Of course, a good architecture with a proper separation of concerns will ease the change. The point, however, is that you still have to manage this authentication logic instead of focusing on the business side of your application.

Claims-based authentication is the architecture that solves this problem.

Claims-based Authentication

Claims-based architecture allows you to delegate authentication logic to another entity. This entity is a layer that abstracts all authentication-related coding and gives your application exactly what it needs: whether the user is authenticated or not, plus some information about the user (called claims or assertions) that lets your application make authorization decisions.

The claims-based architecture defines the following actors:

  • Subject: the entity that needs to be authenticated. This can be a user who wants to log in to your application, or a piece of code in your application that wants to access a web service.
  • Relying Party (RP): the application (in case the Subject is a user) or the web service (in case the Subject is application code) that needs to delegate the authentication logic.
  • Identity Provider (IP): the entity (the layer mentioned before) that actually holds the authentication logic. The IP talks to the appropriate user stores and performs the actual authentication.
  • Claim: when an IP performs successful authentication, it returns to the RP a set of claims. Claims are statements about the authenticated entity – for example birth date, department, role, etc. – that give the RP information to make authorization decisions.
  • Token: claims travel inside a token. A token can be as simple as a username/password combination or a plain string such as a bearer token in OAuth 2.0; in this context, however, tokens are either XML-based, such as SAML tokens, or binary, such as X.509 certificates.

In addition to the definitions above, the WS-Federation and WS-Trust protocols define another term: the Security Token Service (STS). The STS is the web service, exposed by the IP, that provides the authentication service.
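To make these definitions concrete, here is a minimal sketch of a set of claims using the .NET claim types covered later in this article; the user values are hypothetical:

```csharp
using System.Security.Claims;

// Hypothetical claims an IP might issue after authenticating a user.
// Claim types are URIs; the ClaimTypes class holds the well-known ones.
var claims = new[]
{
    new Claim(ClaimTypes.Name, "terry"),
    new Claim(ClaimTypes.Email, "terry@example.com"),
    new Claim(ClaimTypes.DateOfBirth, "1980-05-01"),
    new Claim(ClaimTypes.Role, "Sales")
};

// At the RP, these arrive inside a token and surface as a principal that
// the application can query when making authorization decisions.
var identity = new ClaimsIdentity(claims, "Federation");
var principal = new ClaimsPrincipal(identity);
```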

Before the claims-based flow can start, the RP and IP need to publish their policies. In abstract terms, a policy is a contract published by an entity that specifies the terms and conditions that other entities must obey before establishing communication.

In this context, the policy published by the IP specifies the supported protocol(s) and security requirements as well as supported claim types. Similarly, the policy published by the RP specifies its own protocol, security, and claims requirements as well as the list of IPs it trusts to delegate authentication to.

The rest of the article discusses WS-Federation (and related WS-standards) and its implementation in WIF, while I will also briefly discuss SAML 2.0.

The WS-* Mania

For some, the WS-* standards are something to avoid. They tend to strike developers as being overly complex, and complex they are indeed. However, with the advent and continuous enhancement of developer libraries, most of the time working with WS-* standards is nothing more than configuration tweaking. Granted, you always need to understand what is going on behind the scenes if you really want to understand the architecture.

This section discusses – briefly – some core WS-* standards that are related to WS-Federation.

WS-Security

WS-Security is a SOAP extension that adds authentication/authorization, message protection, and message integrity to SOAP messages.

  • Authentication/authorization: authentication is implemented using security tokens, while the claims carried inside a security token aid in authorization. Although the set can be extended, the three types of tokens you’d usually see are username, binary, and XML-based tokens.
    • Username tokens: these are the plain-old username/password combinations sent in the SOAP header. Verification of identity can be achieved by hashing the password or applying a digital signature.
    • Binary tokens: these usually come in two flavors: X.509 certificates and Kerberos tickets.
      • X.509 certificates: an X.509 certificate is the public-key container of a public/private key pair. Obviously, since it contains only a public key, it cannot be relied upon for authentication by itself. Instead, the message is signed with the sender’s private key, and the receiver uses the public key in the certificate to verify the signature. Since the private key is unique to the sender, signature verification proves identity.
      • Kerberos tickets: if a Kerberos infrastructure is already in place, WS-Security recognizes Kerberos tickets as a valid security token type.
    • XML-based tokens: XML tokens were published as an add-on specification to WS-Security. XML tokens contain a set of claims about the sender. Similar to X.509 certificate tokens, XML tokens must be accompanied by a signature generated by the sender so that the receiver can verify the sender’s identity. Probably the most dominant form of XML token is the SAML token.
  • Message protection: whereas at the transport level message protection is established using SSL, at the message level it is done via XML Encryption. Depending on the configuration, WS-Security uses either a symmetric shared key or an asymmetric public/private key pair to encrypt the required message content. In the symmetric approach, the shared key must be exchanged securely prior to communication. In the asymmetric approach, the sender encrypts the (required) message content using the receiver’s public key, and the receiver decrypts it using its private key. Most of the time, though, a combination of both is used, because the asymmetric approach is computationally expensive. In this hybrid approach, asymmetric encryption is used to exchange a shared key, and that shared key is then used for encryption for the rest of the session.
  • Message integrity: as discussed in the authentication section, XML signatures are used to establish user identity. They are also used to establish message integrity, i.e. that the message has not been tampered with. When a signature is attached to a message, the receiver recalculates it; if both signatures match, integrity is verified. Similar to encryption, a hybrid approach is usually used to reduce the cost of asymmetric signatures.
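The hybrid approach described above can be sketched as follows. This is only an illustration of the key-exchange idea, not the actual WS-Security wire format (which also involves IV handling and XML Encryption elements):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] payload = Encoding.UTF8.GetBytes("<Order>42</Order>");

// The receiver owns an asymmetric key pair; the sender knows only the public part.
using (var receiverRsa = new RSACryptoServiceProvider(2048))
using (var aes = Aes.Create())
{
    // Sender: the expensive asymmetric operation protects only the small
    // symmetric session key, not the whole message.
    byte[] wrappedKey = receiverRsa.Encrypt(aes.Key, true);

    byte[] ciphertext;
    using (var enc = aes.CreateEncryptor())
        ciphertext = enc.TransformFinalBlock(payload, 0, payload.Length);

    // Receiver: unwrap the session key with the private key, then use the
    // cheap symmetric key for the rest of the session.
    aes.Key = receiverRsa.Decrypt(wrappedKey, true);
    using (var dec = aes.CreateDecryptor())
        payload = dec.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
}
```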

WS-Policy

WSDL does a good job describing basic web service requirements such as message schemas and authentication headers. However, WSDL cannot describe contractual requirements such as security. For example, recall from the previous section on claims-based architecture that an RP-IP interaction is governed by a set of security policies. These policies cannot be described using WSDL; instead, they are described by WS-Policy and its related specification, WS-SecurityPolicy.

In general, there are WS-Policy assertions for security, reliable messaging, sessions, and transactions, among others. In WCF, these policies are specified either as code attributes or configuration sections.

  • WS-Policy: a specification that defines a framework for describing policy assertions. An assertion is a requirement or preference of the service. This specification defines a common language regardless of the assertion domain (security, transactions, reliable messaging, etc…).
  • WS-PolicyAssertions: a specification that defines general messaging-related assertions for use with WS-Policy. Separate assertions exist for different domains; for example, WS-SecurityPolicy, WS-AtomicTransaction, and WS-ReliableMessaging.
  • WS-PolicyAttachment: a specification that describes how policies are attached to WSDL (and UDDI).
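As an illustration, a WS-SecurityPolicy assertion of the kind WS-PolicyAttachment would attach to a WSDL binding might look like the fragment below; the element names follow the WS-SecurityPolicy 1.2 namespace, but the exact shape depends entirely on the service:

```xml
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <sp:SymmetricBinding>
    <wsp:Policy>
      <sp:ProtectionToken>
        <wsp:Policy>
          <!-- the service requires messages protected with a token
               issued by a trusted STS -->
          <sp:IssuedToken />
        </wsp:Policy>
      </sp:ProtectionToken>
    </wsp:Policy>
  </sp:SymmetricBinding>
</wsp:Policy>
```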

WS-Addressing

The WS-Addressing specification provides elements that enable end-to-end, transport-independent message transmission. For example, it allows you to implement message routing (previously WS-Routing), where, based on some message criteria, you can explicitly specify the next hop in the message route. You can also specify different response or fault return URLs for a message, thus sending the response to a different endpoint than the originator.

WS-Trust

As discussed in the claims-based architecture section, you can delegate your application’s authentication logic to another entity, which issues claims that your application consumes. The part I skipped is: how do you make your application “trust” these claims? What prevents a fake IP from generating a claim and sending it to your application, which would then grant it access?

Let’s take this one step further: assume two companies, A and B, want to conduct business. Company A wants company B users to access its application. How can this be done? One way is for A to provision B’s users; however, this is clearly a troublesome solution, as A will have to manage and control B’s users. Wouldn’t it be a much better solution to make A “trust” B’s users without actually managing them?

WS-Trust is a specification that tackles the above two scenarios. WS-Trust introduces the concept of the Security Token Service (STS), a web service responsible for generating claims that are trusted by consumers. In the first scenario (authentication delegation), your application establishes a WS-Trust relationship with the STS of an IP. In the second scenario (companies A and B), A trusts the STS of B’s IP; this way B’s users can carry tokens issued by their IP-STS and present them to A, which trusts that STS and thus grants access.

WS-Trust defines a request message called RequestSecurityToken (RST), issued to the STS. The STS in turn replies with a response called RequestSecurityTokenResponse (RSTR) that holds the security token to be used to grant access. WS-Trust describes the protocol for requesting tokens via RST and issuing tokens via RSTR.
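As an illustration, a much-simplified RST asking the STS to issue a SAML 2.0 token could look like this; the real message carries more elements (entropy, key material, lifetime, and so on):

```xml
<t:RequestSecurityToken
    xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <t:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
  <t:TokenType>urn:oasis:names:tc:SAML:2.0:assertion</t:TokenType>
  <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
    <!-- endpoint reference of the RP the token is intended for -->
  </wsp:AppliesTo>
</t:RequestSecurityToken>
```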

WS-SecureConversation

The WS-SecureConversation specification provides a mechanism to improve a service’s response time when the client engages in lengthy communication with that service.

When a client sends a message to a secure service, part of the request is dedicated to credential negotiation and authentication. At the transport level, this is accomplished in the SSL handshake, so any subsequent requests use the secure session already established. WS-SecureConversation achieves the same at the message level.

The WS-SecureConversation specification states that the client first sends an RST (from the WS-Trust specification) to the service. The service validates the credentials within the RST and issues back a token called a Security Context Token (SCT), along with a symmetric key to perform the cryptographic operations for the remaining communication.

These tokens are used by WS-Security for authentication and integrity, and are described at both ends (IP and RP) using WS-SecurityPolicy assertions.

WS-Federation

So we’re finally at the WS-Federation section! All the previous specifications discussed lead to this place.

Let’s start by defining federation: federation refers to multiple security domains (also called realms) – typically multiple organizations in B2B scenarios – establishing trust for granting access to resources. Carrying on with the terminology we have been using, an RP in domain A can trust an STS/IP in domain B so that B’s users can access A’s resources.

WS-Federation builds on WS-Trust and simplifies the creation of such federated scenarios by defining a common infrastructure for achieving federated identity, both for web services (called active clients) and for web browsers (called passive clients).

WS-Federation says that organizations participating in a federation should publish their communication and security requirements in Federation Metadata. This metadata adds federation-specific communication requirements on top of the WS-Policy (and WS-SecurityPolicy) metadata described before; token types and single sign-out requirements are examples of what it defines.

WS-Federation does not mandate a specific token format, although as we will see later, SAML tokens are used heavily.

Identity and Access Control in .NET 4.5

Identity and Principal pre-.NET 4.5

If you have been creating applications on the .NET Framework since v1.0, chances are you have already come across the IIdentity and IPrincipal interfaces. IIdentity represents the identity of the authenticated user, while IPrincipal contains that identity in addition to a method that checks whether the user is a member of a certain role.

Different implementations existed for IIdentity and IPrincipal:

  • WindowsIdentity and WindowsPrincipal are used for Windows, Active Directory, or Kerberos authentication
  • GenericIdentity and GenericPrincipal are used for custom authentication such as forms authentication

As you can see, role-based access (or authorization) up until .NET 4.0 was really restricted to the IsInRole method of IPrincipal and its implementations. There are multiple ways to call this method: you can call it directly via the API, you can use attribute-based authorization (PrincipalPermission), or you can use the authorization element in web.config. Regardless of the method, your role-based access power was limited to checking whether the logged-in user belongs to a certain group or not.
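For reference, the three ways of performing this check can be sketched as below; the Sales role is of course hypothetical:

```csharp
using System.Security.Permissions;
using System.Threading;

public class OrderService
{
    // 1. Imperative check via the API:
    public bool CanApprove()
    {
        return Thread.CurrentPrincipal.IsInRole("Sales");
    }

    // 2. Declarative, attribute-based check; throws a SecurityException
    //    at call time if the caller lacks the role:
    [PrincipalPermission(SecurityAction.Demand, Role = "Sales")]
    public void ApproveOrder() { }

    // 3. Configuration-based check in web.config (ASP.NET):
    //    <authorization>
    //      <allow roles="Sales" />
    //      <deny users="*" />
    //    </authorization>
}
```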

Now, if you have read this far, you will surely have noticed that claims-based authorization gives you much greater power. Using claims, you can make access decisions based on things like a user’s birth date, national ID, and of course roles, among others. The idea is that the system that performed authentication (the IP in our terminology) attaches whatever attributes were agreed with the RP as a set of claims (assertions); the RP then uses these claims to make suitable authorization decisions.

So were claims not supported before .NET 4.5? They were, and here’s how:

WCF 3.0 Claim Transformation

Before Windows Identity Foundation (WIF) 1.0 shipped, Microsoft’s first attempt to incorporate claims into its security model came under the umbrella of WCF. In WCF 3.0, Microsoft included the System.IdentityModel assembly, which basically generated a set of claims for every security token authenticated by WCF. WCF 3.0 shipped with the following classes:

  • System.IdentityModel.Claims.DefaultClaimSet, which represents any additional generic claims sent to the service
  • System.IdentityModel.Claims.X509CertificateClaimSet for converting X509 tokens to claims
  • System.IdentityModel.Claims.WindowsClaimSet for converting Windows tokens to claims

WCF 3.0 included authorization policies to perform the actual claim transformation.

The problem with this approach was that .NET developers now had two completely different security infrastructures: one for web applications (IIdentity and IPrincipal) and one for WCF services (System.IdentityModel). So if you had an application consisting of both a web app and a web service, you had to write nearly the same code twice against two different libraries.

WIF 1.0

Microsoft enhanced its claims infrastructure in WIF 1.0. Instead of the System.IdentityModel assembly used in WCF 3.0, it shipped a new assembly named Microsoft.IdentityModel.

The need to support claims was met by adding another implementation of IPrincipal, called IClaimsPrincipal, which is populated once claims-based authentication is used. As you can see, this way WIF combined both worlds: that of the base IIdentity and IPrincipal interfaces and that of the WCF 3.0 claims infrastructure.

Here is how you would retrieve claims for a claims-authentication enabled application:

IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
string email = (from c in identity.Claims
                where c.ClaimType == System.IdentityModel.Claims.ClaimTypes.Email
                select c.Value).Single();

As you can see, we’re casting the current principal’s identity to get the IClaimsIdentity.

Now, even with this improved approach, can you already spot the drawback? The issue is that if you use WindowsPrincipal or GenericPrincipal, you won’t get claims, since IClaimsPrincipal is just another implementation of IPrincipal, much like the other two. It’s either this or that. This is what kept claims support in .NET 4.0 from being a first-class citizen.

Identity and Principal in .NET 4.5

In .NET 4.5, claims are made available regardless of the authentication type. What Microsoft did was create a new implementation of the base IPrincipal, called ClaimsPrincipal. It then made every other principal (WindowsPrincipal and GenericPrincipal) derive from ClaimsPrincipal and removed the IClaimsPrincipal of WIF 1.0. This way, you get claims all the way through; even if you use forms authentication, for example, all the attributes you pull from a membership provider will be carried in claims.

This way, role-based access is unified for all authentication types and, more importantly, it’s much richer than the simple IsInRole approach, because you can now depend on claims to make decisions.

Let’s see how you can retrieve claims for a claims-authentication enabled application and compare this to the same code you had to write pre-.NET 4.5:

string email = ClaimsPrincipal.Current.FindFirst(ClaimTypes.Email).Value;

ClaimsPrincipal.Current basically plays the same role that Thread.CurrentPrincipal used to play.
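One practical note on this API, sketched below: FindFirst returns null when the requested claim is absent, so real code should guard the lookup before reading Value:

```csharp
using System.Diagnostics;
using System.Security.Claims;

Claim emailClaim = ClaimsPrincipal.Current.FindFirst(ClaimTypes.Email);
string email = emailClaim != null ? emailClaim.Value : null;

// You can also enumerate everything the authentication layer asserted:
foreach (Claim c in ClaimsPrincipal.Current.Claims)
    Debug.WriteLine(c.Type + " = " + c.Value);
```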

Let’s see the new claims-based model in .NET 4.5. Below is the debugger view of a VS 2012 web application using WindowsIdentity:

You will notice that the base class is of type System.Security.Claims.ClaimsIdentity, which, as explained, is the new base class for the identity classes as of .NET 4.5. Also notice how, although we used a Windows identity, the claims are populated with Windows-login-specific information. In .NET 4.5, you get claims no matter what.

The figure below shows a similar result when we define a GenericIdentity (simulating forms authentication, for example). Again the base class is the same, and a claim of type name is generated by default:

When using forms authentication, you will get the Name claim by default. You can get additional claims by using the ASP.NET Role Manager.

Let’s briefly discuss some more concepts and capabilities of .NET 4.5 claims model.

Supported Credential Types

As discussed before, the biggest advantage of the claims model is that, regardless of the authentication mechanism used, your application always relies on claims for authorization. Your application logic does not really care about the actual type of security token used for authentication; authentication is a separate concern in its own module, your application gets claims, and you use those claims to make authorization decisions.

Now, this authentication layer I am talking about can be part of your application itself, living in a separate module, or – as discussed at the very beginning – it can be a separate STS that your application trusts in the IP/RP model. In this section our focus is on .NET 4.5, so authentication is part of the application; the next section dives into WIF 4.5, where you will see the STS in action.

So, your .NET 4.5 application can use any of the following security tokens:

  • Windows/AD/Kerberos
  • Forms authentication
  • Client certificates
  • SAML tokens
  • Other extended token types

Once the authentication layer is passed, your application will get claims in the ClaimsPrincipal shown before.

Claims Transformation

Before the claims actually hit your application, they can be subjected to a transformation/validation layer. One example of a transformation need: you know claims will arrive in a certain format, and you would like to change that format to suit your application’s code. An example of validation: say your application accepts claims from an STS, and you want to make sure, before handing the claims to your code, that the minimum set of required claims is actually present.

To implement this transformation/validation, you derive from a class called ClaimsAuthenticationManager.
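A minimal sketch of such a derived class is shown below; the required-claim check and the custom claim type are hypothetical examples:

```csharp
using System.Security;
using System.Security.Claims;

public class MyClaimsTransformer : ClaimsAuthenticationManager
{
    public override ClaimsPrincipal Authenticate(
        string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        if (incomingPrincipal.Identity != null &&
            incomingPrincipal.Identity.IsAuthenticated)
        {
            // Validation: refuse the request if a required claim is missing.
            if (!incomingPrincipal.HasClaim(c => c.Type == ClaimTypes.Email))
                throw new SecurityException("Email claim is required.");

            // Transformation: reshape or enrich the claims for the application.
            ((ClaimsIdentity)incomingPrincipal.Identity).AddClaim(
                new Claim("urn:myapp/department", "Sales"));
        }
        return incomingPrincipal;
    }
}
```

The class is then registered under the claimsAuthenticationManager element of the identityConfiguration section in web.config, so WIF invokes it on each request.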

Authentication Sessions

The logic you wrote in the ClaimsAuthenticationManager-derived class can run on each and every request, or you have the option to cache its outcome if you believe the logic is expensive. The cached value can be saved in cookies for ASP.NET applications and via WS-SecureConversation for WCF services.

Claims-based Authorization

Finally, we are at the stage of actually making authorization decisions based on the claims. ClaimsAuthorizationManager is the class you’d want to derive from to implement your authorization logic. This class contains the method CheckAccess, which can be called whenever you want to make an authorization decision.

This way you won’t mingle authorization logic directly with your business logic (which was typically the approach in the days of IsInRole checking).
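A minimal sketch of a derived authorization manager follows; the Order/Approve resource-action pair and the department claim type are hypothetical:

```csharp
using System.Linq;
using System.Security.Claims;

public class MyAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        // The resource and action being attempted arrive as claims too.
        string resource = context.Resource.First().Value;
        string action = context.Action.First().Value;

        if (resource == "Order" && action == "Approve")
        {
            // Richer than IsInRole: decide on any claim, e.g. a department.
            return context.Principal.HasClaim("urn:myapp/department", "Sales");
        }
        return false;
    }
}
```

CheckAccess can be invoked imperatively or declaratively (through the ClaimsPrincipalPermission attribute), keeping the decision out of the business code.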

Once again, I want to remind you that this approach of using claims inside your application is the same regardless of whether the claims are generated by authentication logic that is part of your application or handed to your application by an STS that it trusts. In both cases, you can use ClaimsAuthorizationManager to minimize the coupling between your business logic and authorization decisions.

One final note: the claims-based model is the same for web applications and WCF services. So the .NET 4.5 code to retrieve, transform, and authorize claims works the same regardless of whether the consumer is a .NET 4.5 web app or WCF service.

Windows Identity Foundation (WIF) in .NET 4.5

In the previous section I discussed the claims model in .NET 4.0 vs. .NET 4.5 and showed how claims are now first-class citizens in .NET 4.5. However, that section still used the “traditional” mechanism, where authentication is your application’s responsibility. Be careful, though: I do not want the previous section to divert your attention from the main focus of this article: delegating the authentication responsibility to an external entity.

WIF is Microsoft’s technology for encapsulating the inner workings of WS-Federation (and WS-Trust) behind a .NET library, which makes it easy for developers to create claims-based apps without needing to know the details of the specifications we discussed previously.

As we have seen in the previous section, identity and access support in .NET passed through multiple stages: from simple IsInRole checking, to claims support in WCF 3.0, to WIF 1.0, and then to .NET 4.5.

WIF 4.5 is where this journey ends (and the new work begins!). In WIF 4.5, Microsoft did yet another naming change to the assemblies: the Microsoft.IdentityModel types of WIF 1.0 moved into the .NET Framework itself, mainly under the System.Security.Claims, System.IdentityModel, and System.IdentityModel.Services namespaces.

WIF in Action

VS 2012 provides you with the tools to quickly set up your RP and use a local STS. You will only use this STS for development and testing purposes; building your own production STS should really be the last thing you try. Running an STS is not a simple task: it is business critical and relies on complicated protocols and cryptographic operations.

Commercial STS products are available, and you are likely to use one of them. Some of these products are:

  • Active Directory Federation Services (ADFS) v2
  • IBM Tivoli Federation Manager
  • Oracle Identity Manager

ADFS v2 is the topic of the next article.

So why does Visual Studio give you this local STS? Simply to aid you during the development (and possibly testing) phase, when the STS product might not be accessible to you. Since your RP does not hold the authentication logic anymore, all you have to do before going live is edit your RP’s policy to trust the production STS instead of the one created by Visual Studio.

Let’s roll.

Create a new VS 2012 ASP.NET 4.5 Web Forms application. Right-click the project and select “Identity and Access”. This displays a wizard giving you three options. ADFS2 and ACS will be discussed in future posts; for now I will just quickly note that ADFS2 is the STS for AD, and ACS is the STS on Azure.

Select the option to create a local STS. In the Local Development STS tab, you can select the SAML token version to use and the local port of the STS but, most importantly, you can select the claims that the STS will give your RP. In WIF 1.0, you had to manually edit the local STS code to change the set of claims; this way is much cleaner and true to the promise of isolating you (the RP owner) from the authentication logic. Notice that you can choose from the many pre-defined claim namespaces, and you can also define your own. Accept the defaults to close the wizard.

VS 2012 has now equipped your RP to trust a local STS and set up the policy between them. To see this, let’s examine the web.config of the RP:

  • Forms authentication removed: your RP is now neither Windows- nor forms-authentication-based. It is a claims-based application, so forms authentication (the default when creating a VS Web Forms app) is disabled.
  • Two HTTP modules that make the WIF magic possible:
    • WSFederationAuthenticationModule: this module intercepts incoming requests and redirects unauthenticated ones to the trusted IP(s). For authenticated requests, it processes the claims within the security token and presents them to your application in a consumable format.
    • SessionAuthenticationModule: this module takes care of session management after authentication is established. WSFederationAuthenticationModule is then bypassed, and SessionAuthenticationModule handles requests until the session expires or a sign-out flow is invoked.
  • audienceUris: lists the URIs that your RP will consider valid for the received tokens
  • trustedIssuers: the list of trusted IP certificates that the RP will accept signatures from to verify the token
  • wsFederation: configures the WS-Federation protocol, such as using the passive protocol flow, the IP that the protocol will talk to, and the realm, which is basically the ultimate return URL where the protocol flow ends up – the RP itself in this case.
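Put together, the relevant configuration looks roughly like the fragment below; the ports, URLs, and certificate thumbprint placeholder are hypothetical, as the wizard generates the real values:

```xml
<system.identityModel>
  <identityConfiguration>
    <audienceUris>
      <add value="http://localhost:12345/" />
    </audienceUris>
    <issuerNameRegistry
        type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry">
      <trustedIssuers>
        <add thumbprint="(signing certificate thumbprint)" name="LocalSTS" />
      </trustedIssuers>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>
<system.identityModel.services>
  <federationConfiguration>
    <wsFederation passiveRedirectEnabled="true"
                  issuer="http://localhost:14342/wsFederationSTS/Issue"
                  realm="http://localhost:12345/"
                  requireHttps="false" />
  </federationConfiguration>
</system.identityModel.services>
```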

In addition, VS 2012 has created a FederationMetadata.xml file in the RP solution. Recall from the discussion of WS-Federation that organizations participating in federation should publish communication and security requirements in Federation Metadata. This XML file holds these requirements for this RP.

Now run the application and quickly watch the browser navigation: you will see that your application gets redirected to the local STS, which performs the (hardcoded) authentication and returns you to your application as an authenticated user (assume you’re Terry for a moment!). You can now access the claims inside your application just like we did before.

Now let’s see what happened in the background to examine WS-Federation and the supporting protocols discussed before in action. To do so, I will use Fiddler:

Step 1: A user browses the RP

You asked for the application via the browser. The WIF HTTP module WSFederationAuthenticationModule detects that you are not authenticated, so it replies with a 302 response whose Location header contains the address of the IP-STS at which you must authenticate. In addition, a set of WS-Federation protocol query strings is supplied to govern how the flow behaves:

  • wa: with a value of wsignin1.0 which means that this is a sign in request (note that WS-Federation also supports sign out flow, so this parameter is mandatory)
  • wtrealm: this is the RP itself, and it represents the intended consumer of the token
  • wct: an optional parameter that specifies the time the RP issued the sign-in request. The IP might use it as an indication of possible attacks if it sees a time lag between this parameter value and the actual time it received the sign-in request
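Assembling the redirect is straightforward; the sketch below shows the shape of the resulting URL with hypothetical addresses:

```csharp
using System;

string stsUrl = "http://localhost:14342/wsFederationSTS/Issue";
string realm  = "http://localhost:12345/";

string signInUrl = stsUrl
    + "?wa=wsignin1.0"
    + "&wtrealm=" + Uri.EscapeDataString(realm)
    + "&wct=" + Uri.EscapeDataString(
        DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"));

// The module then answers the original request with:
//   HTTP/1.1 302 Found
//   Location: <signInUrl>
```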

Step 2: The browser sends a GET request to the STS

The browser then uses the STS address in the Location header and the query strings discussed before to assemble a sign in request to the STS. Here is the request made to the local STS:

Step 3: The IP-STS performs authentication

In a “real” scenario, the STS will normally present you with an authentication form to which you supply your credentials. This procedure of the STS authenticating users is – as mentioned before – outside the scope of WS-Federation and thus of WIF. The STS can authenticate you against AD, against a custom user store, or even using an X.509 certificate; this depends on the type of RP application and where the users come from. Regardless of the authentication mechanism, the STS will – assuming successful authentication – generate a security token containing the claims agreed between the IP and RP via policy.

Step 4: The STS sends the response back to the browser

The STS sends back to the browser a hidden form that will POST back to the RP. The form contains the following information:

  • wa: same as before indicating a sign-in flow
  • wresult: contains the SAML security token issued by the STS.

If you carefully examine the response, you will see many of the protocols discussed before coming into play:

  • The RequestSecurityTokenResponse (RSTR) of WS-Trust carries the token collection
  • XML Signature of WS-Security is used to provide message integrity and trust between the IP and RP
  • WS-Policy – driven by the Federation Metadata – indicates rules such as token lifetime
  • WS-Addressing is used to identify the endpoint reference for the passive request (i.e. the RP)

I have reformatted the content for ease of display; here you can see these protocols in action:

Note: there is a difference between SAML-P (the protocol) and the SAML token. SAML-P is a full-blown protocol, much like WS-Federation. The SAML token is a token type that can be used independently of SAML-P, and it’s one of the token types frequently used in WS-Federation.

I will briefly touch on SAML-P 2.0 at the end of this article. 

Step 5: The browser posts back to the RP

The browser uses the hidden form from the previous step to POST the result shown above back to the RP.

Step 6: The RP verifies the token

Verifying a token involves multiple checks, which vary by situation and policy. Some of these checks are:

  • Integrity: if a digital signature is used, the RP uses the IP’s public certificate included in the request to verify that the signature is valid
  • Expiration: if the token carries an expiration, the RP verifies that the token has not expired
  • Decryption: if encryption is used, the RP uses its private key to decrypt the contents. In the above example encryption was not used, as we were in the passive (web browser) case; later I will discuss the active case and illustrate the difference.
  • Source: using the policy, the RP makes sure the token was issued by a trusted IP
  • Claims: also using the policy, the RP checks that the set of claims issued is the one agreed on
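In WIF 4.5, several of these checks are driven by configuration rather than code. A hedged sketch of the relevant web.config section (the thumbprint and addresses are placeholders, not values from this article):

```xml
<system.identityModel>
  <identityConfiguration>
    <!-- Audience check: the token must be scoped to this RP -->
    <audienceUris>
      <add value="https://rp.example.com/" />
    </audienceUris>
    <!-- Source check: only tokens signed by a trusted IP are accepted -->
    <issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, System.IdentityModel">
      <trustedIssuers>
        <!-- Thumbprint of the IP's token-signing certificate (placeholder) -->
        <add thumbprint="0000000000000000000000000000000000000000"
             name="https://sts.example.com/" />
      </trustedIssuers>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>
```

The “Identity and Access” wizard generates a section like this for you, which is a large part of what makes WIF’s programming model so light.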

Step 7: A session cookie is issued

Once the token is verified, the RP issues a session cookie back to the browser so that subsequent requests do not pass through the same WS-Federation process.
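In WIF 4.5 this session cookie is handled by the SessionAuthenticationModule. A hedged sketch of the related configuration (element names from the system.identityModel.services schema; the issuer and realm values are illustrative):

```xml
<system.identityModel.services>
  <federationConfiguration>
    <!-- The session cookie issued in step 7; FedAuth is the default name -->
    <cookieHandler requireSsl="true" name="FedAuth" />
    <!-- The passive redirect behavior used in steps 2 and 3 -->
    <wsFederation passiveRedirectEnabled="true"
                  issuer="https://sts.example.com/"
                  realm="https://rp.example.com/" />
  </federationConfiguration>
</system.identityModel.services>
```

On each subsequent request, the module rehydrates the claims principal from this cookie instead of repeating the full redirect dance.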

WIF for Active Clients

The browser-based scenario we have just seen is called a passive scenario. Passive clients are those that do not possess WS-* capabilities. Browsers are passive because they simply perform the redirects they are told to; by themselves, browsers have no notion of WS-Federation.

Another type of client is the active client. Active clients are those that can perform WS-* operations; an obvious example is a WCF service. A WCF service can also act as a claims-enabled RP. In this scenario, the client (the Subject, as per the defined terminology) is an application calling the WCF service, which requires claims-based authentication from a trusted STS.

The full cycle goes as follows:

  • An application reaches a line where it invokes a WCF service. The WCF service itself is an RP configured for claims-based authentication.
  • The WCF client library at the application finds out that the WCF service requires a security token to grant access. The library then issues an RST to the STS.
  • The STS replies with a security token via an RSTR.
  • The client application then uses this token to access the WCF service. Here you can spot a major difference between the passive and active scenarios. Recall that in the passive case, the client (browser) had only HTTPS as an option for encrypting its request to the RP – transport-level security. In the active case, the client (application) can actually use the token to perform message-level encryption. Message-level security performs better because we can selectively encrypt only the parts of the message deemed sensitive, and it also supports end-to-end messaging.

The good news is that when it comes to WIF, you do not have to learn anything new beyond what you saw in the passive case; the same programming model applies. Again, you can right-click a WCF project in VS 2012 and configure it as an RP using the “Identity and Access” wizard. The defining difference between active and passive clients becomes clear when you inspect the web.config of the WCF service: you will see the ws2007FederationHttpBinding in use, which again shows that active clients are WS-* aware.
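A hedged sketch of what that binding section in the service’s web.config might look like (binding name and metadata address are illustrative):

```xml
<system.serviceModel>
  <bindings>
    <ws2007FederationHttpBinding>
      <binding name="ClaimsBinding">
        <!-- Message-level security: the issued token protects the message itself -->
        <security mode="Message">
          <message>
            <!-- The STS this RP trusts; clients fetch token requirements here -->
            <issuerMetadata address="https://sts.example.com/mex" />
          </message>
        </security>
      </binding>
    </ws2007FederationHttpBinding>
  </bindings>
</system.serviceModel>
```

The WCF client library reads this policy from the service metadata, which is how it knows (without any custom code) that it must first obtain a token from the STS.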

Federation Providers

So far the discussion has centered on one role an STS can play: issuing claims on behalf of an IP, in which case it is called an IP-STS. However, an STS can play another role: that of a Federation Provider.

Let’s recall the scenario I discussed before of the two companies A and B wishing to enter a federated business agreement. In that scenario I gave an example of B users needing to access a (single) application in A and how that application is configured as an RP for an STS in B.

Now let’s extend the scenario a bit: instead of B users wanting to access a single application in A, both companies want to extend their partnership and now B users must be able to access multiple A applications. Under the role of IP-STS we have discussed so far, every A application must establish trust with B IP which in turn must provision every A application.

A far better solution is for A to expose a Federation Provider. A Federation Provider is a Resource STS (R-STS) that sits on the resource side (on the relying party side – A domain in this case). Company A applications then establish trust with the R-STS and B then provisions this STS only.

A typical flow then goes as follows:

  • An employee on B’s domain tries to access an A application
  • A detects (via WIF, for example) that the user is not authenticated and redirects the request to the STS it trusts; in this case, the R-STS on A’s domain
  • In its turn, the R-STS is configured to accept tokens from B’s IP, so the request is redirected to B’s IP
  • The user authenticates herself against B’s IP using any mechanism defined by B
  • A request is submitted to the R-STS on A containing the token
  • The R-STS processes the request and typically does one of three things:
    • Issue the claims exactly as they were sent from B’s IP. Here the R-STS decides that the included claims satisfy the A application’s needs
    • Modify claims based on some rules, for example to satisfy special formatting needs of A applications
    • Add new claims that the R-STS knows are important for A applications but that B’s IP had no information about. For example, the R-STS might maintain information from previous transactions for a specific user.
  • The resulting token is then sent to the originally requested A application, which grants access

Once again, using WIF you can create your own local R-STS, although as explained before, in real scenarios you would go with commercial products. ADFS v2 is also suited to playing the role of R-STS; for example, its claims-transformation language is ideal for claims modification. Another great example is the Azure Access Control Service (ACS), which plays the role of a Federation Provider in the cloud.

Both ADFS v2 and ACS will be discussed in future articles.

SAML 2.0

I will close this article by briefly addressing SAML 2.0. Please note that I have never implemented this protocol myself; what I present here is a summary answer to the common question of WS-Federation vs. SAML, just to get you going. You’ll have a lot of reading to do if you want a definitive answer.

SAML 2.0 consists of two parts: a protocol (much like WS-Federation) that defines interactions and supported communication protocols, and a token format, conveniently called the SAML token. The token specification is separate from that of the protocol, so you can use the token in other protocols. In fact, that is what we have been doing all along in this article: using SAML tokens to carry claims over the WS-Federation protocol.

From a very high-level point of view, the SAML 2.0 protocol (SAML-P) achieves the same objectives as WS-Federation: it allows business partners to enter a federation agreement and allows delegation of authentication logic from an application (Service Provider) to an external entity (Identity Provider). The Identity Provider creates a SAML assertion that the Service Provider then validates to grant authentication. The SAML assertion is a SAML token containing claims.

The SAML specification is actually divided into multiple parts:

  • SAML core: this specification defines the structure of SAML assertions and the supported protocols. You can think of it as the base for every other specification
  • SAML bindings: each of the protocols described in the core specification is detailed in the bindings specification, where each binding has certain communication and formatting requirements
  • SAML profiles: the profiles specification puts the different specifications together into a usage profile, such as the login and logout profile.
  • SAML metadata: this specification is basically what makes SAML-P tick. It defines the agreement requirements between the Service Provider and the Identity Provider for establishing SAML-P (supported bindings, certificates, cryptography, etc.). Basically, this is to SAML-P what WS-Trust is to WS-Federation.

WIF does not support SAML-P, although some time ago an extension to WIF that adds SAML 2.0 support was released as a CTP; it has not taken off since then. Here you can see the announcement:

http://blogs.msdn.com/b/card/archive/2011/05/16/announcing-the-wif-extension-for-saml-2-0-protocol-community-technology-preview.aspx

It’s not all bad, however: ADFS 2.0 fully supports the SAML 2.0 protocol. Why is this great news? Simply because of the range of interoperability scenarios now possible between two environments, one adopting WS-Federation and the other SAML-P. Organizations that already have a SAML-P-based infrastructure in place do not need to change it in order to interoperate with an ADFS-based one.

[Discovery] How much cold can the human body endure?


Cold air has spread everywhere, not just here but all over the world. Right now Hanoi is at 16°C, California (USA) is at 10°C, and in Anadyr (Russia) the temperature has dropped to minus 24°C. A friend of mine in France reported that it was only 5°C there and said she was “freezing to death!” Her remark made me wonder: how much cold can a human actually endure? At what temperature does the body simply give out? And what kind of damage does the body suffer as temperatures fall? Let’s find out together.

The body’s “built-in” mechanisms for fighting the cold

In fact, our bodies are truly remarkable; it is as if nature prepared us in advance to cope with a harsh environment. Against the cold, the body comes “built in” with mechanisms to protect our lives.


As soon as a cold wind hits your face, your body reacts by shunting blood away from the skin and from protruding extremities such as the fingers and toes, redirecting it toward the core. This process is called vasoconstriction, and it limits the amount of heat lost to the environment.

The body’s second reaction is shivering. As the temperature drops, some people begin to shudder, get goosebumps, and feel their teeth chatter, followed by full-body shaking. When cold is detected, sensory receptors send signals to the brain, which responds with a series of alarms; shivering is one of them. The muscles contract and relax continuously. Put simply, this reaction generates extra heat, raising body temperature, while also warning you that “it’s time to find somewhere warmer.”

These two mechanisms manifest through many other responses; you can read more in the article “how the human body reacts to cold weather.” But what happens if the brain has issued its warnings, yet we cannot find warm shelter and the body must remain in the low-temperature environment?

Damage before “freezing”


From a medical standpoint, when the body’s core temperature drops low enough, hypothermia sets in. Moderate to severe hypothermia occurs when body temperature falls below 32.2°C. This is a clinical state of below-normal temperature in which the body can no longer generate enough heat to sustain normal activity. And if the body is both cold and wet, the story is entirely different: a wet body loses heat 25 times faster than it does in air.

According to Professor John Castellani, head of the thermal and mountain medicine division at the US Army Research Institute of Environmental Medicine, normal human core temperature is 37°C, and mild hypothermia appears when it drops to 35°C. If it keeps falling, things take a turn for the worse.

  • At a core temperature of 32.2°C, compensatory mechanisms begin to fail, mental state may change, and you can even suffer memory loss.
  • At a core temperature of 27.7°C, you begin to lose consciousness.
  • Below 21°C, severe hypothermia sets in and death follows.

The lowest core temperature ever recorded in an adult is 13.7°C. In that case, the person had been submerged in freezing water for quite a long time.

Dangerous injuries caused by low temperatures


Even short of death, cold-induced injuries are no less dangerous. Professor John Castellani notes: “While it takes quite a while for the body’s core temperature to drop, the temperature of the extremities falls quite quickly.”

The fingers and toes are the most vulnerable to cold injury, since blood flow to these areas is the first to be reduced as temperatures fall. Even with gloves or socks on, finger and toe temperatures can still be very low, and if you sweat, the moisture makes these areas lose heat even faster.

That said, as long as the air temperature stays above 0°C, freezing injuries do not yet occur. According to researchers, such injuries usually appear when the ambient temperature drops below the freezing point of 0°C. Castellani says: “If you are exposed to minus 9.4°C wind chill for a long period, severe frostbite sets in and cold injuries mount.”

How long it takes for injuries to appear also depends on environmental conditions. For example, at an ambient temperature of minus 17.8°C with a wind chill of minus 28.3°C, you would suffer cold injuries after 30 minutes of standing outside. That window shrinks to just 5 minutes at minus 26°C with a continuous wind chill of minus 48.3°C.

Still, according to Castellani, despite the considerable risks, people can go out in extremely cold conditions and survive; mountain climbers and Arctic explorers are proof, and swimmers have crossed the English Channel in very cold water. However, not everyone tolerates cold equally. These people may have trained over long periods so their bodies gradually adapted to the cold, becoming better at producing and retaining body heat than an average person.

So don’t put your own body to the test against low temperatures. Without prior conditioning, you can still get sick and suffer serious cold injuries. Instead, keep yourself warm and remind those around you to do the same so everyone has a great, healthy winter. Thanks for reading; I hope this article gave you some interesting facts about winter. By the time you’ve read this far, your phone or tablet has probably warmed up enough to heat your hands anyway. Have a warm winter with a good coat, or with your loved ones. Cheers.

References: BBC, LS

[Dev Tip] 5 Internal things that you should know about IIS Express

#1 : Where is the basic configuration information stored?

The basic information related to IIS Express is stored inside the project file (the proj file) within the property group section. To view it, open the project file in edit mode (I strongly recommend using the Visual Studio Productivity Power Tools here), which lets you edit the project file via the “Power Commands”.


Once you have the project file open for editing, search for the property group where the application information related to IIS Express is set.


You can change the editable configuration values and save the project file for the changes to take effect.

If your application has SSL enabled, IISExpressSSLPort will contain the SSL port number.
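For reference, the relevant property group looks roughly like this (a sketch from a default VS 2012 web project; the exact set of elements varies by project type and Visual Studio version):

```xml
<PropertyGroup>
  <!-- Tells Visual Studio to host the project under IIS Express -->
  <UseIISExpress>true</UseIISExpress>
  <!-- Present only when SSL is enabled for the project -->
  <IISExpressSSLPort>44302</IISExpressSSLPort>
  <IISExpressAnonymousAuthentication />
  <IISExpressWindowsAuthentication />
  <IISExpressUseClassicPipelineMode />
</PropertyGroup>
```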


#2 : Where is the configuration settings related to bindings and virtual directory?

The project file holds the basic IIS Express settings, whereas several further configuration files are required to host and run a web application. You can find all the IIS Express related files under \Users\<username>\Documents\IISExpress\Config.


Open the “applicationhost.config” file in any text editor and search for your web application name; you will find the configuration section for your site.


As you can see, this section contains the information related to the physical path of the IIS Express virtual directory, the application pool, and the various bindings.

The applicationhost.config files are user specific.

#3 :  Applying Multiple Bindings With IIS Express

You can add additional bindings within the “bindings” element to access your sites using different URLs.

<bindings>
  <binding protocol="http" bindingInformation="*:53294:localhost" />
  <binding protocol="http" bindingInformation="*:53295:my-pc" />
  <binding protocol="https" bindingInformation="*:44302:localhost" />
</bindings>

For example, if you have the above bindings in the bindings configuration section, you will be able to access the site using http://my-pc:53295 (where my-pc has to be configured in your hosts file).
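A minimal sketch of the corresponding hosts-file entry (assuming my-pc should simply resolve to the local machine):

```
# %SystemRoot%\System32\drivers\etc\hosts  (edit from an elevated editor)
127.0.0.1    my-pc
```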

#4 : There is an additional application – IISExpressTray

When you press F5 to run the project, Visual Studio automatically launches IIS Express, and it shows up in your task-bar tray while it’s running.


You can right-click the tray icon and select the application to get the list of currently active URLs for that application. To navigate, click on a site URL.


Along with hosting the sites, IIS Express (IISExpress.exe) is the parent process of another application, “IISExpressTray.exe”. You can launch it by right-clicking the IIS Express icon in the system tray and selecting “Show All Applications”.

The following snapshot shows the overall process hierarchy of IIS Express within Visual Studio.


#5 : Quick way to get the details of the site configurations

From the application URL list, you can select any of the URLs/sites; the IISExpressTray application will show you additional details such as the runtime, the application path, and the configuration file.


This is the easiest way to open the application configuration file for IIS Express.

That’s all! I hope this information helps you work with IIS Express going forward.

Thanks

[Dev Tip] ASP.NET Web Api: Unwrapping HTTP Error Results and Model State Dictionaries Client-Side

When working with ASP.NET Web Api from a .NET client, one of the more confounding things can be handling errors returned from the Api: specifically, unwrapping the various types of errors that may be returned from a specific API action method and translating the error content into meaningful information for use by the client.

How we handle the various types of errors that may be returned to our Api client applications can be very dependent upon specific application needs, and indeed, the type of client we are building.

In this post we’ll look at some general types of issues we might run into when handing error results client-side, and hopefully find some insight we can apply to specific cases as they arise.

Understanding HTTP Response Creation in the ApiController

Most Web Api Action methods will return one of the following:

  • Void: If the action method returns void, the HTTP response created by ASP.NET Web Api will have a 204 status code, meaning “no content.”
  • HttpResponseMessage: If the action method returns an HttpResponseMessage, then the value will be converted directly into an HTTP response message. We can use the Request.CreateResponse() method to create instances of HttpResponseMessage, and we can optionally pass domain models as a method argument, which will then be serialized as part of the resulting HTTP response message.
  • IHttpActionResult: Introduced with ASP.NET Web API 2.0, the IHttpActionResult interface provides a handy abstraction over the mechanics of creating an HttpResponseMessage. Also, there are a host of pre-defined implementations for IHttpActionResult defined in System.Web.Http.Results, and the ApiController class provides helper methods which return various forms of IHttpActionResult, usable directly within the controller.
  • Other Type: Any other return type will need to be serialized using an appropriate media formatter.

For more details on the above, see Action Results in Web API 2 by Mike Wasson.

From Web Api 2.0 onward, the recommended return type for most Web Api Action methods is IHttpActionResult unless this type simply doesn’t make sense.

Create a New ASP.NET Web Api Project in Visual Studio

To keep things general and basic, let’s start by spinning up a standard ASP.NET Web Api project using the default Visual Studio Template. If you are new to Web Api, take a moment to review the basics, and get familiar with the project structure and where things live.

Make sure to update the Nuget packages after you create the project.

Create a Basic Console Client Application

Next, let’s put together a very rudimentary client application. Open another instance of Visual Studio, and create a new Console application. Then, use the Nuget package manager to install the ASP.NET Web Api Client Libraries into the solution.

We’re going to use the simple Register() method as our starting point to see how we might need to unwrap some errors in order to create a more useful error handling model on the client side.

The Register Method from the Account Controller

If we return to our Web Api project and examine the Register() method, we see the following:

The Register() method from AccountController:
[AllowAnonymous]
[Route("Register")]
public async Task<IHttpActionResult> Register(RegisterBindingModel model)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    var user = new ApplicationUser() 
    { 
        UserName = model.Email, 
        Email = model.Email 
    };

    IdentityResult result = await UserManager.CreateAsync(user, model.Password);

    if (!result.Succeeded)
    {
        return GetErrorResult(result);
    }
    return Ok();
}

In the above, we can see that there are a number of options for what might be returned as our IHttpActionResult.

First, if the model state is invalid, the BadRequest() helper method defined as part of the ApiController class will be called and passed the current ModelStateDictionary. This represents simple validation; no additional processing or database requests have taken place.

If the Model State is valid, the CreateAsync() method of the UserManager is called, returning an IdentityResult. If the Succeeded property is not true, then GetErrorResult() is called and passed the result of the call to CreateAsync().

GetErrorResult() is a handy helper method which returns the appropriate IHttpActionResult for a given error condition.

The GetErrorResult Method from AccountController
private IHttpActionResult GetErrorResult(IdentityResult result)
{
    if (result == null)
    {
        return InternalServerError();
    }
    if (!result.Succeeded)
    {
        if (result.Errors != null)
        {
            foreach (string error in result.Errors)
            {
                ModelState.AddModelError("", error);
            }
        }
        if (ModelState.IsValid)
        {
            // No ModelState errors are available to send, 
            // so just return an empty BadRequest.
            return BadRequest();
        }
        return BadRequest(ModelState);
    }
    return null;
}

From the above, we can see we might get back a number of different responses, each with a slightly different content, which should assist the client in determining what went wrong.

Making a Flawed Request – Validation Errors

So, let’s see some of the ways things can go wrong when making a simple POST request to the Register() method from our Console client application.

Add the following code to the console application. Note that we are intentionally making a flawed request. We will pass a valid password and a matching confirmation password, but we will pass an invalid email address. We know that Web Api will not like this, and should kick back a Model State Error as a result.

Flawed Request Code for the Console Client Application:
static void Main(string[] args)
{
    // This is not a valid email address, so the POST should fail:
    string email = "john";
    string password = "Password@123";
    string confirmPassword = "Password@123";

    HttpResponseMessage result = 
        Register(email, password, confirmPassword);

    if(result.IsSuccessStatusCode)
    {
        Console.WriteLine(
            "The new user {0} has been successfully added.", email);
    }
    else
    {
        Console.WriteLine(result.ReasonPhrase);
    }
    Console.Read();
}


public static HttpResponseMessage Register(
    string email, string password, string confirmPassword)
{
    //Attempt to register:
    using (var client = new HttpClient())
    {
        var response =
            client.PostAsJsonAsync("http://localhost:51137/api/Account/Register",

            // Pass in an anonymous object that maps to the expected 
            // RegisterUserBindingModel defined as the method parameter 
            // for the Register method on the API:
            new
            {
                Email = email,
                Password = password,
                ConfirmPassword = confirmPassword
            }).Result;
        return response;
    }
}

If we run our Web Api application, wait for it to spin up, and then run our console app, we see the following output:

Console output from the flawed request:
Bad Request

Well, that’s not very helpful.

If we de-serialize the response content to a string, we see there is more information to be had. Update the Main() method as follows:

De-serialize the Response Content:
static void Main(string[] args)
{
    // This is not a valid email address, so the POST should fail:
    string email = "john";
    string password = "Password@123";
    string confirmPassword = "Password@123";

    HttpResponseMessage result = 
        Register(email, password, confirmPassword);

    if(result.IsSuccessStatusCode)
    {
        Console.WriteLine(
            "The new user {0} has been successfully added.", email);
    }
    else
    {
        string content = result.Content.ReadAsStringAsync().Result;
        Console.WriteLine(content);
    }
    Console.Read();
}

Now, if we run the Console application again, we see the following output:

Output from the Console Application with De-Serialized Response Content:
{"Message":"The request is invalid.","ModelState":{"":["Email 'john' is invalid."]}}

Now, what we see above is JSON. Clearly the JSON object contains a Message property and a ModelState property. But the ModelState property, itself another JSON object, contains an unnamed property: an array containing the error which occurred when validating the model.

Since a JSON object is essentially nothing but a set of key/value pairs, we would normally expect to be able to unroll a JSON object into a Dictionary<string, object>. However, the nameless properties enumerated in the ModelState dictionary on the server side make this challenging.

Unwrapping such an object using the Newtonsoft.Json library is doable, but slightly painful. Equally important, an error returned from our API may or may not have a ModelState dictionary associated with it.
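Since an error payload may or may not include a ModelState dictionary, one way to flatten whatever comes back is a small helper like the following (a sketch, assuming the Newtonsoft.Json package is referenced, as it is in the default Web Api templates; ErrorUnwrapper is a name I made up for illustration):

```csharp
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

public static class ErrorUnwrapper
{
    // Flattens an error response body into a simple list of error strings,
    // tolerating both the named and the nameless ModelState key shapes.
    public static IEnumerable<string> GetErrors(string jsonContent)
    {
        var errors = new List<string>();
        var parsed = JObject.Parse(jsonContent);

        // Top-level "Message" property, if present:
        var message = (string)parsed["Message"];
        if (message != null) errors.Add(message);

        // Each ModelState entry is a (possibly nameless) key whose value
        // is an array of error strings:
        var modelState = parsed["ModelState"] as JObject;
        if (modelState != null)
        {
            foreach (var entry in modelState)
            {
                foreach (var error in entry.Value)
                {
                    errors.Add((string)error);
                }
            }
        }
        return errors;
    }
}
```

For the first flawed request above, this yields two strings: the top-level message and the nameless ModelState error about the invalid email.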

Another Flawed Request – More Validation Errors

Say we have figured out that we need to provide a valid email address when we submit our request to the Register() method. Suppose instead that we are not paying attention and enter two slightly different passwords, and also forget that passwords have a minimum length.

Modify the values passed to Register() in the Main() method accordingly. This time, the de-serialized response content comes back as follows:

Response Content with Password Mismatch:
{
    "Message":"The request is invalid.",
    "ModelState": {
        "model.Password": [
            "The Password must be at least 6 characters long."],
        "model.ConfirmPassword": [
            "The password and confirmation password do not match."]
    }
}

In this case, it appears the items in the ModelState Dictionary are represented by valid key/value pairs, and the value for each key is an array.

Server Errors and Exceptions

We’ve seen a few examples of what can happen when the model we are passing with our POST request is invalid. But what happens if our Api is unavailable?

Let’s pretend we finally managed to get our email and our passwords correct, but now the server is off-line.

Stop the Web Api application, and then re-run the Console application. Of course, after a reasonable server time-out, our client application throws an AggregateException.

What’s an AggregateException? Well, it is what we get when an exception occurs during execution of an async method. If we pretend we don’t know WHY our request failed, we would need to dig down into the InnerExceptions property of the AggregateException to find some useful information.

In the context of our rudimentary Console application, we will implement some top-level exception handling so that our Console can report the results of any exceptions like this to us.

Update the Main() method once again, as follows:

Add Exception Handling to the Main() Method of the Console Application:
static void Main(string[] args)
{
    // The email address is valid this time, but the server is offline:
    string email = "john@example.com";
    string password = "Password@123";
    string confirmPassword = "Password@123";

    // Add a Try/Catch in case something goes wrong and the server throws:
    try
    {
        HttpResponseMessage result =
            Register(email, password, confirmPassword);

        if (result.IsSuccessStatusCode)
        {
            Console.WriteLine(
                "The new user {0} has been successfully added.", email);
        }
        else
        {
            string content = result.Content.ReadAsStringAsync().Result;
            Console.WriteLine(content);
        }
    }
    catch (AggregateException ex)
    {
        Console.WriteLine("One or more exceptions has occurred:");
        foreach (var exception in ex.InnerExceptions)
        {
            Console.WriteLine("  " + exception.Message);
        }
    }
    Console.Read();
}

If we run our console app now, while our Web Api application is offline, we get the following result:

Console Output with Exception Handling and Server Time-Out:
One or more exceptions has occurred:
  An error occurred while sending the request.

Here, we are informed that “An error occurred while sending the request” which at least tells us something, and averts the application crashing due to an unhandled AggregateException.

Unwrapping and Handling Errors and Exceptions in Web Api

We’ve seen a few different varieties of errors and exceptions which may arise when registering a user from our client application.

While outputting JSON from the response content is somewhat helpful, I doubt it’s what we are looking for as Console output. What we need is a way to unwrap the various types of response content, and display useful console messages in a clean, concise format that is useful to the user.

While I was putting together a more in-depth, interactive console project for a future article, I implemented a custom exception, and a special method to handle these cases.

ApiException – a Custom Exception for Api Errors

Yeah, yeah, I know. Some of the cases above don’t technically represent “Exceptions” by the hallowed definition of the term. In the case of a simple console application, however, a simple, exception-based system makes sense. Further, wrapping all of our Api errors up behind a single abstraction makes it easy to demonstrate how to unwrap them.

Your mileage may vary according to the specific needs of YOUR application. Obviously, GUI-based applications may extend this approach, relying less on Try/Catch and thrown exceptions, and more upon the specifics of the GUI elements available.

Add a class named ApiException to the Console project, and add the following code:

ApiException – a Custom Exception
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;

namespace ApiWithErrorsTest
{
    public class ApiException : Exception
    {
        public HttpResponseMessage Response { get; set; }
        public ApiException(HttpResponseMessage response)
        {
            this.Response = response;
        }


        public HttpStatusCode StatusCode
        {
            get
            {
                return this.Response.StatusCode;
            }
        }


        public IEnumerable<string> Errors
        {
            get
            {
                return this.Data.Values.Cast<string>().ToList();
            }
        }
    }
}

Unwrapping Error Responses and Model State Dictionaries

Next, let’s add a method to our Program which accepts an HttpResponseMessage as a method argument and returns an instance of ApiException. Add the following code to the Program class of the Console application:

Add the CreateApiException Method to the Program Class:
public static ApiException CreateApiException(HttpResponseMessage response)
{
    var httpErrorObject = response.Content.ReadAsStringAsync().Result;

    // Create an anonymous object to use as the template for deserialization:
    var anonymousErrorObject = 
        new { message = "", ModelState = new Dictionary<string, string[]>() };

    // Deserialize:
    var deserializedErrorObject = 
        JsonConvert.DeserializeAnonymousType(httpErrorObject, anonymousErrorObject);

    // Now wrap into an exception which best fulfills the needs of your application:
    var ex = new ApiException(response);

    // Sometimes, there may be Model Errors:
    if (deserializedErrorObject.ModelState != null)
    {
        var errors = 
            deserializedErrorObject.ModelState
                                    .Select(kvp => string.Join(". ", kvp.Value));
        for (int i = 0; i < errors.Count(); i++)
        {
            // Wrap the errors up into the base Exception.Data Dictionary:
            ex.Data.Add(i, errors.ElementAt(i));
        }
    }
    // Other times, there may not be Model Errors:
    else
    {
        var error = 
            JsonConvert.DeserializeObject<Dictionary<string, string>>(httpErrorObject);
        foreach (var kvp in error)
        {
            // Wrap the errors up into the base Exception.Data Dictionary:
            ex.Data.Add(kvp.Key, kvp.Value);
        }
    }
    return ex;
}

In the above, we get a sense for what goes into unwrapping an HttpResponseMessage which contains a model state dictionary.

When the response content includes a property named ModelState, we unwind the ModelState dictionary using the magic of LINQ. We knit the string key together with the contents of the value array for each item present, and then add each item to the exception Data dictionary using an integer index for the key.

If no ModelState property is present in the response content, we simply unwrap the other errors present, and add them to the Data dictionary of the exception.
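For reference, a failed registration with model errors arrives with a body shaped roughly like the following (the exact messages here are illustrative). Web Api’s default HttpError serialization uses the Message and ModelState property names, which is why the anonymous template in CreateApiException() declares them; the else branch covers flat payloads that carry no ModelState property at all:

```json
{
  "Message": "The request is invalid.",
  "ModelState": {
    "model.Email": [ "Email 'john' is invalid." ]
  }
}
```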

Error and Exception Handling in the Example Application

We’ve already added some minimal exception handling at the top level of our application. Namely, we have caught and handled AggregateExceptions which may be thrown by async calls to our api, which are not handled deeper in the call stack.

Now that we have added a custom exception, and a method for unwinding certain types of error responses, let’s add some additional exception handling and see if we can do a little better farther down the call stack.

Update the Register() method as follows:

Handle Errors in the Register() Method:
public static HttpResponseMessage Register(
    string email, string password, string confirmPassword)
{
    //Attempt to register:
    using (var client = new HttpClient())
    {
        var response =
            client.PostAsJsonAsync("http://localhost:51137/api/Account/Register",

            // Pass in an anonymous object that maps to the expected 
            // RegisterUserBindingModel defined as the method parameter 
            // for the Register method on the API:
            new
            {
                Email = email,
                Password = password,
                ConfirmPassword = confirmPassword
            }).Result;

        if(!response.IsSuccessStatusCode)
        {
            // Unwrap the response and throw as an Api Exception:
            var ex = CreateApiException(response);
            throw ex;
        }
        return response;
    }
}

You can see here that we are examining the status code associated with the response, and if it is anything other than successful, we call our CreateApiException() method, grab the new ApiException, and then throw.

In reality, for this simple console example we likely could have gotten by with creating a plain old System.Exception instead of a custom Exception implementation. However, for anything other than the simplest of cases, the ApiException will contain useful additional information.

Also, the fact that it is a custom exception allows us to catch ApiException and handle it specifically, as we will probably want our application to behave differently in response to an error condition in an Api response than we would other exceptions.

Now, all we need to do (for our super-simple example client, anyway) is handle ApiException specifically in our Main() method.

Catch ApiException in Main() Method

Now we want to be able to catch any flying ApiExceptions in Main(). Our Console application, shining example of architecture and complex design requirements that it is, pretty much only needs a single point of error handling to properly unwrap exceptions and write them out as console text!

Add the following code to Main() :

Handle ApiException in the Main() Method:
static void Main(string[] args)
{
    // These registration values are valid; we will try invalid ones shortly:
    string email = "john@example.com";
    string password = "Password@123";
    string confirmPassword = "Password@123";

    // Add a Try/Catch in case something goes wrong and the server throws:
    try
    {
        HttpResponseMessage result =
            Register(email, password, confirmPassword);

        if (result.IsSuccessStatusCode)
        {
            Console.WriteLine(
                "The new user {0} has been successfully added.", email);
        }
        else
        {
            string content = result.Content.ReadAsStringAsync().Result;
            Console.WriteLine(content);
        }
    }
    catch (AggregateException ex)
    {
        Console.WriteLine("One or more exceptions have occurred:");
        foreach (var exception in ex.InnerExceptions)
        {
            Console.WriteLine("  " + exception.Message);
        }
    }
    catch(ApiException apiEx)
    {
        var sb = new StringBuilder();
        sb.AppendLine("  An Error Occurred:");
        sb.AppendLine(string.Format("    Status Code: {0}", apiEx.StatusCode.ToString()));
        sb.AppendLine("    Errors:");
        foreach (var error in apiEx.Errors)
        {
            sb.AppendLine("      " + error);
        }
        // Write the error info to the console:
        Console.WriteLine(sb.ToString());
    }
    Console.Read();
}

All we are doing in the above is unwinding the ApiException and transforming the contents of the Data dictionary into console output (with some pretty hacky indentation).

Now let’s see how it all works.

Running Through More Error Scenarios with Error and Exception Handling

Stepping all the way back to the beginning, let’s see what happens now if we try to register a user with an invalid email address.

Change our registration values in Main() back to the following:

// This is not a valid email address, so the POST should fail:
string email = "john";
string password = "Password@123";
string confirmPassword = "Password@123";

Run the Web Api application once more. Once it has properly started, run the Console application with the modified registration values. The output to the console should look like this:

Register a User with Invalid Email Address:
An Error Occurred:
Status Code: BadRequest
Errors:
  Email 'john' is invalid.

Similarly, if we use a valid email address, but password values which are both too short, and also do not match, we get the following output:

Register a User with Invalid Password:
An Error Occurred:
Status Code: BadRequest
Errors:
  The Password must be at least 6 characters long.
  The password and confirmation password do not match.

Finally, let’s see what happens if we attempt to register the same user more than once.

Change the registration values to the following:

Using Valid Registration Values:
string email = "john@example.com";
string password = "Password@123";
string confirmPassword = "Password@123";

Now, run the console application twice in a row. The first time, the console output should be:

Console Output from Successful User Registration:
The new user john@example.com has been successfully added.

The next time, however, an error result is returned from our Web Api:

Console Output from Duplicate User Registration:
An Error Occurred:
Status Code: BadRequest
Errors:
  Name john@example.com is already taken.. Email 'john@example.com' is already taken.

Oh No You did NOT Use Exceptions to Deal with Api Errors!!

Oh, yes I did . . . at least, in this case. This is a simple, console-based application in which nearly every result needs to end up as text output. Also, I’m just a rebel like that, I guess. Sometimes.

The important thing to realize is how to get the information we need out of the JSON which makes up the response content, and that is not as straightforward as it may seem in this case. How different errors are dealt with will, as always, need to be addressed within terms best suited for your application.

In a good many cases, treating Api errors as exceptions has merit, to me. Doing so will most likely rub some architecture purists the wrong way (many of the errors arriving in response content don’t really meet the textbook definition of “exception“). That said, for less complex .NET-based Api client applications, unwrapping the errors from the response content and throwing them as exceptions to be caught by an appropriate handler can save a lot of duplicate code, and provides a known mechanism for handling problems.

In other cases, or for your own purposes, you may choose to re-work the code above to pull out what you need from the incoming error response, but otherwise deal with the errors without using exceptions. Register() (and whatever other methods you use to call into your Api) might, in the case of a simple console application, return strings, ready for output. In this case, you could side-step the exception issue.

Needless to say, a good bit of the time you will likely be calling into your Web Api application not from a desktop .NET application, but instead from a web client, probably using Ajax.

That’s a Long and Crazy Post about Dealing with Errors – Wtf?

Well, I am building out a more complex, interactive console-based application in order to demo some concepts in upcoming posts. One of the more irritating aspects of that process was figuring out a reasonable way to deal with the various issues that may arise, when all one has to work with is a command line interface to report output to the user.

This was part of that solution (ok, in the application I’m building, things are a little more complex, a little more organized, and there’s more to it. But here we saw some of the basics).

But . . . Can’t We Just do it Differently on the Server?

Well . . . YES!

In all likelihood, you just might tune up how and what you are pushing out to the client, depending upon the nature of your Web Api and the expected client use case. In this post, I went with the basic, default set-up (and really, we only looked at one method). But, depending upon how your Api will be used, you might very well handle errors and exceptions differently on the server side, which may impact how you handle things on the client side.

REF: http://typecastexception.com/post/2014/09/28/ASPNET-Web-Api-Unwrapping-HTTP-Error-Results-and-Model-State-Dictionaries-Client-Side.aspx

[Dev Tip] Interact With the Web in Real-time Using Arduino, Firebase and Angular.js

This simple project is meant to be a hybrid introduction to connecting and manipulating data over the internet with Arduino, Node, Angular and Firebase.

The Internet of Things is nothing new. You may have been using it all along. In the broadest sense, your laptop and smartphone are IoT objects. What’s actually new is the “T” part. We have been using computers and smartphones so often that we hardly recognize them as “things.” However, “things” are more synonymous with everyday objects such as clothes, furniture, fridges, clocks, books, lamps, skateboards, and bicycles. IoT is when a coffee machine brews you a cup of java when the weather gets too cold, a pair of shoes lights up 10 minutes before your train arrives, or a door knob alerts your phone when your parents try to trespass into your room.

To be able to connect to the internet, make sense of its data and interact with users, these things need tiny computers, aka microcontrollers, to make them conscious.

What we are building

We are going to connect an Arduino board to the internet and change the RGB color property on a web page in real-time by rotating a potentiometer.

What we need:

  • Arduino Uno
  • A-B USB cable
  • Potentiometer (aka pot)
  • RGB LED
  • 330 Ohms resistors x 3
  • 10 KOhms resistor x 1
  • Male-to-male jumper wires x 9

Everything is included in Sparkfun’s inventor’s kit, which is quite neat to get your hands on. No wifi or ethernet shield is needed, since we’re persisting the data to an online database and communicating with the web app via the REST API provided by Firebase.

If you have not installed Node, head to Node.js home and follow instructions to install Node and npm.

All the code can be downloaded from my repo or cloned using git:

$ git clone https://github.com/jochasinga/firepot

Once downloaded, cd firepot to get into the directory, and you should see two subdirectories, pot and app. cd into each one and install the dependencies:

$ cd pot && npm install

All dependencies are listed in package.json, and npm installs them automatically according to that information. They will be collected in a new subdirectory, node_modules, and can be required by any code within the project scope.
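For reference, a minimal package.json for the pot project might look something like this; the version ranges below are illustrative guesses, not copied from the repo:

```json
{
  "name": "firepot-pot",
  "version": "0.1.0",
  "dependencies": {
    "firebase": "^1.1.0",
    "johnny-five": "^0.8.0"
  }
}
```

With a file like this in place, a bare npm install is all it takes to pull both libraries into node_modules.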

Connecting the circuit

Image made with Fritzing

Connect your circuit according to the diagram. A typical potentiometer has three leads. Connect one end to +5V power via 10 KOhms pull-up resistor and the other far end to ground (0V). The pot provides variable resistance between the voltage, and the middle lead should be connected to Arduino’s analog input pin (A0) to feed the voltage signal into the Arduino.

An RGB LED is essentially three LEDs in one, each a different color (red, green and blue), together producing 16,777,216 possible colors. We are going to use the pot to traverse this color range from pure red (#FF0000) to blue (#0000FF) to green (#00FF00) and back to red again. The longest lead of the LED is called the common, and should be connected to ground. The rest are connected to Arduino’s PWM outputs (those with ~ preceding the number). The code connects the red, green and blue leads to ~9, ~10 and ~11 respectively.

Connect the Arduino to your laptop via the USB cable, and you’re good to go.

Signing up with Firebase

Firebase is a JSON-style database service for real-time applications (you’ll need a free signup to use it). Firebase implements a clever way of manipulating JSON data by adopting RESTful APIs. In this project, we will CREATE, READ and UPDATE a data chunk that looks like this:

"colors" : {
  "r" : 255,
  "g" : 0,
  "b" : 0
}

Firebase has an easy way of getting to your data via its REST api, i.e. you can get your JSON data at https://burning-limbo-6666.firebaseio.com/colors.json, where https://burning-limbo-6666.firebaseio.com is the domain address of your app, which Firebase generates for you after creating a new app, and /colors is the parent node of your data. Firebase has a data dashboard at that very URL, so you can just paste the address into the browser after you’ve updated the data in the next section to see your data changed by the pot in real-time.

The Firebase “Forge” Dashboard displaying JSON data in tree format.

pot.js

Johnny-Five is a JavaScript library, created by the awesome Rick Waldron, that wraps Arduino’s C/C++ API and interfaces with the board via the Firmata firmware; it is synonymous with the NodeBots movement. In order to make it work, you must open the Arduino IDE to flash the standard Firmata sketch onto the board. In your IDE, go to File > Examples > Firmata > StandardFirmata and upload the code (don’t forget to select the right board and serial port in the Tools menu). Once the upload has finished, you can close the IDE. Now, let’s have a look at our pot.js code.

https://gist.github.com/jochasinga/dad3e88b765893721c43

First, (1–3) we require our installed dependencies for the app, firebase and johnny-five, then (5–8) we create a new firebase reference firebaseRef pointing to where your data is stored. After that, (10) we create a new johnny-five instance of the Arduino board, and hook it to a callback function which will execute the rest of the code once the board is ready. (11–13) I assign the max value we expect from the pot to a variable and divide it by the number of RGB color subranges to obtain a standard offset “distance”, used as a step variable to calculate where the output value from the pot sits on the RGB strip, i.e. offset is MAGENTA, offset * 2 is BLUE, offset * 3 is CYAN and so on. You can see how I divided the RGB color strip into 6 subranges, as graphically shown below.

RGB Strip divided into 6 ranges

Normally on 5V power, a pot converts an analog signal (0–5V) to digital and gives out a range of integers from 0 to 1023. In my case, my little pot maxes out at half of that, so my maxValue lies somewhere around 511 (check this value by logging the output with console.log()). Then (16–19), create a new instance of the pot sensor, set its analog pin to A0 and its frequency to 250. (22) Assign each LED’s pin to an array variable. Now, (25++) set our pot instance to listen for the “data” event; within the callback function is the rest of the code, which (27–40) calculates and maps the pot’s output range (0–maxValue) to a range of 0–255 (LED brightness) using our obtained step variable offset. (44–102) I use a switch-case block to conditionally adjust each LED’s brightness with the Led.brightness method, saving these values to Firebase with the Firebase set method according to where the pot value is.
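The subrange arithmetic described above can be sketched as a pure function. This is my own condensed version of the mapping, not the exact code from the gist: given a reading between 0 and maxValue, it picks which of the six subranges the reading falls in and linearly ramps exactly one channel within that subrange, pinning the other two at 0 or 255:

```javascript
// Map a pot reading (0..maxValue) onto the six RGB subranges:
// red -> magenta -> blue -> cyan -> green -> yellow -> back to red.
// Each subrange ramps exactly one channel from 0 to 255 (or back).
function potToRgb(value, maxValue) {
  const offset = maxValue / 6;
  // Which subrange the reading falls into (clamped so maxValue wraps to red):
  const range = Math.min(Math.floor(value / offset), 5);
  // Position inside the current subrange, scaled to 0..255:
  const ramp = Math.round(((value - range * offset) / offset) * 255);
  switch (range) {
    case 0:  return { r: 255,        g: 0,          b: ramp };      // red -> magenta
    case 1:  return { r: 255 - ramp, g: 0,          b: 255 };       // magenta -> blue
    case 2:  return { r: 0,          g: ramp,       b: 255 };       // blue -> cyan
    case 3:  return { r: 0,          g: 255,        b: 255 - ramp };// cyan -> green
    case 4:  return { r: ramp,       g: 255,        b: 0 };         // green -> yellow
    default: return { r: 255,        g: 255 - ramp, b: 0 };         // yellow -> red
  }
}
```

Feeding each returned channel value into the corresponding Led.brightness() call (and into the Firebase set) on every “data” event would reproduce the behavior described above.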

After that, run pot.js with node from the command line:

$ node pot.js

Your LED should light up and your terminal should be loop-printing the value from the pot (self) and which loop (or subrange) your pot value is currently in. Try spinning the pot to see the printed data change as the LED’s color gradually shifts. Then, browse to your Firebase dashboard using your app’s URL (i.e. https://burning-limbo-6666.firebaseio.com/colors). You should see your data change as you rotate your pot. We have successfully CREATEd and UPDATEd data on a database as you would have done with web forms, sliders or buttons. That concludes the hardware side.

app.js

Now we are going to work on our client, a simple web app. The structure of the app directory is as follows:

app
├── app.js
├── node_modules
|   ├── express
|   ├── firebase
|   └── socket.io
├── package.json
└── public
    ├── index.html    
    └── index.js

If you have not installed the dependencies, you will probably not see the node_modules subdirectory in there. Do so now using npm install:

$ cd app && npm install

Take note of the public directory. app.js is server code which serves static content from the public directory, in this case index.html and index.js. Let’s hop into app.js:

https://gist.github.com/jochasinga/63ceadc19c5139f55660

What this code does is (5–18) create a Node web server listening for requests on port 3000 and (21) serve the front-end content from inside the public directory. Then (24–25) the server waits for a socket connection (i.e. a GET request), prints out a “Connect and ready!” message, and (29++) starts tracking data from Firebase and printing out changes. Firebase is not strictly necessary here, since we will be using the AngularFire library in public/index.js to sync data from Firebase directly, but it’s intentionally included to exhibit the basic firebase methods for detecting data changes and retrieving snapshots of them. The most important part here is serving the public/index.html page and running the script in public/index.js.

index.html

What our web page will look like

Our web page will display the R : G : B values dynamically from Firebase and change the background color of the div according to how you rotate your pot. We’re going to use AngularFire, a Firebase client library supporting Angular.js.

https://gist.github.com/jochasinga/c720677640e026381366

This HTML view (V) binds parts of itself to a data model (M) that syncs data from your Firebase storage, and as the data changes, only that part is re-rendered. Angular operates on what’s called a “directive.” Directives add new functionality to HTML elements instead of manipulating the DOM as in jQuery. (3) The ng-app directive starts the application and defines the scope of the binding, (7) ng-controller defines the application controller (C) and the scope that that particular controller method affects, and (10) ng-style allows dynamic styling of the document (like you would have done with jQuery’s .css or .addClass). To display data from the model, double curly brackets ({{ }}) are used to contain the variable, a common convention in other web frameworks’ template languages. Never mind the data object for now; you’ll see it in public/index.js. Ensure that you have included the scripts before </body>.

index.js

This is the engine room of our front end. In this file, we attach the firebase module to the app, define the controller method and sync the data from firebase to a local model object used by the html binding.

https://gist.github.com/jochasinga/e51c240a33f9704b3030

(2) Register the firebase service with the Angular app. After that, (5) you’ll have the $firebase variable available for injecting into the controller. (6–9) Setting up the first part of the controller method is familiar: we create a firebase reference to our data. Now, (11) use the reference as a parameter to $firebase() to create an AngularFire reference to the data. (14) We translate the data into a JavaScript object, and (17) bind it to the variable data that will be used in the index.html template.

Whew! That was some work, right? Now comes the exciting part. Go back to the pot directory and run your pot code again with:

$ node pot.js

Once you’ve got your LED turned on and the pot value start printing to the console, from another terminal window, run your app.js inside app directory:

$ node app.js

The console should start by printing “Server listening on port 3000″ then gushing out RGB values from Firebase. Go to your web browser and browse to http://localhost:3000; hopefully, you’ll get something like the video below.

https://vine.co/v/OqXhOUWAY6I

If you like this article, please recommend and retweet. Feel free to shoot me an email at jo.chasinga@gmail.com. I’m up for talking, exchanging ideas, collaborations or consults. Comments are welcome.