[Dev Tip] Microsoft Puts Orleans Code On GitHub

Microsoft has open-sourced Project Orleans, a framework from Microsoft Research that aims to simplify the development of highly scalable cloud services.

According to its new GitHub page:

Orleans is a framework that provides a straightforward approach to building distributed high-scale computing applications, without the need to learn and apply complex concurrency or other scaling patterns. It was created by Microsoft Research and designed for use in the cloud. Orleans has been used extensively in Microsoft Azure by several Microsoft product groups, most notably by 343 Industries as a platform for all of Halo 4 cloud services, as well as by a number of other companies.

A public preview of Orleans was released at Build 2014, where Microsoft gave a session on the project’s use in the well-known game Halo 4, still available in this hour-long Channel 9 video:

http://channel9.msdn.com/Events/Build/2014/3-641/player

The problem addressed by Orleans is that of creating interactive services that are scalable and reliable. As pointed out by Sergey Bykov, Lead Software Engineer, interactivity imposes strict constraints on availability and latency, as that directly impacts end-user experience. To support a large number of concurrent user sessions, high throughput is essential.


A three-tier architecture with stateless front-ends, a stateless middle tier, and a storage layer has limited scalability due to the latency and throughput limits of the storage layer, which has to be consulted for every request. The traditional remedy is to add a caching layer between the middle tier and the storage to improve performance. However, this sacrifices most of the concurrency and semantic guarantees of the underlying storage layer, so the cache manager has to add its own concurrency control to prevent inconsistencies caused by concurrent updates to a cached item. Using a stateless middle tier also means that for every request, data is sent from storage or cache to the middle-tier server that is processing the request. This is known as the data shipping paradigm.

The alternative solution used by Orleans is the actor model. This relies on the function shipping paradigm: instead of shipping data to the server that processes a request, the request is shipped to the server that holds the relevant state. The actor model uses “actors” as the basic entity of concurrent computation. When an actor receives a message, it can create other actors, send messages, make local decisions, or determine how to handle the next message it receives. Using actors enables the building of a stateful middle tier. This gives you the performance benefits of a cache, because data is held locally, together with the semantic and consistency benefits of encapsulated entities exposing application-specific operations. Actor platforms such as Erlang and Akka make it easier to program distributed systems, but you still need to know what you are doing in terms of the system services. Orleans is designed to offer a higher level of actor abstraction: it is actor-based, but differs from existing actor-based platforms by treating actors as virtual entities, not as physical ones.

An Orleans actor always exists virtually. It cannot be explicitly created or destroyed, and its existence transcends the lifetime of any of its in-memory instantiations.

Orleans actors are automatically instantiated: if there is no in-memory instance of an actor, a message sent to the actor causes a new instance to be created on an available server. An unused actor instance is automatically reclaimed as part of runtime resource management.

Actors never fail: if a server S crashes, the next message sent to an actor A that was running on S causes Orleans to automatically re-instantiate A on another server, eliminating the need for applications to supervise and explicitly re-create failed actors. The location of the actor instance is transparent to the application code, and Orleans can automatically create multiple instances of the same stateless actor, seamlessly scaling out hot actors. Overall, Orleans gives developers a virtual “actor space” that lets them invoke any actor in the system, whether or not it is present in memory.

The use of virtualization to map virtual actors to their physical instantiations means the runtime can take care of many hard distributed systems problems that would otherwise have to be handled by the developer, such as actor placement and load balancing, deactivation of unused actors, and actor recovery after server failures.
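The virtual-actor idea can be illustrated with a toy sketch. This is not Orleans code (all names here are invented for illustration); it only shows the core notion that actors are addressed purely by identity and that an in-memory instance is created on demand when the first message arrives:

```csharp
using System;
using System.Collections.Concurrent;

// Toy illustration of the virtual-actor idea (not the Orleans API):
// a "message" is just a method call, and an actor is a small stateful object.
class CounterActor
{
    public int Count { get; private set; }
    public void Tell(int increment) { Count += increment; }
}

class VirtualActorSpace
{
    // Maps a virtual actor identity to its physical in-memory instantiation.
    private readonly ConcurrentDictionary<string, CounterActor> instances =
        new ConcurrentDictionary<string, CounterActor>();

    public CounterActor Get(string id)
    {
        // The actor "always exists" conceptually; an instance is
        // activated only when it is first addressed.
        return instances.GetOrAdd(id, _ => new CounterActor());
    }
}

class Program
{
    static void Main()
    {
        var space = new VirtualActorSpace();
        space.Get("player-42").Tell(1);   // first message activates the actor
        space.Get("player-42").Tell(2);   // same identity -> same instance
        Console.WriteLine(space.Get("player-42").Count); // prints 3
    }
}
```

In Orleans the runtime additionally handles placement on servers, deactivation, and re-activation after failures; this sketch only models the identity-to-instance mapping.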

Orleans has already been used to build multiple production services currently running on the Microsoft Windows Azure cloud, including the back-end services for Halo 4. This enabled the project team to validate the scalability and reliability of production applications written using it, and to adjust its model and implementation based on that feedback. Now that it is on GitHub, it is available for others to use.


[Dev Tip] Running ASP.NET 5 applications in Linux Containers with Docker

As part of our ASP.NET 5 cross-platform efforts, we are actively working on making applications written in ASP.NET 5 easy to deploy and ship on Linux and Mac OS X. A while ago, we released the first official Docker image by Microsoft: the ASP.NET 5 Preview Docker Image.

Docker is an open source project that makes it easier to run applications in sandboxed application containers on Linux. The ASP.NET 5 Docker image gives you a base image with the ASP.NET 5 bits already installed and ready to run on Linux. All you need to do is add your application to the image and ship it, so it will run in an app container!

In this tutorial we will show how a simple application written in ASP.NET 5 Preview can be deployed with Docker to a Linux virtual machine running on the Microsoft Azure cloud. The tutorial can be followed on a Linux or Mac OS X machine where the Docker client is installed (or you can SSH into the Linux VM you will use). Once the Windows client for Docker is available, you will be able to run these commands on Windows, and once Windows Server container support ships, you will be able to use Docker to manage Windows Server containers.

NOTE: Both ASP.NET 5 (vNext) and the Docker image are in preview, and the following instructions are subject to change. Please refer to the Docker Hub page and GitHub repository for the latest documentation on how to use the Docker image for ASP.NET 5.

Step 1: Create a Linux VM with Docker

As Docker only runs on Linux today, you will need a Linux machine or VM to run Docker. You can find Docker installation instructions here, or follow Getting Started with Docker on Azure to get a Docker-ready Linux VM on Azure.

In this tutorial we will assume you have a Linux Virtual Machine on Azure with Docker installed. If you are using some other machine, most of this tutorial will still be relevant.

Step 2: Create a container image for your app

In order to deliver your ASP.NET application to the cloud, you will need to create a container image containing your application.

Docker container images are layered on top of each other. This means your application is an addition on top of a “base image”; in this case, the base image will be microsoft/aspnet. The image layers are stored as diffs, so the image you deploy will not contain the Linux distribution or the ASP.NET binaries; it will contain only your application, making it small and quick to deploy.

Creating a Docker image is done using a file called Dockerfile. Similar to a Makefile, the Dockerfile contains instructions telling Docker how to build the image.

For the sake of this tutorial, we will use the sample HelloWeb application from the aspnet/Home repository on GitHub. First, clone this repository on your development machine and go to the HelloWeb directory:

git clone git@github.com:aspnet/Home.git aspnet-Home
cd aspnet-Home/samples/HelloWeb

In this directory you will see the following files:

├── Startup.cs
├── image.jpg
└── project.json

We are going to create a file called Dockerfile in this directory with the following contents:

FROM microsoft/aspnet

COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]

EXPOSE 5004
ENTRYPOINT ["k", "kestrel"]

Let’s go through this Dockerfile line by line. The first FROM line tells Docker that we will use the official ASP.NET image on Docker Hub as our base image.

The COPY line tells Docker that we will copy contents of this folder (.) to the /app directory of the container and the WORKDIR instruction will move to the /app directory.

The RUN instruction tells Docker to run the kpm restore command to install the dependencies of the application. We do this before running our application for the first time.

The EXPOSE instruction informs Docker that this image has a service listening on port 5004 (see the sample’s project.json for details). Lastly, the ENTRYPOINT instruction is the command executed to start the container and keep it up and running. In this case it is the k kestrel command, which starts the Kestrel development server for ASP.NET 5. Once executed, this process will start listening for HTTP connections on port 5004.

Step 3: Build the container image

Once we have the Dockerfile ready, the directory should look like this, with the Dockerfile residing next to the application:

├── Dockerfile
├── Startup.cs
├── image.jpg
└── project.json

Now we will actually build the Docker image. It is very simple; just run the following Docker command in this directory:

docker build -t myapp .

This will build an image using the Dockerfile we just created and call it myapp. Every time you change your application, a new image can be built using this command. After this command finishes, we should be able to see our application in the list of Docker images on our Linux VM by running the following command on our development machine:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
myapp               latest              ccb7994d2bc1        39 seconds ago      499.8 MB
microsoft/aspnet    latest              16b1838c0b34        12 days ago         473.4 MB

As you can see, your app and the ASP.NET image are listed as images that exist on your machine.

Now we are ready to deploy our application to the cloud.

Step 4: Run the container

Running the container is the easiest part of the tutorial. Run the following Docker command on your development machine:

docker run -t -d -p 80:5004 myapp
  • The -t switch attaches a pseudo-tty to the container (this switch will not be necessary in future versions of ASP.NET 5).
  • The -d switch runs the container in the background, otherwise the web server’s standard input/output streams would be attached to our development machine’s shell.
  • The -p switch maps port 80 of the VM to port 5004 of the container. In this case, connections coming to port 80 of the VM will be forwarded to our container listening on port 5004.
  • Lastly, myapp is the Docker image name we are using to start the container. We built this image in the previous step.

Once the container is started, the following command can be used to show containers running on your machine:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS                  NAMES
f70bd9ffbc36        myapp:latest        "/bin/sh -c 'k kestr   About a minute ago   Up About a minute   0.0.0.0:80->5004/tcp   mad_goodall

Our container has started! However, we are not quite done yet; we still need to complete the endpoint port mapping for the Azure VM. Go to the Azure Management Portal and map public TCP port 80 to internal port 80 on your Linux VM (see the relevant tutorial here).

Now let’s head to the browser to see if it is working. Open http://your-cloud-service-name.cloudapp.net:80/ in your web browser:

Voila, you have an ASP.NET 5 application running on Linux inside a Docker container!

If your application is slightly different than the single-project sample application we used, you can learn more about writing Dockerfiles here and build your own images with custom commands.

Conclusion

We will continue to invest in running ASP.NET 5 applications on Linux and Docker, and we are happy to bring you Microsoft’s first official Docker image: the ASP.NET 5 Preview Image.

Since this tutorial depends on previews of ASP.NET 5 and its Docker image, the exact usage instructions may change over time. Please head over to the Docker Hub page or the GitHub repository for up-to-date instructions.

Please send us your feedback and help us improve this Docker image by opening new issues on the GitHub repository.

Ahmet Alp Balkan (@ahmetalpbalkan)
Software Engineer, Microsoft Azure

REF: http://blogs.msdn.com/b/webdev/archive/2015/01/14/running-asp-net-5-applications-in-linux-containers-with-docker.aspx

[Dev Tip] Real Time TCP/IP using C#

Introduction

The Real Time application is a sample that shows communication techniques between a client (TcpClient) and a server (TcpServer) using the Socket class on each side. The project also demonstrates how to use the ListView control in a real-time application.

   

  • TcpServer.exe shows the use of TCP socket communication in a separate thread. Multiple instances of TcpClient can talk to the same instance of TcpServer.
  • TcpClient.exe also uses a separate thread to read data from the Socket and then update the ListView control in a form.

The flow of logic

  1. TcpServer listens on port 8002 and spawns a thread that waits for clients to connect.
    Hashtable socketHolder = new Hashtable();
    Hashtable threadHolder = new Hashtable();

    public Form1()
    {
        // Required for Windows Form Designer support
        InitializeComponent();

        tcpLsn = new TcpListener(8002);
        tcpLsn.Start();
        // tcpLsn.LocalEndpoint may have a bug; it only shows 0.0.0.0:8002
        stpanel.Text = "Listen at: " + tcpLsn.LocalEndpoint.ToString();
        Thread tcpThd = new Thread(new ThreadStart(WaitingForClient));
        threadHolder.Add(connectId, tcpThd);
        tcpThd.Start();

        ...
    }
  2. TcpClient connects to TcpServer and sends a client information packet, then spawns a thread that waits to receive data through the Socket.
    private void menuConn_Click(object sender, System.EventArgs e)
    {
        ConnectDlg myDlg = new ConnectDlg();
        myDlg.ShowDialog(this);
        if (myDlg.DialogResult == DialogResult.OK)
        {
            s = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                ProtocolType.Tcp);

            IPAddress hostadd = IPAddress.Parse(myDlg.IpAdd);
            int port = Int32.Parse(myDlg.PortNum);
            IPEndPoint EPhost = new IPEndPoint(hostadd, port);

            try
            {
                s.Connect(EPhost);

                if (s.Connected)
                {
                    Byte[] bBuf;
                    string buf;
                    buf = String.Format("{0}:{1}", myDlg.UserName,
                        myDlg.PassWord);
                    bBuf = ASCII.GetBytes(buf);
                    s.Send(bBuf, 0, bBuf.Length, 0);
                    t = new Thread(new ThreadStart(StartReceive));
                    t.Start();
                    sbar.Text = "Ready to receive data";
                }
            }
            catch (Exception e1)
            {
                MessageBox.Show(e1.ToString());
            }
        }
    }
    private void StartReceive()
    {
        MethodInvoker miv = new MethodInvoker(this.UpdateListView);
        while (true)
        {
            Byte[] receive = new Byte[38];
            try
            {
                string tmp = null;
                // Receive blocks until data arrives; ret is 0 or an
                // exception is thrown when the Socket connection is broken
                int ret = s.Receive(receive, receive.Length, 0);
                if (ret > 0)
                {
                    tmp = System.Text.Encoding.ASCII.GetString(receive);
                    if (tmp.Length > 0)
                    {
                        isu.symbol = Mid(tmp, 0, 4);
                        isu.bid = Mid(tmp, 4, 5);
                        isu.offer = Mid(tmp, 9, 5);
                        isu.volume = Mid(tmp, 16, tmp.Length - 16);

                        this.BeginInvoke(miv);
                        Thread.Sleep(300);
                        // block until UpdateListView finishes its job
                        JobDone.WaitOne();
                    }
                }
            }
            catch (Exception e)
            {
                if (!s.Connected)
                {
                    break;
                }
            }
        }
        t.Abort();
    }
  3. TcpServer accepts the connection, saves the socket instance into a Hashtable instance, and then spawns a thread to handle the socket communication and show the client information in the top ListView control.
    public void WaitingForClient()
    {
        while (true)
        {
            // Accept blocks until someone connects
            Socket sckt = tcpLsn.AcceptSocket();
            if (connectId < 10000)
                Interlocked.Increment(ref connectId);
            else
                connectId = 1;
            if (socketHolder.Count < MaxConnected)
            {
                while (socketHolder.Contains(connectId))
                {
                    Interlocked.Increment(ref connectId);
                }
                // it is used to keep connected Sockets
                socketHolder.Add(connectId, sckt);
                Thread td = new Thread(new ThreadStart(ReadSocket));
                // it is used to keep the active thread
                threadHolder.Add(connectId, td);
                td.Start();
            }
        }
    }
    // The following function handles the communication from the clients and
    // closes the socket and the thread when the socket connection goes down.
    public void ReadSocket()
    {
        // connectId keeps changing as new connections are added, so it can't
        // be used to keep the real connect id; the local variable realId
        // keeps the value from when the thread started.
        long realId = connectId;
        int ind = -1;
        Socket s = (Socket)socketHolder[realId];
        while (true)
        {
            if (s.Connected)
            {
                Byte[] receive = new Byte[37];
                try
                {
                    // Receive blocks until data arrives; ret is 0 or an
                    // exception is thrown when the Socket connection is broken
                    int ret = s.Receive(receive, receive.Length, 0);
                    if (ret > 0)
                    {
                        string tmp = null;
                        tmp = System.Text.Encoding.ASCII.GetString(receive);
                        if (tmp.Length > 0)
                        {
                            DateTime now1 = DateTime.Now;
                            String strDate;
                            strDate = now1.ToShortDateString() + " "
                                + now1.ToLongTimeString();

                            ListViewItem newItem = new ListViewItem();
                            string[] strArry = tmp.Split(':');
                            int code = checkUserInfo(strArry[0]);
                            if (code == 2)
                            {
                                userHolder.Add(realId, strArry[0]);
                                newItem.SubItems.Add(strArry[0]);
                                newItem.ImageIndex = 0;
                                newItem.SubItems.Add(strDate);
                                this.listView2.Items.Add(newItem);
                                ind = this.listView2.Items.IndexOf(newItem);
                            }
                            else if (code == 1)

                                ...
                        }
                    }
                    else
                    {
                        this.listView2.Items[ind].ImageIndex = 1;
                        keepUser = false;
                        break;
                    }
                }
                catch (Exception e)
                {
                    if (!s.Connected)
                    {
                        this.listView2.Items[ind].ImageIndex = 1;
                        keepUser = false;
                        break;
                    }
                }
            }
        }
        CloseTheThread(realId);
    }
    private void CloseTheThread(long realId)
    {
        socketHolder.Remove(realId);
        if (!keepUser) userHolder.Remove(realId);
        Thread thd = (Thread)threadHolder[realId];
        threadHolder.Remove(realId);
        thd.Abort();
    }
    
  4. Click the Load Data menu to spawn a thread that loads the information from a file, sends it to all the clients connected to the TcpServer, and updates its own ListView. In both TcpServer and TcpClient, the data arrives on a worker thread, and the ListView control must be updated on the main thread; a MethodInvoker is used to accomplish this.
    public void LoadThread()
    {
        MethodInvoker mi = new MethodInvoker(this.UpdateListView);
        string tmp = null;
        StreamReader sr = File.OpenText("Issue.txt");
        while ((tmp = sr.ReadLine()) != null)
        {
            if (tmp == "")
                break;
            SendDataToAllClient(tmp);

            isu.symbol = Mid(tmp, 0, 4);
            isu.bid = Mid(tmp, 4, 5);
            isu.offer = Mid(tmp, 9, 5);
            isu.volume = Mid(tmp, 16, tmp.Length - 16);

            this.BeginInvoke(mi);
            Thread.Sleep(200);

            JobDone.WaitOne();
        }
        sr.Close();
        fThd.Abort();
    }
    private void SendDataToAllClient(string str)
    {
        foreach (Socket s in socketHolder.Values)
        {
            if (s.Connected)
            {
                Byte[] byteDateLine = ASCII.GetBytes(str.ToCharArray());
                s.Send(byteDateLine, byteDateLine.Length, 0);
            }
        }
    }

    The following function demonstrates how to dynamically set the BackColor and ForeColor properties of the ListView in TcpClient.

    private void UpdateListView()
    {
        int ind = -1;
        for (int i = 0; i < this.listView1.Items.Count; i++)
        {
            if (this.listView1.Items[i].Text == isu.symbol.ToString())
            {
                ind = i;
                break;
            }
        }
        if (ind == -1)
        {
            ListViewItem newItem = new ListViewItem(isu.symbol.ToString());
            newItem.SubItems.Add(isu.bid);
            newItem.SubItems.Add(isu.offer);
            newItem.SubItems.Add(isu.volume);

            this.listView1.Items.Add(newItem);
            int i = this.listView1.Items.IndexOf(newItem);
            setRowColor(i, System.Drawing.Color.FromArgb(255, 255, 175));
            setColColorHL(i, 0, System.Drawing.Color.FromArgb(128, 0, 0));
            setColColorHL(i, 1, System.Drawing.Color.FromArgb(128, 0, 0));
            this.listView1.Update();
            Thread.Sleep(300);
            setColColor(i, 0, System.Drawing.Color.FromArgb(255, 255, 175));
            setColColor(i, 1, System.Drawing.Color.FromArgb(255, 255, 175));
        }
        else
        {
            this.listView1.Items[ind].Text = isu.symbol.ToString();
            this.listView1.Items[ind].SubItems[1].Text = isu.bid;
            this.listView1.Items[ind].SubItems[2].Text = isu.offer;
            this.listView1.Items[ind].SubItems[3].Text = isu.volume;
            setColColorHL(ind, 0, System.Drawing.Color.FromArgb(128, 0, 0));
            setColColorHL(ind, 1, System.Drawing.Color.FromArgb(128, 0, 0));
            this.listView1.Update();
            Thread.Sleep(300);
            setColColor(ind, 0, System.Drawing.Color.FromArgb(255, 255, 175));
            setColColor(ind, 1, System.Drawing.Color.FromArgb(255, 255, 175));
        }
        JobDone.Set();
    }

    private void setRowColor(int rowNum, Color colr)
    {
        for (int i = 0; i < this.listView1.Items[rowNum].SubItems.Count; i++)
            if (rowNum % 2 != 0)
                this.listView1.Items[rowNum].SubItems[i].BackColor = colr;
    }

    private void setColColor(int rowNum, int colNum, Color colr)
    {
        if (rowNum % 2 != 0)
            this.listView1.Items[rowNum].SubItems[colNum].BackColor = colr;
        else
            this.listView1.Items[rowNum].SubItems[colNum].BackColor =
                System.Drawing.Color.FromArgb(248, 248, 248);
        if (colNum == 0)
        {
            this.listView1.Items[rowNum].SubItems[colNum].ForeColor =
                System.Drawing.Color.FromArgb(128, 0, 64);
            this.listView1.Items[rowNum].SubItems[colNum].BackColor =
                System.Drawing.Color.FromArgb(197, 197, 182);
        }
        else
            this.listView1.Items[rowNum].SubItems[colNum].ForeColor =
                System.Drawing.Color.FromArgb(20, 20, 20);
    }

    private void setColColorHL(int rowNum, int colNum, Color colr)
    {
        this.listView1.Items[rowNum].SubItems[colNum].BackColor = colr;
        this.listView1.Items[rowNum].SubItems[colNum].ForeColor =
            System.Drawing.Color.FromArgb(255, 255, 255);
    }
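    The samples call a Mid(string, start, length) helper that the article never shows; judging from the call sites, it mirrors the classic VB Mid function, returning a length-bounded substring. A minimal, self-contained sketch of what it presumably looks like (the clamping behavior is an assumption inferred from how it is used):

```csharp
using System;

class MidDemo
{
    // Assumed helper, not shown in the article: a VB-style Mid that returns
    // up to `length` characters of `s` starting at `start`, clamped so it
    // never throws for records shorter than expected.
    static string Mid(string s, int start, int length)
    {
        if (s == null || start >= s.Length || length <= 0)
            return string.Empty;
        return s.Substring(start, Math.Min(length, s.Length - start));
    }

    static void Main()
    {
        // A record laid out the way the parsing code expects:
        // symbol at 0..4, bid at 4..9, offer at 9..14, volume from 16 on.
        string tmp = "MSFT12.3412.56  120400";
        Console.WriteLine(Mid(tmp, 0, 4));                // prints MSFT
        Console.WriteLine(Mid(tmp, 4, 5));                // prints 12.34
        Console.WriteLine(Mid(tmp, 9, 5));                // prints 12.56
        Console.WriteLine(Mid(tmp, 16, tmp.Length - 16)); // prints 120400
    }
}
```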

Steps to run the sample:

  1. Run TcpServer.exe on machine A.
  2. Run TcpClient.exe once or more either on machine A or machine B.
  3. On the TcpClient side, click the Connect menu and enter the name of the server machine where TcpServer is running. Enter the user name and password in the edit boxes and click OK.
  4. When you see the client in the TcpServer's top ListView, click the Load Data menu on the TcpServer, and you will then see the real time data in both TcpServer and TcpClient. Note: make sure that the data file, Issue.txt, is in the same directory as TcpServer.exe.

If you have any comments, I would love to hear them. You can reach me at Jibin Pan.

Jibin Pan has been a VC++ and C programmer at Interactive Edge Corp., Xtend Communications Corp., and MoneyLine Corp. in New York City since 1994, and holds a Master's degree in computer science.

REF: http://www.codeproject.com/Articles/1430/Real-Time-TCP-IP-using-C

[Dev Tip] ASP.NET Web Api 2.2: Create a Self-Hosted OWIN-Based Web Api from Scratch

Building up a lean, minimal Web Api application from scratch is a terrific way to become more familiar with how things work under the hood in a Web Api (or any other ASP.NET) project.

The ASP.NET team provides exceptional project templates that allow developers to get started easily building web applications. The templates are structured in a way which provides a basic, boilerplate functionality for getting up and running easily. The basic application infrastructure is all in place, and all the Nuget packages and framework references you might need are all there, ready to go.

Image by Ivan Emelianov  |  Some Rights Reserved

This is all great, but also creates a two-pronged problem, particularly for those still learning web development in general, and how to navigate the innards of ASP.NET MVC and Web Api Application development specifically.

First off, the generalized approach showcased in the VS project templates tends to include a good deal more “stuff” than any one application needs. In order to provide sufficient functionality out of the box to get devs up and running quickly, and to provide a starting point for a broad variety of basic application requirements, the templates in Visual Studio bring with them a good deal of infrastructure and libraries you don’t need for your specific application.

Secondly, the templates knit together complete, ready-to-run applications in such a way that a whole lot appears to happen “by magic” behind the scenes, and it can be difficult to understand how these individual pieces fit together. This begins to matter when we want to customize our application, cut out unwanted components, or take a different architectural approach to building our application.

NOTE: In this post we will build out a simple Web Api example from scratch. The objective here is as much about understanding how ASP.NET components such as Web Api can plug into the OWIN/Katana environment, and how the various application components relate, as it is about simply “give me the codez.” There are already plenty of examples showing how to cobble together a self-hosted web api application, “Hello World” examples, and such. In this post, we will seek to understand the “why” as much as the “how.”

Understanding how these components fit together, and the notion of the middleware pipeline, will become increasingly important as ASP.NET 5 (“vNext”) moves closer and closer to release. While the implementation of the middleware pipeline itself will change somewhat with the coming release, the concepts will apply even more strongly, and more globally, to the ASP.NET ecosystem.

Source Code for Examples

The source code for the example projects used in this post can be found in my Github repo. There are two branches for the self-hosted Web Api Application, one with the basic API structure in place, and one after we add Entity Framework and a database to the equation.

Web Api and the OWIN Middleware Pipeline

As of ASP.NET 4.5.1, Web Api can be used as middleware in an OWIN/Katana environment. In a previous post we took a look at how the OWIN/Katana middleware pipeline can form the backbone, so to speak, of a modern ASP.NET web application.

The OWIN specification establishes a distinction between the host process, the web server, and a web application. IIS, in conjunction with ASP.NET, acts as both the host process and the server. The System.Web library, a heavy, all-things-to-all-people library, is tightly coupled to IIS. Web applications with components which rely on System.Web, such as MVC (for the moment, until MVC 6 “vNext”) and Web Forms, are likewise bound to IIS.

In the standard ASP.NET Web Api project template, Web Api is configured as part of the IIS/ASP.NET processing pipeline, as are MVC and most of the other ASP.NET project components (Identity 2.0 is a notable exception, in that Identity uses the OWIN pipeline by default in all of the project templates). However, beginning with ASP.NET 4.5.1, Web Api (and SignalR) can also be configured to run in an OWIN pipeline, relieved of reliance upon the infrastructure provided by IIS and the monolithic System.Web library.

In this post, we will configure Web Api as a middleware component in a lightweight OWIN-based application, shedding the dependency on the heavy System.Web library.

Plugging Application Components into the OWIN/Katana Pipeline

Recall from our previous post the simple graphic describing the interaction of middleware components in the Katana pipeline, and how the Katana implementation of the OWIN specification facilitates the interaction between the hosting environment, the server, and the application:

The Simplified OWIN Environment:

owin-middleware-chain

If we review how this works, we recall that we can plug middleware into the pipeline in a number of ways, but the most common mechanism is by providing an extension method for our middleware to act as a “hook” or point of entry. Middleware is commonly defined as a separate class, like so:

Simplified Middleware Component:
// AppFunc is the OWIN application delegate; the alias makes the signature readable:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class MiddlewareComponent
{
    AppFunc _next;
    public MiddlewareComponent(AppFunc next)
    {
        _next = next;

        // ...Other initialization processing...
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        // ...Inbound processing on environment or HTTP request...

        // Invoke the next middleware component:
        await _next.Invoke(environment);

        // ...Outbound processing on environment or HTTP request...
    }
}

 

Then, in order to plug a component into the middleware pipeline in Katana, we commonly provide an extension method according to the convention:

Extension Method to Plug Middleware into the Katana Pipeline:
public static class AppBuilderExtensions
{
    public static void UseMiddlewareComponent(this IAppBuilder app)
    {
        app.Use<MiddlewareComponent>();
    }
}

 

This allows us to plug MiddlewareComponent into the Katana pipeline during the call to Configuration() in our OWIN Startup class:

Plugging a Middleware into Katana Using the Extension Method:
public void Configuration(IAppBuilder app)
{
    app.UseMiddlewareComponent();
}

 

When we want to use ASP.NET Web Api as a component in an OWIN-based application, we can do something similar.

Plugging Web Api into an OWIN/Katana Application

When we want to use Web Api in an OWIN-based application instead of relying on System.Web, we can install the Microsoft.AspNet.WebApi.Owin Nuget package. This package provides a hook, similar to the above, which allows us to add Web Api to our middleware pipeline. Once we do that, our diagram might look more like this:

OWIN/Katana Middleware Pipeline with Web Api Plugged In:

owin-middleware-chain w webapi

The Microsoft.AspNet.WebApi.Owin package provides us with the UseWebApi() hook, which we will use to plug Web Api into a stripped-down, minimal application. First, we’ll look at creating a simple self-hosted Web Api, and then we will see about using the Katana pipeline to use Web Api in an application hosted on IIS, while forgoing the heavy dependency on System.Web.

Creating a Self-Hosted OWIN-Based Web Api

We’ll start by creating a bare-bones, self-hosted Web Api using a Console application as its base. First, create a new Console project in Visual Studio, then pull down the Microsoft.AspNet.WebApi.OwinSelfHost Nuget package:

Install Web Api 2.2 Self Host Nuget Package:
PM> Install-Package Microsoft.AspNet.WebApi.OwinSelfHost -Pre

 

The Microsoft.AspNet.WebApi.OwinSelfHost Nuget package installs a few new references into our project, among them Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener. Between these two libraries, our application can now act as its own host, and listen for HTTP requests over a port specified when the application starts up.

With that in place, add a new Class named Startup, and add the following code:

The Startup Class for a Katana-based Web Api:
// Add the following usings:
using Owin;
using System.Web.Http;

namespace MinimalOwinWebApiSelfHost
{
    public class Startup
    {
        // This method is required by Katana:
        public void Configuration(IAppBuilder app)
        {
            var webApiConfiguration = ConfigureWebApi();

            // Use the extension method provided by the WebApi.Owin library:
            app.UseWebApi(webApiConfiguration);
        }


        private HttpConfiguration ConfigureWebApi()
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                "DefaultApi",
                "api/{controller}/{id}",
                new { id = RouteParameter.Optional });
            return config;
        }
    }
}

 

As we can see, all we are really doing is setting up our default routing configuration here, similar to what we see in the standard VS template project. However, instead of adding the specified routes to the routes collection in the ASP.NET pipeline, we are instead passing the HttpConfiguration as an argument to the app.UseWebApi() extension method.
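As an aside, the same HttpConfiguration instance can also be used to enable Web Api 2's attribute routing alongside (or instead of) the convention-based route. This is not required for our sample, but a sketch would look like:

```csharp
private HttpConfiguration ConfigureWebApi()
{
    var config = new HttpConfiguration();

    // Optional: enable [Route]/[RoutePrefix] attributes on controllers:
    config.MapHttpAttributeRoutes();

    // Convention-based route, as before:
    config.Routes.MapHttpRoute(
        "DefaultApi",
        "api/{controller}/{id}",
        new { id = RouteParameter.Optional });
    return config;
}
```

Either way, the configuration object, not the global ASP.NET routes collection, carries the routing information into the OWIN pipeline.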

Next, let's set up the familiar ASP.NET Web Api folder structure. Add a Models folder and a Controllers folder. Then add a Company class to the Models folder:

Add a Company Class to the Models Folder:
public class Company
{
    public int Id { get; set; }
    public string Name { get; set; }
}

 

Next, add a CompaniesController Class to the Controllers folder:

Add a CompaniesController to the Controllers Folder:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add these usings:
using System.Web.Http;
using System.Net.Http;
using MinimalOwinWebApiSelfHost.Models;

namespace MinimalOwinWebApiSelfHost.Controllers
{
    public class CompaniesController : ApiController
    {
        // Mock a data store:
        private static List<Company> _Db = new List<Company>
            {
                new Company { Id = 1, Name = "Microsoft" },
                new Company { Id = 2, Name = "Google" },
                new Company { Id = 3, Name = "Apple" }
            };


        public IEnumerable<Company> Get()
        {
            return _Db;
        }


        public Company Get(int id)
        {
            var company = _Db.FirstOrDefault(c => c.Id == id);
            if(company == null)
            {
                throw new HttpResponseException(
                    System.Net.HttpStatusCode.NotFound);
            }
            return company;
        }


        public IHttpActionResult Post(Company company)
        {
            if(company == null)
            {
                return BadRequest("Argument Null");
            }
            var companyExists = _Db.Any(c => c.Id == company.Id);

            if(companyExists)
            {
                return BadRequest("Exists");
            }

            _Db.Add(company);
            return Ok();
        }


        public IHttpActionResult Put(Company company)
        {
            if (company == null)
            {
                return BadRequest("Argument Null");
            }
            var existing = _Db.FirstOrDefault(c => c.Id == company.Id);

            if (existing == null)
            {
                return NotFound();
            }

            existing.Name = company.Name;
            return Ok();
        }


        public IHttpActionResult Delete(int id)
        {
            var company = _Db.FirstOrDefault(c => c.Id == id);
            if (company == null)
            {
                return NotFound();
            }
            _Db.Remove(company);
            return Ok();
        }
    }
}

 

In the above code, for the moment, we are simply mocking out a data store using a List<Company>. Also, in a real controller we would probably implement async controller methods, but for now, this will do.

To complete the most basic functionality of our self-hosted Web Api application, all we need to do is set up the Main() method to start the server functionality provided by HttpListener. Add the following usings and code to the Program.cs file:

Start the Application in the Main() Method:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add reference to:
using Microsoft.Owin.Hosting;

namespace MinimalOwinWebApiSelfHost
{
    class Program
    {
        static void Main(string[] args)
        {
            // Specify the URI to use for the local host:
            string baseUri = "http://localhost:8080";

            Console.WriteLine("Starting web Server...");
            WebApp.Start<Startup>(baseUri);
            Console.WriteLine("Server running at {0} - press Enter to quit. ", baseUri);
            Console.ReadLine();
        }
    }
}

 

Most of the structure above should look vaguely familiar, if you have worked with a Web Api or MVC project before.

Now all we need is a suitable client application to consume our self-hosted Web Api.

Create a Basic Web Api Client Application

We will create a simple Console application to use as a client in consuming our Web Api. Create a new Console application, and then add the Microsoft.AspNet.WebApi.Client library from Nuget:

Add the Web Api 2.2 Client Library from Nuget:
PM> Install-Package Microsoft.AspNet.WebApi.Client -Pre

 

Now, add a class named CompanyClient and add the following using statements and code:

Define the CompanyClient Class in the Web Api Client Application:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using System.Net.Http;

namespace MinimalOwinWebApiClient
{
    public class CompanyClient
    {
        string _hostUri;

        public CompanyClient(string hostUri)
        {
            _hostUri = hostUri;
        }


        public HttpClient CreateClient()
        {
            var client = new HttpClient();
            client.BaseAddress = new Uri(new Uri(_hostUri), "api/companies/");
            return client;
        }


        public IEnumerable<Company> GetCompanies()
        {
            HttpResponseMessage response;
            using (var client = CreateClient())
            {
                response = client.GetAsync(client.BaseAddress).Result;
            }
            var result = response.Content.ReadAsAsync<IEnumerable<Company>>().Result;
            return result;
        }


        public Company GetCompany(int id)
        {
            HttpResponseMessage response;
            using (var client = CreateClient())
            {
                response = client.GetAsync(
                	new Uri(client.BaseAddress, id.ToString())).Result;
            }
            var result = response.Content.ReadAsAsync<Company>().Result;
            return result;
        }


        public System.Net.HttpStatusCode AddCompany(Company company)
        {
            HttpResponseMessage response;
            using (var client = CreateClient())
            {
                response = client.PostAsJsonAsync(client.BaseAddress, company).Result;
            }
            return response.StatusCode;
        }


        public System.Net.HttpStatusCode UpdateCompany(Company company)
        {
            HttpResponseMessage response;
            using (var client = CreateClient())
            {
                response = client.PutAsJsonAsync(client.BaseAddress, company).Result;
            }
            return response.StatusCode;
        }


        public System.Net.HttpStatusCode DeleteCompany(int id)
        {
            HttpResponseMessage response;
            using (var client = CreateClient())
            {
                response = client.DeleteAsync(
                	new Uri(client.BaseAddress, id.ToString())).Result;
            }
            return response.StatusCode;
        }
    }
}

 

We’ve written (rather hastily, I might add) a crude but simple client class which will exercise the basic API methods we have defined on our Web Api application. We’re working against a mock data set here, so we take some liberties with Id’s and such in order to run and re-run the client application without running into key collisions.

We see in the above that we created a convenience/factory method to provide an instance of HttpClient as needed, pre-configured with a base Uri matching the route for the CompaniesController in our Web Api. From there, we simply define a local method corresponding to each API method, which we can use in our console application.
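Note that for the client code above to compile, the client project needs its own Company class as well; it is a separate project, and does not reference the Web Api assembly. A minimal DTO mirroring the server-side model will do, since the JSON serializer matches on property names:

```csharp
namespace MinimalOwinWebApiClient
{
    // Client-side copy of the Company model exposed by the Web Api:
    public class Company
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}
```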

We can get this thing into running order by adding the following code to the Program.cs file of the client application:

The Program.cs File for the API Client Application:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using System.Net.Http;


namespace MinimalOwinWebApiClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Read all the companies...");
            var companyClient = new CompanyClient("http://localhost:8080");
            var companies = companyClient.GetCompanies();
            WriteCompaniesList(companies);

            int nextId  = (from c in companies select c.Id).Max() + 1;

            Console.WriteLine("Add a new company...");
            var result = companyClient.AddCompany(
            	new Company 
            	{ 
            		Id = nextId, 
            		Name = string.Format("New Company #{0}", nextId) 
        		});
            WriteStatusCodeResult(result);

            Console.WriteLine("Updated List after Add:");
            companies = companyClient.GetCompanies();
            WriteCompaniesList(companies);

            Console.WriteLine("Update a company...");
            var updateMe = companyClient.GetCompany(nextId);
            updateMe.Name = string.Format("Updated company #{0}", updateMe.Id);
            result = companyClient.UpdateCompany(updateMe);
            WriteStatusCodeResult(result);

            Console.WriteLine("Updated List after Update:");
            companies = companyClient.GetCompanies();
            WriteCompaniesList(companies);

            Console.WriteLine("Delete a company...");
            result = companyClient.DeleteCompany(nextId -1);
            WriteStatusCodeResult(result);

            Console.WriteLine("Updated List after Delete:");
            companies = companyClient.GetCompanies();
            WriteCompaniesList(companies);

            Console.Read();
        }


        static void WriteCompaniesList(IEnumerable<Company> companies)
        {
            foreach(var company in companies)
            {
                Console.WriteLine("Id: {0} Name: {1}", company.Id, company.Name);
            }
            Console.WriteLine("");
        }


        static void WriteStatusCodeResult(System.Net.HttpStatusCode statusCode)
        {
            if(statusCode == System.Net.HttpStatusCode.OK)
            {
                Console.WriteLine("Operation Succeeded - status code {0}", statusCode);
            }
            else
            {
                Console.WriteLine("Operation Failed - status code {0}", statusCode);
            }
            Console.WriteLine("");
        }
    }
}

 

Now, if we run the Self-Hosted Web Api, we should see the following console output after it has started up:

Console Output from the Self-Hosted Web Api Startup:

console-output-web-api-startup

And then, when we run our client application, we should see the following:

Console Output from the Web Api Client Application:

console-output-client-startup

We see just about what we expect, given the code we have written. We query our Web Api for a list of companies. We then add a new company, and refresh the list. Then we update the company we just added, and review the list yet again. Finally, we remove the company just before the new one in the list, and review the list one last time.

Adding a Database and Entity Framework to the Self-Hosted Web Api

So far so good. However, a Web Api (even a small, self-hosted one) is of little use without some mechanism to persist and retrieve data. We can add a database, and use Entity Framework in our self-hosted Web Api.

Since we are self-hosting, we may (depending upon the needs of our application) want to use a local, in-process database (as opposed to a client/server solution) to keep our Web Api completely self-contained. Ordinarily I would reach for SQLite here, but to keep things simple we will use SQL CE. There is an Entity Framework provider for SQLite; however, it does not play too nicely with EF Code-First.

You can use SQLite with Entity Framework if you don’t mind creating your database manually (or employing some work-arounds to get things working with code first), but for our purposes, SQL CE will do.

We don’t HAVE to use a local database, of course. Depending upon your application, you may very well want to connect to SQL Server, or some other external database. If so, most of the following will work just as well if you pull down the standard Entity Framework package and work against SQL Server.

To add a SQL Server Compact Edition database, we can simply go to Nuget again, and pull in the EntityFramework.SqlServerCompact Nuget package:

Add the Entity Framework SQL CE Nuget Package to the Web Api Application:
PM> Install-Package EntityFramework.SqlServerCompact

 

With that done, let’s do a little housekeeping in order to pave the way for our new database.

Add an ApplicationDbContext and Initializer for Entity Framework

First, we need to add a data context class, along with a database initializer we can call when the application runs to apply any changes. For this particular case, we will set things up so that the database is recreated and re-seeded with data each time:

If we did not want to drop and re-create the database each time, we would derive from DropCreateDatabaseIfModelChanges instead of DropCreateDatabaseAlways.

Add an ApplicationDbContext and Initializer Classes to the Models Folder:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add using:
using System.Data.Entity;

namespace MinimalOwinWebApiSelfHost.Models
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext() : base("MyDatabase")
        {

        }

        public IDbSet<Company> Companies { get; set; }
    }


    public class ApplicationDbInitializer : DropCreateDatabaseAlways<ApplicationDbContext>
    {
        protected override void Seed(ApplicationDbContext context)
        {
            base.Seed(context);
            context.Companies.Add(new Company { Name = "Microsoft" });
            context.Companies.Add(new Company { Name = "Google" });
            context.Companies.Add(new Company { Name = "Apple" });
        }
    }
}
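For reference, an initializer that preserves data between runs would look nearly identical; only the base class changes:

```csharp
// Drops and re-creates the database only when the model changes:
public class ApplicationDbInitializer
    : DropCreateDatabaseIfModelChanges<ApplicationDbContext>
{
    protected override void Seed(ApplicationDbContext context)
    {
        base.Seed(context);
        // Same seed logic as above; runs only after a re-create...
    }
}
```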

 

Now we need to set things up so that the database initializer runs each time the application starts (at least, during “development”).

Update the Program.cs file as follows. Note you need to add a reference to System.Data.Entity as well as your Models namespace in your using statements:

Update Program.cs to Run the Database Initializer:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;

// Add reference to:
using System.Data.Entity;
using MinimalOwinWebApiSelfHost.Models;

namespace MinimalOwinWebApiSelfHost
{
    class Program
    {
        static void Main(string[] args)
        {
            // Set up and seed the database:
            Console.WriteLine("Initializing and seeding database...");
            Database.SetInitializer(new ApplicationDbInitializer());
            var db = new ApplicationDbContext();
            int count = db.Companies.Count();
            Console.WriteLine("Initializing and seeding database with {0} company records...", count);

            // Specify the URI to use for the local host:
            string baseUri = "http://localhost:8080";

            Console.WriteLine("Starting web Server...");
            WebApp.Start<Startup>(baseUri);
            Console.WriteLine("Server running at {0} - press Enter to quit. ", baseUri);
            Console.ReadLine();
        }
    }
}

 

Last, let’s add a [Key] attribute to the Id in our Company class, so that EF will know we want Id to be an auto-incrementing int key. Note that you need to add a reference to System.ComponentModel.DataAnnotations in your using statements:

Update the Company Class with a [Key] Attribute:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add using:
using System.ComponentModel.DataAnnotations;

namespace MinimalOwinWebApiSelfHost.Models
{
    public class Company
    {
        // Add Key Attribute:
        [Key]
        public int Id { get; set; }
        public string Name { get; set; }
    }
}

 

Update the Controller to Consume the Database and Use Async Methods

Now we need to make some changes to our CompaniesController. Previously, we were working with a list as a mock datastore. Now let’s update our controller methods to work with an actual database. Also, we will now use async methods.

Note that we need to add a reference to System.Data.Entity in our using statements.

Update Controller Methods to Consume Database and Use Async/Await:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
using System.Net.Http;
using MinimalOwinWebApiSelfHost.Models;

// Add these usings:
using System.Data.Entity;

namespace MinimalOwinWebApiSelfHost.Controllers
{
    public class CompaniesController : ApiController
    {
        ApplicationDbContext _Db = new ApplicationDbContext();

        public IEnumerable<Company> Get()
        {
            return _Db.Companies;
        }


        public async Task<Company> Get(int id)
        {
            var company = await _Db.Companies.FirstOrDefaultAsync(c => c.Id == id);
            if (company == null)
            {
                throw new HttpResponseException(
                    System.Net.HttpStatusCode.NotFound);
            }
            return company;
        }


        public async Task<IHttpActionResult> Post(Company company)
        {
            if (company == null)
            {
                return BadRequest("Argument Null");
            }
            var companyExists = await _Db.Companies.AnyAsync(c => c.Id == company.Id);

            if (companyExists)
            {
                return BadRequest("Exists");
            }

            _Db.Companies.Add(company);
            await _Db.SaveChangesAsync();
            return Ok();
        }


        public async Task<IHttpActionResult> Put(Company company)
        {
            if (company == null)
            {
                return BadRequest("Argument Null");
            }
            var existing = await _Db.Companies.FirstOrDefaultAsync(c => c.Id == company.Id);

            if (existing == null)
            {
                return NotFound();
            }

            existing.Name = company.Name;
            await _Db.SaveChangesAsync();
            return Ok();
        }


        public async Task<IHttpActionResult> Delete(int id)
        {
            var company = await _Db.Companies.FirstOrDefaultAsync(c => c.Id == id);
            if (company == null)
            {
                return NotFound();
            }
            _Db.Companies.Remove(company);
            await _Db.SaveChangesAsync();
            return Ok();
        }
    }
}

 

Last, we need to make a couple minor changes to our client application, since we are now working with a database which will insert auto-incrementing integer Id’s.

Update Api Client Application

We only need to change a single line here, where we previously provided a new Id value when adding a new company. Remove the Id assignment as follows:

Don’t Pass a Value for the new Id when Adding a Record:
Console.WriteLine("Add a new company...");
var result = companyClient.AddCompany(new Company 
    { 
        Name = string.Format("New Company #{0}", nextId) 
    });
WriteStatusCodeResult(result);

 

Now all we are doing is using the next Id as part of a hacked-together naming scheme (and this is NOT a good way to get hold of the next Id from your database, either . . .).

Running the Self-Hosted Web Api with the Database

If we have done everything correctly, we can spin up the Web Api application, and then run the Client application, and see what happens. If all went well, our console output should be basically the same as before:

Console Output from Starting the Web Api Application:

console-output-web-api-startup-with-database

Likewise, when we run the client application, our console output should be essentially the same as before, except this time the Web Api is fetching and saving to the SQL CE database instead of an in-memory list:

Console Output from the Web Api Client Application at Startup:

console-output-client-startup-with-database

Next Steps

In this post, we’ve seen how to assemble a very simple, and minimal ASP.NET Web Api application in a self-hosted scenario, without IIS, and without taking a dependency on the heavy weight System.Web library. We took advantage of the OWIN/Katana middleware pipeline, and we saw how to “hook” the Web Api components into the host/server interaction.

Next, we will investigate how we can apply these same concepts to build out a minimal footprint Web Api while still hosting in an IIS environment, and we will see how to bring ASP.NET Identity in to add some authentication and authorization functionality to the picture.

Next: ASP.NET Web Api: Understanding OWIN/Katana Authentication/Authorization Part I: Concepts

Additional Resources and Items of Interest

[Dev Tip] ASP.NET Web Api: Understanding OWIN/Katana Authentication/Authorization Part I: Concepts

Recently we looked at the fundamentals of the OWIN/Katana middleware pipeline, and we then applied what we learned and built out a minimal, OWIN-based, self-hosted Web Api. In doing so, we managed to avoid the heavy weight of the System.Web library and IIS, and we ended up with a pretty lightweight application. However, all of the concepts we have discussed remain valid no matter the hosting environment.

But what if we want to add some basic authentication to such a minimal project?

Image by Chad Miller  | Some Rights Reserved

Once again, we are going to see if we can’t apply what we’ve learned, and pull a very small Authentication / Authorization component into our minimal Web Api application. We’ll start by implementing a basic authentication/authorization model without using the components provided by the ASP.NET Identity framework.

Identity is fully compatible with the OWIN Authorization model, and when used in this manner, represents a very useful, ready-to go concrete implementation. But we can perhaps better understand the structure of OWIN authorization, and application security in general, if we start with simple concepts, and work our way up to concrete implementations and additional frameworks.

From the Ground Up

In this series of posts we will start with concepts, and slowly build from there.

  • Part I (this post) – We will examine the basic OAuth Resource Owner Flow model for authentication, and assemble the most basic components we need to implement authentication using this model. We will not be concerning ourselves with the cryptographic requirements of properly hashing passwords, or persisting user information to a database. We will also not be using Identity, instead implementing security using the basic components available in the Microsoft.Owin libraries.
  • Part II – We will mock up some basic classes needed to model our user data, and a persistence model, to see how storage of user data and other elements works at a fundamental level.
  • Part III – We will replace our mock objects with Identity 2.0 components to provide the crypto and security features (because rolling your own crypto is not a good idea).

As with our previous posts, the objective here is as much about building an understanding of how authentication in general, and Identity 2.0 in particular, actually fits into the structure of an OWIN-based application as it is about simply “how to do it.”

With that in mind, we will take this as far as we reasonably can using only the OWIN/Katana authorization components and simplified examples. Once we have seen the underlying structure for authentication and authorization in an OWIN-based Web Api application, THEN we will bring Identity 2.0 in to provide the concrete implementation.

Source Code for Examples

We are building up a project over a series of posts here. In order that the source for each post makes sense, I am setting up branches that illustrate each concept:

On GitHub, the branches of the Web Api repo so far look like this:

The code for the API client application is in a different repo, and the branches look like this:

Application Security is Hard – Don’t Roll Your Own!

Implementing effective application security is a non-trivial exercise. Behind the simple-looking framework APIs we use, such as Identity 2.0 (or any other membership/auth library), lie a few decades' worth of development by the best and brightest minds in the industry.

Throughout the examples we will be looking at, you will see areas where we mock together some ridiculous methods of (for example) hashing or validating passwords. In reality, securely hashing passwords is a complex but solved problem. You should never attempt to write your own crypto or data protection schemes.

Even a simple authentication mechanism such as the one we will implement here brings some complexity to the project, because authentication itself is inherently complex. Behind the simple-seeming API provided by frameworks such as ASP.NET Identity lies some crypto and logic that is best left as it is unless you REALLY know what you're doing.

That said, understanding how the pieces fit, and where you can dig in and adapt existing authorization / authentication flows to the needs of your application, is important, and forms the primary objective of this series.

The OAuth Resource Owner Flow Authentication Model

One of the commonly used patterns for authentication in a web application is the OAuth Resource Owner Flow model. In fact, this is the model used in the Web Api Template project in Visual Studio. We are going to implement authentication using the Resource Owner Flow from “almost scratch” in our OWIN-based Web Api application.

The Resource Owner Flow posits four principal “actors” in an authentication scenario:

  • The Resource Owner – For example, a user, or perhaps another application.
  • The Client – Generally a client application being used by the resource owner to access the protected resource. In our case, the Client might be our Web Api Client application.
  • The Authorization Server – A server which accepts the Resource Owner's credentials (generally some form of identifier plus a secret, such as a User Name/Password combination) and returns an encoded or encrypted Access Token.
  • The Resource Server – The server on which the resource is located, and which protects the resource from unauthorized access unless valid authentication/authorization credentials are supplied with the request.
Overview of the Resource Owner Authentication Flow:

oath-resource-owner-flow

In the above, the Resource Owner presents a set of credentials to the Client. The Client then submits the credentials to the Authorization Server, and if the credentials can be properly validated by the Authorization Server, an encoded and/or encrypted Access Token is returned to the Client.

The Client then uses the Access Token to make requests to the Resource Server. The Resource Server has been configured to accept Access Tokens which originate at the Authorization Server, and to decode/decrypt those tokens to confirm the identity and authorization claims (if provided) of the Resource Owner.

All of this is predicated on the Resource Owner having been properly registered with the Authorization Server.

It should be noted here that the OAuth specification requires that any transaction involving transmission of passwords/credentials MUST be conducted using SSL/TLS (HTTPS).
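
Concretely, a Resource Owner Flow token request is simply a form-encoded POST of the grant type and the Resource Owner's credentials, and the reply is a small JSON document. The following is only a sketch – the host name and token value are placeholders – but the field names come from the OAuth specification:

An Example Token Request and Response (sketch):
POST /Token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=john%40example.com&password=password

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8

{"access_token":"<encoded token string>","token_type":"bearer","expires_in":1209600}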

Our implementation, and that of the VS Web Api project template, puts a slight twist on this by embedding the Authorization Server within the Resource Server:

The Embedded Authorization Server Variant of the Resource Owner Flow:

oath-embedded-resource-owner-flow

The Basics – OWIN, Katana, and Authentication

We can put together a very stripped down example to demonstrate how the pieces fit together, before we clutter things up with higher-level components and any additional database concerns.

To get started, you can pull down the source for the Self-hosted web api we built in the previous post. We’re going to pick up where we left off with that project, and add a basic authentication component.

Recall that we had assembled a fairly minimal Owin-Based Web Api, consisting of an OWIN Startup class, a simple Company model class, and a CompaniesController. The application itself is a console-based application, with a standard entry point in the Main() method of the Program class.

In that project, we had decided that since we were self-hosting the application, we would keep our data store in-process and use a local file-based data store. We opted to use SQL Server Compact Edition since it would readily work with Entity Framework and Code-First database generation. Therefore, we also added an ApplicationDbContext.

We can review our existing project components before we make any changes.

Starting Point – The Self-Hosted Web Api Project

First, we have our OWIN Startup class:

The OWIN Startup Class from the Minimal Self-Hosted Web Api Project:
// Add the following usings:
using Owin;
using System.Web.Http;

namespace MinimalOwinWebApiSelfHost
{
    public class Startup
    {
        // This method is required by Katana:
        public void Configuration(IAppBuilder app)
        {
            var webApiConfiguration = ConfigureWebApi();

            // Use the extension method provided by the WebApi.Owin library:
            app.UseWebApi(webApiConfiguration);
        }


        private HttpConfiguration ConfigureWebApi()
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                "DefaultApi",
                "api/{controller}/{id}",
                new { id = RouteParameter.Optional });
            return config;
        }
    }
}

Then, we had a simple Company model, suitably located in the Models folder in our project:

The Original Company Model Class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add using:
using System.ComponentModel.DataAnnotations;

namespace MinimalOwinWebApiSelfHost.Models
{
    public class Company
    {
        // Add Key Attribute:
        [Key]
        public int Id { get; set; }
        public string Name { get; set; }
    }
}

And our original CompaniesController class, again suitably located in the Controllers folder within our project:

The Original Companies Controller:
// Add Usings:
using System.Collections.Generic;
using System.Data.Entity;
using System.Threading.Tasks;
using System.Web.Http;
using MinimalOwinWebApiSelfHost.Models;

public class CompaniesController : ApiController
{
    ApplicationDbContext _Db = new ApplicationDbContext();


    public IEnumerable<Company> Get()
    {
        return _Db.Companies;
    }


    public async Task<Company> Get(int id)
    {
        var company = 
                await _Db.Companies.FirstOrDefaultAsync(c => c.Id == id);
        if (company == null)
        {
            throw new HttpResponseException(
                System.Net.HttpStatusCode.NotFound);
        }
        return company;
    }


    public async Task<IHttpActionResult> Post(Company company)
    {
        if (company == null)
        {
            return BadRequest("Argument Null");
        }
        var companyExists = 
                await _Db.Companies.AnyAsync(c => c.Id == company.Id);

        if (companyExists)
        {
            return BadRequest("Exists");
        }

        _Db.Companies.Add(company);
        await _Db.SaveChangesAsync();
        return Ok();
    }


    public async Task<IHttpActionResult> Put(Company company)
    {
        if (company == null)
        {
            return BadRequest("Argument Null");
        }
        var existing = 
                await _Db.Companies.FirstOrDefaultAsync(c => c.Id == company.Id);

        if (existing == null)
        {
            return NotFound();
        }

        existing.Name = company.Name;
        await _Db.SaveChangesAsync();
        return Ok();
    }


    public async Task<IHttpActionResult> Delete(int id)
    {
        var company = 
                await _Db.Companies.FirstOrDefaultAsync(c => c.Id == id);
        if (company == null)
        {
            return NotFound();
        }
        _Db.Companies.Remove(company);
        await _Db.SaveChangesAsync();
        return Ok();
    }
}

Also in the Models folder is our ApplicationDbContext.cs file, which actually contains the ApplicationDbContext itself, as well as a DbInitializer. For the moment, this derives from DropCreateDatabaseAlways, so that the database is blown away and re-seeded each time the application runs.

The Original ApplicationDbContext and DbInitializer:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add using:
using System.Data.Entity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

namespace MinimalOwinWebApiSelfHost.Models
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext()
            : base("MyDatabase")
        {

        }

        static ApplicationDbContext()
        {
            Database.SetInitializer(new ApplicationDbInitializer());
        }

        public IDbSet<Company> Companies { get; set; }
    }


    public class ApplicationDbInitializer 
        : DropCreateDatabaseAlways<ApplicationDbContext>
    {
        protected override void Seed(ApplicationDbContext context)
        {
            context.Companies.Add(new Company { Name = "Microsoft" });
            context.Companies.Add(new Company { Name = "Apple" });
            context.Companies.Add(new Company { Name = "Google" });
            context.SaveChanges();
        }
    }
}

I actually changed the code for the original ApplicationDbContext since the previous post. I have added a static constructor which registers the Database Initializer the first time the context type is used. The initializer itself will then run the first time we hit the database.

This is a much cleaner solution than previously, where we were doing the database initialization in the Main() method of our Program class:

The Original Program.cs File (slightly modified):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add reference to:
using Microsoft.Owin.Hosting;
using System.Data.Entity;
using MinimalOwinWebApiSelfHost.Models;

namespace MinimalOwinWebApiSelfHost
{
    class Program
    {
        static void Main(string[] args)
        {
            // Specify the URI to use for the local host:
            string baseUri = "http://localhost:8080";

            Console.WriteLine("Starting web Server...");
            WebApp.Start<Startup>(baseUri);
            Console.WriteLine("Server running at {0} - press Enter to quit. ", baseUri);
            Console.ReadLine();
        }
    }
}

Now that we know where we left off, let’s see about implementing a very basic example of the OAuth Resource Owner Flow model for authentication.

The Microsoft.AspNet.Identity.Owin NuGet package includes everything we need to implement a basic example of the Resource Owner Flow, even though we won’t be dealing with Identity directly just yet.

Pull the Microsoft.AspNet.Identity.Owin package into our project:

Add Microsoft ASP.NET Identity Owin Nuget Package:
PM> Install-Package Microsoft.AspNet.Identity.Owin -Pre

Now we are ready to get started…

Adding The Embedded Authorization Server

Key to the Resource Owner Flow is the Authorization Server. In our case, the Authorization Server will actually be contained within our Web Api application, but will perform the same function as it would if it were hosted separately.

The Microsoft.Owin.Security.OAuth library defines a default implementation of IOAuthAuthorizationServerProvider, OAuthAuthorizationServerProvider, which allows us to derive a custom implementation for our application. You should recognize this if you have used the Visual Studio Web Api project templates before. Add a new folder to the project, OAuthServerProvider, and then add a class as follows:

Add the ApplicationOAuthServerProvider Class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;
using System.Security.Claims;
using MinimalOwinWebApiSelfHost.Models;

namespace MinimalOwinWebApiSelfHost.OAuthServerProvider
{
    public class ApplicationOAuthServerProvider 
        : OAuthAuthorizationServerProvider
    {
        public override async Task ValidateClientAuthentication(
            OAuthValidateClientAuthenticationContext context)
        {
            // This call is required...
            // but we're not using client authentication, so validate and move on...
            await Task.FromResult(context.Validated());
        }


        public override async Task GrantResourceOwnerCredentials(
            OAuthGrantResourceOwnerCredentialsContext context)
        {
            // DEMO ONLY: Pretend we are doing some sort of REAL checking here:
            if (context.Password != "password")
            {
                context.SetError(
                    "invalid_grant", "The user name or password is incorrect.");
                context.Rejected();
                return;
            }

            // Create or retrieve a ClaimsIdentity to represent the 
            // Authenticated user:
            ClaimsIdentity identity = 
                new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim("user_name", context.UserName));

            // Identity info will ultimately be encoded into an Access Token
            // as a result of this call:
            context.Validated(identity);
        }
    }
}

You can see we are overriding two of the methods available on OAuthAuthorizationServerProvider. The first, ValidateClientAuthentication(), is necessary even though in our case we are not validating the Client application (although we COULD, if we wanted to). We are simply calling Validated() on the validation context and moving on. In a more complex scenario, or one for which stronger security was required, we might authenticate the client as well as the resource owner.

The meat and potatoes of our authentication process occurs in the GrantResourceOwnerCredentials() method. For this part of our example, we’re keeping things simple. We have hacked together an authentication process which basically compares the password passed in with the hard-coded string value “password.” If this check fails, an error is set, and authentication fails.

In reality, of course, we would (and WILL, shortly) implement a more complex check of the user’s credentials. For now though, this will do, without distracting us from the overall structure of things.

If the credentials check succeeds, an instance of ClaimsIdentity is created to represent the user data, including any Claims the user should have. For now, all we are doing is adding the user’s name as the single claim, and then calling Validated() on the GrantResourceOwnerCredentials context.

The call to Validated() ultimately results in the OWIN middleware encoding the ClaimsIdentity data into an Access Token. How this happens, in the context of the Microsoft.Owin implementation, is complex and beyond the scope of this article. If you want to dig deeper on this, grab a copy of Telerik’s fine tool JustDecompile. Suffice it to say that the ClaimsIdentity information is encrypted with a private key (generally, but not always, the Machine Key of the machine on which the server is running). Once encrypted, the access token is added to the body of the outgoing HTTP response.

Configuring OWIN Authentication and Adding to the Middleware Pipeline

Now that we have our actual Authorization Server in place, let’s configure our OWIN Startup class to authenticate incoming requests.

We will add a new method, ConfigureAuth() to our Startup class. Check to make sure you have added the following usings and code to Startup:

Add a ConfigureAuth() Method to the OWIN Startup Class:
using System;

// Add the following usings:
using Owin;
using System.Web.Http;
using MinimalOwinWebApiSelfHost.Models;
using MinimalOwinWebApiSelfHost.OAuthServerProvider;
using Microsoft.Owin.Security.OAuth;
using Microsoft.Owin;

namespace MinimalOwinWebApiSelfHost
{
    public class Startup
    {
        // This method is required by Katana:
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
            var webApiConfiguration = ConfigureWebApi();
            app.UseWebApi(webApiConfiguration);
        }


        private void ConfigureAuth(IAppBuilder app)
        {
            var OAuthOptions = new OAuthAuthorizationServerOptions
            {
                TokenEndpointPath = new PathString("/Token"),
                Provider = new ApplicationOAuthServerProvider(),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),

                // Only do this for demo!!
                AllowInsecureHttp = true
            };
            app.UseOAuthAuthorizationServer(OAuthOptions);
            app.UseOAuthBearerAuthentication(
                    new OAuthBearerAuthenticationOptions());
        }


        private HttpConfiguration ConfigureWebApi()
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                "DefaultApi",
                "api/{controller}/{id}",
                new { id = RouteParameter.Optional });
            return config;
        }
    }
}

There’s a lot going on in the ConfigureAuth() method above.

First, we initialize an instance of OAuthAuthorizationServerOptions. As part of the initialization, we see that we set the token endpoint, as well as assign a new instance of our ApplicationOAuthServerProvider class to the Provider property of the options object.

We set an expiry for any tokens issued, and then we explicitly configure the Authorization Server to allow insecure HTTP connections. A note on this last point – this is strictly for demo purposes. In the wild, you would definitely want to connect to the authorization server using a secure SSL/TLS protocol (HTTPS), since otherwise you are transporting user credentials in the clear.

Once our authorization server options are configured, we see the standard extension methods commonly used to add middleware to IAppBuilder. We pass our server options in with UseOAuthAuthorizationServer(), and then we indicate that we want to accept Bearer Tokens with UseOAuthBearerAuthentication(). In this case, we are passing the default implementation of OAuthBearerAuthenticationOptions, although we could derive from that and customize it if we needed to.

The provider is assigned to the options object, which specifies the other configuration items, and which is then passed into the middleware pipeline.
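
With the middleware wired up, we can exercise the token endpoint directly, without any client application at all. Assuming the self-hosted server is running at http://localhost:8080 (and remembering that AllowInsecureHttp is set for demo purposes only), a sketch of such a request using curl:

Requesting a Token with curl (sketch):
curl -d "grant_type=password&username=john@example.com&password=password" http://localhost:8080/Token

A successful call returns a JSON body containing the access_token value; any other password produces the invalid_grant error we set in ApplicationOAuthServerProvider.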

Authenticating the Client: Retrieve an Access Token from the Authorization Server

Again, from the previous post, we had put together a crude but effective API client application to exercise our API.

For this post, we are going to basically re-write the client application.

First, we will add a new class, the apiClientProvider class. This class will be responsible for submitting our credentials to our Web Api and obtaining a Dictionary<string, string> containing the deserialized response body, which includes the access token, and additional information about the authentication process:

The apiClientProvider Class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using System.Net.Http;

// Add for Identity/Token Deserialization:
using Newtonsoft.Json;

namespace MinimalOwinWebApiClient
{
    public class apiClientProvider
    {
        string _hostUri;
        public string AccessToken { get; private set; }

        public apiClientProvider(string hostUri)
        {
            _hostUri = hostUri;
        }


        public async Task<Dictionary<string, string>> GetTokenDictionary(
            string userName, string password)
        {
            HttpResponseMessage response;
            var pairs = new List<KeyValuePair<string, string>>
                {
                    new KeyValuePair<string, string>( "grant_type", "password" ), 
                    new KeyValuePair<string, string>( "username", userName ), 
                    new KeyValuePair<string, string> ( "password", password )
                };
            var content = new FormUrlEncodedContent(pairs);

            using (var client = new HttpClient())
            {
                var tokenEndpoint = new Uri(new Uri(_hostUri), "Token");
                response =  await client.PostAsync(tokenEndpoint, content);
            }

            var responseContent = await response.Content.ReadAsStringAsync();
            if (!response.IsSuccessStatusCode)
            {
                throw new Exception(string.Format("Error: {0}", responseContent));
            }

            return GetTokenDictionary(responseContent);
        }


        private Dictionary<string, string> GetTokenDictionary(
            string responseContent)
        {
            Dictionary<string, string> tokenDictionary =
                JsonConvert.DeserializeObject<Dictionary<string, string>>(
                responseContent);
            return tokenDictionary;
        }
    }
}

With that in place, we can re-implement the client Program class like so:

The Client Program Class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using System.Net.Http;

namespace MinimalOwinWebApiClient
{
    class Program
    {
        static void Main(string[] args)
        {
            // Wait for the async stuff to run...
            Run().Wait();

            // Then Write Done...
            Console.WriteLine("");
            Console.WriteLine("Done! Press the Enter key to Exit...");
            Console.ReadLine();
            return;
        }


        static async Task Run()
        {
            // Create an http client provider:
            string hostUriString = "http://localhost:8080";
            var provider = new apiClientProvider(hostUriString);
            string _accessToken;
            Dictionary<string, string> _tokenDictionary;

            try
            {
                // Pass in the credentials and retrieve a token dictionary:
                _tokenDictionary = await provider.GetTokenDictionary(
                        "john@example.com", "password");
                _accessToken = _tokenDictionary["access_token"];
            }
            catch (AggregateException ex)
            {
                // If it's an aggregate exception, an async error occurred:
                Console.WriteLine(ex.InnerExceptions[0].Message);
                Console.WriteLine("Press the Enter key to Exit...");
                Console.ReadLine();
                return;
            }
            catch (Exception ex)
            {
                // Something else happened:
                Console.WriteLine(ex.Message);
                Console.WriteLine("Press the Enter key to Exit...");
                Console.ReadLine();
                return;
            }

            // Write the contents of the dictionary:
            foreach(var kvp in _tokenDictionary)
            {
                Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
                Console.WriteLine("");
            }
        }
    }
}

Up to this point, we’ve ditched all the code that makes requests to the CompaniesController in our API, and we’re only looking at the code which authenticates us and retrieves the access token.

Note, we have included some very rudimentary exception handling here. In a real application we would probably want a little more info, and we would need to incorporate a more robust mechanism for handling HTTP errors and other things that might go wrong.

If we run our Web Api application, and then run our client application, we should see the following output:

Client Application Output after Authentication:

console-output-client-application-authentication

And we see that we have successfully retrieved an access token from our extra-simple auth server. But, what if we pass invalid credentials?

Change the password we are passing in from “password” to something else, say, “assword” (but mom, all I did was take the letter “p” out??!!):

Client Application after Invalid Authentication:

console-output-client-application-invalid-authentication

Appropriately, we get back an error indicating we have provided an invalid grant.
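
The error comes back in the response body as JSON built from the values we passed to SetError() in our server provider – something like the following sketch (a failed token request typically carries a 400, Bad Request, status code):

The Error Response Body (sketch):
{"error":"invalid_grant","error_description":"The user name or password is incorrect."}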

Now let’s implement the rest of our client, and try some calls into our API itself.

Implementing the API Client with Authenticated API Calls

Now, we’ll add an updated version of the CompanyClient class. In this case, we have made everything async. Also, we have updated the class itself, and all of the methods, to work with the new authentication requirement we have introduced in our API:

The Heavily Modified CompanyClient Class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add Usings:
using System.Net.Http;
using System.Net;
using System.Net.Http.Headers;

// Add for Identity/Token Deserialization:
using Newtonsoft.Json;


namespace MinimalOwinWebApiClient
{
    public class CompanyClient
    {
        string _accessToken;
        Uri _baseRequestUri;
        public CompanyClient(Uri baseUri, string accessToken)
        {
            _accessToken = accessToken;
            _baseRequestUri = new Uri(baseUri, "api/companies/");
        }


        // Handy helper method to set the access token for each request:
        void SetClientAuthentication(HttpClient client)
        {
            client.DefaultRequestHeaders.Authorization 
                = new AuthenticationHeaderValue("Bearer", _accessToken); 
        }


        public async Task<IEnumerable<Company>> GetCompaniesAsync()
        {
            HttpResponseMessage response;
            using(var client = new HttpClient())
            {
                SetClientAuthentication(client);
                response = await client.GetAsync(_baseRequestUri);
            }
            return await response.Content.ReadAsAsync<IEnumerable<Company>>();
        }


        public async Task<Company> GetCompanyAsync(int id)
        {
            HttpResponseMessage response;
            using (var client = new HttpClient())
            {
                SetClientAuthentication(client);

                // Combine base address URI and ID to new URI
                // that looks like http://hosturl/api/companies/id
                response = await client.GetAsync(
                    new Uri(_baseRequestUri, id.ToString()));
            }
            var result = await response.Content.ReadAsAsync<Company>();
            return result;
        }


        public async Task<HttpStatusCode> AddCompanyAsync(Company company)
        {
            HttpResponseMessage response;
            using(var client = new HttpClient())
            {
                SetClientAuthentication(client);
                response = await client.PostAsJsonAsync(
                    _baseRequestUri, company);
            }
            return response.StatusCode;
        }


        public async Task<HttpStatusCode> UpdateCompanyAsync(Company company)
        {
            HttpResponseMessage response;
            using (var client = new HttpClient())
            {
                SetClientAuthentication(client);
                response = await client.PutAsJsonAsync(
                    _baseRequestUri, company);
            }
            return response.StatusCode;
        }


        public async Task<HttpStatusCode> DeleteCompanyAsync(int id)
        {
            HttpResponseMessage response;
            using (var client = new HttpClient())
            {
                SetClientAuthentication(client);

                // Combine base address URI and ID to new URI
                // that looks like http://hosturl/api/companies/id
                response = await client.DeleteAsync(
                    new Uri(_baseRequestUri, id.ToString()));
            }
            return response.StatusCode;
        }
    }
}

Now, we can update our Program class to call into CompanyClient to work with our API and output the results to the console. Basically, we’ll expand the Run() method, and exercise each of the methods we defined on CompaniesController asynchronously. We also added a pair of convenience methods for writing to the console, WriteCompaniesList() and WriteStatusCodeResult():

Update Program Class to Consume API and Write to Console:
static async Task Run()
{
    // Create an http client provider:
    string hostUriString = "http://localhost:8080";
    var provider = new apiClientProvider(hostUriString);
    string _accessToken;
    Dictionary<string, string> _tokenDictionary;

    try
    {
        // Pass in the credentials and retrieve a token dictionary:
        _tokenDictionary = 
            await provider.GetTokenDictionary("john@example.com", "password");
        _accessToken = _tokenDictionary["access_token"];

        // Write the contents of the dictionary:
        foreach (var kvp in _tokenDictionary)
        {
            Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
            Console.WriteLine("");
        }

        // Create a company client instance:
        var baseUri = new Uri(hostUriString);
        var companyClient = new CompanyClient(baseUri, _accessToken);

        // Read initial companies:
        Console.WriteLine("Read all the companies...");
        var companies = await companyClient.GetCompaniesAsync();
        WriteCompaniesList(companies);

        int nextId = (from c in companies select c.Id).Max() + 1;

        Console.WriteLine("Add a new company...");
        var result = await companyClient.AddCompanyAsync(
            new Company { Name = string.Format("New Company #{0}", nextId) });
        WriteStatusCodeResult(result);

        Console.WriteLine("Updated List after Add:");
        companies = await companyClient.GetCompaniesAsync();
        WriteCompaniesList(companies);

        Console.WriteLine("Update a company...");
        var updateMe = await companyClient.GetCompanyAsync(nextId);
        updateMe.Name = string.Format("Updated company #{0}", updateMe.Id);
        result = await companyClient.UpdateCompanyAsync(updateMe);
        WriteStatusCodeResult(result);

        Console.WriteLine("Updated List after Update:");
        companies = await companyClient.GetCompaniesAsync();
        WriteCompaniesList(companies);

        Console.WriteLine("Delete a company...");
        result = await companyClient.DeleteCompanyAsync(nextId - 1);
        WriteStatusCodeResult(result);

        Console.WriteLine("Updated List after Delete:");
        companies = await companyClient.GetCompaniesAsync();
        WriteCompaniesList(companies);
    }
    catch (AggregateException ex)
    {
        // If it's an aggregate exception, an async error occurred:
        Console.WriteLine(ex.InnerExceptions[0].Message);
        Console.WriteLine("Press the Enter key to Exit...");
        Console.ReadLine();
        return;
    }
    catch (Exception ex)
    {
        // Something else happened:
        Console.WriteLine(ex.Message);
        Console.WriteLine("Press the Enter key to Exit...");
        Console.ReadLine();
        return;
    }
}


static void WriteCompaniesList(IEnumerable<Company> companies)
{
    foreach (var company in companies)
    {
        Console.WriteLine("Id: {0} Name: {1}", company.Id, company.Name);
    }
    Console.WriteLine("");
}

static void WriteStatusCodeResult(System.Net.HttpStatusCode statusCode)
{
    if (statusCode == System.Net.HttpStatusCode.OK)
    {
        Console.WriteLine("Operation Succeeded - status code {0}", statusCode);
    }
    else
    {
        Console.WriteLine("Operation Failed - status code {0}", statusCode);
    }
    Console.WriteLine("");
}

Now that we are able to properly authenticate requests to our Web Api, we should be protected against unauthorized access, right?

Not so fast.

Protecting Resources With [Authorize] Attribute

If we fire up our Web Api Application now, open a browser, and type the URL routed to the GetCompanies() method on the CompaniesController, we find that we can still access the resource, even though the request from the browser contains no authentication token:

Accessing the Companies Resource from the Browser without Authentication:

access-unprotected-resource-from-browser

This is because we haven’t specified that the resources represented by CompaniesController should be protected. We can fix that easily, by decorating the CompaniesController class itself with an [Authorize] attribute:

Decorate CompaniesController with an [Authorize] Attribute:
[Authorize]
public class CompaniesController : ApiController
{
    // ... Code for Companies Controller ...
}

If we re-run the Web Api application now, and refresh our browser, we find:

Accessing the Protected Companies Resource from the Browser without Authentication:

access-protected-resource-from-browser

Since the browser request carried no access token, the request for the protected resource was denied.
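For reference, a bearer access token normally travels in the Authorization header of each request, which is exactly what our CompanyClient does with the token it receives. Assuming the default route for CompaniesController, an authenticated request from the client looks roughly like this (the token value is a made-up placeholder):

```
GET /api/companies HTTP/1.1
Host: localhost:8080
Authorization: Bearer AQAAANCMnd8BFdERjHoAwE...
```

The browser, of course, sends no such header, which is why the request above was rejected.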

Accessing Protected Resources with Authenticated Client Requests

Now, we should be able to run our API Client application (don’t forget to reset the password to "password"!). If we run our client application now, we should see console output resembling the following:

Console Output from Authenticated Request for Protected Resource:

console-output-client-application-with-authenticated-api-calls

With that, we have implemented a very basic example of authenticating a user with our embedded authorization server, retrieved an access token from our client application, and successfully requested access to protected resources on the resource server.

Adding Roles as Claims

A deep look at claims-based authorization is beyond the scope of this article. However, we can use the [Authorize] attribute to ensure that only users with a specific role claim can access a protected resource:

Change the [Authorize] attribute on the CompaniesController class to the following:

Add a specific Role to the [Authorize] Attribute on CompaniesController:
[Authorize(Roles="Admin")]
public class CompaniesController : ApiController
{
    // ... Code for Companies Controller ...
}

If we run our Web Api application now, and then run our Api Client application, we find we have a problem:

Running the Api Client when Role Authorization is Required:

api-error-unauthorized-with-role-required

Given we have added the Role restriction for access to the CompaniesController resource, this is what we expect to see. Now let’s see about authorizing access based on Role membership in our Web Api.

Add a Role Claim to Resource Owner Identity

At the simplest level, we can add a claim to the access token granted to the resource owner in the call to GrantResourceOwnerCredentials():

Add a Role Claim to the authenticated User in GrantResourceOwnerCredentials():
public override async Task GrantResourceOwnerCredentials(
    OAuthGrantResourceOwnerCredentialsContext context)
{
    // DEMO ONLY: Pretend we are doing some sort of REAL checking here:
    if (context.Password != "password")
    {
        context.SetError(
            "invalid_grant", "The user name or password is incorrect.");
        context.Rejected();
        return;
    }

    // Create or retrieve a ClaimsIdentity to represent the 
    // Authenticated user:
    ClaimsIdentity identity = 
        new ClaimsIdentity(context.Options.AuthenticationType);
    identity.AddClaim(new Claim("user_name", context.UserName));

    // Add a Role Claim:
    identity.AddClaim(new Claim(ClaimTypes.Role, "Admin"));

    // Identity info will ultimately be encoded into an Access Token
    // as a result of this call:
    context.Validated(identity);
}

With that simple change, we have now added a claim to the identity of the authenticated user. The claims will be encoded/encrypted as part of the access token. When the token is received by the resource server (in this case, our application), the decoded token will provide the identity of the authenticated user, as well as any additional claims, including the fact that the user is a member of the “Admin” role.
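To see concretely how the role claim drives authorization, here is a minimal, framework-free sketch using only System.Security.Claims. The "Bearer" string stands in for context.Options.AuthenticationType, and the IsInRole() check mirrors what [Authorize(Roles="Admin")] performs against the identity decoded from the token:

```csharp
using System;
using System.Security.Claims;

class ClaimsSketch
{
    static void Main()
    {
        // Build an identity the same way GrantResourceOwnerCredentials() does;
        // "Bearer" stands in for context.Options.AuthenticationType:
        var identity = new ClaimsIdentity("Bearer");
        identity.AddClaim(new Claim("user_name", "john@example.com"));
        identity.AddClaim(new Claim(ClaimTypes.Role, "Admin"));

        // On the resource server, [Authorize(Roles="Admin")] ultimately
        // performs an IsInRole() check against the ClaimsPrincipal:
        var principal = new ClaimsPrincipal(identity);
        Console.WriteLine(principal.IsInRole("Admin")); // True
        Console.WriteLine(principal.IsInRole("User"));  // False
    }
}
```

The default RoleClaimType on ClaimsIdentity is ClaimTypes.Role, which is why adding the claim with that type is enough for the role check to succeed.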

If we run both applications now, the console output from our Api Client application is what we would expect:

Console Output from Client with Authenticated User with Proper Admin Role Claim:

api-successful-access-with-role-required

We have once again successfully accessed a protected resource. Access to the CompaniesController is now restricted to authenticated users who also present a claim indicating they are a member of the Admin role.

What Next?

So far, we’ve seen in a very basic way how the Resource Owner Flow is implemented in the context of the OWIN/Katana pipeline. We have not yet examined where we might store our user information, how we get it there, or how our authorization framework might access that data.

In the next post, we’ll look at persisting authorization information, and how we access it.

Additional Resources and Items of Interest

Some very helpful articles I have referred to in learning this stuff:

REF: http://typecastexception.com/post/2015/01/19/ASPNET-Web-Api-Understanding-OWINKatana-AuthenticationAuthorization-Part-I-Concepts.aspx

[Dev Tip] ASP.NET Web Api: Understanding OWIN/Katana Authentication/Authorization Part II: Models and Persistence

In the previous post in this series we learned how the most basic authentication and authorization elements fit together in an OWIN-based Web Api application. We have seen how to authenticate a user using an Authentication Server embedded within our application, and how to add an elementary claim to use with the [Authorize] attribute.

To this point, we have been avoiding using the ready-built Identity framework, and instead we have been focusing on understanding how these pieces interrelate. We will continue this approach (for now) here, by adding some concrete authorization models to our application, and a persistence layer to store important user data.

Image by clement127  |  Some Rights Reserved

Once again, we will be doing most of this “from scratch,” in a pretty minimal fashion. I want to explore the relationships between project components without too many distractions. So we’re not attempting to design the optimal auth system here or demonstrate the latest best practices. But hopefully we will come away with a better understanding of how a fully developed authentication/authorization system such as Identity works in the context of our application. Understanding THAT empowers us to utilize tools like Identity more effectively.

From the Ground Up

In this series of posts we started with concepts, and are building slowly from there.

  • Part I (last post) – We examined the basic OAuth Resource Owner Flow model for authentication, and assembled the most basic components we need to implement authentication using this model. We did not concern ourselves with the cryptographic requirements of properly hashing passwords, or with persisting user information to a database. We also did not use Identity, instead implementing security using the basic components available in the Microsoft.Owin libraries.
  • Part II (this post) – We will mock up some basic classes needed to model our user data, and a persistence model to see how storage of user data and other elements works at a fundamental level.
  • Part III – We will replace our mock objects with Identity 2.0 components to provide the crypto and security features (because rolling your own crypto is not a good idea).

Source Code for Examples

We are building up a project over a series of posts here. In order that the source for each post makes sense, I am setting up branches that illustrate the concepts for each post:

On Github, the branches of the Web Api repo so far look like this:

The code for the API client application is in a different repo, and the branches look like this:

  • Branch: Master – Always the most current, includes all changes
  • Branch: owin-auth – Added async methods, and token-based authentication calls to the Web Api application. This is where we left the code in the last post.

Adding Auth Models to the Minimal Web Api

We’ll be starting from where we left off in the last post. Recall that we had set up a basic embedded authorization server in our application which would process HTTP POST requests made by a client to the token endpoint, validate the user credentials/password received, and return an access token. From there, the client could submit the access token with subsequent requests to authenticate, and access whichever resources are available for the given identity and/or role.

If we review our existing code for the ApplicationOAuthServerProvider, we see in the GrantResourceOwnerCredentials() method that we are performing a mock credentials check. In order to keep things simple, we just checked to see if the password submitted matched the string literal “password” and moved on:

The Existing GrantResourceOwnerCredentials() Method:
public override async Task GrantResourceOwnerCredentials(
    OAuthGrantResourceOwnerCredentialsContext context)
{
    // DEMO ONLY: Pretend we are doing some sort of REAL checking here:
    if (context.Password != "password")
    {
        context.SetError(
            "invalid_grant", "The user name or password is incorrect.");
        context.Rejected();
        return;
    }

    // Create or retrieve a ClaimsIdentity to represent the 
    // Authenticated user:
    ClaimsIdentity identity = 
        new ClaimsIdentity(context.Options.AuthenticationType);
    identity.AddClaim(new Claim("user_name", context.UserName));
    identity.AddClaim(new Claim(ClaimTypes.Role, "Admin"));

    // Identity info will ultimately be encoded into an Access Token
    // as a result of this call:
    context.Validated(identity);
}

In reality, we would most likely check to see if there was a user in our backing store which matched whatever credentials were submitted, and then also check to see if the password submitted was valid. But not by checking against a plain text representation from our backing store!

In order to flesh out this method, we need to model our authorization objects, and we need to persist some user data in our database.

First, let’s add some basic models. Add a new code file to the Models folder in the project, and then add the following code:

The AuthModels.cs Code File:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Add usings:
using System.Data.Entity;
using System.ComponentModel.DataAnnotations;
using System.Security.Claims;

namespace MinimalOwinWebApiSelfHost.Models
{
    public class MyUser
    {
        public MyUser()
        {
            Id = Guid.NewGuid().ToString();
            Claims = new List<MyUserClaim>();
        }

        [Key]
        public string Id { get; set; }
        public string Email { get; set; }
        public string PasswordHash { get; set; }
        public ICollection<MyUserClaim> Claims { get; set; }
    }


    public class MyUserClaim
    {
        public MyUserClaim()
        {
            Id = Guid.NewGuid().ToString();
        }
        [Key]
        public string Id { get; set; }
        public string UserId { get; set; }
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }
    }


    public class MyPasswordHasher
    {
        public string CreateHash(string password)
        {
            // FOR DEMO ONLY! Use a standard method or 
            // crypto library to do this for real:
            char[] chars = password.ToArray();
            char[] hash = chars.Reverse().ToArray();
            return new string(hash);
        }
    }
}

Above, we see a few basic models. We expect to have a user representation, and we do, in the form of the MyUser class. While you may have been expecting to see a MyRole class, we have instead opted to carry on with the claims implementation we were using in our original project. Therefore, we have added a MyUserClaim class instead. We’ll discuss this further shortly.

Finally, we have that odd-looking MyPasswordHasher class. As you may have guessed from the comment in the code, we are really only going to mock a proper hashing mechanism here. As before, we’re going to keep things simple for our example. In reality, one would apply a proven crypto library and tried-and-true methods for properly hashing a password. Or, of course, use a library that handles all of this, like Identity.
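For contrast, here is a sketch of what a somewhat more realistic hasher might look like, using the .NET Rfc2898DeriveBytes class (PBKDF2) with a random per-password salt. The salt size and iteration count here are illustrative choices, not a vetted recommendation:

```csharp
using System;
using System.Security.Cryptography;

class Pbkdf2HasherSketch
{
    // Derive a key from the password with PBKDF2 and store it
    // alongside the salt so the hash can be verified later:
    public static string CreateHash(string password)
    {
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
            return Convert.ToBase64String(salt) + ":" +
                   Convert.ToBase64String(kdf.GetBytes(32));
    }

    // Re-derive the key with the stored salt and compare:
    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split(':');
        var salt = Convert.FromBase64String(parts[0]);
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
            return Convert.ToBase64String(kdf.GetBytes(32)) == parts[1];
    }

    static void Main()
    {
        var hash = CreateHash("JohnsPassword");
        Console.WriteLine(Verify("JohnsPassword", hash)); // True
        Console.WriteLine(Verify("WrongPassword", hash)); // False
    }
}
```

Note that because the salt is random, two hashes of the same password differ, which is exactly what our toy string-reversal "hasher" cannot do.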

Adding The Models to the ApplicationDbContext

Now that we have our auth-related entity models, we can add them to the existing ApplicationDbContext so that they can be modeled in the database, and we can access the data they represent from the context.

Recall that we set this particular example application up to use a local, file-based database (SQL CE); however, everything we are doing here would work just fine with SQL Server as well.

Add the Auth-Related Models to the ApplicationDbContext:

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext()
        : base("MyDatabase")
    {

    }


    static ApplicationDbContext()
    {
        Database.SetInitializer(new ApplicationDbInitializer());
    }


    public IDbSet<Company> Companies { get; set; }
    public IDbSet<MyUser> Users { get; set; }
    public IDbSet<MyUserClaim> Claims { get; set; }
}

Tying the Models Together – The User Store

For our simple model set, and to keep the concepts straightforward, we are going to implement a simple MyUserStore class, adding just enough functionality to get our application working and no more.

Add the following class (I added this to the AuthModels.cs file, but you can put it in its own file if you want):

The UserStore Class:
public class MyUserStore
{
    ApplicationDbContext _db;
    public MyUserStore(ApplicationDbContext context)
    {
        _db = context;
    }


    public async Task AddUserAsync(MyUser user, string password)
    {
        if (await UserExists(user))
        {
            throw new Exception(
                "A user with that Email address already exists");
        }
        var hasher = new MyPasswordHasher();
        user.PasswordHash = hasher.CreateHash(password).ToString();
        _db.Users.Add(user);
        await _db.SaveChangesAsync();
    }


    public async Task<MyUser> FindByEmailAsync(string email)
    {
        // Eagerly load the user's claims along with the user record:
        return await _db.Users
            .Include(u => u.Claims)
            .FirstOrDefaultAsync(u => u.Email == email);
    }


    public async Task<MyUser> FindByIdAsync(string userId)
    {
        return await _db.Users
            .FirstOrDefaultAsync(u => u.Id == userId);
    }


    public async Task<bool> UserExists(MyUser user)
    {
        return await _db.Users
            .AnyAsync(u => u.Id == user.Id || u.Email == user.Email);
    }


    public async Task AddClaimAsync(string UserId, MyUserClaim claim)
    {
        var user = await FindByIdAsync(UserId);
        if(user == null)
        {
            throw new Exception("User does not exist");
        }
        user.Claims.Add(claim);
        await _db.SaveChangesAsync();
    }


    public bool PasswordIsValid(MyUser user, string password)
    {
        var hasher = new MyPasswordHasher();
        var hash = hasher.CreateHash(password);
        return hash.Equals(user.PasswordHash);
    }
}

In the code above, we have assembled a few basic methods to deal with persisting and retrieving User information. Note in the AddUserAsync() method, we perform some minimal validation (make sure a user with the same email address does not already exist). Also, see that we use our super-secret, super-secure MyPasswordHasher to hash, salt, re-hash, etc. our user password, and then we persist the hashed value (NEVER the clear-text password). In other words, at no point are we saving the user-submitted clear-text password to disk, anywhere.

Similarly, we provide a simple PasswordIsValid() method which again uses the MyPasswordHasher class to compare the hash of the password submitted with that of a user record (which for now, would be submitted as an argument after being previously retrieved elsewhere in our code).

The MyUserStore class provides simplistic examples of how one might implement some of this. There is minimal validation and exception handling here. This class works well for our example, and to demonstrate the concepts we are dealing with, but is not likely how you would do this in a production application.

Initialize the Database with User Data

Now all we really need to do is update our ApplicationDbInitializer to seed the database with some initial user data. Recall, we had already set this up (in the same code file as the ApplicationDbContext) to seed our Company table with some starting data. Update the code as follows. You will also need to add System.Security.Claims to the using statements at the top of your code file:

Update ApplicationDbInitializer to Seed Application with Initial User Data:
public class ApplicationDbInitializer 
    : DropCreateDatabaseAlways<ApplicationDbContext>
{
    protected async override void Seed(ApplicationDbContext context)
    {
        context.Companies.Add(new Company { Name = "Microsoft" });
        context.Companies.Add(new Company { Name = "Apple" });
        context.Companies.Add(new Company { Name = "Google" });
        context.SaveChanges();

        // Set up two initial users with different role claims:
        var john = new MyUser { Email = "john@example.com" };
        var jimi = new MyUser { Email = "jimi@example.com" };

        john.Claims.Add(new MyUserClaim 
        { 
                ClaimType = ClaimTypes.Name, 
                UserId = john.Id, 
                ClaimValue = john.Email 
        });
        john.Claims.Add(new MyUserClaim 
        { 
                ClaimType = ClaimTypes.Role, 
                UserId = john.Id, 
                ClaimValue = "Admin" 
        });

        jimi.Claims.Add(new MyUserClaim 
        { 
            ClaimType = ClaimTypes.Name, 
            UserId = jimi.Id, 
            ClaimValue = jimi.Email 
        });
        jimi.Claims.Add(new MyUserClaim 
        { 
            ClaimType = ClaimTypes.Role, 
            UserId = jimi.Id, 
            ClaimValue = "User" 
        });

        var store = new MyUserStore(context);
        await store.AddUserAsync(john, "JohnsPassword");
        await store.AddUserAsync(jimi, "JimisPassword");
    }
}

As we see above, we have taken advantage of the methods exposed on our new MyUserStore class to add two users, along with appropriate claims, to the database.

Also recall we are deriving our initializer from DropCreateDatabaseAlways so that the database will be re-created and re-seeded each time we run the application.

Find the User and Authenticate the Token Request

All that’s left to do now is update our GrantResourceOwnerCredentials() method to avail itself of our new user entities and data to perform its function.

Validate and Authenticate a User in GrantResourceOwnerCredentials() Method:
public override async Task GrantResourceOwnerCredentials(
    OAuthGrantResourceOwnerCredentialsContext context)
{
    // Retrieve user from database:
    var store = new MyUserStore(new ApplicationDbContext());
    var user = await store.FindByEmailAsync(context.UserName);

    // Validate user/password:
    if(user == null || !store.PasswordIsValid(user, context.Password))
    {
        context.SetError(
            "invalid_grant", "The user name or password is incorrect.");
        context.Rejected();
        return;
    }

    var identity = new ClaimsIdentity(context.Options.AuthenticationType);
    foreach(var userClaim in user.Claims)
    {
        identity.AddClaim(new Claim(userClaim.ClaimType, userClaim.ClaimValue));
    }

    context.Validated(identity);
}

Here, we retrieve a user record from our store (if there is a record for the user credentials in the request), and then we create a new ClaimsIdentity for that user, much the same as before. This time, however, we also have a record of the various claims for this user, and we add those as well.

In this case, we really only have the user’s name, and the role(s) our application recognizes for the user, but we could implement a more complex claims model if we needed. For now, we will stick with user name and roles, because the default authorization scheme, using the [Authorize] attribute, is pre-configured to work with user names and roles. We will look at customizing this in a later post.

The Api Client Application

We can leave our Api Client application pretty much as-is at the moment. If you don’t have the client application set up, you can pull down the source for the project from the Github repo. Make sure to check out the branch owin-auth (not master!).

Recall that we had set up our application to request a token from our Api, and then make some Api calls to the CompaniesController:

Abbreviated Client Code Showing the Token Request:
static async Task Run()
{
    // Create an http client provider:
    string hostUriString = "http://localhost:8080";
    var provider = new apiClientProvider(hostUriString);
    string _accessToken;
    Dictionary<string, string> _tokenDictionary;

    try
    {
        // Pass in the credentials and retrieve a token dictionary:
        _tokenDictionary = await provider.GetTokenDictionary(
            "john@example.com", "JohnsPassword");
        _accessToken = _tokenDictionary["access_token"];

        // Write the contents of the dictionary:
        foreach (var kvp in _tokenDictionary)
        {
            Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
            Console.WriteLine("");
        }

        // Create a company client instance:
        var baseUri = new Uri(hostUriString);
        var companyClient = new CompanyClient(baseUri, _accessToken);

        // ... a bunch of code calling to API and writing to console...
    }
    catch (AggregateException ex)
    {
        // If it's an aggregate exception, an async error occurred:
        Console.WriteLine(ex.InnerExceptions[0].Message);
        Console.WriteLine("Press the Enter key to Exit...");
        Console.ReadLine();
        return;
    }
    catch (Exception ex)
    {
        // Something else happened:
        Console.WriteLine(ex.Message);
        Console.WriteLine("Press the Enter key to Exit...");
        Console.ReadLine();
        return;
    }
}

The only thing we have changed in the above code is the password we are passing in with the token request – we have changed it to match the password for the user record we created in our Seed() method.

Running the Application with an Authenticated User

If we run our Web Api application, and then run the client, everything should work swimmingly. The Web Api application spins up the same as it always has, and the client output should look familiar:

Console Output from Client Application:

client-with-authneticated-user

Everything looks the same as it did when we wrapped up the previous post, because we haven’t changed anything that affects how the client application does its job. We’ve only changed the internals of our Web Api so that the embedded authorization server now knows how to retrieve user data from our database in order to authenticate a user, and perform a basic authorization check against the roles available to that user.

Let’s see what happens when things go wrong.

Improper Authentication – Invalid Credentials

First, let’s see what happens if we try to request a token with the wrong password. In the client application, change the password we are using in our token request to something other than “JohnsPassword”:

Using Incorrect Password for Client Token Request:
// Pass in the credentials and retrieve a token dictionary:
_tokenDictionary = await provider.GetTokenDictionary(
    "john@example.com", "SomePassword");
_accessToken = _tokenDictionary["access_token"];

If we run the client again, we see all is not well:

Running the Client with Invalid Credentials:

client-with-invalid-password

In this case, we get back an “invalid_grant” error because the client could not properly authenticate with the credentials provided.
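Under the hood, the OWIN middleware reports this failure as an HTTP 400 response from the token endpoint, with a JSON body (in the standard OAuth 2.0 error format) built from the values we passed to context.SetError() in GrantResourceOwnerCredentials():

```json
{
  "error": "invalid_grant",
  "error_description": "The user name or password is incorrect."
}
```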

On the other hand, things look a little different if we request a token for a user who can be authenticated, but who is not authorized to access the resource requested.

Insufficient Authorization

Recall that in our Web Api application, we protected the CompaniesController resource using the [Authorize] attribute, and we restricted access to users in the role “Admin”:

The CompaniesController is Protected Using [Authorize]:
[Authorize(Roles="Admin")]
public class CompaniesController : ApiController
{

    // ... blah blah Controller Methods etc...

}

Also recall that we seeded two users in our database. The user “jimi” does not have a claim for the “Admin” role, but instead claims the “User” role. Let’s change the code in our client application to request an access token for “jimi” instead, and then see what happens.

Change Client Token Request for Alternate User:
// Pass in the credentials and retrieve a token dictionary:
_tokenDictionary = await provider.GetTokenDictionary(
    "jimi@example.com", "JimisPassword");
_accessToken = _tokenDictionary["access_token"];

Running the client application now produces a slightly different result:

Running the Client with Valid Credentials but Insufficient Authorization:

client-with-insufficient-authorization

Unlike previously, we did not receive an invalid grant error, because the user credentials were properly authenticated. However, the user does not possess the proper Role claim in our system to access the protected resource.

In reality, the default implementation of [Authorize] limits our ability to leverage claims to the fullest extent. [Authorize] recognizes claims for user names and roles. What if we want more granular control over our application permissions?

We’re not going to go into that in this post. However, keep this in mind, as leveraging claims, and customizing authorization using claims instead of simple roles, can become important for more complex applications which require fine-grained control of permissions.
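As a hint at what that can look like, claims other than names and roles can be queried directly on the principal. In this framework-free sketch, the "permission" claim type and its values are invented purely for illustration; a custom authorization filter would run a check like this instead of relying on IsInRole():

```csharp
using System;
using System.Security.Claims;

class PermissionClaimSketch
{
    static void Main()
    {
        var identity = new ClaimsIdentity("Bearer");
        identity.AddClaim(new Claim(ClaimTypes.Role, "User"));
        // A hypothetical fine-grained permission claim:
        identity.AddClaim(new Claim("permission", "companies.read"));

        // A custom authorization filter could test for a specific
        // permission claim rather than a coarse role:
        var principal = new ClaimsPrincipal(identity);
        Console.WriteLine(principal.HasClaim("permission", "companies.read"));   // True
        Console.WriteLine(principal.HasClaim("permission", "companies.delete")); // False
    }
}
```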

What Next?

In this post we created a “quick and dirty” implementation which performs some very basic authentication and authorization for our application.

In the real world, we would definitely tend to some critical details, such as proper crypto for hashing passwords. We would also probably want to beef up our design by applying some common patterns of abstraction. Notice that we have coded everything here directly against the implementation classes. Also, we have rather tightly coupled our logical processing to our persistence model.

Lastly, we have put in place only the most rudimentary validation and exception handling.

We could go down a long road exploring how to better separate our persistence mechanism from our authentication logic, and more effectively handling exceptions and errors. However, those details are often application-specific, and/or require a long, long post.

Instead, we could now take everything we have learned, and pull in some ready-made components which already provide all of this, and more.

If the work we have done so far has been beginning to look a little familiar, that is no accident.

In the next post, we will implement our own authentication and authorization using the Identity 2.1 Framework.

Additional Resources and Items of Interest

Articles by others I have found invaluable:

REF: http://typecastexception.com/post/2015/01/25/ASPNET-Web-Api-Understanding-OWINKatana-AuthenticationAuthorization-Part-II-Models-and-Persistence.aspx

[Discovery] 3D Animated GIFs


CompuServe’s animated GIF format is now 28 years old, and there is no sign that anyone will abandon it. GIF is used very widely everywhere despite its technical limitations and obsolescence: for example, GIF supports only 256 colors, and GIF files are very large. Still, perhaps thanks to its ease of use and hard-to-break habits, GIF will live long and live well.

GIF is used today in a great many fields, from simple illustrations to high-tech medical scanners that can export GIFs for easy visualization; even YouTube supports exporting GIFs. You can browse the GIF tag here to read many articles on how to create animated GIFs.

Today I would like to introduce the technique of splitting and wobbling frames to create a 3D effect with GIF images. This 3D effect is striking and will surprise you; if you have any fun 3D images of your own, please share them with everyone.


[Discovery] How Will HoloLens Help NASA Explore Mars?

OnSight_01.

Alongside introducing the unique features of the HoloLens headset, Microsoft also announced a partnership with NASA’s Jet Propulsion Laboratory (JPL) to put the device to work in space exploration. HoloLens can act as a bridge between robots, probes, and humans in planetary exploration by providing an augmented-reality view through a program called OnSight.

OnSight was developed specifically for HoloLens by NASA’s Ops Lab. It allows scientists to explore a virtual Mars built from data collected by the Curiosity rover. In addition, scientists in different locations can join the same remote exploration session, with each person appearing as a humanoid avatar while using HoloLens.

Ops Lab is responsible for developing robot and spacecraft control systems at JPL. That OnSight was built on HoloLens from the very start surprised more than a few people. In fact, Ops Lab began collaborating with Microsoft five years earlier, starting with Project Natal, which became Microsoft Kinect. Ops Lab project manager Jeff Norris and Kinect creator Alex Kipman discussed together how the technologies being developed by the Kinect team could be used to control robots and spacecraft more effectively.

Not long after, Kipman showed Norris an early version of what would become Windows Holographic and HoloLens. Norris says he knew then that it had real potential, and they began looking for ways to use the technology in space exploration. The collaboration produced OnSight, a program that extends the capabilities of the team of scientists running the Curiosity mission.

OnSight_02.

OnSight takes data and imagery from the Curiosity rover and uses HoloLens to turn a room into a simulation of the Martian surface. Scientists can step into this virtual space and move around with a sense of perspective and presence that no two-dimensional image can provide. The technology also improves the study of things like the shape and arrangement of geological features. HoloLens can even recognise the user's computer, masking it out so the user can move the mouse cursor seamlessly between the desktop environment and the virtual surface (pictured above).

OnSight_03.

Scientists on the Curiosity mission will interact with OnSight mainly through voice and gestures. Each scientist wearing a HoloLens appears as a human-shaped avatar, with a dashed line projecting from the avatar to show where they are looking, making collaboration between scientists easier. Because they can use their computers while wearing HoloLens, scientists can work with raw data in MSLIC, a program that provides the raw data sent back from Curiosity and is fully integrated with OnSight.

The Ops Lab team believes that the user's sense of presence in both real and virtual environments is a crucial tool for exploration. Norris says: "What a geologist is doing when they look at a landscape is trying to understand the story the surrounding environment is telling them. One chapter of that story is the shape of the environment – how the rocks were formed and how they arranged themselves into lines or curves. That information helps predict what is going on."

Norris says that a year earlier, the Ops Lab team ran a study in which 17 scientists were given standard images of the Martian landscape from the MSLIC library. They examined the images in two ways: as ordinary two-dimensional images, and as stereoscopic images through a head-mounted display. In both cases, the scientists were asked to draw a map of the terrain's shape and mark locations of interest. The results showed that estimated distances were twice as accurate with the head-mounted display, and estimated viewing angles were three times as accurate.

More surprisingly, Norris says the scientists achieved this markedly higher accuracy without any virtual- or augmented-reality training. Most of the scientists in the experiment admitted it was the first time they had used such a device.

OnSight_04.

While the Curiosity team has worked very well with two- and three-dimensional images of the red planet's surface, OnSight is the next step in helping them analyse data. OnSight offers a better solution than viewing 3D models on a screen, thanks to the body's own sense of space. As Norris explains: "When you walk through a space on Earth, your body knows where it is, and your eyes feed your brain the images you see from that position." This interaction between scene and person is the foundation for building a mental model of an environment, which is why the Ops Lab team is so excited about what it has achieved with OnSight.

Ops Lab has also developed an interactive solution using the Oculus Rift headset. However, the Rift has certain limitations for NASA, such as requiring a cable and keeping the user close to their physical surroundings. HoloLens overcomes these problems, so Ops Lab has focused on developing OnSight for HoloLens over the past year. Part of the development team, led by Norris, even moved to Redmond to live and work side by side with the HoloLens team.

The ultimate goal is to test OnSight in real operations by the end of this year. Just last week, scientists overseeing the Curiosity mission expressed their enthusiasm for OnSight and HoloLens. JPL geologist Fred Calef said OnSight not only delivers an experience akin to teleportation but also saves time. Meanwhile, researcher Katie Stack Morgan said the improved visual experience will help the team make more accurate decisions about the rover's operations.

That said, OnSight is not yet a finished product. Norris says: "There is a lot of work to do; it is not simply a matter of dropping a new capability into mission operations." Still, he is optimistic that OnSight and HoloLens will become a new tool in the space explorer's toolkit and open up new vistas for future missions, from Mars to the other planets of the solar system.

Source: The Verge

[Dev Tip] 50 Tips for Working with Unity (Best Practices)

About these tips

These tips are not all applicable to every project.

  • They are based on my experience with projects with small teams from 3 to 20 people.
  • There is a price for structure, re-usability, clarity, and so on — team size and project size determine whether that price should be paid.
  • Many tips are a matter of taste (there may be rivalling but equally good techniques for any tip listed here).
  • Some tips may fly in the face of conventional Unity development. For instance, using prefabs for specialisation instead of instances is very non-Unity-like, and the price is quite high (many times more prefabs than without it). Yet I have seen these tips pay off, even if they seem crazy.

Process

1. Avoid branching assets. There should always only ever be one version of any asset. If you absolutely have to branch a prefab, scene, or mesh, follow a process that makes it very clear which is the right version. The “wrong” branch should have a funky name, for example, use a double underscore prefix: __MainScene_Backup. Branching prefabs requires a specific process to make it safe (see under the section Prefabs).

2. Each team member should have a second copy of the project checked out for testing if you are using version control. After changes, this second copy, the clean copy, should be updated and tested. No-one should make any changes to their clean copies. This is especially useful to catch missing assets.

3. Consider using external level tools for level editing. Unity is not the perfect level editor. For example, we have used TuDee to build levels for a 3D tile-based game, where we could benefit from the tile-friendly tools (snapping to grid, and multiple-of-90-degrees rotation, 2D view, quick tile selection). Instantiating prefabs from an XML file is straightforward. See Guerrilla Tool Development for more ideas.

4. Consider saving levels in XML instead of in scenes. This is a wonderful technique:

  • It makes it unnecessary to re-setup each scene.
  • It makes loading much faster (if most objects are shared between scenes).
  • It makes it easier to merge scenes (even with Unity’s new text-based scenes there is so much data in there that merging is often impractical in any case).
  • It makes it easier to keep track of data across levels.

You can still use Unity as a level editor (although you need not). You will need to write some code to serialize and deserialize your data, and load a level both in the editor and at runtime, and save levels from the editor. You may also need to mimic Unity’s ID system for maintaining references between objects.
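As a rough sketch of the serialization plumbing this involves (all class and field names here are hypothetical, and a real implementation would also need to resolve references between objects, as noted above):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Hypothetical description of one placed object in a level.
public class LevelObjectData
{
   public string prefabName;
   public float x, y, z;  // position
   public float rotY;     // yaw in degrees
}

public class LevelData
{
   public List<LevelObjectData> objects = new List<LevelObjectData>();

   public void Save(string path)
   {
      var serializer = new XmlSerializer(typeof(LevelData));
      using (var stream = File.Create(path))
      {
         serializer.Serialize(stream, this);
      }
   }

   public static LevelData Load(string path)
   {
      var serializer = new XmlSerializer(typeof(LevelData));
      using (var stream = File.OpenRead(path))
      {
         return (LevelData)serializer.Deserialize(stream);
      }
   }
}
```

At runtime, a loader would walk `objects`, look each `prefabName` up (for example via `Resources.Load`), and instantiate the prefab at the stored position and rotation.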

5. Consider writing generic custom inspector code. To write custom inspectors is fairly straightforward, but Unity’s system has many drawbacks:

  • It does not support taking advantage of inheritance.
  • It does not let you define inspector components on a field-type level, only a class-type level. For instance, if every game object has a field of type SomeCoolType, which you want rendered differently in the inspector, you have to write inspectors for all your classes.

You can address these issues by essentially re-implementing the inspector system. Using a few tricks of reflection, this is not as hard as it seems; details are provided at the end of the article.

Scene Organisation

6. Use named empty game objects as scene folders. Carefully organise your scenes to make it easy to find objects.

7. Put maintenance prefabs and folders (empty game objects) at 0 0 0. If a transform is not specifically used to position an object, it should be at the origin. That way, there is less danger of running into problems with local and world space, and code is generally simpler.

8. Minimise using offsets for GUI components. Offsets should only be used to lay out components relative to their parent component; they should not rely on the positioning of their grandparents. Offsets should not cancel each other out to display correctly. This is basically to prevent the following kind of thing:

Parent container arbitrarily placed at (100, -50). Child, meant to be positioned at (10, 10), is then placed at (90, 60) [relative to parent].

This error is common when the container is invisible, or does not have a visual representation at all.

9. Put your world floor at y = 0. This makes it easier to put objects on the floor, and treat the world as a 2D space (when appropriate) for game logic, AI, and physics.

10. Make the game runnable from every scene. This drastically reduces testing time. To make all scenes runnable you need to do two things:

First, provide a way to mock up any data that is required from previously loaded scenes if it is not available.

Second, spawn objects that must persist between scene loads with the following idiom:

myObject = FindMyObjectInScene();
 
if (myObject == null)
{
   myObject = SpawnMyObject();
}

Art

11. Put character and standing object pivots at the base, not in the centre. This makes it easy to put characters and objects on the floor precisely. It also makes it easier to work with 3D as if it is 2D for game logic, AI, and even physics when appropriate.

12. Make all meshes face in the same direction (positive or negative z axis). This applies to meshes such as characters and other objects that have a concept of facing direction. Many algorithms are simplified if everything has the same facing direction.

13. Get the scale right from the beginning. Make art so that they can all be imported at a scale factor of 1, and that their transforms can be scaled 1, 1, 1. Use a reference object (a Unity cube) to make scale comparisons easy. Choose a world to Unity units ratio suitable for your game, and stick to it.

14. Make a two-poly plane to use for GUI components and manually created particles. Make the plane face the positive z-axis for easy billboarding and easy GUI building.

15. Make and use test art

  • Squares labelled for skyboxes.
  • A grid.
  • Various flat colours for shader testing: white, black, 50% grey, red, green, blue, magenta, yellow, cyan.
  • Gradients for shader testing: black to white, red to green, red to blue, green to blue.
  • Black and white checkerboard.
  • Smooth and rugged normal maps.
  • A lighting rig (as prefab) for quickly setting up test scenes.

Prefabs

16. Use prefabs for everything. The only game objects in your scene that should not be prefabs should be folders. Even unique objects that are used only once should be prefabs. This makes it easier to make changes that don’t require the scene to change. (An additional benefit is that it makes building sprite atlases reliable when using EZGUI).

17. Use separate prefabs for specialisation; do not specialise instances. If you have two enemy types, and they only differ by their properties, make separate prefabs for the properties, and link them in. This makes it possible to

  • make changes to each type in one place
  • make changes without having to change the scene.

If you have too many enemy types, specialisation should still not be made in instances in the editor. One alternative is to do it procedurally, or using a central file / prefab for all enemies. A single drop down could be used to differentiate enemies, or an algorithm based on enemy position or player progress.

18. Link prefabs to prefabs; do not link instances to instances. Links to prefabs are maintained when dropping a prefab into a scene; links to instances are not. Linking to prefabs whenever possible reduces scene setup, and reduces the need to change scenes.

19. As far as possible, establish links between instances automatically. If you need to link instances, establish the links programmatically. For example, the player prefab can register itself with the GameManager when it starts, or the GameManager can find the Player prefab instance when it starts.
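A minimal sketch of the self-registration idiom just described, assuming a GameManager singleton with a hypothetical RegisterPlayer method:

```csharp
using UnityEngine;

public class Player : MonoBehaviour
{
   void Start()
   {
      // The instance announces itself; no hand-made link from the
      // GameManager to the Player needs to exist in the scene.
      GameManager.Instance.RegisterPlayer(this);
   }
}
```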

Don’t put meshes at the roots of prefabs if you want to add other scripts. When you make the prefab from a mesh, first parent the mesh to an empty game object, and make that the root. Put scripts on the root, not on the mesh node. That way it is much easier to replace the mesh with another mesh without losing any values that you set up in the inspector.

Use linked prefabs as an alternative to nested prefabs. Unity does not support nested prefabs, and existing third-party solutions can be dangerous when working in a team because the relationship between nested prefabs is not obvious.

20. Use safe processes to branch prefabs. The explanation below uses the Player prefab as an example.

To make a risky change to the Player prefab, proceed as follows:

  1. Duplicate the Player prefab.
  2. Rename the duplicate to __Player_Backup.
  3. Make changes to the Player prefab.
  4. If everything works, delete __Player_Backup.

Do not name the duplicate Player_New, and make changes to it!

Some situations are more complicated. For example, a certain change may involve two people, and following the above process may break the working scene for everyone until Person 2 finishes. If the change is quick enough, still follow the process above. For changes that take longer, the following process can be followed:

  1. Person 1:
    1. Duplicate the Player prefab.
    2. Rename it to __Player_WithNewFeature or __Player_ForPerson2.
    3. Make changes on the duplicate, and commit / give to Person 2.
  2. Person 2:
    1. Make changes to new prefab.
    2. Duplicate Player prefab, and call it __Player_Backup.
    3. Drag an instance of __Player_WithNewFeature into the scene.
    4. Drag the instance onto the original Player prefab.
    5. If everything works, delete __Player_Backup and __Player_WithNewFeature.

Extensions and MonoBehaviourBase

21. Extend your own base mono behaviour, and derive all your components from it.

This allows you to implement some general functionality, such as type safe Invoke, and more complicated Invokes (such as random, etc.).

22. Define safe methods for Invoke, StartCoroutine and Instantiate.

Define a delegate Task, and use it to define methods that don’t rely on string names. For example:

public void Invoke(Task task, float time)
{
   Invoke(task.Method.Name, time);
}

23. Use extensions to work with components that share an interface. It is sometimes convenient to get components that implement a certain interface, or find objects with such components.

The implementations below use typeof instead of the generic versions of these functions. The generic versions don’t work with interfaces, but typeof does. The methods below wrap this neatly in generic methods.

//Defined in the common base class for all mono behaviours
public I GetInterfaceComponent<I>() where I : class
{
   return GetComponent(typeof(I)) as I;
}
 
public static List<I> FindObjectsOfInterface<I>() where I : class
{
   MonoBehaviour[] monoBehaviours = FindObjectsOfType<MonoBehaviour>();
   List<I> list = new List<I>();
 
   foreach(MonoBehaviour behaviour in monoBehaviours)
   {
      I component = behaviour.GetComponent(typeof(I)) as I;
 
      if(component != null)
      {
         list.Add(component);
      }
   }
 
   return list;
}

24. Use extensions to make syntax more convenient. For example:

public static class CSTransform 
{
   public static void SetX(this Transform transform, float x)
   {
      Vector3 newPosition = 
         new Vector3(x, transform.position.y, transform.position.z);
 
      transform.position = newPosition;
   }
   ...
}

25. Use a defensive GetComponent alternative. Sometimes forcing component dependencies (through RequiredComponent) can be a pain. For example, it makes it difficult to change components in the inspector (even if they have the same base type). As an alternative, the following extension of GameObject can be used when a component is required to print out an error message when it is not found.

public static T GetSafeComponent<T>(this GameObject obj) where T : MonoBehaviour
{
   T component = obj.GetComponent<T>();
 
   if(component == null)
   {
      Debug.LogError("Expected to find component of type " 
         + typeof(T) + " but found none", obj);
   }
 
   return component;
}

Idioms

26. Avoid using different idioms to do the same thing. In many cases there is more than one idiomatic way to do things. In such cases, choose one to use throughout the project. Here is why:

  • Some idioms don’t work well together. Using one idiom can force the design in a direction that is not suitable for another idiom.
  • Using the same idiom throughout makes it easier for team members to understand what is going on. It makes structure and code easier to understand. It makes mistakes harder to make.

Examples of idiom groups:

  • Coroutines vs. state machines.
  • Nested prefabs vs. linked prefabs vs. God prefabs.
  • Data separation strategies.
  • Ways of using sprites for states in 2D games.
  • Prefab structure.
  • Spawning strategies.
  • Ways to locate objects: by type vs. name vs. tag vs. layer vs. reference (“links”).
  • Ways to group objects: by type vs. name vs. tag vs. layer vs. arrays of references (“links”).
  • Finding groups of objects versus self registration.
  • Controlling execution order (Using Unity’s execution order setup versus yield logic versus Awake / Start and Update / Late Update reliance versus manual methods versus any-order architecture).
  • Selecting objects / positions / targets with the mouse in-game: selection manager versus local self-management.
  • Keeping data between scene changes: through PlayerPrefs, or objects that are not Destroyed when a new scene is loaded.
  • Ways of combining (blending, adding and layering) animation.

Time

27. Maintain your own time class to make pausing easier. Wrap Time.deltaTime and Time.timeSinceLevelLoad to account for pausing and time scale. It requires discipline to use it, but it will make things a lot easier, especially when running things off different clocks (such as interface animations and game animations).
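A minimal sketch of such a clock (the class and member names are illustrative). The game clock can be paused while a separate interface clock keeps running:

```csharp
using UnityEngine;

public class Clock
{
   public bool paused;
   private float time;

   // Tick once per frame from some manager's Update().
   public void Update()
   {
      if (!paused)
      {
         time += Time.deltaTime;
      }
   }

   public float DeltaTime
   {
      get { return paused ? 0f : Time.deltaTime; }
   }

   public float TimeElapsed
   {
      get { return time; }
   }
}
```

Code that animates the game reads `gameClock.DeltaTime`; code that animates menus reads a separate, never-paused clock.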

Spawning Objects

28. Don’t let spawned objects clutter your hierarchy when the game runs. Set their parents to a scene object to make it easier to find things while the game is running. You could use an empty game object, or even a singleton with no behaviour, to make it easier to access from code. Call this object DynamicObjects.
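A sketch of a spawn helper that enforces this (the names are illustrative):

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
   // Drag the DynamicObjects scene object here in the inspector.
   public Transform dynamicObjectsRoot;

   public GameObject Spawn(GameObject prefab, Vector3 position)
   {
      GameObject obj = (GameObject)Instantiate(prefab, position, Quaternion.identity);
      obj.transform.parent = dynamicObjectsRoot; // keeps the hierarchy tidy
      return obj;
   }
}
```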

Class Design

29. Use singletons for convenience. The following class will make any class that inherits from it a singleton automatically:

public class Singleton<T> : MonoBehaviour where T : MonoBehaviour
{
   protected static T instance;
 
   /**
      Returns the instance of this singleton.
   */
   public static T Instance
   {
      get
      {
         if(instance == null)
         {
            instance = (T) FindObjectOfType(typeof(T));
 
            if (instance == null)
            {
               Debug.LogError("An instance of " + typeof(T) + 
                  " is needed in the scene, but there is none.");
            }
         }
 
         return instance;
      }
   }
}

Singletons are useful for managers, such as ParticleManager or AudioManager or GUIManager.

  • Avoid using singletons for unique instances of prefabs that are not managers (such as the Player). Not adhering to this principle complicates inheritance hierarchies, and makes certain types of changes harder. Rather keep references to these in your GameManager (or another suitable God class ;-) ).
  • Define static properties and methods for public variables and methods that are used often from outside the class. This allows you to write GameManager.Player instead of GameManager.Instance.player.
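For example, a GameManager built on the Singleton base above might expose the player like this (a sketch; the player field is assumed to be assigned at load time):

```csharp
public class GameManager : Singleton<GameManager>
{
   public Player player;

   // Callers write GameManager.Player, not GameManager.Instance.player.
   public static Player Player
   {
      get { return Instance.player; }
   }
}
```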

30. For components, never make variables public that should not be tweaked in the inspector. Otherwise they will be tweaked by a designer, especially if it is not clear what they do. In some rare cases it is unavoidable. In that case, use two or even four underscores to prefix the variable name to scare away tweakers:

public float __aVariable;

31. Separate interface from game logic. This is essentially the MVC pattern.

Any input controller should only give commands to the appropriate components to let them know the controller has been invoked. For example, the controller could decide which commands to give based on the player state, but this is bad (it will lead to duplicated logic if more controllers are added). Instead, the Player object should be notified of the intent to move forward, and then, based on the current state (slowed or stunned, for example), set the speed and update the player's facing direction. Controllers should only do things that relate to their own state (the controller does not change state when the player changes state; therefore, the controller should not know of the player state at all). Another example is changing weapons. The right way to do it is with a method on Player, SwitchWeapon(Weapon newWeapon), which the GUI can call. The GUI should not manipulate transforms and parents and all that stuff.

Any interface component should only maintain data and do processing related to its own state. For example, to display a map, the GUI could compute what to display based on the player's movements. However, this is game state data, and it does not belong in the GUI. The GUI should merely display game state data, which should be maintained elsewhere (in the GameManager, for example).

Gameplay objects should know virtually nothing of the GUI. The one exception is the pause behaviour, which may be controlled globally through Time.timeScale (not a good idea either; see the tip on maintaining your own time class). Gameplay objects should know whether the game is paused, but that is all. Therefore, no links to GUI components from gameplay objects.

In general, if you delete all the GUI classes, the game should still compile.

You should also be able to re-implement the GUI and input without needing to write any new game logic.
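A minimal sketch of this separation (method names are illustrative): the controller reports intent only, and the Player interprets that intent against its own state:

```csharp
using UnityEngine;

public class PlayerController : MonoBehaviour
{
   public Player player;

   void Update()
   {
      if (Input.GetKey(KeyCode.W))
      {
         // Intent only: no speed or state logic in the controller.
         player.RequestMoveForward();
      }
   }
}

public class Player : MonoBehaviour
{
   public float normalSpeed = 5f;
   private bool stunned;

   public void RequestMoveForward()
   {
      if (stunned) return; // the Player, not the controller, knows its state

      transform.position += transform.forward * normalSpeed * Time.deltaTime;
   }
}
```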

32. Separate state and bookkeeping. Bookkeeping variables are used for speed or convenience, and can be recovered from the state. By separating these, you make it easier to

  • save the game state, and
  • debug the game state.

One way to do it is to define a SaveData class for each game logic class. For example:

[Serializable]
public class PlayerSaveData
{
   public float health; //public for serialisation, not exposed in inspector
} 
 
public class Player : MonoBehaviour
{
   //... bookkeeping variables
 
   //Don’t expose state in inspector. State is not tweakable.
   private PlayerSaveData playerSaveData; 
}

33. Separate specialisation configuration.

Consider two enemies with identical meshes but different tweakables (for instance, different strengths and different speeds). There are different ways to separate data; the one here is what I prefer, especially when objects are spawned, or the game is saved. (Tweakables are not state data, but configuration data, so they need not be saved. When objects are loaded or spawned, the tweakables are automatically loaded in separately.)

  • Define a template class for each game logic class. For instance, for Enemy, we also define EnemyTemplate. All the differentiating tweakables are stored in EnemyTemplate.
  • In the game logic class, define a variable of the template type.
  • Make an Enemy prefab, and two template prefabs WeakEnemyTemplate and StrongEnemyTemplate.
  • When loading or spawning objects, set the template variable to the right template.

This method can become quite sophisticated (and sometimes, needlessly complicated, so beware!).

For example, to better make use of generic polymorphism, we may define our classes like this:

public class BaseTemplate
{
   ...
}
 
public class ActorTemplate : BaseTemplate
{
   ...
}
 
public class Entity<EntityTemplateType> where EntityTemplateType : BaseTemplate
{
   EntityTemplateType template;
   ...
}
 
public class Actor : Entity <ActorTemplate>
{
   ...
}

34. Don’t use strings for anything other than displayed text. In particular, do not use strings for identifying objects or prefabs etc. One unfortunate exception is animations, which generally are accessed with their string names.

35. Avoid using public index-coupled arrays. For instance, do not define an array of weapons, an array of bullets, and an array of particles, so that your code looks like this:

public void SelectWeapon(int index)
{ 
   currentWeaponIndex = index;
   Player.SwitchWeapon(weapons[currentWeaponIndex]);
}
 
public void Shoot()
{
   Fire(bullets[currentWeaponIndex]);
   FireParticles(particles[currentWeaponIndex]);   
}

The problem with this is not so much in the code, but in setting it up in the inspector without making mistakes.

Rather, define a class that encapsulates the three variables, and make an array of that:

[Serializable]
public class Weapon
{
   public GameObject prefab;
   public ParticleSystem particles;
   public Bullet bullet;
}

The code looks neater, but most importantly, it is harder to make mistakes in setting up the data in the inspector.

36. Avoid using arrays for structure other than sequences. For example, a player may have three types of attacks. Each uses the current weapon, but generates different bullets and different behaviour.

You may be tempted to dump the three bullets in an array, and then use this kind of logic:

public void FireAttack()
{
   /// behaviour
   Fire(bullets[0]);
}
 
public void IceAttack()
{
   /// behaviour
   Fire(bullets[1]);
}
 
public void WindAttack()
{
   /// behaviour
   Fire(bullets[2]);
}

Enums can make things look better in code…

public void WindAttack()
{
   /// behaviour
   Fire(bullets[(int)WeaponType.Wind]);
}

…but not in the inspector.

It’s better to use separate variables so that the names help show which content to put in. Use a class to make it neat.

[Serializable]
public class Bullets
{
   public Bullet FireBullet;
   public Bullet IceBullet;
   public Bullet WindBullet;
}

This assumes there is no other Fire, Ice and Wind data.

37. Group data in serializable classes to make things neater in the inspector. Some entities may have dozens of tweakables. It can become a nightmare to find the right variable in the inspector. To make things easier, follow these steps:

  • Define separate classes for groups of variables. Make them public and serializable.
  • In the primary class, define public variables of each type defined as above.
  • Do not initialize these variables in Awake or Start; since they are serializable, Unity will take care of that.
  • You can specify defaults, as before, by assigning values in the definition.

This will group variables in collapsible units in the inspector, which is easier to manage.

[Serializable]
public class MovementProperties //Not a MonoBehaviour!
{
   public float movementSpeed;
   public float turnSpeed = 1; //default provided
}
 
[Serializable]
public class HealthProperties //Not a MonoBehaviour!
{
   public float maxHealth;
   public float regenerationRate;
}
 
public class Player : MonoBehaviour
{
   public MovementProperties movementProperties;
   public HealthProperties healthProperties;
}

Text

38. If you have a lot of story text, put it in a file. Don’t put it in fields for editing in the inspector. Make it easy to change without having to open the Unity editor, and especially without having to save the scene.

39. If you plan to localise, separate all your strings to one location. There are many ways to do this. One way is to define a Text class with a public string field for each string, with defaults set to English, for example. Other languages subclass this and re-initialize the fields with the language equivalents.

More sophisticated techniques (appropriate when the body of text is large and / or the number of languages is high) will read in a spread sheet and provide logic for selecting the right string based on the chosen language.
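A sketch of the subclass-per-language approach described above (class names and strings are illustrative):

```csharp
public class GameText
{
   // Defaults are the English strings.
   public string startGame = "Start Game";
   public string quit = "Quit";
}

public class GameTextFrench : GameText
{
   public GameTextFrench()
   {
      startGame = "Commencer";
      quit = "Quitter";
   }
}
```

The game holds a single GameText reference, assigned once at startup based on the chosen language.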

Testing and Debugging

40. Implement a graphical logger to debug physics, animation, and AI. This can make debugging considerably faster. See here.

41. Implement an HTML logger. In some cases, logging can still be useful. Logs that are easier to parse (colour coded, with multiple views and recorded screenshots) can make log-debugging much more pleasant. See here.

42. Implement your own FPS counter. Yup. No one knows what Unity’s FPS counter really measures, but it is not frame rate. Implement your own so that the number can correspond with intuition and visual inspection.

43. Implement shortcuts for taking screenshots. Many bugs are visual, and are much easier to report when you can take a picture. The ideal system should maintain a counter in PlayerPrefs so that successive screenshots are not overwritten. The screenshots should be saved outside the project folder to prevent people from accidentally committing them to the repository.
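A sketch of such a shortcut with a persistent counter (the key code, folder, and PlayerPrefs key are illustrative choices; Application.CaptureScreenshot is the screenshot API in Unity versions of this era):

```csharp
using UnityEngine;

public class ScreenshotTaker : MonoBehaviour
{
   void Update()
   {
      if (Input.GetKeyDown(KeyCode.F12))
      {
         int count = PlayerPrefs.GetInt("ScreenshotCount", 0);

         // A path outside the project folder, so shots are not committed.
         Application.CaptureScreenshot("../Screenshots/shot" + count + ".png");

         PlayerPrefs.SetInt("ScreenshotCount", count + 1);
         PlayerPrefs.Save();
      }
   }
}
```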

44. Implement shortcuts for printing the player’s world position. This makes it easy to report the position of bugs that occur in specific places in the world, which in turns makes it easier to debug.

45. Implement debug options for making testing easier. Some examples:

  • Unlock all items.
  • Disable enemies.
  • Disable GUI.
  • Make player invincible.
  • Disable all gameplay.

46. For teams that are small enough, make a prefab for each team member with debug options. Put a user identifier in a file that is not committed, and read it when the game is run. Here is why:

  • Team members do not commit their debug options by accident and affect everyone.
  • Changing debug options does not change the scene.

47. Maintain a scene with all gameplay elements. For instance, a scene with all enemies, all objects you can interact with, etc. This makes it easy to test functionality without having to play too long.

48. Define constants for debug shortcut keys, and keep them in one place. Debug keys are not normally (or conveniently) processed in a single location like the rest of the game input. To avoid shortcut-key collisions, define constants in a central place. An alternative is to process all keys in one place, regardless of whether they are debug functions or not. (The downside is that this class may need extra references to objects just for this.)
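A sketch of such a central place (the key assignments are illustrative):

```csharp
using UnityEngine;

// All debug shortcuts in one class, so collisions are visible at a glance.
public static class DebugKeys
{
   public const KeyCode UnlockAllItems   = KeyCode.F1;
   public const KeyCode DisableEnemies   = KeyCode.F2;
   public const KeyCode ToggleInvincible = KeyCode.F3;
}
```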

Documentation

49. Document your setup. Most documentation should be in the code, but certain things should be documented outside it. Making designers sift through code for setup details wastes time. Documented setups improve efficiency (if the documents are kept current).

Document the following:

  • Layer uses (for collision, culling, and raycasting – essentially, what should be in what layer).
  • Tag uses.
  • GUI depths for layers (what should display over what).
  • Scene setup.
  • Idiom preferences.
  • Prefab structure.
  • Animation layers.

Naming Standard and Folder Structure

50. Follow a documented naming convention and folder structure. Consistent naming and folder structure makes it easier to find things, and to figure out what things are.

You will most probably want to create your own naming convention and folder structure. Here is one as an example.

Naming General Principles

  1. Call a thing what it is. A bird should be called Bird.
  2. Choose names that can be pronounced and remembered. If you make a Mayan game, do not name your level QuetzalcoatisReturn.
  3. Be consistent. When you choose a name, stick to it.
  4. Use Pascal case, like this: ComplicatedVerySpecificObject. Do not use spaces, underscores, or hyphens, with one exception (see Naming Different Aspects of the Same Thing).
  5. Do not use version numbers, or words to indicate their progress (WIP, final).
  6. Do not use abbreviations: DVamp@W should be DarkVampire@Walk.
  7. Use the terminology in the design document: if the document calls the die animation Die, then use DarkVampire@Die, not DarkVampire@Death.
  8. Keep the most specific descriptor on the left: DarkVampire, not VampireDark; PauseButton, not ButtonPaused. It is, for instance, easier to find the pause button in the inspector if not all buttons start with the word Button. [Many people prefer it the other way around, because that makes grouping more obvious visually. Names are not for grouping, though; folders are. Names are to distinguish objects of the same type so that they can be located reliably and fast.]
  9. Some names form a sequence. Use numbers in these names, for example, PathNode0, PathNode1. Always start with 0, not 1.
  10. Do not use numbers for things that don’t form a sequence. For example, Bird0, Bird1, Bird2 should be Flamingo, Eagle, Swallow.
  11. Prefix temporary objects with a double underscore, for example __Player_Backup.

Naming Different Aspects of the Same Thing

Use underscores between the core name, and the thing that describes the “aspect”. For instance:

  • GUI button states: EnterButton_Active, EnterButton_Inactive
  • Textures: DarkVampire_Diffuse, DarkVampire_Normalmap
  • Skybox: JungleSky_Top, JungleSky_North
  • LOD groups: DarkVampire_LOD0, DarkVampire_LOD1

Do not use this convention just to distinguish between different types of items, for instance Rock_Small, Rock_Large should be SmallRock, LargeRock.

Structure

The organisation of your scenes, project folder, and script folder should follow a similar pattern.

Folder Structure

Materials
   GUI
   Effects
Meshes
   Actors
      DarkVampire
      LightVampire
      ...
   Structures
      Buildings
      ...
   Props
      Plants
      ...
   ...
Plugins
Prefabs
   Actors
   Items
   ...
Resources
   Actors
   Items
   ...
Scenes
   GUI
   Levels
   TestScenes
Scripts
Textures
   GUI
   Effects
...

Scene Structure

Cameras
Dynamic Objects
Gameplay
   Actors
   Items
   ...
GUI
   HUD
   PauseMenu
   ...
Management
Lights
World
   Ground
   Props
   Structure
   ...

Scripts Folder Structure

ThirdParty
   ...
MyGenericScripts
   Debug
   Extensions
   Framework
   Graphics
   IO
   Math
   ...
MyGameScripts
   Debug
   Gameplay
      Actors
      Items
      ...
   Framework
   Graphics
   GUI
   ...

How to Re-implement Inspector Drawing

1. Define a base class for all your editors

public class BaseEditor<T> : Editor
   where T : MonoBehaviour
{
   public override void OnInspectorGUI()
   {
      T data = (T) target;

      GUIContent label = new GUIContent();
      label.text = "Properties";

      CSEditorGUILayout.DrawDefaultInspectors(label, data);

      if(GUI.changed)
      {
         EditorUtility.SetDirty(target);
      }
   }
}

2. Use reflection and recursion to draw components

public static void DrawDefaultInspectors<T>(GUIContent label, T target)
   where T : new()
{
   EditorGUILayout.Separator();
   Type type = typeof(T);
   FieldInfo[] fields = type.GetFields();
   EditorGUI.indentLevel++;

   foreach(FieldInfo field in fields)
   {
      if(field.IsPublic)
      {
         if(field.FieldType == typeof(int))
         {
            field.SetValue(target, EditorGUILayout.IntField(
               MakeLabel(field), (int) field.GetValue(target)));
         }
         else if(field.FieldType == typeof(float))
         {
            field.SetValue(target, EditorGUILayout.FloatField(
               MakeLabel(field), (float) field.GetValue(target)));
         }

         // etc. for other primitive types

         else if(field.FieldType.IsClass)
         {
            Type[] parmTypes = new Type[] { field.FieldType };

            string methodName = "DrawDefaultInspectors";

            MethodInfo drawMethod =
               typeof(CSEditorGUILayout).GetMethod(methodName);

            if(drawMethod == null)
            {
               Debug.LogError("No method found: " + methodName);
            }
            else
            {
               // Recurse into the nested class with a closed generic method.
               drawMethod.MakeGenericMethod(parmTypes).Invoke(null,
                  new object[]
                  {
                     MakeLabel(field),
                     field.GetValue(target)
                  });
            }
         }
         else
         {
            Debug.LogError(
               "DrawDefaultInspectors does not support fields of type " +
               field.FieldType);
         }
      }
   }

   EditorGUI.indentLevel--;
}

The above method uses the following helper:

private static GUIContent MakeLabel(FieldInfo field)
{
   GUIContent guiContent = new GUIContent();      
   guiContent.text = field.Name.SplitCamelCase();      
   object[] descriptions = 
      field.GetCustomAttributes(typeof(DescriptionAttribute), true);
 
   if(descriptions.Length > 0)
   {
      //just use the first one.
      guiContent.tooltip = 
         (descriptions[0] as DescriptionAttribute).Description;
   }
 
   return guiContent;
}

Note that it uses a Description attribute on the fields of your class to generate a tooltip in the inspector.

3. Define new Custom Editors

Unfortunately, you will still need to define a class for each MonoBehaviour. Fortunately, these definitions can be empty; all the actual work is done by the base class.

[CustomEditor(typeof(MyClass))]
public class MyClassEditor : BaseEditor<MyClass>
{}

In theory this step can be automated, but I have not tried it.

REF: http://devmag.org.za/2012/07/12/50-tips-for-working-with-unity-best-practices/

[Dev Tip] ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management) – Part 1

ASP.NET Identity 2.1 is the latest membership and identity management framework provided by Microsoft. This membership system can be plugged into any ASP.NET framework such as Web API, MVC, Web Forms, etc.

In this tutorial we’ll cover how to integrate the ASP.NET Identity system with ASP.NET Web API, so we can build a secure HTTP service which acts as a back-end for an SPA front-end built with AngularJS. I’ll try to cover, in a simple way, the different ASP.NET Identity 2.1 features: accounts management, roles management, email confirmation, change password, role-based authorization, claims-based authorization, brute-force protection, etc.

The AngularJS front-end application will use bearer token based authentication in the JSON Web Token (JWT) format, and it should support role-based authorization and contain the basic features of any membership system. The SPA is not ready yet, but hopefully it will sit on top of our HTTP service without our needing to come back and modify the ASP.NET Web API logic.

I will follow a step-by-step approach and start from scratch without using any VS 2013 templates, so we’ll have a better understanding of how the ASP.NET Identity 2.1 framework talks to the ASP.NET Web API framework.

The source code for this tutorial is available on GitHub.

I broke down this series into multiple posts which I’ll be posting gradually, posts are:

  • Configure ASP.NET Identity with ASP.NET Web API (Accounts Management) – (This Post)
  • ASP.NET Identity Accounts Confirmation, and Token Based Authentication – Part 2
  • ASP.NET Identity Role Based Authorization with ASP.NET Web API – Part 3
  • ASP.NET Identity Authorization Access using Claims with ASP.NET Web API – Part 4
  • AngularJS Authentication and Authorization with ASP.NET Web API and Identity – Part 5

Configure ASP.NET Identity 2.1 with ASP.NET Web API 2.2 (Accounts Management)

Setting up the ASP.NET Identity 2.1

Step 1: Create the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET Framework 4.5. Create an empty solution and name it “AspNetIdentity”, then add a new ASP.NET Web Application named “AspNetIdentity.WebApi”. Select the Empty template with no core dependencies at all, as in the image below:

WebApiNewProject

Step 2: Install the needed NuGet Packages:

We’ll install the NuGet packages needed to set up our Owin server and host ASP.NET Web API within it, as well as the packages needed for ASP.NET Identity 2.1. If you would like to know more about the use of each package and what an Owin server is, please check this post.
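The post doesn’t list the packages inline; a typical set for an Owin-hosted Web API with Identity 2.1 looks like the following (run in the NuGet Package Manager Console; treat the exact list as an assumption and compare it with the post linked above):

```powershell
Install-Package Microsoft.AspNet.Identity.Owin
Install-Package Microsoft.AspNet.Identity.EntityFramework
Install-Package Microsoft.Owin.Host.SystemWeb
Install-Package Microsoft.AspNet.WebApi.Owin
Install-Package Microsoft.Owin.Security.OAuth
Install-Package Microsoft.Owin.Cors
```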

Step 3: Add the Application User Class and Application Database Context:

Now we want to define our first custom Entity Framework class, the “ApplicationUser” class. This class represents a user who wants to register in our membership system. We extend the default class in order to add application-specific data properties for the user, such as: FirstName, LastName, Level, JoinDate. Those properties will be converted to columns in the table “AspNetUsers”, as we’ll see in the next steps.

So to do this we need to create new class named “ApplicationUser” and derive from “Microsoft.AspNet.Identity.EntityFramework.IdentityUser” class.

Note: If you do not want to add any extra properties to this class, then there is no need to extend the default implementation by deriving from the “IdentityUser” class.

To do so, add a new folder named “Infrastructure” to our project, then add a new class named “ApplicationUser” and paste the code below:
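The snippet isn’t reproduced here; a sketch of what the class looks like, with the extra properties named above (the validation attributes are illustrative choices):

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.AspNet.Identity.EntityFramework;

public class ApplicationUser : IdentityUser
{
   // Application-specific columns added to the "AspNetUsers" table.
   [Required]
   [MaxLength(100)]
   public string FirstName { get; set; }

   [Required]
   [MaxLength(100)]
   public string LastName { get; set; }

   [Required]
   public int Level { get; set; }

   [Required]
   public DateTime JoinDate { get; set; }
}
```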

Now we need to add the database context class which will be responsible for communicating with our database, so add a new class named “ApplicationDbContext” under the “Infrastructure” folder and paste the code snippet below:
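A sketch of the context class, following the description below:

```csharp
using Microsoft.AspNet.Identity.EntityFramework;

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
   // "DefaultConnection" is the connection string name from Web.config.
   public ApplicationDbContext()
      : base("DefaultConnection", throwIfV1Schema: false)
   {
   }

   // Called from the Owin Startup class, as explained below.
   public static ApplicationDbContext Create()
   {
      return new ApplicationDbContext();
   }
}
```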

As you can see this class inherits from “IdentityDbContext” class, you can think about this class as special version of the traditional “DbContext” Class, it will provide all of the entity framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server, this default constructor takes the connection string name “DefaultConnection” as an argument, this connection string will be used point to the right server and database name to connect to.

The static method “Create” will be called from our Owin Startup class, more about this later.

Lastly we need to add a connection string which points to the database that will be created using code first approach, so open “Web.config” file and paste the connection string below:
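For example, something along these lines (the server and database names are placeholders you should adjust):

```xml
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=.\sqlexpress;Initial Catalog=AspNetIdentity;Integrated Security=SSPI;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```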

Step 4: Create the Database and Enable DB migrations:

Now we want to enable the EF code-first migrations feature, which configures code first to update the database schema instead of dropping and re-creating the database with each change to the EF entities. To do so, open the NuGet Package Manager Console and type the following commands:
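The two commands referred to below are:

```powershell
enable-migrations
add-migration InitialCreate
```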

The “enable-migrations” command creates a “Migrations” folder in the “AspNetIdentity.WebApi” project, and it creates a file named “Configuration”, this file contains method named “Seed” which is used to allow us to insert or update test/initial data after code first creates or updates the database. This method is called when the database is created for the first time and every time the database schema is updated after a data model change.

Migrations

As well the “add-migration InitialCreate” command generates the code that creates the database from scratch. This code is also in the “Migrations” folder, in the file named “<timestamp>_InitialCreate.cs“. The “Up” method of the “InitialCreate” class creates the database tables that correspond to the data model entity sets, and the “Down” method deletes them. So in our case if you opened this class “201501171041277_InitialCreate” you will see the extended data properties we added in the “ApplicationUser” class in method “Up”.

Now back to the “Seed” method in class “Configuration”, open the class and replace the Seed method code with the code below:
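A sketch of a Seed method along the lines described (the user name, email, and password are placeholders, not the values from the original post):

```csharp
// Inside the generated Migrations/Configuration class.
// Requires: using Microsoft.AspNet.Identity;
//           using Microsoft.AspNet.Identity.EntityFramework;
protected override void Seed(ApplicationDbContext context)
{
   var manager = new UserManager<ApplicationUser>(
      new UserStore<ApplicationUser>(context));

   var user = new ApplicationUser
   {
      UserName = "SuperPowerUser",       // placeholder values
      Email = "admin@example.com",
      EmailConfirmed = true,
      FirstName = "Super",
      LastName = "User",
      Level = 1,
      JoinDate = DateTime.Now.AddYears(-3)
   };

   manager.Create(user, "MySuperP@ssword!");
}
```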

This code basically creates a user once the database is created.

Now we are ready to trigger the event which will create the database on our SQL server based on the connection string we specified earlier, so open NuGet Package Manager Console and type the command:
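The command is:

```powershell
update-database
```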

The “update-database” command runs the “Up” method in the “Configuration” file and creates the database and then it runs the “Seed” method to populate the database and insert a user.

If all is fine, navigate to your SQL server instance and the database along with the additional fields in table “AspNetUsers” should be created as the image below:

AspNetIdentityDB

Step 5: Add the User Manager Class:

The user manager class will be responsible for managing instances of the user class. The class derives from “UserManager<T>”, where T represents our “ApplicationUser” class. Once it derives from “UserManager<T>”, a set of methods becomes available; those methods facilitate managing the users in our Identity system. Some of the “UserManager” methods we’ll use during this tutorial are:

Now to implement the “UserManager” class, add new file named “ApplicationUserManager” under folder “Infrastructure” and paste the code below:
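A sketch matching the description below:

```csharp
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin;

public class ApplicationUserManager : UserManager<ApplicationUser>
{
   public ApplicationUserManager(IUserStore<ApplicationUser> store)
      : base(store)
   {
   }

   public static ApplicationUserManager Create(
      IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
   {
      // Read the per-request ApplicationDbContext from the Owin context
      // (it gets registered there in the Startup class, next step).
      var appDbContext = context.Get<ApplicationDbContext>();

      var appUserManager = new ApplicationUserManager(
         new UserStore<ApplicationUser>(appDbContext));

      return appUserManager;
   }
}
```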

As you notice from the code above the static method “Create” will be responsible to return an instance of the “ApplicationUserManager” class named “appUserManager”, the constructor of the “ApplicationUserManager” expects to receive an instance from the “UserStore”, as well the UserStore instance construct expects to receive an instance from our “ApplicationDbContext” defined earlier, currently we are reading this instance from the Owin context, but we didn’t add it yet to the Owin context, so let’s jump to the next step to add it.

Note: In the coming post we’ll apply different changes to the “ApplicationUserManager” class such as configuring email service, setting user and password polices.

Step 6: Add Owin “Startup” Class

Now we’ll add the Owin “Startup” class, which will be fired once our server starts. The “Configuration” method accepts a parameter of type “IAppBuilder”; this parameter will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server. Add a new file named “Startup” to the root of the project and paste the code below:
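A sketch of the Startup class (attribute routing is an assumption, consistent with the route URIs used later in the post):

```csharp
using System.Web.Http;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(AspNetIdentity.WebApi.Startup))]

namespace AspNetIdentity.WebApi
{
   public class Startup
   {
      public void Configuration(IAppBuilder app)
      {
         var httpConfig = new HttpConfiguration();

         // A fresh ApplicationDbContext and ApplicationUserManager per request.
         app.CreatePerOwinContext(ApplicationDbContext.Create);
         app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

         httpConfig.MapHttpAttributeRoutes();
         app.UseWebApi(httpConfig);
      }
   }
}
```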

What is worth noting here is how we create a fresh instance of “ApplicationDbContext” and “ApplicationUserManager” for each request and set them in the Owin context using the extension method “CreatePerOwinContext”. Both objects (ApplicationDbContext and ApplicationUserManager) will be available during the entire lifetime of the request.

Note: I didn’t plug any kind of authentication here, we’ll visit this class again and add JWT Authentication in the next post, for now we’ll be fine accepting any request from any anonymous users.

Define Web API Controllers and Methods

Step 7: Create the “Accounts” Controller:

Now we’ll add our first controller, named “AccountsController”, which will be responsible for managing user accounts in our Identity system. To do so, add a new folder named “Controllers”, then add a new class named “AccountsController” and paste the code below:
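A sketch of the controller with the three read actions described below (the route templates are inferred from the URIs used later in the post):

```csharp
using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;

[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{
   [Route("users")]
   public IHttpActionResult GetUsers()
   {
      // Shape each user through the model factory before returning it.
      return Ok(AppUserManager.Users.ToList().Select(u => TheModelFactory.Create(u)));
   }

   [Route("user/{id:guid}", Name = "GetUserById")]
   public async Task<IHttpActionResult> GetUser(string id)
   {
      var user = await AppUserManager.FindByIdAsync(id);

      if (user != null)
      {
         return Ok(TheModelFactory.Create(user));
      }

      return NotFound();
   }

   [Route("user/{username}")]
   public async Task<IHttpActionResult> GetUserByName(string username)
   {
      var user = await AppUserManager.FindByNameAsync(username);

      if (user != null)
      {
         return Ok(TheModelFactory.Create(user));
      }

      return NotFound();
   }
}
```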

What we have implemented above is the following:

  • Our “AccountsController” inherits from a base controller named “BaseApiController”. This base controller is not created yet, but it contains members that will be reused among the different controllers we’ll add during this tutorial; the members which come from “BaseApiController” are: “AppUserManager”, “TheModelFactory”, and “GetErrorResult”. We’ll see the implementation of this class in the next step.
  • We have added 3 methods/actions so far in the “AccountsController”:
    • Method “GetUsers” will be responsible for returning all the registered users in our system by calling the enumeration “Users” coming from the “ApplicationUserManager” class.
    • Method “GetUser” will be responsible for returning a single user by providing its unique identifier and calling the method “FindByIdAsync” coming from the “ApplicationUserManager” class.
    • Method “GetUserByName” will be responsible for returning a single user by providing its username and calling the method “FindByNameAsync” coming from the “ApplicationUserManager” class.
    • The three methods send the user object to a class named “TheModelFactory”; we’ll see in the next step the benefit of using this pattern to shape the returned object graph and how it protects us from leaking any sensitive information about the user identity.
  • Note: All methods can be accessed by any anonymous user; for now we are fine with this, but we’ll manage access control for each method, and which identities are authorized to perform those actions, in the coming posts.

Step 8: Create the “BaseApiController” Controller:

As we stated before, this “BaseApiController” will act as a base class which other Web API controllers will inherit from; for now it will contain three basic members. Add a new class named “BaseApiController” under the “Controllers” folder and paste the code below:
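A sketch of the base controller, matching the three members described below:

```csharp
using System.Net.Http;
using System.Web.Http;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.Owin;

public class BaseApiController : ApiController
{
   private ModelFactory _modelFactory;

   // The per-request ApplicationUserManager set in the Startup class.
   protected ApplicationUserManager AppUserManager
   {
      get { return Request.GetOwinContext().GetUserManager<ApplicationUserManager>(); }
   }

   protected ModelFactory TheModelFactory
   {
      get
      {
         if (_modelFactory == null)
         {
            _modelFactory = new ModelFactory(Request, AppUserManager);
         }
         return _modelFactory;
      }
   }

   // Turns a failed IdentityResult into a 400 response with error details.
   protected IHttpActionResult GetErrorResult(IdentityResult result)
   {
      if (result == null) return InternalServerError();

      if (!result.Succeeded)
      {
         if (result.Errors != null)
         {
            foreach (var error in result.Errors)
            {
               ModelState.AddModelError("", error);
            }
         }

         if (ModelState.IsValid)
         {
            // No ModelState errors available to send, so just return an empty BadRequest.
            return BadRequest();
         }

         return BadRequest(ModelState);
      }

      return null;
   }
}
```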

What we have implemented above is the following:

  • We have added a read-only property named “AppUserManager” which gets the instance of the “ApplicationUserManager” we already set in the “Startup” class; this instance will be initialized and ready to be invoked.
  • We have added another read only property named “TheModelFactory” which returns an instance of “ModelFactory” class, this factory pattern will help us in shaping and controlling the response returned to the client, so we will create a simplified model for some of our domain object model (Users, Roles, Claims, etc..) we have in the database. Shaping the response and building customized object graph is very important here; because we do not want to leak sensitive data such as “PasswordHash” to the client.
  • We have added a function named “GetErrorResult” which takes an “IdentityResult” as a parameter and formats the error messages returned to the client.

Step 9: Create the “ModelFactory” Class:

Now add a new folder named “Models” and inside it create a new class named “ModelFactory”. This class will contain all the functions needed to shape the response object and control the object graph returned to the client, so open the file and paste the code below:
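A sketch of the factory and its return model: only the safe-to-expose user properties are copied across (no “PasswordHash”), and the “GetUserById” route name matches the one used in the accounts controller:

```csharp
using System;
using System.Net.Http;
using System.Web.Http.Routing;

public class ModelFactory
{
   private UrlHelper _urlHelper;
   private ApplicationUserManager _appUserManager;

   public ModelFactory(HttpRequestMessage request, ApplicationUserManager appUserManager)
   {
      _urlHelper = new UrlHelper(request);
      _appUserManager = appUserManager;
   }

   public UserReturnModel Create(ApplicationUser appUser)
   {
      return new UserReturnModel
      {
         Url = _urlHelper.Link("GetUserById", new { id = appUser.Id }),
         Id = appUser.Id,
         UserName = appUser.UserName,
         FullName = string.Format("{0} {1}", appUser.FirstName, appUser.LastName),
         Email = appUser.Email,
         EmailConfirmed = appUser.EmailConfirmed,
         Level = appUser.Level,
         JoinDate = appUser.JoinDate
      };
   }
}

// The simplified object graph returned to the client.
public class UserReturnModel
{
   public string Url { get; set; }
   public string Id { get; set; }
   public string UserName { get; set; }
   public string FullName { get; set; }
   public string Email { get; set; }
   public bool EmailConfirmed { get; set; }
   public int Level { get; set; }
   public DateTime JoinDate { get; set; }
}
```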

Notice how we included only the properties needed in the users object graph; for example, there is no need to return the “PasswordHash” property, so we didn’t include it.

Step 10: Add a Method to Create Users in “AccountsController”:

It is time to add the method which allows us to register/create users in our Identity system, but before adding it, we need to add the request model object which contains the user data that will be sent from the client. Add a new file named “AccountBindingModels” under the “Models” folder and paste the code below:
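A sketch of the binding model, including the “RoleName” property mentioned below (the exact annotations are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

public class CreateUserBindingModel
{
   [Required]
   [EmailAddress]
   [Display(Name = "Email")]
   public string Email { get; set; }

   [Required]
   [Display(Name = "Username")]
   public string Username { get; set; }

   [Required]
   [Display(Name = "First Name")]
   public string FirstName { get; set; }

   [Required]
   [Display(Name = "Last Name")]
   public string LastName { get; set; }

   // Not used yet; becomes useful in the coming posts.
   [Display(Name = "Role Name")]
   public string RoleName { get; set; }

   [Required]
   [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
   [DataType(DataType.Password)]
   [Display(Name = "Password")]
   public string Password { get; set; }

   [Required]
   [DataType(DataType.Password)]
   [Display(Name = "Confirm password")]
   [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
   public string ConfirmPassword { get; set; }
}
```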

The class is very simple; it contains properties for the fields we want to send from the client to our API, with some data annotation attributes which help us validate the model before submitting it to the database. Notice how we added a property named “RoleName” which will not be used now, but will be useful in the coming posts.

Now it is time to add the method which registers/creates a user: open the “AccountsController” controller, add a new method named “CreateUser”, and paste the code below:
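A sketch of the action, following the three steps described below (it goes inside “AccountsController”):

```csharp
[Route("create")]
public async Task<IHttpActionResult> CreateUser(CreateUserBindingModel createUserModel)
{
   // 1. Validate the request model via its data annotations.
   if (!ModelState.IsValid)
   {
      return BadRequest(ModelState);
   }

   // 2. Map the model to an ApplicationUser; all users start at level 3.
   var user = new ApplicationUser
   {
      UserName = createUserModel.Username,
      Email = createUserModel.Email,
      FirstName = createUserModel.FirstName,
      LastName = createUserModel.LastName,
      Level = 3,
      JoinDate = DateTime.Now.Date
   };

   // 3. Let the user manager do the heavy lifting (uniqueness, password policy, ...).
   IdentityResult addUserResult = await AppUserManager.CreateAsync(user, createUserModel.Password);

   if (!addUserResult.Succeeded)
   {
      return GetErrorResult(addUserResult);
   }

   // Return 201 Created with the new resource's URI in the Location header.
   Uri locationHeader = new Uri(Url.Link("GetUserById", new { id = user.Id }));

   return Created(locationHeader, TheModelFactory.Create(user));
}
```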

What we have implemented here is the following:

  • We validated the request model based on the data annotations we introduced in class “AccountBindingModels”, if there is a field missing then the response will return HTTP 400 with proper error message.
  • If the model is valid, we will use it to create new instance of class “ApplicationUser”, by default we’ll put all the users in level 3.
  • Then we call the method “CreateAsync” in the “AppUserManager”, which will do the heavy lifting for us: it validates whether the username or email has been used before and whether the password matches our policy, etc. If the request is valid, it creates a new user, adds it to the “AspNetUsers” table, and returns a success result. From this result, and as good practice, we return the created resource in the Location header along with a 201 Created status.

Notes:

  • Sending a confirmation email for the user, and configuring user and password policy will be covered in the next post.
  • As stated earlier, there is no authentication or authorization applied yet, any anonymous user can invoke any available method, but we will cover this authentication and authorization part in the coming posts.

Step 11: Test the Methods in “AccountsController”:

Lastly, it is time to test the methods added to the API, so fire up your favorite REST client, Fiddler or PostMan; in my case I prefer PostMan. Let’s start by testing the “CreateUser” method: issue an HTTP POST to the URI “http://localhost:59822/api/accounts/create” with the request below. If creating the user went well, you will receive a 201 response:
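The request body is a JSON object matching the fields of the binding model, for instance (all values are placeholders):

```json
{
  "Email": "john.doe@example.com",
  "Username": "johndoe",
  "FirstName": "John",
  "LastName": "Doe",
  "Password": "MyP@ssw0rd",
  "ConfirmPassword": "MyP@ssw0rd"
}
```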

Create User

Now to test the method “GetUsers”, all you need to do is issue an HTTP GET to the URI “http://localhost:59822/api/accounts/users”, and the response graph will be as below:

The source code for this tutorial is available on GitHub.

In the next post we’ll see how to configure our Identity service to start sending email confirmations, customize username and password policies, implement JSON Web Token (JWT) authentication, and manage access to the methods.

REF: http://bitoftech.net/2015/01/21/asp-net-identity-2-with-asp-net-web-api-2-accounts-management/