Fixing the "Multiple actions were found that match the request" ASP.NET Web API Error

WebApiTest Project Here

I’m currently developing a new project utilizing ASP.NET Web API and .NET 4.5. I hit a frustrating issue today with regard to routing actions. The error is “Multiple actions were found that match the request”, and it appeared when I added a new method that doesn’t follow the built-in GET, POST, PUT convention that Web API supports out of the box.

The default scaffolded controller Web API gives you follows this pattern out of the box.

    public class TestController : ApiController
    {
        // GET api/test
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
 
        // GET api/test/5
        public string Get(int id)
        {
            return "value";
        }
 
        // POST api/test
        public void Post([FromBody]string value)
        {
        }
 
        // PUT api/test/5
        public void Put(int id, [FromBody]string value)
        {
        }
 
        // DELETE api/test/5
        public void Delete(int id)
        {
        }
    }

Problems start to appear when you want to go beyond this convention, such as adding another “GET” method that goes by a different name.

    public class TestController : ApiController
    {
        // GET api/test
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
 
        // GET api/test/GetAllWithFilter
        [HttpGet]
        public IEnumerable<string> GetAllWithFilter()
        {
            return new string[] { "value1"};
        }
 
        // GET api/test/5
        public string Get(int id)
        {
            return "value";
        }
 
        // POST api/test
        public void Post([FromBody]string value)
        {
        }
 
        // PUT api/test/5
        public void Put(int id, [FromBody]string value)
        {
        }
 
        // DELETE api/test/5
        public void Delete(int id)
        {
        }
    }

On the surface this shouldn’t be that hard to fix, but the built-in route that enables this convention actually makes it much more difficult than it should be. I searched around for a while to figure out how to solve this and finally ran across a great Stack Overflow post here that nails it. Sadly that answer isn’t even marked as accepted; it should be.

In short, replace the built-in route in WebApiConfig with the routes below (updated for the latest release). Be sure to modify the API path to suit your application.

            config.Routes.MapHttpRoute("DefaultApiWithId", "api/v1/{controller}/{id}", new { id = RouteParameter.Optional }, new { id = @"\d+" });
            config.Routes.MapHttpRoute("DefaultApiWithAction", "api/v1/{controller}/{action}");
            config.Routes.MapHttpRoute("DefaultApiGet", "api/v1/{controller}", new { action = "Get" }, new { httpMethod = new HttpMethodConstraint(HttpMethod.Get) });
            config.Routes.MapHttpRoute("DefaultApiPost", "api/v1/{controller}", new { action = "Post" }, new { httpMethod = new HttpMethodConstraint(HttpMethod.Post) });
            config.Routes.MapHttpRoute("DefaultApiPut", "api/v1/{controller}", new { action = "Put" }, new { httpMethod = new HttpMethodConstraint(HttpMethod.Put) });
            config.Routes.MapHttpRoute("DefaultApiDelete", "api/v1/{controller}", new { action = "Delete" }, new { httpMethod = new HttpMethodConstraint(HttpMethod.Delete) });
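For reference, here is a minimal sketch of where those calls live, assuming the standard WebApiConfig class the project template generates; note that HttpMethodConstraint comes from System.Web.Http.Routing and HttpMethod from System.Net.Http, so those namespaces need to be imported.

    using System.Net.Http;
    using System.Web.Http;
    using System.Web.Http.Routing;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // The six MapHttpRoute calls shown above go here, replacing the
            // single default "DefaultApi" route the template registers.
        }
    }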

This honors the GET, POST, PUT, DELETE convention while allowing you to create differently named actions in the same controller!

Update – Sample Project

As requested here is a sample project that I have verified works.

WebApiTest Project Here


Who is Online with SignalR – A sample project

After watching the Build 2012 demo of SignalR put on by Damian Edwards and David Fowler, I have been nearly obsessed with playing with SignalR. Personally I feel this will be one of those technologies that opens up a lot of new possibilities for web applications. I highly recommend you watch the talk I linked to above. Additionally, SignalR is now fully supported by Microsoft and has a home on asp.net at www.asp.net/signalr.

This post shares some code I was playing around with that I might integrate into my web projects at some point. I have found it really useful to see who is currently online, along with a little information about what page they are on, their browser, etc. A live chat application I have used in the past did this nicely (www.providesupport.com). As an eCommerce developer it is interesting to see how people progress through checkout, or to visually see how long they stay on your site. SignalR makes something like this painfully easy to implement.

There are many, many chat program examples on the net, so I’m going to build off of the chat example on their site. The information I want to see in real time: the user’s session ID (a fictitious random number in this example), the user agent, their IP, and the current page. There are two pieces: the user side, which is essentially the SignalR chat sample, and an admin side which shows who is online. This is just some sample code you can modify however you see fit.

Here is the chat sample with a slight modification: I am passing some information from the browser directly to the Join method. It is worth noting that some of this information is available in the HTTP headers, but I found it wasn’t reliably sent when testing.

<script type="text/javascript" src="~/Scripts/jquery.cookie.js"></script>
<script type="text/javascript" src="/Scripts/jquery.signalR-1.0.0-alpha2.min.js"></script>
<!--  If this is an MVC project then use the following --> 
<!--  <script src="~/signalr/hubs" type="text/javascript"></script> -->
<script type="text/javascript" src="~/signalr/hubs"></script>
<script type="text/javascript">
        $(function() {
            // Proxy created on the fly          
            var chat = $.connection.chat;
 
            // Declare a function on the chat hub so the server can invoke it          
            chat.client.addMessage = function(message) {
                $('#messages').append('<li>' + message + '</li>');
            };
 
            $("#broadcast").click(function() {
                // Call the chat method on the server
                chat.server.send($('#msg').val());
            });
 
            // Start the connection
            $.connection.hub.start().done(function() {
                var sid = $.cookie('sid');
                chat.server.join({ Sid: sid, UserAgent: navigator.userAgent, Referer: document.referrer, CurrentPage: document.URL });
            });
        });
 
</script>

On the admin side the code is slightly different. We don’t really care to see chat messages (we could); instead we want to see who is online. Once the page loads and the connection is established, the server sends the list of users to the page. If a new user joins or disconnects, showConnected is called, which updates the list.

    <script src="~/Scripts/jquery.cookie.js"></script>
    <script src="/Scripts/jquery.signalR-1.0.0-alpha2.min.js" type="text/javascript"></script>
    <script src="~/signalr/hubs" type="text/javascript"></script>
    <script type="text/javascript">
        $(function() {
            // Proxy created on the fly          
            var chat = $.connection.chat;
 
            // Declare a function on the chat hub so the server can invoke it          
            chat.client.showConnected = function (message) {
                $('#messages').empty();
                $.each(message, function(index,value) {
                    $('#messages').append('<li>' + value.Sid + ' - ' + value.CurrentPage + ' - ' + value.UserAgent +' - '+ value.Connected + '</li>');
                });
            };
 
            // Start the connection
            $.connection.hub.start().done(function() {
                var sid = $.cookie('sid');
                chat.server.adminJoin().done(function() {
                    chat.server.getUsers();
                });
            });
        });
    </script>

Finally, here is the code for the hub, which orchestrates everything. The UserList holds the current list of users in memory; this could easily be a database if you wanted. When an admin joins they are added to the "admins" group. The groups feature is part of SignalR and makes it really easy to keep groups of users separated. The UserList can be whatever you want; it could hold more or less information about your connected users.

using System;
using System.Diagnostics;
using System.Globalization;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Hubs;
using Microsoft.AspNet.SignalR;
using System.Collections.Concurrent;
 
namespace SignalRHubs.Hubs
{
    public class Chat : Hub
    {
        public static ConcurrentDictionary<string, UserData> UserList = new ConcurrentDictionary<string, UserData>();
 
        public override Task OnDisconnected()
        {
            UserData Value;
            UserList.TryRemove(Context.ConnectionId,out Value);
 
            return Clients.Group("admins").showConnected(UserList); 
        }
 
        public void Send(string message)
        {
            // Call the addMessage method on all clients            
            Clients.All.addMessage(message);
        }
 
        public void AdminJoin()
        {
            Groups.Add(Context.ConnectionId, "admins");
        }
 
        public void GetUsers()
        {
            Clients.Group("admins").showConnected(UserList); 
        }
 
        public void Join(UserData data)
        {
            data.Connected = DateTime.Now.ToString("f");
            data.Ip = Context.ServerVariables["REMOTE_ADDR"];
            UserList.TryAdd(Context.ConnectionId, data);
            Clients.Group("admins").showConnected(UserList);
        }
    }
 
    public class UserData
    {
        public string UserAgent { get; set; }
        public int Sid { get; set; }
        public string Connected { get; set; }
        public string Ip { get; set; }
        public string Refer { get; set; }
        public string CurrentPage { get; set; }
 
    }
}
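One thing the snippets above assume is that the SignalR hub route has been registered at startup. With the 1.0-era packages used here that is a single call in Global.asax; a minimal sketch (not part of the original listing) looks like this:

    using System.Web.Routing;
    using Microsoft.AspNet.SignalR;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Maps the /signalr/hubs endpoint the client-side scripts reference.
            RouteTable.Routes.MapHubs();
        }
    }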

Here is the final result. When new users connect they are shown in /Home/Admin.

Here is the source code for the project. Note you will need to run the nuget command below to restore all of the packages.

nuget install packages.config

SignalRHubs


Knockout.js and Select Options Binding Pre Selection

With the move to MVC4 I have taken an interest in Knockout.js. Microsoft saw fit to include it with MVC4, so it was worth taking a look at. To my surprise, it will undoubtedly save me some time and effort on complex AJAX UIs. I have already hit a major stumbling block with it, though: when you bind an object to a select, Knockout does not pre-select the option you would expect. For example:

var Font = {FontID: 1, Alias: "Arial"}
 
var FontList = [/*An array of the structure above*/];
<select data-bind="options: $root.fontList, optionsText: 'Alias', value: Font"></select>

What you would expect is that, no matter what index the Font object is at, the proper item in the select menu would be selected when you load the page. This is not the case. JavaScript does not consider two objects equal unless they are references to the same object, which is why the menus don’t pre-select the way you expect. Personally I think Knockout.js should work the way people expect: since it is able to store objects as the values of a select, it should have a mechanism to let you pre-select one based on a predetermined key.

Workaround 1

Faced with this problem, one of the easiest ways around it is to change the way you bind to the select. With the binding below you will properly pre-select the value you want. However, the issue is that when you change the selection you essentially corrupt your data: with a Font object, the key would properly update, but the “Alias” value would not. Obviously this solution is not ideal.

<select data-bind="options: $root.fontList, optionsText: 'Alias', optionsValue: 'FontID', value: Font.FontID"></select>

Workaround 2

In my search for a solution, many people suggested holding the selected value in a separate observable. You pre-populate that observable with a reference to an object in the FontList array. This would work for a lot of people, but not for me. When I fetch my data from the server it is already nicely formatted and held in a structured object. If I have to break that structure for every drop-down, then the usefulness of Knockout starts to come into question.

Final Solution

After toying with the idea of simply editing the Knockout source code to make it work the way I expected, I found that a custom binding handler would solve my issue. Here it is below, along with the usage.

ko.bindingHandlers.preSelect = {       
    update: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
        var val = ko.utils.unwrapObservable(valueAccessor());
        var newOptions = element.getElementsByTagName("option");
        var updateRequired = false;
        for (var i = 0, j = newOptions.length; i < j; i++) {
            if (ko.utils.unwrapObservable(val.value) == ko.selectExtensions.readValue(newOptions[i])[val.key]) {
                if (!newOptions[i].selected)
                {
                    ko.utils.setOptionNodeSelectionState(newOptions[i], true);//only sets the selectedindex, object still holds index 0 as selected
                    updateRequired = true;
                }
            }
        }
        if (updateRequired)
        {
            var options = allBindingsAccessor().options;
            var selected = ko.utils.arrayFirst(options, function (item) {
                return ko.utils.unwrapObservable(val.value) == item[val.key];
            });
            if (ko.isObservable(bindingContext.$data[val.propertyName])) {
                bindingContext.$data[val.propertyName](selected); // here we write the correct object back into the $data
            } else {
                bindingContext.$data[val.propertyName] = selected; // here we write the correct object back into the $data
            }
        }
    }
};
<select data-bind="options: $root.fontList, optionsText: 'Alias', value: Font, preSelect: {key : 'FontID', propertyName : 'Font', value : Font.FontID}"></select>

In my opinion this is not an ideal solution because of all the data I have to pass in the preSelect argument. In my particular situation, because of the way my objects are structured, I had to know all three parameters. Your situation may be different, so adjust the code accordingly. I found that even though you set the selected index using Knockout’s method, it does not update the referencing object, so the last part of the handler writes the correct object back. Overall this solution solves my issue and I haven’t found any pitfalls yet, but if you do, let me know!


Multi Tenant Architecture with Asp.net MVC 4

Multi Tenant MVC4

I’ve been faced with a daunting challenge the last few months: how to effectively create a multi-tenant architecture utilizing ASP.NET MVC 4. An architecture like this can work several different ways depending on what you want to do. My particular project has a few goals.

First, the application should support tenants as “sites”. The entire project is hosted on one IIS configuration with one application pool. I’m not really going to get into the benefits of multi-tenant architectures, but one inherent benefit is that any number of sites can run on one code base. That is the approach I want to take.

Second, I’m basically converting an existing project to be multi-tenant. My approach introduces a Tenant table in the database which exposes a few basic properties about each tenant, including a TenantID. This TenantID needs to be introduced into any table you want to be able to segregate by tenant. You’ll find at least three schools of thought on how to segregate tenant data in a database: you can create separate tables for each tenant, making it easy to back up and restore only one tenant; you can even create separate databases; my method is keeping all the data together and segregating by TenantID.
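To make that concrete, here is a minimal sketch of what the tenant entity could look like. TenantID and FolderName are the properties the rest of this post relies on; everything else is a hypothetical example, so adjust it to your own schema.

    public class tenant
    {
        public int TenantID { get; set; }
        public string Name { get; set; }        // display name, e.g. "Site 1" (hypothetical)
        public string FolderName { get; set; }  // "site1", "site2", ... used for views, content, and host matching
    }

    // Any table that should be segregated per tenant simply carries the TenantID, e.g.:
    public class order
    {
        public int OrderID { get; set; }
        public int TenantID { get; set; }       // which tenant this row belongs to
        // ... the rest of the order columns
    }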

Third, I need a custom view engine to facilitate the view organization I desire. The structure I would like to end up with is Views -> Tenant Name -> views, along with a Views -> Global folder which is used if the view is not found in the tenant’s folder. This allows me to share similar views, such as my project’s eCommerce shopping cart, checkout, and payment code. The image below illustrates what I’m talking about. More on the view engine in a bit.

Handling Static Files

You can also see I’ve restructured the /Content folder, creating Global folders for images and styles, then breaking out another Tenants folder for tenant-specific resources. One issue you will run into quite quickly is how to deal with files such as robots.txt or favicon.ico. These files (among many others) are common on most any site and must have a copy for each tenant. My solution is to utilize the IIS URL Rewrite module, storing the rewrite rules directly in my web.config. An example below routes the favicon for site 1 to the proper folder. This isn’t ideal in my opinion since the web.config can get large quickly, but it does work quite well.
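Here is a minimal sketch of such a rule, assuming the IIS URL Rewrite module is installed and that site1’s favicon lives under /Content/Tenants/site1; adapt the host pattern and paths to your own layout.

    <system.webServer>
      <rewrite>
        <rules>
          <!-- Sketch: serve site1's favicon from its tenant content folder (paths are assumptions) -->
          <rule name="Site1Favicon" stopProcessing="true">
            <match url="^favicon\.ico$" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="site1" />
            </conditions>
            <action type="Rewrite" url="/Content/Tenants/site1/favicon.ico" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>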

 

 

Storing the Tenant List in Memory

As you will soon see, we need to know the list of tenants on every request to determine which tenant is requesting the page. To do this I load the current list of tenants in Application_Start(). This is a simple FetchAll() into a list using Entity Framework 5. I’m not an expert on thread safety, but once this data is loaded it is only read from that point on.

    public class MvcApplication : System.Web.HttpApplication
    {
        public static List<tenant> Tenants;
 
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
 
            WebApiConfig.Register(GlobalConfiguration.Configuration);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
 
            System.Web.Mvc.ViewEngines.Engines.Clear();
            System.Web.Mvc.ViewEngines.Engines.Add(new MulitTenantRazorViewEngine());
 
            Tenants = tenant.FetchAll();
        }
    }

Determining the Tenant at the Controller

It is useful for several reasons to always know which tenant you are dealing with at the controller level. For this reason I created my own MultiTenantController that inherits from Controller, and all my controllers now inherit from MultiTenantController. I can do a few useful things here, including intercepting the MasterName property of the ViewResult and setting it manually if needed. More importantly, this is where I determine the tenant based on the domain name.

    public class MultiTenantController : Controller
    {
        public tenant CurrentTenant;
 
        protected override void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var viewResult = filterContext.Result as ViewResult;
            if (viewResult != null)
            {
                viewResult.MasterName = "_Layout";
            }
 
            Debug.Assert(filterContext.HttpContext.Request.Url != null, "filterContext.HttpContext.Request.Url != null");
            CurrentTenant = GetCurrentTenant(filterContext.HttpContext.Request.Url.Host.ToLower());
        }
 
        internal static tenant GetCurrentTenant(string host)
        {
            if (host == null)
            {
                host = "";
            }
            var Tenant = MvcApplication.Tenants.Where(p => //This Tenants list is loaded in memory on Application_Start()
            {
                var match = p.FolderName + "."; //p.FolderName holds "site1" or "site2" etc...
                return host.StartsWith(match); //is it http://site1.com?
            }).FirstOrDefault();
            if (Tenant == null)
            {
                Tenant = MvcApplication.Tenants.Where(p =>
                {
                    var match = p.FolderName + ".";
                    return host.Contains("." + match); //is it http://www.site1.com?
                }).FirstOrDefault();
            }
 
            return Tenant ?? MvcApplication.Tenants[0];
        }
    }

At this point each controller has access to CurrentTenant, which holds the tenant that is requesting the view. This is really useful because now we can create views per tenant, or swap key information on the page based on which tenant is looking at it. Finally, the current tenant can be passed further down the application to create whatever behavior you need.
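As a sketch of that last point, an action on a controller deriving from MultiTenantController can scope its queries by TenantID. Note that CurrentTenant is populated in OnResultExecuting (i.e. after the action body runs), so inside the action this example resolves the tenant with the same static helper shown above; OrdersContext and its Orders set are hypothetical stand-ins for your own data access.

    public class OrdersController : MultiTenantController
    {
        public ActionResult Index()
        {
            // Resolve the tenant for this request the same way the base controller does.
            var tenant = GetCurrentTenant(Request.Url.Host.ToLower());

            // Hypothetical EF context/table; the point is simply filtering by TenantID.
            using (var db = new OrdersContext())
            {
                var orders = db.Orders
                    .Where(o => o.TenantID == tenant.TenantID)
                    .ToList();
                return View(orders);
            }
        }
    }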

Custom View Engine

This might not fit your needs exactly, but it is a pretty generic approach to handling views. As discussed above, I want a Global folder that holds views shared across all tenants, and I also want to be able to specify views for specific tenants since each site can have different features.

Here is roughly what I based my approach on:
http://weblogs.asp.net/imranbaloch/archive/2011/06/27/view-engine-with-dynamic-view-location.aspx

This view engine is pretty straightforward. I’m using Razor in my project, so it extends RazorViewEngine. The key part is that I pull controllerContext.Controller out of the controller context and cast it to my MultiTenantController. Once I do that I can access the CurrentTenant variable we just talked about, and the %1 placeholder is simply replaced with the current tenant’s folder name.

    public class MulitTenantRazorViewEngine : RazorViewEngine
    {
        public MulitTenantRazorViewEngine()
        {
            AreaViewLocationFormats = new[] {
            "~/Areas/{2}/Views/{1}/{0}.cshtml",
            "~/Areas/{2}/Views/{1}/{0}.vbhtml",
            "~/Areas/{2}/Views/Shared/{0}.cshtml",
            "~/Areas/{2}/Views/Shared/{0}.vbhtml"
            };
 
            AreaMasterLocationFormats = new[] {
            "~/Areas/{2}/Views/{1}/{0}.cshtml",
            "~/Areas/{2}/Views/{1}/{0}.vbhtml",
            "~/Areas/{2}/Views/Shared/{0}.cshtml",
            "~/Areas/{2}/Views/Shared/{0}.vbhtml"
            };
 
            AreaPartialViewLocationFormats = new[] {
            "~/Areas/{2}/Views/{1}/{0}.cshtml",
            "~/Areas/{2}/Views/{1}/{0}.vbhtml",
            "~/Areas/{2}/Views/Shared/{0}.cshtml",
            "~/Areas/{2}/Views/Shared/{0}.vbhtml"
            };
 
            ViewLocationFormats = new[] {
            "~/Views/%1/{1}/{0}.cshtml",
            "~/Views/%1/{1}/{0}.vbhtml",
            "~/Views/%1/Shared/{0}.cshtml",
            "~/Views/%1/Shared/{0}.vbhtml",
            "~/Views/Global/{1}/{0}.cshtml",
            "~/Views/Global/{1}/{0}.vbhtml",
            "~/Views/Global/Shared/{0}.cshtml",
            "~/Views/Global/Shared/{0}.vbhtml"
            };
 
            MasterLocationFormats = new[] {
            "~/Views/%1/{1}/{0}.cshtml",
            "~/Views/%1/{1}/{0}.vbhtml",
            "~/Views/%1/Shared/{0}.cshtml",
            "~/Views/%1/Shared/{0}.vbhtml",
            "~/Views/Global/{1}/{0}.cshtml",
            "~/Views/Global/{1}/{0}.vbhtml",
            "~/Views/Global/Shared/{0}.cshtml",
            "~/Views/Global/Shared/{0}.vbhtml"
            };
 
            PartialViewLocationFormats = new[] {
            "~/Views/%1/{1}/{0}.cshtml",
            "~/Views/%1/{1}/{0}.vbhtml",
            "~/Views/%1/Shared/{0}.cshtml",
            "~/Views/%1/Shared/{0}.vbhtml",
            "~/Views/Global/{1}/{0}.cshtml",
            "~/Views/Global/{1}/{0}.vbhtml",
            "~/Views/Global/Shared/{0}.cshtml",
            "~/Views/Global/Shared/{0}.vbhtml"
            };
        }
 
        protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
        {
            var PassedController = controllerContext.Controller as MultiTenantController;
            Debug.Assert(PassedController != null, "PassedController != null");
            return base.CreatePartialView(controllerContext, partialPath.Replace("%1", PassedController.CurrentTenant.FolderName));
        }
 
        protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
        {
            var PassedController = controllerContext.Controller as MultiTenantController;
            Debug.Assert(PassedController != null, "PassedController != null");
            return base.CreateView(controllerContext, viewPath.Replace("%1", PassedController.CurrentTenant.FolderName), masterPath.Replace("%1", PassedController.CurrentTenant.FolderName));
        }
 
        protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
        {
            var PassedController = controllerContext.Controller as MultiTenantController;
            Debug.Assert(PassedController != null, "PassedController != null");
            return base.FileExists(controllerContext, virtualPath.Replace("%1", PassedController.CurrentTenant.FolderName));
        }
    }

The Final Result

At this point you can have any number of tenants hosted under one code base. On my particular project this allows me to maintain much less code, while any enhancements I make to, say, checkout pages or other shared features are echoed across tenants immediately. I also have the ability to create totally different views and experiences for each tenant while still sharing key parts of my application. From an eCommerce standpoint, my orders now funnel into one administration area, making them much simpler to manage.

Further Considerations

Here are some other things I will be considering during this project.

SSL

For an eCommerce site it is important to have a means of securing pages with SSL. Since the site is hosted under one application in IIS, it is not possible to use separate SSL certificates for each domain name. My strategy involves a wildcard SSL certificate for your “main” site. This can be difficult if your tenants have no relation at all, but mine do. Because of the way I route static files and test the domain name, my system already works for subdomain-based SSL. If you need SSL on site2 you would simply use https://site2.site1.com, assuming site1 is the “main” site you want to create the certificate for. The routes and the domain test only read the information before the first period, so site2 is flagged as the CurrentTenant.

Modular Design

This is more of a design consideration. When building features for multi-tenant sites it is wise to make them modular; visualize something you can turn on and off arbitrarily for each tenant. By doing so you create features that can easily be shared among current or future tenants.

Update – A Sample Project

Ok, I get it: everybody wants a sample project to try this. One of the reasons I hadn’t posted one yet is that this configuration isn’t something you can just open and run. You’re going to have to configure some custom settings to make the demo work, but here it is!

Keep in mind there are various ways to route static files such as robots.txt and favicon.ico. The demo shows a pretty bare-bones way of doing it; it could easily be tweaked to be a bit more maintainable, for example by consolidating the rewriting for each file into a single rule per file.

Set Up the Hosts File

You need to add two entries to your local DNS server or the HOSTS file in Windows. If you don’t know how, read this.

127.0.0.1 site1.com
127.0.0.1 site2.com

Setting Up the Sample Project

Download the sample project here

  1. You’ll need to run Visual Studio as administrator. Right-click Visual Studio > Run as Administrator.
  2. Extract and open the project. You might get an error about IIS.
  3. We need to set up an IIS site to run the project; we will not be using the built-in web server. Create a new site in IIS, map it to the folder of your project, and add two bindings as shown below.
  4. Head back into Visual Studio, go to Project > MvcMultiTenant Properties, and set it up as shown below.
  5. The previous step requires administrative access, which is why you need to start Visual Studio with admin privileges.
  6. At this point you should be able to build the project and start debugging.

Running the Project

Browse to site1.com and site2.com and you’ll notice the page changes style sheets and views. You can examine the Views folder to see how this works, and the behavior can be extended to your entire site. You’ll also notice that http://site1.com/Home/Contact uses the view from the Global folder, which is shared by both sites. If you browse to http://site1.com/robots.txt and http://site2.com/robots.txt you’ll notice they serve different files, as expected.



Web Api Generic MediaTypeFormatter for File Upload

I’m currently working on a personal project which uses ASP.NET Web API and .NET 4.5. I have found several nice examples utilizing the MultipartFormDataStreamProvider, but I quickly discovered that using it in a controller was going to multiply boilerplate code. It would be much more efficient to use a custom MediaTypeFormatter to handle the form data and pass back the information I need, with the file contents in a memory buffer that can be saved directly.

Jflood.net Original Code

Jflood.net provides a nice starting point for what I needed to do. However, I had a few more requirements, including a JSON payload in the data field. I also extended his ImageMedia class into a generic FileUpload class and exposed a few simple methods.

        public HttpResponseMessage Post(FileUpload<font> upload)
        {
            var FilePath = "Path";
            upload.Save(FilePath); //save the buffer
            upload.Value.Insert(); //save font object to DB
 
            return Request.CreateResponse(HttpStatusCode.OK, upload.Value);
        }

This is the file upload class. Like Jflood’s, it holds the file payload in memory. I have added a simple Save method which performs a few checks and writes the file to disk. I also have some project-specific code that checks whether the T type has a property named FileName and, if so, passes the name to it. Since my Value field is of type T, it is automatically deserialized by Json.NET.

    public class FileUpload<T>
    {
        private readonly string _RawValue;
 
        public T Value { get; set; }
        public string FileName { get; set; }
        public string MediaType { get; set; }
        public byte[] Buffer { get; set; }
 
        public FileUpload(byte[] buffer, string mediaType, string fileName, string value)
        {
            Buffer = buffer;
            MediaType = mediaType;
            FileName = fileName.Replace("\"","");
            _RawValue = value;
 
            Value = JsonConvert.DeserializeObject<T>(_RawValue);
        }
 
        public void Save(string path)
        {
            if (!Directory.Exists(path))
            {
                Directory.CreateDirectory(path);
            }
            var NewPath = Path.Combine(path, FileName);
            if (File.Exists(NewPath))
            {
                File.Delete(NewPath);
            }
 
            File.WriteAllBytes(NewPath, Buffer);
 
            var Property = Value.GetType().GetProperty("FileName");
            Property.SetValue(Value,FileName, null);
        }
    }

This is essentially the same thing jflood is doing; however, I have added a section to parse out my “data” field, which contains JSON.

    public class FileMediaFormatter<T> : MediaTypeFormatter
    {
 
        public FileMediaFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/octet-stream"));
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("multipart/form-data"));
        }
 
        public override bool CanReadType(Type type)
        {
            return type == typeof(FileUpload<T>);
        }
 
        public override bool CanWriteType(Type type)
        {
            return false;
        }
 
        public async override Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
        {
 
            if (!content.IsMimeMultipartContent())
            {
                throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
            }
 
            var Parts = await content.ReadAsMultipartAsync();
            var FileContent = Parts.Contents.First(x =>
                SupportedMediaTypes.Contains(x.Headers.ContentType));
 
            var DataString = "";
            foreach (var Part in Parts.Contents.Where(x => x.Headers.ContentDisposition.DispositionType == "form-data" 
                                                        && x.Headers.ContentDisposition.Name == "\"data\""))
            {
                var Data = await Part.ReadAsStringAsync();
                DataString = Data;
            }
 
            string FileName = FileContent.Headers.ContentDisposition.FileName;
            string MediaType = FileContent.Headers.ContentType.MediaType;
 
            using (var Imgstream = await FileContent.ReadAsStreamAsync())
            {
                byte[] Imagebuffer = ReadFully(Imgstream);
                return new FileUpload<T>(Imagebuffer, MediaType,FileName ,DataString);
            }
        }
 
        private byte[] ReadFully(Stream input)
        {
            var Buffer = new byte[16 * 1024];
            using (var Ms = new MemoryStream())
            {
                int Read;
                while ((Read = input.Read(Buffer, 0, Buffer.Length)) > 0)
                {
                    Ms.Write(Buffer, 0, Read);
                }
                return Ms.ToArray();
            }
        }
 
 
    }

Finally, in your Application_Start you must include the line below.

GlobalConfiguration.Configuration.Formatters.Add(new FileMediaFormatter<font>());

Now it is really simple to accept uploads from other controllers. Be sure to tweak the MIME types for your own needs; right now this only accepts application/octet-stream, but it can easily accept other formats by adding more MIME types.
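For example, here is a minimal sketch (not from the original post) of allowing image uploads by registering additional MIME types in the FileMediaFormatter constructor alongside the existing ones:

        public FileMediaFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/octet-stream"));
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("multipart/form-data"));

            // Hypothetical additions so image parts are picked up as the file content:
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("image/png"));
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("image/jpeg"));
        }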

Update – How to add model validation support.

Quick update for the FileUpload class. I’ve seen a few posts on Stack Overflow asking how to not only deserialize your object using a method such as mine, but also keep the data annotation validation rules. That turns out to be pretty easy to do: combine reflection with Validator.TryValidateProperty() and you can validate properties on demand. My example shows how you can collect the validation messages; it simply puts them into a list. Here is the sample below.

    public class FileUpload<T>
    {
        private readonly string _RawValue;
 
        public T Value { get; set; }
        public string FileName { get; set; }
        public string MediaType { get; set; }
        public byte[] Buffer { get; set; }
 
        public List<ValidationResult> ValidationResults = new List<ValidationResult>(); 
 
        public FileUpload(byte[] buffer, string mediaType, string fileName, string value)
        {
            Buffer = buffer;
            MediaType = mediaType;
            FileName = fileName.Replace("\"","");
            _RawValue = value;
 
            Value = JsonConvert.DeserializeObject<T>(_RawValue);
 
            foreach (PropertyInfo Property in Value.GetType().GetProperties())
            {
                var Results = new List<ValidationResult>();
                Validator.TryValidateProperty(Property.GetValue(Value),
                                              new ValidationContext(Value) {MemberName = Property.Name}, Results);
                ValidationResults.AddRange(Results);
            }
        }
 
        public void Save(string path, int userId)
        {
            if (!Directory.Exists(path))
            {
                Directory.CreateDirectory(path);
            }
            var SafeFileName = Md5Hash.GetSaltedFileName(userId,FileName);
            var NewPath = Path.Combine(path, SafeFileName);
            if (File.Exists(NewPath))
            {
                File.Delete(NewPath);
            }
 
            File.WriteAllBytes(NewPath, Buffer);
 
            var Property = Value.GetType().GetProperty("FileName");
            Property.SetValue(Value, SafeFileName, null);
        }
    }
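To close the loop, here is a hedged sketch of how a controller action might check ValidationResults before saving; returning the raw error messages is just an example, not something from the original post.

        public HttpResponseMessage Post(FileUpload<font> upload)
        {
            // If any data annotation on the deserialized Value failed, report the messages.
            if (upload.ValidationResults.Count > 0)
            {
                var errors = upload.ValidationResults.Select(v => v.ErrorMessage);
                return Request.CreateResponse(HttpStatusCode.BadRequest, errors);
            }

            upload.Save("Path", userId: 1); // hypothetical path and user id
            return Request.CreateResponse(HttpStatusCode.OK, upload.Value);
        }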