Envelope Distortion using C# and GraphicsPath Class

I asked this question on StackOverflow nearly two years ago: I was looking for an algorithm to distort an arbitrary set of polygons (text, for example) into a predetermined shape.  If you are familiar with Adobe Illustrator, this feature is called Envelope Distortion.   Surprisingly my SO post got little response, and I ultimately followed up with a limited workaround rather than a solution. I really wanted to incorporate this feature into my upcoming API, so I set out to figure it out myself.  Since there is not a lot of information about this on the net (that I can find), I decided to share a portion of my work. Step one was printing out some grid paper and figuring out exactly how this process should work; in this case writing code wasn’t going to get anywhere until the underlying process was solved.  Essentially the algorithm should accept any X, Y inside the bounds of the source polygon and return a transformed X, Y that fits inside the resulting distorted polygon.

Implementing Bulge Distortion

Bulge distortion creates a bulging effect on a path by stretching it up and down.  This is useful since several other types of distortions can be easily achieved by making slight modifications to the generated distortion path.


Shown above is some arbitrary text loaded into a GDI+ GraphicsPath object.  Also drawn around the text is the distortion path that we will ultimately warp the text into.  The code accepts a float called Intensity, which can be positive or negative.  The image above shows a positive intensity of 1; with a negative intensity the distortion path would look like the image below.


In order to debug the process of writing the bulge distortion, my first task was to view a grid of each point remapped into its correct location.  Once that works, it should be relatively easy to feed in a flattened polygon (no Bézier curves, just points) and see the resulting transformation.


How it works

For example, where would X=170, Y=10 translate to given the above bulge distortion with Intensity = 1?  Referring to the image below, first we calculate the relative position of Y in the bounding box.  We then scale that position linearly to the relative height of the distortion path.  The difficult part is that when we ask for an X that is not a vertex of the flattened distortion path, we have to calculate the location of that point on the polygon ourselves.  This seems quite easy but in fact was not.  My solution was to utilize the outstanding polygon clipping implementation called Clipper.
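The Y remap itself is just linear interpolation. Here is that scaling step sketched as a standalone JavaScript function, for illustration only (the function name and sample numbers are mine, not from the project):

```javascript
// Remap Y: find its relative position inside the source bounding box,
// then scale that position into the span between the distortion path's
// upper and lower bound at this X.
function distortY(sourceTop, sourceHeight, upperBoundY, lowerBoundY, y) {
  const relative = (y - sourceTop) / sourceHeight; // 0..1 inside the box
  return upperBoundY + relative * Math.abs(upperBoundY - lowerBoundY);
}

// A point at the very top of a 100px-tall box lands on the upper bound:
distortY(0, 100, 20, 120, 0);   // → 20
// ...and a point at the very bottom lands on the lower bound:
distortY(0, 100, 20, 120, 100); // → 120
```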


In order to get the upper and lower bound of the distortion path for a given source X, you essentially subtract the shaded polygon from the distortion path. From the resulting polygon you read the upper-left and lower-left points to calculate the height range. The two methods below implement this:

        public PointF Distort(GraphicsPath source, PointF point)
        {
            if (_distortionPath == null)
                return point; // no distortion path has been built yet

            var ScaledPoint = point;
            PointF UpperBoundPoint;
            PointF LowerBoundPoint;
            GetBoundedPoints(out UpperBoundPoint, out LowerBoundPoint, point);

            // scale Y's relative position in the source bounds into the span
            // between the distortion path's upper and lower bound at this X
            var Y = UpperBoundPoint.Y + (((ScaledPoint.Y - _sourceBounds.Top) / _sourceBounds.Height) *
                    Math.Abs(UpperBoundPoint.Y - LowerBoundPoint.Y));
            return new PointF(ScaledPoint.X, Y);
        }

        private void GetBoundedPoints(out PointF upperBoundPoint, out PointF lowerBoundPoint, PointF source)
        {
            // the bounds for a given X never change for this distortion,
            // so return the cached pair if we already calculated it
            if (_boundCache.ContainsKey(source.X))
            {
                upperBoundPoint = _boundCache[source.X][0];
                lowerBoundPoint = _boundCache[source.X][1];
                return;
            }

            var Path = new GraphicsPath();
            var UpperX = source.X * (_sourceBounds.Width / (_upperRight.X - _upperLeft.X));
            var LowerX = source.X * (_sourceBounds.Width / (_lowerRight.X - _lowerLeft.X));

            // the shaded polygon that will be subtracted from the distortion path
            Path.AddPolygon(new PointF[]
            {
                new PointF(_distortionBounds.Left, _distortionBounds.Bottom),
                new PointF(_distortionBounds.Left, _distortionBounds.Top),
                new PointF(UpperX, _distortionBounds.Top),
                new PointF(LowerX, _distortionBounds.Bottom),
            });

            var ClippingPath = ClipperUtility.ConvertToClipperPolygons(Path);
            var ClippedPath = ClipperUtility.Clip(ClippingPath, _distortionPoints);

            if (Math.Abs(source.X - _sourceBounds.Left) < .1 || Math.Abs(source.X - _sourceBounds.Right) < .1)
            {
                // edge case: X sits on the left or right edge of the source bounds
                upperBoundPoint = new PointF(_sourceBounds.Left, _sourceBounds.Top);
                lowerBoundPoint = new PointF(_sourceBounds.Left, _sourceBounds.Bottom);
            }
            else
            {
                var Points = ClippedPath.PathPoints;
                var QuickBounded = Points.Where(p => Math.Abs(p.X - LowerX) < .01);
                if (QuickBounded.Any())
                {
                    upperBoundPoint = QuickBounded.OrderBy(p => p.Y).First();
                    lowerBoundPoint = QuickBounded.OrderByDescending(p => p.Y).First();
                    _boundCache.Add(source.X, new PointF[] { upperBoundPoint, lowerBoundPoint });
                }
                else
                {
                    // fall back to the two right-most points of the clipped polygon
                    var RightMostPoints = Points.OrderByDescending(p => p.X).Take(2).ToList();
                    upperBoundPoint = RightMostPoints.OrderBy(p => p.Y).First();
                    lowerBoundPoint = RightMostPoints.OrderByDescending(p => p.Y).First();
                }
            }
        }

The GetBoundedPoints method generates the shaded polygon and calls ClipperUtility.Clip(ClippingPath, _distortionPoints), which actually clips and generates the resulting polygon.  The final if/else handles the edge case of X=0 or X=Width; otherwise it looks through the points for those closest to the edge.  You’ll probably notice the _boundCache: once the upper and lower points are calculated for the bulge distortion, they are the same no matter what Y is, so we cache them to save work.  My particular complaint with this code in its current state is the floating point precision comparison.  You’ll notice a particularly sloppy Math.Abs(source.X - _sourceBounds.Left) < .1. Yikes!  While this code performs perfectly with the test data I’ve tried so far, I believe that will be the first point of failure.  My thought is that there is a precision loss/gain during the polygon clipping procedure that I need to investigate more.  At any rate, the resulting transformed text will look like this.
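One way to harden that comparison would be a relative-epsilon check instead of a fixed .1 tolerance. Here is a rough sketch of the idea, in standalone JavaScript for brevity (the helper name and tolerances are mine, not from the project):

```javascript
// Compare two floats with a tolerance that scales with their magnitude,
// so the check still behaves whether coordinates are around 10 or 10,000.
function nearlyEqual(a, b, relTol = 1e-5, absTol = 1e-8) {
  const tolerance = Math.max(absTol, relTol * Math.max(Math.abs(a), Math.abs(b)));
  return Math.abs(a - b) <= tolerance;
}

nearlyEqual(100.00001, 100.000011); // true: within relative tolerance
nearlyEqual(100, 101);              // false
```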


Not bad for a first attempt; in fact everything looks great except the L and the right e.  Why is that?  Actually the algorithm is working perfectly; the issue comes from the process of flattening the path.  The L only has two lower points: the point on the bottom left and the point on the bottom right.  Therefore the algorithm did exactly what it was supposed to and moved those two points into their respective distorted positions.  What we really need is to inject points in between to increase the precision of the operation.


This code checks each path for vertical and horizontal runs; if a span is longer than the flatness value, it injects points along the run to increase the precision.

        private void InjectPrecisionPoints(GraphicsPath gp)
        {
            var InsertDictionary = new Dictionary<int, PointF[]>();

            //inject points on vertical and horizontal runs to increase precision
            for (var j = 0; j < gp.PointCount; j++)
            {
                PointF CurrentPoint;
                PointF NextPoint;
                if (j != gp.PointCount - 1)
                {
                    CurrentPoint = gp.PathPoints[j];
                    NextPoint = gp.PathPoints[j + 1];
                }
                else
                {
                    // the last point connects back to the first
                    CurrentPoint = gp.PathPoints[j];
                    NextPoint = gp.PathPoints[0];
                }

                // vertical run longer than the flatness threshold
                if (Math.Abs(CurrentPoint.X - NextPoint.X) < .001 && Math.Abs(CurrentPoint.Y - NextPoint.Y) > _flatness)
                {
                    var Distance = CurrentPoint.Y - NextPoint.Y;
                    var Items = Enumerable.Range(1, Convert.ToInt32(Math.Floor(Math.Abs(Distance) / _flatness)))
                                          .Select(p => new PointF(CurrentPoint.X, Distance < 0 ? (CurrentPoint.Y + (_flatness * p)) : (CurrentPoint.Y - (_flatness * p))))
                                          .ToArray();
                    InsertDictionary.Add(j + 1, Items);
                }

                // horizontal run longer than the flatness threshold
                if (Math.Abs(CurrentPoint.Y - NextPoint.Y) < .001 && Math.Abs(CurrentPoint.X - NextPoint.X) > _flatness)
                {
                    var Distance = CurrentPoint.X - NextPoint.X;
                    var Items = Enumerable.Range(1, Convert.ToInt32(Math.Floor(Math.Abs(Distance) / _flatness)))
                                          .Select(p => new PointF(Distance < 0 ? (CurrentPoint.X + (_flatness * p)) : (CurrentPoint.X - (_flatness * p)), CurrentPoint.Y))
                                          .ToArray();
                    InsertDictionary.Add(j + 1, Items);
                }
            }

            if (InsertDictionary.Count > 0)
            {
                var PointArray = gp.PathPoints.ToList();
                // insert from the highest index down so earlier inserts don't shift later indices
                InsertDictionary.OrderByDescending(p => p.Key).ToList().ForEach(p => PointArray.InsertRange(p.Key, p.Value));
            }
        }

The resulting warp with the precision points injected


The same Bulge Distortion with an Intensity of -.2


Implementing other distortions



With relatively little change to the code you can also implement any of the above distortions just by modifying the BuildDistortion method.  The project is set up in such a way that you can create a new class that implements IDistortion and build your own distortions.


Here is the source code, available on GitHub.


Switching from Verizon to AT&T Go Phone Prepaid on my new Nexus 5

I’ve been a Verizon Wireless customer for more than 10 years; however, I’d simply had enough with their stance on Nexus devices.   I purchased my Galaxy Nexus two years ago to get the real Nexus experience.  After the release was pushed back further and further, the device was finally released, and time after time updates came at a snail’s pace from Verizon.  Not only that, they loaded the phone with Verizon bloatware and banned Google Wallet.   Hardly the pure Google experience Google intended for a Nexus device.  Worst of all, the Galaxy Nexus itself has several pitfalls, like poor battery life.

After the Galaxy Nexus debacle I wanted to choose my next phone carefully.  The Moto X was my first candidate, as it was a near pure Google experience, but I was severely underwhelmed with it from the launch.  So much so that I actually went to a store to look at it before committing to a purchase.  After holding the phone in my hand I was not impressed, and combined with the specs I felt like I would have another dud before two years had passed.  Not to mention Verizon did not have Moto Maker at this point, which is one of the bigger selling points of the phone.

Nexus 5

Shortly after I scratched the Moto X off my list, Nexus 5 rumors started to really ramp up.  It was pretty well known in the community that Verizon was not getting this phone, but I knew it would be a true pure Google phone.  Amazingly, the Nexus 5 was never actually announced; Google just placed it on their website, possibly because of the high demand for the phone.  After the official specs and initial impressions came out on launch day, I knew immediately this was the phone I wanted.

Switching to AT&T Prepaid Go Phone

The real point of this post is to chronicle my process from moving from Verizon to AT&T Go Phone.  I chose the $60/m plan with unlimited talk/text and 2 gigs of data.

Why Go Phone?

First off, my area doesn’t have any T-Mobile service whatsoever, so any T-Mobile option is off the table for me, and unless you live near a city that is probably a common situation.  If you are primarily in a T-Mobile service area, you should check out what they offer.  I chose Go Phone because I need predictable phone service and support; from my understanding, Go Phone is nearly the same service level postpaid AT&T customers get.  AIO, Straight Talk, Net10, Cricket, Virgin, MetroPCS, etc. are all MVNOs that have agreements with companies like Verizon and AT&T to lease voice and data on their towers.  Understandably, data traffic with these providers has lower priority in general.   Stipulations vary, but in general on these MVNO plans when you exceed your data usage your data rate is crippled to 2G speeds for the remainder of the period.  This really isn’t acceptable for me.  Go Phone is clear about their data policy and provides an avenue ($10/gig) to purchase more data.

Straight Talk is one of the options that really stands out.  For $45/m you can get unlimited talk/text/data.  Well, not really: if you read the fine print, data is capped at 2.5 gigs/month, and there are limits on other items as well.  Additionally, a quick Google search will reveal that Straight Talk has a huge customer service problem.  Generally it sounds like you will talk to outsourced foreign support (India), and this leads to a lot of frustration.    Interestingly, just in the last few weeks Straight Talk has added support for AT&T LTE to their site again, so you can now purchase a 4G LTE sim from Straight Talk.  I mention Straight Talk specifically because it will likely work well for a lot of people, but I need reliable support and predictable data (with a path to buy more), so I went with Go Phone.

Activating and Porting

The sim purchase process is a little bit hidden on the Go Phone site; basically, you need to go into an AT&T store to get a sim because you can’t order one on their site.  I’m not sure why this is: they are for sale on eBay and Amazon, but make sure you are getting the latest 4G LTE sim card if you buy one.  The sim cards in store appear to be free.  Once I stopped into my AT&T store, the activation process was simple.  The clerk installed a new sim card and gave me the predictable “why are you not buying a full blown plan for this awesome phone” speech.  He scanned the IMEI number on the back of my Nexus 5, and a few minutes later I had a new number and service!   I asked about porting my number; we decided it would be best if I drove home and made sure the service worked like I expected before porting from Verizon, so he gave me an extra sim in case I needed it.  I was a little confused by that, but my phone worked, so I left ready to go.

Porting was probably the most surprising process.  I called customer service and explained I’d like to port my number to my new Go Phone account.  They transferred me to the porting department.  The rep was really nice and asked for my account # with Verizon, pin, and phone #.  She called my new Nexus 5 then called Verizon to authorize the transfer.  In a span of about 15 minutes the port was complete without even rebooting my Nexus 5!  So there you have it, porting your number from Verizon to AT&T Go Phone can be done over the phone in real time!


Responsive Ecommerce and Personalized Online Product Design

I’ve just put the finishing touches on a several-month project and an experiment for me in responsive ecommerce design.  The site is http://letteringhq.com, a vinyl lettering site where you can design your own lettering online.  My goal was to design a site that worked equally well on phones and tablets as it did on desktops.  The real trick was making an online product designer that is usable on smaller devices.  The cornerstone of the site is being able to design your own lettering online, so that was the first consideration.  When I was done I wanted to be able to walk from lettering design all the way to payment from my phone and never feel the need to “switch to desktop site” or download an app.

The State of Mobile Web

How to build a mobile site is still changing; perhaps it will never stop changing.  As of mid-2013, the concept of how a mobile-enabled site should work has changed a lot since the days of Blackberry and Windows CE.  Today most people agree you should not be pimping your app when someone lands on your site.  In fact I find it annoying, and the internet agrees; so does Google.  You can now lose ranking if you redirect improperly or force your app on mobile users.  A mobile site ideally should present the same information in a better format based on the size of the screen.  This is still often poorly executed for various reasons: some of them relate to what Google is currently penalizing for, others simply to a lack of functionality, in my opinion.  Often I have found myself immediately switching to the desktop site because the mobile version doesn’t have the same functionality!

One method of creating a mobile site is to utilize a responsive design, which is one that responds to the size of the screen.  Twitter Bootstrap makes this pretty easy to accomplish without much work; in fact, Microsoft has just added Bootstrap to the new ASP.net templates for Visual Studio 2013.  I opted to go this route, as Bootstrap already plays nice with most of the other JS libraries I was using.  What this means for development is that I will have only one set of views to manage and not a separate mobile code base to maintain.

In summary the web stack I settled on:

  • Twitter bootstrap with Flat UI Pro for responsive design
  • My Custom ASP.net MVC 4 Multi Tenant Ecommerce platform
  • Entity Framework 5  + MySql Db
  • Knockout.js
  • IonApi to generate the lettering preview images

Improving Mobile Bounce Rates and Conversions

My litmus test for all of this comes back to a few metrics.  Ultimately I want users to buy whether they are on their phones or tablets, so increasing the conversion rate is the ultimate goal.  I most definitely don’t want the dreaded bounce: landing on a page and never returning.  If conversions improve on mobile, I should see better bounce rates also.  I have some significant findings that show the bounce rate to be 20-30% higher on mobile phones on similar sites.  Interestingly, tablets generally don’t exhibit this behavior.  My thought is that most desktop sites are usable on tablets in general.  It is possible to inadvertently use a jQuery plugin or some UI feature that doesn’t work well on a tablet, but in the last few years I have made it a point to check for touch usability.

On the opposite end of the spectrum, it never ceases to amaze me the number of people who will struggle through a website that was designed years ago, before mobile was a big consideration, and somehow still manage to check out using their phone.  Tablets I can understand, but some sites (even sites I’ve written) just don’t work well on phones.

Responsive Mobile Product Design



Easily my biggest challenge and biggest unknown was adapting a vinyl lettering design tool I’ve developed to the screen size.  In order to do this, the design tool must also react immediately if the screen is rotated, and it cannot use any plugins that don’t work well with touch.  I had to completely re-evaluate the complexity of my design tool and in some cases strip out features that didn’t work well on mobile browsers (such as a gradient color picker).  I’m pretty happy with how it turned out though; you can give it a try on your phone or tablet and let me know what you think.  In the end, no matter what device you use the design tool on, you aren’t missing any features or functionality.

Responsive Mobile Checkout Process

Part two of my project was creating a mobile checkout process that didn’t suck.   The first hurdle was creating a cart view that does not appear too compacted or crowded on a phone.  I settled on presenting a totally separate shopping cart view for mobile phones because it was just too much information to cram into a horizontally formatted table.  As you can see below, the Bootstrap and Flat UI Pro combination makes for a nice, contrasting cart experience on the phone.




Why no registration?

I’ve created a handful of ecommerce platforms over the years, and registration is yet another really easy thing to get wrong.  I’ll save the rant, but say this: if you are going to do registration, make it an optional step, put the registration form on the thank-you page of your site, have a rock solid password/username reset feature, and use something like Stripe or PayPal Vault to actually make the fact that they registered useful (their credit cards are stored for later use).  Other than that, you’ll find that not having registration removes a real barrier to checkout for a lot of people.

Google Wallet is doing it right

While I didn’t really like how Google Checkout worked, I do think that the new Google Wallet fits in well with the direction mobile ecommerce is going.  Similar to PayPal Express Checkout, Google Wallet will take an existing account and pass all of the information directly to the website so the customer doesn’t have to enter anything at all.  After watching the video presentation on their site, it looks really slick.  I’ll be integrating this feature at some point, as it truly would be useful for some people (I would use it).  Why doesn’t Apple have something like this?

Areas of Improvement

Rarely is anything I do considered “done”; there is always room for improvement.  As far as usability, I think adding Google Wallet once the site really gets on its feet will be a great convenience for Android users.  My only other direct complaint with the finished product is the speed and page load times, and there are many areas where I’m going to start tackling that.  More often than not your customers will tell you what you need to fix or add next.



Knockout.js – A Year Later

It was about a year ago I started to learn Knockout.js.  I primarily develop with an ASP.net web stack of technologies.   Moving into MVC 4 I noticed Microsoft had started to ship Knockout.js with the MVC 4 template.  Once I read and understood what Knockout.js was and how it could help me things really started to click.  I’ve implemented Knockout in a few complex javascript applications I maintain over the past year.  I’m going to share my experience with moving from solid jQuery to Knockout.

The old way of doing things

You see, I fell into the same hole that many developers found themselves in with jQuery.  When you develop a complex javascript application, jQuery becomes spaghetti code very quickly.   Manually mapping variables to reflect state, referencing DOM elements directly, and manually checking boxes or setting input values in a complex UI is incredibly difficult to maintain.   I know this because, unfortunately, I have a lot of code in the wild that is written like this.   It isn’t broken, and it actually works well, but it took a long time to write and it isn’t fun to come back to and modify.

The new way is better – Using Knockout

If you don’t know what Knockout is then I’d recommend you just visit their site and read about it.  In short I feel like the cornerstone of what Knockout does is the observable pattern and dependency tracking.  This allows you to create a view model, bind the view model to the DOM, then freely use knockout’s data-bind syntax declaratively directly in the DOM.   The real magic happens though when you check a checkbox or change a drop down.  The new values are bound to these elements and values are updated automatically for you.  Your UI can react to these changes.

If you are like me, coming from the mentality of micro-managing your DOM and UI state, it is very hard to let go and trust Knockout to do the work for you.  In fact, it took me a while to even wrap my head around what is going on in the background.

Knockout saved me money

I typically don’t share actual work I’ve done, but this is a business I co-founded a few years ago.  I no longer have a stake in it, but I’m still doing all the web development.  I recently revamped the entire site: http://bandegraphix.com.  More important, though, is the Knockout-based lettering tool I wrote, which is a fork of a tool I have in production on several other sites.  On the surface it is deceptively simple; however, a tool like this takes a lot of boilerplate jQuery and javascript to hook everything together.  The UI must fetch an image and get a price anytime something changes.  When the design is saved, the entire design has to be sent back to the server.  If the design needs to be edited, it must be loaded from the DB and the UI state brought back to exactly how it was left.



I’ve written tools similar to this one for other customers in pure jQuery, and I can tell you it took me about one third the time to write this tool, which has more bells and whistles than almost any other I’ve made.    Don’t get me wrong, there are some serious gotchas with Knockout that I will cover in a second, but overall I was able to write this tool and be really confident with how it worked.  Even better, the huge JS file needed to keep everything wired together simply disappeared; Knockout does it all for you.

When I previously had to write a tool like this I had to load the design with all of the selected options from the database then manually map these values into javascript, then call a method to properly change the state of the checkboxes and inputs.  With Knockout it just worked.  Yep, it literally just works. I load a view model from the database via an AJAX method and the entire UI returns to the same state it was when I left it.

Less Code & Less Duplicate Code

Because of Knockout’s declarative syntax and observable bindings, you simply write less code.  Updating a checkbox or input requires a simple statement rather than a brittle by-id jQuery DOM selector.   Knockout’s powerful template binding allows you to reuse pieces of HTML efficiently.  In the case of the vinyl tool above, I load all of the UI elements into what is essentially a toolbox of components, then use Knockout’s template binding.  This allows me to efficiently swap out and reuse components in other tabs, or create entirely new tools with very little repetitive code.

Random Thoughts and Gotchas

Debugging Knockout is Hard (my biggest gripe)

Knockout essentially runs in its own binding context at run time.  The effect of this is clear if you try to utilize Firebug or the Chrome developer tools to debug your application: $root, $parent, etc. have no meaning because you aren’t operating in the context of the binding from the console.  For this reason it is difficult to use the traditional methods of debugging javascript; breakpoints and exceptions don’t occur in any meaningful way from the console window.  Knockout will throw a semi-useful binding error, but it can sometimes be difficult to find the cause directly.  Often I will place console.log() calls wherever I can to isolate the error.

Variable or Variable()

Personally, my biggest source of Knockout issues is using variable when I mean variable().  In Knockout you should use () when reading the value of an observable in code; however, when using the declarative syntax in the DOM you don’t have to use the ().  The () is required in code because the observable is essentially a function wrapped over the variable to give it the observable behavior.
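A stripped-down illustration of why the () matters (this is a toy observable I wrote for illustration, not Knockout’s implementation): the observable is a function closing over the value, so reading it in code means calling it.

```javascript
// Toy version of an observable: a function that wraps a value.
function observable(initial) {
  let value = initial;
  function obs(newValue) {
    if (arguments.length > 0) { // called with an argument: write
      value = newValue;
      return obs;
    }
    return value;               // called with no arguments: read
  }
  return obs;
}

const text = observable("vinyl");
text();            // reading in code requires the ()
text("lettering"); // writing means calling with the new value
```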

Computed Variables and Throttle

Computed variables are, in my opinion, the most “magical” part of Knockout.  Unfortunately it is pretty easy to get bitten by a computed variable.  For example, perhaps your computed variable makes an AJAX call per UI update.  Referring again to the site example I posted above, the text box where you type your text updates on every key press.  Initially, when I wrote this and typed a sentence, my server got hammered on every key press!  It is a great idea to use the throttle feature to solve this issue: throttle will delay an update until the last keypress, for a period you specify.
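The throttling idea boils down to a debounce: postpone the expensive work until input has been quiet for a while. A minimal standalone sketch in plain JavaScript (the names here are mine, for illustration):

```javascript
// Delay calling fn until `delay` ms have passed without another call,
// so ten quick keypresses produce one request instead of ten.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// e.g. wrap the preview/price request that fires as the user types:
const requestPreview = debounce(text => {
  // issue the AJAX call here; runs only after 400 ms of quiet
}, 400);
```

In Knockout itself this is simply an extender on the observable, e.g. someObservable.extend({ throttle: 400 }).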

Ko.Mapping and dynamic view models

Perhaps my biggest regret, in terms of added complexity, is utilizing a dynamic view model.  Most Knockout examples you’ll see simply utilize a hard-coded view model.  With the site I showed off above, I took a different approach: I didn’t want to maintain both a server-side view model and a client-side view model.  My solution was a master view model to coordinate the view and handle navigation, saving, loading, etc.  I then created a single observable to hold my child view model, which is retrieved from the database.  The definition of this model remains server-side and is only defined in one place.  In theory this sounds good, but I question whether it was a good idea, mainly because of the way Knockout maps complex objects.

In order to make this work you must use the ko.mapping plugin.  When you retrieve the model from the database, it must be mapped into observables. For the most part this is handled for you; however, I found that arrays of complex objects did not map to observable variables as expected.  My advice: make everything observable from the start and you’ll have fewer problems.  I had to solve this by creating a pretty large piece of code to handle the mapping of the object, since every time ko.mapping is used it refers to the map for how the object is constructed.  In the end it might have been easier to simply write out the child view model by hand.

Use BindingHandlers instead of accessing by Id

When starting out with Knockout it can be very tempting to mix in classic jQuery $('#someid') calls.  You don’t need to do it; the proper way is to write a binding handler to do the job.  You’ll still be able to accomplish whatever you want, and in the end you’ll have a reusable piece of code.  If you properly implement the binding handler, then when your UI changes you won’t have to worry about things loading correctly.

In the case above I utilized binding handlers to do several things.  I use several jQuery plugins, but never reference the DOM element directly.  For example, I use the jQuery tabs feature and created this simple binding handler for it.

ko.bindingHandlers.tabBinding = {
    init: function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
        var options = valueAccessor() || {};
        // initialize the jQuery tabs widget on the bound element
        $(element).tabs(options);
    }
};

Moving forward

A year later a lot of other frameworks have cropped up and cemented their place.  Angular is the first one that comes to mind.  I’ll be honest I haven’t used Angular and from what I’ve read I’m not sure it would be the best fit for what I did on this project.   Mainly because Angular isn’t the same thing at all, it is a client side MVC framework, not an MVVM.  My interest was reducing boilerplate code and writing more efficiently, I believe knockout allowed me to do just that.  My app wasn’t necessarily changing views, rather it consisted of one complex view.

If you are striving for more of a single page app however, then I do believe there are better options out there for that.   If you like Knockout then the logical choice is moving to Durandal.  Durandal takes several popular libraries (knockout, sammy, and require) and hooks them all together to make a nice MVC framework.  I personally used sammy and knockout in the project above, but I did feel like what I was doing lacked a lot of structure.  Durandal will bring in a nicely designed MVC system but keep the familiar knockout style bindings.



In summary a year later with Knockout I’m really glad I invested the time to learn it and use it properly.  It has truly saved me time and money by producing less code with fewer bugs.  If you or your team is still writing spaghetti jQuery then you need to take a look at this!




Using Forms Authentication with ASP.net Web Api

I found myself recently needing to implement a very simple web api private web service into one of my sites to sync some data between servers. I’ve implemented a full blown API key authentication before with web api in past projects. It really isn’t that hard, but why add all of that extra code when my site already fully implements role based forms authentication? It is so easy to just use the authorize attribute with forms authentication.

        // GET api/sync
        [Authorize(Roles = "ApiUser")]
        public IEnumerable<int> Get(int? days)
        {
            // do some work and return some data
        }

The problem isn’t really the code above; that will work fine and happily return an “unauthorized” message when you call it. In order to use the HttpClient properly, you first need to log in by posting to the form that handles your login. This could vary depending on how your form is set up, but the following works with the default asp.net template.

        private HttpClient CreateAuthenticatedSession()
        {
            var Client = new HttpClient { BaseAddress = new Uri("https://www.yourdomain.com") };
            var Result = Client.PostAsync("/Account/Logon/",
                             new FormUrlEncodedContent(
                                 new Dictionary<string, string>
                                 {
                                     { "UserName", "Your Username" },
                                     { "Password", "Your Password" }
                                 })).Result;
            return Client;
        }

The code above once called will hopefully login to your site through forms authentication and return an HttpClient holding the correct cookie that will be passed with subsequent requests. You can keep talking to your Forms Authenticated web service as long as you hold onto the HttpClient returned after you authenticated. Below is a simple code block that calls CreateAuthenticatedSession() and calls the web api service.

        public void GenerateByDay(int days, string path)
        {
            var HttpClientSession = CreateAuthenticatedSession();
            HttpClientSession.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            var Result = HttpClientSession.GetAsync("/api/sync/?days=" + days.ToString(CultureInfo.InvariantCulture)).Result;
            var ResponseBody = Result.Content.ReadAsStringAsync().Result;
            //deserialize the result and continue on
        }