Saturday 21 December 2013

JSON vs XML: Challenging my assumptions

I was recently (today actually) working on optimising a particular section of my project. This section is basically a Q & A that uses a piece of server-generated XML which is placed into a Razor view, where some JavaScript works with it to generate the input fields for the user.

But XML is so 2010, right? If I swapped the XML for some JSON, the payload would be smaller, it would generate faster, and the JavaScript would be able to work with it faster, right?

Let's test those assumptions one by one, shall we?

To do that, I duplicated the method that populates an object and serializes it to XML, changed the serialization in the copy to use JSON instead, and ran both at the same time so I could compare them side by side.
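For reference, the comparison harness looked roughly like this. It's only a sketch (it needs System.Diagnostics and System.Text): the two Build methods are placeholders for my real XML- and JSON-producing methods.

// Sketch of the side-by-side comparison. BuildAnswersXml/BuildAnswersJson are
// placeholders for the real (duplicated) methods, which share the same data access.
var timer = Stopwatch.StartNew();
string xml = BuildAnswersXml(sessionId);
timer.Stop();
Trace.WriteLine(string.Format("XML: {0} bytes in {1:F1} ms",
    Encoding.ASCII.GetByteCount(xml), timer.Elapsed.TotalMilliseconds));

timer = Stopwatch.StartNew();
string json = BuildAnswersJson(sessionId);
timer.Stop();
Trace.WriteLine(string.Format("JSON: {0} bytes in {1:F1} ms",
    Encoding.ASCII.GetByteCount(json), timer.Elapsed.TotalMilliseconds));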

Payload size

I added a call to Encoding.ASCII.GetByteCount() around the serialized JSON and XML results and dumped this to the Trace window. The results are below.

Type  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7
XML   3338   4133   3487   3255   3465   3194   1138
JSON  2266   2717   2292   2110   2234   2061   621

This is in bytes, so the real world difference between the XML and the JSON in this case is at most a single kilobyte. This isn't always the case as I've swapped out XML for JSON in the past where it has been many times smaller.

Generating the output

Next job is to test how quickly the output is generated. Keep in mind that the method I'm testing includes other data access logic so these timings are not purely serialization. But as both will be doing the same work, it won't hurt to have that included as well to see how each performs in the real world.

My initial results looked rather promising, see table below:
Type  Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Avg
XML   87.4   31.9   33.6   29.6   31.6   29.6   45.9   41.37143
JSON  38.9   20.7   27.8   21.2   23.8   24.4   23.5   25.75714

Across the whole Q & A session, JSON comes out not far off 50% faster. So I should definitely swap it right?

But wait: the method we're capturing also includes some data access. The JSON method runs second; could it be benefiting from the XML method's work?
To test this out, I swapped them round so the JSON method was first. Below is the aggregated table from both runs.
Type         Run 1  Run 2  Run 3  Run 4  Run 5  Run 6  Run 7  Avg
XML First    87.4   31.9   33.6   29.6   31.6   29.6   45.9   41.37143
XML Second   39.2   25.3   26.7   24.3   26.5   37.1   21.6   28.67143
JSON First   83.5   28.9   30.4   30.9   32.7   46.3   47.7   42.91429
JSON Second  38.9   20.7   27.8   21.2   23.8   24.4   23.5   25.75714
As you can see from the averages, I was right to be suspicious. The first method to run is always slower than the second, due to things like EF caching records. In reality, the difference between the two is so small that no one is ever going to notice.

These two simple benchmarks, which took me about ten minutes to complete, show that substituting the current XML payload for JSON would be a micro-optimisation. I've got much bigger fish to fry than this, so it is not worth my time to do this work.
Will swapping out make things faster? Absolutely. Will anyone notice? Not remotely.

You'll notice I didn't perform the third test of measuring how the JavaScript handled JSON as opposed to XML. I'm sure that would probably show a modest improvement as well, but based on these numbers, my best course of action is to not waste any more time on this path.

Tuesday 3 December 2013

Windows 8.1 - The good, the bad, and the missing.

A little over a year after Windows 8, it gets an upgrade with the release of 8.1. Is this just a Service Pack by any other name, or does it provide real improvement over its predecessor that makes it worthy of being a numbered release?

 First, the good.

 Real effort has been made in 8.1 to reduce the jarring effect of moving between Windows' Classic and Modern UIs. Here are the main improvements:

  • Search has been altered so that when you search, the search window and initial results appear as a flyout on the right side of the screen rather than taking over the entire screen.
  • The desktop background is now used as the background of the start screen. I thought this sounded a lot like a gimmick when I first heard about it, but it genuinely does reduce the cognitive dissonance experienced when moving between the two UIs.
  • The Start button is back. I don't personally care as I've gotten used to it not being there, but hopefully its return will go some way towards appeasing the masses.
  • Better help on install. When you first installed Windows 8, it was very much a "go and figure it out" experience. Beyond the brief tutorial that appeared when starting for the first time, which I'm fairly sure no one ever paid any attention to, you were on your own when it came to learning the new UI. Windows 8.1 adds in some nice big helper blocks that point out things like the hot corners and how you can use drag/swipe to perform certain actions. If you've been using Windows 8 already, these will probably be a bit annoying as they're just telling you things you already know, but to new users they will go a long way in improving the transition to the Windows 8 Modern UI.
  • More freedom when using multiple applications. You can now pick the amount of space each app takes up on your screen and have more than two of them at once. This is such a basic feature that it should have been there from the beginning; better late than never, I suppose.


I haven't played with it much yet, but I really like the new aggregated search view, which brings up results from lots of different sources in a single, very usable view.
The Windows Store was beyond poor in Windows 8. It was fine if you were looking for something specific, but it was absolutely useless for discovering apps. This has been remedied in Windows 8.1; I'm hoping this will allow existing developers to make more money out of their apps and encourage more developers to get writing Windows 8 apps.

There are a load of other features added, such as 3D printing support, which isn't that big a deal currently, but I think Microsoft have made a sly move in baking in OS-level support for what is a growing technology.

The bad
The Start button is back. Wait, didn't I just do this one? Yes I did, but in a prime example of "you can't please everyone all of the time", over the last year of using Windows 8 I've gotten used to having that extra slot on the taskbar and haven't remotely missed the venerable Start button that used to occupy that number one slot. I completely agree with Microsoft's decision to bring it back, but they could at least have made it an option to not have it.

That's pretty much it for the bad, there really isn't that much to moan about in this release. It genuinely seems like Microsoft have taken the time to listen to users and fix the problems that have really plagued them. Some would say they should have listened to users during the Beta and Preview periods, and they're probably right. Hopefully, Microsoft have been a little humbled by their grand UI plans not being embraced as they had hoped. The concessions made in this release certainly seem to suggest that is the case.

The missing
WEI is gone! I'd really love to know the argument behind getting rid of the Windows Experience Index. It was a great tool that allowed your average user to easily see where the bottleneck was in their system without having to understand the various benchmarks or install software on their computer to perform tests against them. I genuinely don't understand why this has been removed; perhaps Microsoft will come out with an explanation in the coming months.

Experience on my Dell Mini 9
I have a little Dell Mini 9 which I decided to use as a tester for how Windows 8 ran on low-powered hardware. For reference, it has a 1.6GHz Atom, 1GB of RAM, and a 14GB hard disk. Windows 8 ran okay on this at first but suffered the inevitable slow-down as time wore on, to the point where it was pretty much unusable for anything serious. IE10 was not even worth starting. I also only had a couple of GB spare on the 14GB hard disk.
After doing a fresh install of 8.1, I had 5.2GB of free space on the hard disk, which astonished me, and this little device suddenly seems a lot nippier than it did with a fresh Windows 8 install. IE11 is a lot faster than its predecessor, and I now use the Mini for email and Campfire on a daily basis.
Time will tell if it keeps this speed boost or if it falls away with continued use, but you can see that some effort has gone into improving performance for this release.


So is it worthy of being a .1 release? Yes, I think it is. There are lots of small improvements here; if it only had half of them, it would probably be SP1 in my eyes, but there are enough improvements and extra features to make this more than a Service Pack.

Thursday 18 July 2013

Slimming down your JSON

Newtonsoft JSON.Net is the JSON serialization library that is so good, Microsoft use it over their own.
While converting an existing project from Linq to SQL to EF5 Code First, I hit an issue with the Unit Tests, which use test objects serialized to XML files as the basis of the tests.  This upset the XML Serializer as collections in EF are ICollection<T> as opposed to Linq to SQL's EntitySet<T> – and the XML Serializer can’t handle interfaces.

JSON.Net to the rescue - fortunately it can handle interfaces, so I chose to convert the test data to serialized JSON instead. This was a relatively trivial task, accomplished by the LINQPad script below (if you're not using LINQPad, stop reading this blog and go and download it now).

void Main()
{
    var filePath = @"C:\users\Alan\Downloads\";
    var filename = "OrderItemTestData.json";
    // 1 for JSON deserialize to object, 2 for cast (use for derived types of
    // abstract classes where the return type is the base class).
    var deSerializationMode = 2;
    var update = false;

    var result = LoadData<List<OrderItemBase>>(filePath + filename, deSerializationMode);

    var updated = JsonConvert.SerializeObject(result, Newtonsoft.Json.Formatting.Indented,
        new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            TypeNameHandling = TypeNameHandling.All,
            NullValueHandling = NullValueHandling.Ignore
        });

    updated.Dump();

    if (update)
    {
        File.WriteAllText(filePath + filename, updated);
    }
}

This simply takes the XML file, deserializes it to the source objects (that's all LoadData does), and then reserializes it to JSON.
This solved my immediate problem, but I was surprised by the size of the resulting file, in some cases three times larger than the equivalent XML file. But JSON uses fewer characters, so how is that possible? A simple comparison of the files shows the problem:
Excerpt from XML file: 
  <Company>
    <Id>201</Id>
    <Name>Company 201</Name>
    <Code>foo</Code>
    <PlaquePrice>5</PlaquePrice>
    <FreeSampleCount>10</FreeSampleCount>
    <CompanyDispensers>
      <CompanyDispenser>
        <Dispenser>
          <CompanyId>101</CompanyId>
        </Dispenser>
      </CompanyDispenser>
    </CompanyDispensers>
  </Company>
Excerpt from Json File:
 "$type": "MyCompany.Model.Customer, MyCompany.Model",
      "IsInternal": false,
      "AvailableWorkFlows": {
        "$type": "System.Collections.Generic.List`1[[System.String, mscorlib]], mscorlib",
        "$values": [
          "StandardWorkFlow"
        ]
      },
      "BillingAddress": null,
      "BillingContact": null,
      "PrimaryContact": null,
      "ActiveUsers": {
        "$type": "MyCompany.Model.User[], MyCompany.Model",
        "$values": []
      },
      "Id": 201,
      "Name": "Company 201",
      "Code": "foo",
      "RecordUpdate": "0001-01-01T00:00:00",
      "RecordCreate": "0001-01-01T00:00:00",
      "Orders": {
        "$type": "System.Collections.Generic.List`1[[MyCompany.Model.Order, MyCompany.Model]], mscorlib",
        "$values": []
      },
      "Products": {
        "$type": "System.Collections.Generic.List`1[[MyCompany.Model.Product, MyCompany.Model]], mscorlib",
        "$values": []
      },
      "Users": {
        "$type": "System.Collections.Generic.List`1[[MyCompany.Model.User, MyCompany.Model]], mscorlib",
        "$values": []
      },
      "IsValid": true
    } 

In this case, the XML file comes out at 1617 characters; the JSON equivalent is 48833 characters.
The JSON.Net serializer has gone over the objects and serialized every property, even ones that were null or at the default value for their type, as well as empty collections. This can easily be solved by setting the appropriate properties on the serializer:
new JsonSerializerSettings
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    TypeNameHandling = TypeNameHandling.All,
    NullValueHandling = NullValueHandling.Ignore,
    DefaultValueHandling = DefaultValueHandling.Ignore
}
Setting NullValueHandling and DefaultValueHandling to Ignore solves the problem of null properties and of properties that are at the default for their type, such as the DateTimes. However, this still leaves us with all of the collections, which are initialised in the object's constructor to new List<T>.
By default, Json.Net can't be instructed to ignore these empty lists, because ignoring them may not be the correct action in everyone's case. In ours it is, so we need to tell Json.Net that it is okay to skip them to reduce our file size.
To do this, we need to create a custom DefaultContractResolver; the code is below:
using System;
using System.Linq;
using System.Reflection;
using Newtonsoft.Json.Serialization;

public class IgnoreEmptyCollectionsContractResolver : DefaultContractResolver
{
    public new static readonly IgnoreEmptyCollectionsContractResolver Instance = new IgnoreEmptyCollectionsContractResolver();

    protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
    {
        JsonProperty property = base.CreateProperty(member, memberSerialization);

        // Only target generic IEnumerable<T>/ICollection<T> properties.
        if ((property.PropertyType.Name.Contains("IEnumerable") || property.PropertyType.Name.Contains("ICollection"))
            && property.PropertyType.GenericTypeArguments.Count() == 1)
        {
            property.ShouldSerialize = instance =>
            {
                try
                {
                    // The predicate is handed the declaring object, so fetch the collection
                    // itself via the property's value provider, then check its Count.
                    var collection = property.ValueProvider.GetValue(instance);
                    var count = collection.GetType().GetProperty("Count").GetValue(collection, null);
                    return (int)count > 0;
                }
                catch (NullReferenceException)
                {
                    return false;
                }
            };
        }
        return property;
    }
}
 The <catchyName>IgnoreEmptyCollectionsContractResolver</catchyName> simply checks if the current property is an ICollection or IEnumerable and that it has a single generic argument. It then checks the Count property and instructs Json.Net to serialize/deserialize that property depending on whether or not count is greater than 0. I’m sure this can be done a lot neater, but it solves my problem.
We then simply instruct Json.Net to use this as part of the JsonSerializerSettings object:
new JsonSerializerSettings
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    TypeNameHandling = TypeNameHandling.All,
    NullValueHandling = NullValueHandling.Ignore,
    DefaultValueHandling = DefaultValueHandling.Ignore,
    ContractResolver = new IgnoreEmptyCollectionsContractResolver()
}
 Now the serialized Json looks like this:
{
      "$type": "MyCompany.Model.Customer, MyCompany.Model",
      "FreeSampleCount": 10,
      "PlaquePrice": 5.0,
      "AllDispensers": {
        "$type": "MyCompany.Model.Dispenser[], MyCompany.Model",
        "$values": []
      },
      "Id": 201,
      "Name": "Company 201",
      "Code": "foo",
      "IsValid": true
    }
 The total size has dropped from nearly 50000 characters to 1495, a much more acceptable size.
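For completeness, reading the trimmed file back in the tests is just the reverse; the sketch below is roughly what my LoadData helper does for deserialization mode 1. The important part is that TypeNameHandling is switched on for the read as well, so Json.Net honours the $type metadata and rebuilds the derived types.

// Sketch of reading the test data back (mode 1: deserialize straight to the objects).
var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All };
var json = File.ReadAllText(filePath + filename);
var items = JsonConvert.DeserializeObject<List<OrderItemBase>>(json, settings);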

I hope this is of use to someone. If the resolver can be done in a better way, use the comments.

Monday 1 July 2013

A simple WebCache Helper

As mentioned in most of my previous posts, the main project I work on is due to move to Azure in the future. Among the many gems of Azure is their caching infrastructure, which can either be hosted on a dedicated worker role or instructed to use spare memory on your web roles.
More information and pricing for Azure Caching can be found at http://www.windowsazure.com/en-us/services/caching/

I fully intend to make use of Azure's in-built caching when we get there, but I don't want to wait until then to start implementing some sort of caching, and I don't want to have to do a big find-and-replace in the code when we do get there, so I wrote a simple WebCacheHelper which provides easy access to caching anywhere in the application but will also be easy to replace when I move to Azure.

The code is below.
    public static class WebCacheHelper
    {
        public static T TryGetFromCache<T>(string cacheName, string itemKey) where T:class
        {
            if (HttpContext.Current == null || HttpContext.Current.Session == null) { return null; }
 
            var sessionResult = TryGetFromSessionCache<T>(cacheName, itemKey);
            if (sessionResult != null){return sessionResult;}
 
            var applicationResult = TryGetFromApplicationCache<T>(cacheName, itemKey);
            return applicationResult;
        }
 
        public static T TryGetFromCache<T>(string cacheName, string itemKey,CachingLevel cachingLevel) where T:class
        {
            if (HttpContext.Current == null || HttpContext.Current.Session == null) { return null; }
 
            switch (cachingLevel)
            {
                case CachingLevel.Session:
                    return TryGetFromSessionCache<T>(cacheName, itemKey);
                case CachingLevel.Application:
                    return TryGetFromApplicationCache<T>(cacheName, itemKey);
            }
            return null;
        }
 
        public static void AddToCache(string cacheName, string itemKey, object cacheItem, CachingLevel cachingLevel=CachingLevel.Application)
        {
            if (HttpContext.Current == null || HttpContext.Current.Session == null) { return; }
 
            switch (cachingLevel)
            {
                case CachingLevel.Session: AddToSessionCache(cacheName, itemKey, cacheItem);
                    break;
                case CachingLevel.Application: AddToApplicationCache(cacheName, itemKey, cacheItem);
                    break;
            }
        }
 
        public static void RemoveFromCache(string cacheName, string itemKey)
        {
            if (HttpContext.Current == null || HttpContext.Current.Session == null) { return; }
 
            RemoveFromApplicationCache(cacheName,itemKey);
            RemoveFromSessionCache(cacheName,itemKey);
        }
 
        public static void RemoveFromCache(string cacheName, string itemKey, CachingLevel cachingLevel)
        {
            if (HttpContext.Current == null || HttpContext.Current.Session == null) { return; }
 
            switch (cachingLevel)
            {
                case CachingLevel.Session: RemoveFromSessionCache(cacheName, itemKey);
                    break;
                case CachingLevel.Application: RemoveFromApplicationCache(cacheName, itemKey);
                    break;
            }
        }
 
        private static string GetCacheKey(string cacheName, string itemKey)
        {
            return cacheName + "-" + itemKey;
        }
 
        #region Session Cache
 
        private static void AddToSessionCache(string cacheName, string itemKey, object cacheItem)
        {
            HttpContext.Current.Session[GetCacheKey(cacheName, itemKey)] = cacheItem;
        }
 
        private static void RemoveFromSessionCache(string cacheName, string itemKey)
        {
            var result = HttpContext.Current.Session[GetCacheKey(cacheName, itemKey)];
            if (result != null)
            {
                HttpContext.Current.Session.Remove(GetCacheKey(cacheName, itemKey));
            }
        }
        
        public static T TryGetFromSessionCache<T>(string cacheName, string itemKey) where T:class
        {
            var result = HttpContext.Current.Session[GetCacheKey(cacheName, itemKey)];
            if (result == null)
            {
                return null;
            }
            return (T) result;
        }
 
        #endregion
 
        #region Application Cache
 
        private static void AddToApplicationCache(string cacheName, string itemKey, object cacheItem)
        {
            HttpContext.Current.Application[GetCacheKey(cacheName, itemKey)] = cacheItem;
        }
 
        private static void RemoveFromApplicationCache(string cacheName, string itemKey)
        {
            var result = HttpContext.Current.Application[GetCacheKey(cacheName, itemKey)];
            if (result != null)
            {
                HttpContext.Current.Application.Remove(GetCacheKey(cacheName, itemKey));
            }
        }
 
        public static T TryGetFromApplicationCache<T>(string cacheName, string itemKey) where T:class
        {
            var result = HttpContext.Current.Application[GetCacheKey(cacheName, itemKey)];
            if (result == null)
            {
                return null;
            }
            return (T) result;
        }
 
        #endregion
 
    }

As you can see, there is nothing clever going on here. There is a single AddToCache method for adding data to the cache, with a default parameter for cachingLevel so the calling code can override the default application-level caching and cache at the session level instead.

There are a couple of RemoveFromCache methods: one where the calling code can specify which cache to remove the item from, and one that removes the data from whichever cache it resides in.
 There are also two TryGet methods for retrieving from a specified caching level or from any that has a matching key.

CachingLevel is just an enum with items for Session and Application:
     public enum CachingLevel
    {
        Application,
        Session
    }
I also use the following static string class to save having magic strings peppered through the application:
    public static class WebCacheKeys
    {
        public const string Users = "Users";
    }
I can just add strings as required and change the backing value if I need to without having to make a large number of changes elsewhere. If I had IOC available, I probably wouldn't have made this static and would've abstracted the functionality behind an ICacheHelper interface. The way I look at it, this is the next best thing in terms of ability to make changes in the future. When I move to Azure, I'll post the Azure-centric version of this helper.
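For illustration, typical calling code ends up looking something like this. It's only a sketch: GetUsersFromDatabase, the User type, and the profile/currentUserName variables are stand-ins for whatever data access and objects you already have.

// Try the application-level cache first; fall back to the database and cache the result.
// GetUsersFromDatabase and List<User> are placeholders for your existing data access.
var users = WebCacheHelper.TryGetFromCache<List<User>>(WebCacheKeys.Users, "all");
if (users == null)
{
    users = GetUsersFromDatabase();
    WebCacheHelper.AddToCache(WebCacheKeys.Users, "all", users);
}

// Per-user data can be pushed to the session-level cache instead.
WebCacheHelper.AddToCache(WebCacheKeys.Users, currentUserName, profile, CachingLevel.Session);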

Wednesday 15 May 2013

This just in - Windows 8 not that bad

Yep, I said it. I may have just committed professional suicide and may never be offered a job again, so just to make it worth it, I'm going to say it again.

Windows.8.Is.Not.That.Bad

Why such gleaming praise? Have I lost my mind, or been bribed by Microsoft to say nice things about their crappy OS to my two readers (hi guys)? No, but unlike most of those who bash Windows 8 for its split personality and its absent Start button, which is allegedly responsible for the deaths of many kittens, I've used it. I've used it at home since shortly after it came out and have been using it at work for a couple of months now.
I published my first impressions here a while back, and I think it's time to refine them a little and add a dash of reason.

This week there has been a lot of talk about Windows 8.1, aka Blue, and how Microsoft are going to put all our cheese back and beg for forgiveness. This won't happen. At most, there is going to be a minor concession, such as adding the Start button back but having it trigger the Start screen, or allowing boot to desktop.

Six months of use has led me to the inescapable conclusion that the Start screen is an almost perfect marriage of the Start Menu and the desktop. Let's face it, most people's desktops consist of loosely related groups of icons and maybe an extremely crucial business file that can never be replaced and is not backed up in any way, shape, or form.
What is the Start screen but a load of groups of loosely related icons?
The Start screen is the perfect replacement for your cluttered and disorganised desktop. You can't put files on it, which forces you to put them somewhere sensible (or just on the actual desktop, where you've always put them).
With just a little tidying up, at boot you are presented with a screen full of big icons for your main applications that are easily clickable through blurry eyes on a Monday morning before you've had your caffeine intake.

So the Start screen is the solution to the icon-explosion on your desktop. But what about the Start Menu?

The Start Menu, the theory goes, was a quick way to access commonly used applications and provided a nice hierarchical folder structure with which to access all your installed apps.
In actuality, the quick app access came true but the hierarchical folder view soon becomes a dumping ground for any amount of shit that installers feel the desire to put in there. Often, applications from the same company will have wildly differing paths so your hierarchical structured view of your applications has absolutely no structure whatsoever!
For years now, I have taken to manually organizing the folders in the start menu. This makes finding anything easy with the rather large caveat that when anything gets uninstalled, I have a dead entry sitting there for me to trip over at some point in the future.

I suppose the closest replacement for the "view everything" mentality of the Start Menu is the All Apps view, which is possibly the worst example of a UI I have ever seen. It is completely intractable to me and I steer clear of it. If I need something that isn't on Start, I search. If search doesn't bring it up, I dip down to Explorer.
If I need to get to system functions, I use either Win+R to bring up Run or Win+X to bring up the new Quick Access menu.

Having lived with Windows 8 for a while now, I don't feel I've lost anything. The functionality of the desktop (link farm) plus the Start Menu is there in the Start screen for me. Anything else can be served by Search or Win+R/X. Overall my workflow is quicker and more streamlined, and there's no more manually organizing the Start Menu.

Somehow, life goes on after the Start Menu and no kittens were killed in the use of this OS.

Well, not many.

Sunday 28 April 2013

Going Cloudy Part 6 - Monitoring and Load Balancing


The Monitoring Project

The Traffic Manager needs an endpoint on your service that it can hit in order to determine whether or not the service is responding. This endpoint must be open (no authentication) and must be a path on your service. As you may have noticed, in the ServiceDefinition I instructed my Site, Web Service, and REST services to only respond on ports 80 and 443 to a specific host header, one which the Traffic Manager cannot provide as it accesses the instance directly, in other words via its cloudapp.net address.

The simple way to solve this would be to make one of the services also respond on an endpoint without a host header and give it an unauthenticated ActionResult somewhere that the Traffic Manager could access.
Now I'm no security expert, but I do my best, and I didn't want any of my core services hosting an unauthenticated endpoint; I want to make sure that they are only accessible via their public URLs. Therefore, I created a project that is just for monitoring the health of my services. Initially, this will just serve the Traffic Manager. In the long term, it's a convenient place to put any generic monitoring functionality.
In order to serve the Traffic Manager, it just has a GET action on the Home controller which does a quick database connectivity check and returns a 200 if everything is okay. Before go-live, this will be extended to check that all the main services are responding.
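A minimal sketch of what that action might look like is below; the connection string name is illustrative, and the real version will grow the extra service checks mentioned above.

using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Health endpoint for the Azure Traffic Manager: 200 when the database is reachable,
    // 503 when it isn't. Kept deliberately cheap so the probe responds well within 5 seconds.
    [HttpGet]
    public ActionResult Index()
    {
        try
        {
            var connectionString = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // connectivity check only, no query needed
            }
            return new HttpStatusCodeResult(200);
        }
        catch (SqlException)
        {
            return new HttpStatusCodeResult(503, "Database unavailable");
        }
    }
}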

Using the Azure Traffic Manager

As previously mentioned, I need to have service instances in both the EU and the US. I didn't want users to have to decide which one they went to by going to eu.domain.com or us.domain.com; that's just a bad user experience in my book.
The Azure Traffic Manager provides you with a load balancer that you can use between Cloud Services in the same or different regions. I am using it in the performance mode, which routes the user to the nearest (and presumably fastest) service to them.
The Traffic Manager uses the aforementioned Monitoring project to determine whether the service is available. It regularly hits the health endpoint, and if it does not receive a 200 within 5 seconds, it considers the instance to be down. When the instance is determined to be down, on the next DNS refresh the records will be updated to point to the next service on the list. There will still be some downtime while all this happens, but it will most likely only be a couple of minutes.

Tuesday 23 April 2013

Breathing new life in to old netbooks

You don't see many netbooks around these days, and for a very good reason. When they were new, their performance ranged from ok to rubbish and they were no good for any real computing. Looking back, they were more like the first stab at the sweet spot between a smart phone and a full-on laptop/desktop, a gap which has been much more successfully filled with tablets in recent years.

I have two netbooks: one is a Samsung NC10 and the other a Dell Mini 9. Both of these have an Intel Atom N270 @ 1.6GHz and 1GB of RAM.
The Dell ran XP (badly) and the Samsung has run XP (badly), Windows 7 (really badly), and Ubuntu (just about acceptable).

When Windows 8 was released, one of the things I noticed most was the very noticeable improvement in general performance. Some of this was clearly down to the removal of superfluous effects, but that could not explain all of the improvement.

Just after Windows 8 came out, I installed it on the Dell. Performance was much better than I expected, better than XP. The Dell's odd screen size meant some hacking to get the screen into 1024x768, and even then there is a small amount of blur. It works okay for some basic note taking and browsing, but I wouldn't stress it with anything further.

Recently, the other half graduated to using an ASUS MemoPad for her needs, leaving the Samsung spare. I decided to install Windows 8 on this too and, surprisingly considering it has exactly the same processor and amount of RAM as the Dell, it performs even better. I haven't put Office on yet so it could still go downhill, but so far I am very impressed.

So if you've got some old netbooks lying around that you had written off as useless for anything, throw Windows 8 on them; you may be surprised at how nippy they are as a result.

Wednesday 17 April 2013

Making old machines immortal(-ish) with P2V

The time comes to every PC when it's reached the end of its life and it's time to be turned off once and for all... except when that PC has old software on it with a non-transferable license. In an ideal world this wouldn't happen, but sometimes it just can't be helped: either the software is no longer available to buy and transferring to an equivalent would cost a bomb, or you'd have to buy a new license for the most recent version, which would also cost a bomb.

Fortunately, this is where virtualisation becomes really useful for a small business. While small businesses may not have workloads that warrant massive clusters of VMs spanning multiple centuply redundant clusters on fault tolerant blade servers, they will almost certainly at some point experience the imminent failure of that machine - the one machine in the entire company that simply must remain. It cannot be upgraded and it cannot be replaced, it must exist forever.

In short, P2V allows you to take an existing OS install on physical hardware and convert it to run as a virtual machine on a virtual host. It's easy to do, so easy in fact that I'm not going to tell you how to do it. Some detailed instructions for performing a P2V conversion can be found here
http://www.petri.co.il/virtual_convert_physical_machines_to_virtual_machines_with_vmware_converter.htm

Some tips when you're doing this:

  1. Install the converter on the machine you want to convert - I've found the agent can sometimes be difficult to get working. If you want to get things done quickly, just install the converter on the machine and then uninstall it from the converted VM when you're done.
  2. Make sure you change the disks to Thin Provision - this will save you disk space on your virtual server. Space may not be an issue for companies with SANs, but if you're just using the drives in the server box, it doesn't hurt to save every gigabyte you can.
  3. Make sure you get the number of CPUs right - I didn't do this on the first XP machine I converted; the physical hardware had 1 core and the virtualised version had 2. I ended up in a tricky situation where I had to reactivate XP but couldn't because it couldn't connect to the activation service. Eventually, after numerous attempts, it connected and I could reactivate, but if I'd set the cores correctly I may not have had to.


Virtualisation for me takes between 2 and 3 hours, after which I have a virtualised copy of the physical machine. As a matter of course, I go through a few steps at this point.

  1. Create a snapshot before doing anything.
  2. Uninstall VMWare converter.
  3. A bit of general cleanup. Take out unnecessary items from services and startup, drop the visual effects (Classic mode for XP). This will make remoting a more pleasant experience as there won't be so many differing colors to transfer over the wire and render on the client side.
  4. Almost forgot this one. On your physical PC, you probably have various software installed for the graphics card (ATI Catalyst Control Center, etc), network adapters, and any number of esoteric peripherals. The majority of these you will no longer need so remove them and save your new virtual machine some work.
Take a snapshot after each of these steps just in case you get anything wrong. 10 seconds to create a snapshot could save you 2-3 hours starting all over again.

Sunday 17 March 2013

Going Cloudy Part 5 - Scalability without breaking the bank


As detailed previously, the application consists of three major components:
  • The web site, which predictably is used by everyone.
  • The ASMX web service, which is used by about 80% of users but performs fairly low-resource actions.
  • The REST service, which is hit only by external systems importing/retrieving data on a schedule, so it doesn't need a lot of resources either.

Going with the default cloud approach of separating everything would be prohibitively expensive. Currently a two-instance cloud service is about £18 a month. Add on the monitoring project and this means four cloud services, giving me a bill of around £80 every month for servers whose utilisation would be pretty poor.
The smart thing to do is merge them all on to a single cloud service. It's important that merging multiple sites/services on to a single cloud service is only done where appropriate; putting lots of services on the same instance, when doing so will result in a shortage of resources, defeats the point of moving to cloud hosting. In this instance, they are all low-resource services which for the foreseeable future will happily sit on the same instance without problem. When traffic increases to the point where any of the services needs to be separated out to an instance of its own, this will be easy to do. How to merge all of the services on to a single instance is covered in my previous post.

During initial deployment, I was getting some really appalling performance on all services, which I initially put down to the Traffic Manager. Using Remote Desktop to log on to one of the instances revealed a process called VSPerf.exe that was sapping all of the server's CPU and a great deal of its RAM. There was one of these for each site initially (so 4), and another one per site was added whenever I did an upgrade deployment. This resulted in one instance where I had 16 VSPerfs running; it took me nearly 15 minutes to remote on to the server to kill them!
I could only find some anecdotal references to this problem on the internet, which revealed no solutions other than to reboot after every deployment. Eventually I worked out that I had inadvertently turned on profiling at some point; I didn't need it, so I turned it off and the problem was solved. I'm sure the majority of users of profiling do not have this issue, but if you turn on profiling and then your performance falls through the floor, take a look at Task Manager and see if you are one of the unlucky few.

Friday 8 March 2013

New Relic for Azure

Ever since my company moved our main management system to Azure, I've been slowly working on the ability to monitor the application better in order to find the bottlenecks and improve the user experience. To date, I've added:

  1. Timing on every request so I can query for the slowest pages.
  2. Azure Diagnostics
  3. StackExchange MiniProfiler to give me some insight into every request; this has proved extremely useful in tracking down the exact cause of slow pages.

More recently, I've been planning to add Glimpse as well, as I like the insight it gives you into the MVC ViewEngine and routing systems.

All these tools are great but each one adds an extra element of complexity to the system that is unavoidable, or at least it was until now.

Yesterday, I discovered New Relic. Their .NET agent promises to hook into your application with no code changes, and it actually achieves it!
How?
Shamelessly lifted from the New Relic blog, this is how:
Code run by the CLR is considered ‘managed’ code, i.e., the CLR provides a managed environment in which memory object garbage collection and other services are ‘managed’ by the CLR. The Profiler API provides a mechanism for a profiler, such as the New Relic .NET agent, to inject code into whatever managed-code functions it desires. These injected bytes are in the form of MSIL, the .NET assembly language.

Personally, I think this is damn impressive, and as I already mentioned it allows the agent to hook in with no code changes. The agent can be installed on any server, but what impresses me is the simplicity with which this can be added to Azure Cloud Services.

The New Relic blog gives step-by-step instructions on how to add the agent using NuGet here; however, this isn't all that is required. I deployed once and no data was being reported; logging on to the server revealed that the agent had not been installed.
I found that while the NuGet package added a newrelic.cmd file to install the agent, it didn't add an entry to the cloud service's ServiceDefinition.csdef, so the script was never getting fired. After a few attempts, I found that the following entry (placed inside the web role's <Startup> element) works:
<Task commandLine="newrelic.cmd" executionContext="elevated" taskType="foreground" />
My initial attempt used a taskType of background, which meant that the task was processed asynchronously and everything was already initialised by the time the installation had completed - the agent had missed the boat to get its hooks in.
I contacted New Relic support prior to working out the solution, and they suggested that this finding would help them with a problem they were having with the NuGet package (presumably, it was supposed to add the entry to ServiceDefinition.csdef itself). The knowledge that I have possibly helped make using New Relic even smoother for other users is great; it feels good to contribute.

If you sign up through Azure, you get the Standard account free with a Pro trial. All the details are in the aforementioned blog post; the link is below for those too lazy to scroll back up.
http://blog.newrelic.com/2012/08/21/x-ray-vision-into-your-azure-apps/

Note: I am not affiliated with New Relic in any way other than being a (free) customer, and they have not paid me for this post (why would they, no one will read it!); I just think this is a really great tool.

Going Cloudy Part 4 - FTPServer and REST Service configuration


The FTP Server is a Windows Server 2012 Extra Small VM, running IIS for FTP and our in-house import/export agent.
The FTP server is used to exchange data between the application and external services. The reason for FTP is that the main external service we deal with only has that capability. We hope that external service can eventually switch to using JMS which can then be bridged to Service Bus, but for now this is all we have.
For the purposes of resilience and quick recovery after a failure, all files, including the FTP folders and the executables for the agent, are kept on a separate data disk, and all of the configuration needed to get from a brand new VM to all systems go is scripted with PowerShell.
If the server ever dies, it's just a case of firing up a new VM, adding the data disk, and running the PowerShell script. I also intend to image the OS to give me the reimage option as well. The PowerShell script is nothing special, but I intend to publish it for the sake of completeness.

You’ll notice from the diagram that the rest service is being directly accessed from rest.domain.com, bypassing the load balancer. You will also notice that only the EU instance is being used, the US instances are sitting there doing nothing.
There is a good reason for this.
The REST service can theoretically be accessed by any external system that is capable (as long as we grant it permission, of course). Some of the imports carried out by the REST service are fairly destructive. If traffic were going through the load balancer, there is the possibility that the REST service in the US could be carrying out an import at the same time as the REST service in the EU, with some very scary consequences.
Now, in an ideal world, the REST service and the underlying data model would've been built to deal with this possibility. But they weren't, so we have to deal with that. Eventually, I intend for the REST service to simply be a receiver of files, which puts the received data into a queue to be picked up by a worker process, or worker processes, which WILL be designed to deal with multiple instances working at the same time without screwing everything up.
Having the entirety of the traffic go straight to the EU instance means that only one REST service will be taking requests at any one time. It also means the US ones are doing nothing, but they use little to no resources if they are not being sent any work, so I don't think this is a major problem.
The only way around this would be to have a separate service definition set up for the US and other regions, which it seems to me is unnecessary duplication and extra work.
An alternative set up that I am considering is to have a second traffic manager configured for failover which is there just for the REST service. This would allow failover to the US instances if the EU ones ever became unavailable.

Sunday 24 February 2013

Azure downtime

Azure's storage system took a turn for the worse this weekend, reportedly because they forgot to renew the SSL certificate for the storage services.

In the comments section of this article alone, there are many comments chastising Microsoft for making such an amateur mistake (and rightly so). But there are also many who use this incident as a reason to write off cloud computing as a whole.

Letting a certificate expire is a massive cock-up, and as a customer I fully expect to see a report from MS on the whys, the hows, and what they'll do to stop it happening again. However, let's not kid ourselves that cock-ups don't happen when we host and maintain these services ourselves. I work for a small company that couldn't afford to build and maintain equivalents of the Azure services ourselves. While it is frustrating and laughable that mistakes like this happen, at least when they happen on Azure, there is a whole team of highly intelligent, well-paid developers and administrators working on solving the problem and preventing it from occurring again. In the self-hosted scenario, there's me. Now I am certain that if we were self-hosting I'd be able to solve any problem that came my way, although it would almost certainly take longer to fix purely due to available man hours, but I don't always have the time and resources to put in the necessary work to prevent it happening again, especially if it is something that is unlikely to recur.

If anything, incidents like this only serve to reinforce the message that cloud computing is not the silver bullet that it is often portrayed to be. It is not a suitable platform for everything, nor is it devoid of fault or error. Just as you take on the added expense of the hardware, staffing, and management when hosting on-site, you also need to accept that things are to an extent out of your control when you move to the cloud. Either way downtime will happen, and it will happen because someone made a stupid mistake. But at least if you're on Azure, you've got a team far more expensive than many companies could afford there to fix things when they go wrong, and also the resources to put processes and systems in place to prevent idiotic cock-ups like this from occurring again.

Going Cloudy Part 3 - Configuring your endpoints


In my previous post, I outlined the new architecture. You may have noticed how we no longer have urls containing relative paths. This is because all of the components will be hosted on the same web role and I don’t want to have to deal with complicated startup scripts to configure IIS, therefore I want to try and use the Azure-provided methods as much as possible. When deploying multiple sites to a single web role in Azure, there are essentially two choices for setting it up.
  • Virtual directories
  • Sites
Using virtual directories involves nesting all of the secondary services in the hierarchy of the main web role project (in this case the website). The upside of this is that I could retain my current URL structure. The main downside is the inheritance of web.config settings. I initially prototyped using this setup and spent a lot of time overriding entries from the main web role config in the web.configs of the web and REST services to prevent missing-assembly and conflicting-configuration errors.
An example of this is the following snippet from the assemblies section in the web.config of the main website.
          
          <add assembly="System.Web.Helpers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35, processorArchitecture=MSIL" />
          <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add assembly="System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />

Looks harmless enough, right? All of these assembly imports get inherited down to the monitoring site's child projects: the ASMX web service, the Site, and the REST service. The latter two are also ASP.NET MVC, so it's not a problem. However, the ASMX web service has none of the System.Web assemblies in it, causing runtime errors post-deployment.
There were other items that caused me problems, but this is the most obvious example. In the end, I decided the virtual directory setup was too brittle.
Using sites has the benefit of keeping all the individual applications on the server separate, so a failure or change in one won’t affect another. Because the applications are separate, we can’t use paths to distinguish which application we want when our request hits the server, so instead we use Host Headers.
The Host Header indicates the url that the user followed in order to get to us. By using subdomains for each service, we can tell from the host header which service the user is attempting to access.
The ServiceDefinition.csdef for the project looks like this.
  <WebRole name="domain.Monitoring" vmsize="ExtraSmall">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint3" />
        </Bindings>
      </Site>
      <Site name="Site" physicalDirectory="../../../../../AzPublish/Site-Release">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="domain.com" />
          <Binding name="Endpoint2" endpointName="Endpoint2" hostHeader="domain.com" />
        </Bindings>
      </Site>
      <Site name="Gateway" physicalDirectory="../../../../../AzPublish/webservice-Release">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="webservice.domain.com" />
          <Binding name="Endpoint2" endpointName="Endpoint2" hostHeader="webservice.domain.com" />
        </Bindings>
      </Site>
      <Site name="RESTService" physicalDirectory="../../../../../AzPublish/Rest-Release">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="rest.domain.com" />
          <Binding name="Endpoint2" endpointName="Endpoint2" hostHeader="rest.domain.com" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
      <InputEndpoint name="Endpoint2" protocol="https" port="443" certificate="DomainCertificate" />
      <InputEndpoint name="Endpoint3" protocol="http" port="22202" />
    </Endpoints>
  </WebRole>


The Binding elements within each site define the endpoints that the site can be reached on, and the hostHeader attribute maps the application to the URL the user followed to get to us.
Notice how the domain.Monitoring project has no host header. This means that it will respond on Endpoint3 (mapped to port 22202) for all traffic. Having it on a separate port means there is no chance of it ever grabbing traffic from the main projects.

You’ll notice that the Monitoring project is the main project in this deployment (I’ll go in to the monitoring project when I cover the traffic manager). The reason for this is that I had some initial trouble getting the host header set up to work with the Azure Traffic Manager and thought doing this may work. That later turned out to be incorrect but I see no harm in keeping the monitoring project as the main one here. Feel free to let me know if there are some caveats to doing this that I am not aware of.

Only the main project in the web role is compiled when publishing, so the ServiceDefinition needs to be pointed at a location containing the compiled projects. In order to achieve this, I use file system deployment to publish the subprojects to the AzPublish folder. It's not ideal, and I intend to get all of this scripted in PowerShell before we go live. When I do that, I will of course publish the scripts on this blog. Also, if there is a better way, please use the comments.

Tuesday 19 February 2013

Going Cloudy Part 2 - Out with the old, in with the new


In my previous post, I briefly went over the current situation and the rationale for making the move to Microsoft’s Azure service.
I also promised details on the current architecture and what I have determined to be the new architecture. I will also attempt to explain how I got from old to new and why I have made the decisions I have made.

The old architecture. 

Here’s a quick diagram:
As stated in my previous post, this server is a single VM with 2GB of RAM and a couple of GHz of CPU. With those resources, for the live environment we are running:
  •  ASP.Net MVC Web site.
  • ASMX Web Service.
  • ASP.Net MVC-based REST service.
  • SQL Server database.

This is duplicated in the staging site, which is located on the same server. On top of this, there is also SQL Server itself, an FTP server running through IIS, and an in-house import/export agent which processes all incoming and outgoing data.
There are also a number of support tools: LINQPad, SSMS, Notepad++, off-site file backup, and a number of other minor services.
What this all means is that whenever I RDP on to change anything, it creates a notable strain on the system.
Bear in mind that the current service doesn't have many users, say a dozen or so. This will increase by 10x this year so, clearly, the current setup just will not do.

Key requirements
The new infrastructure has a number of key requirements. Other than the obvious ones (speed, reliability, etc.), there are also the following.
Speed must be the same regardless of where the user is.
This means website and web service hosted in both EU and US. This also means two databases which need to be kept in sync.
Needs to respond to rises and falls in demand.
This is what the cloud does best. We can add and remove instances easily in a matter of minutes. There is also the possibility of using the Scaling Application Block to drop down to a single instance when it is night time in a region, halving the service costs during those times.

The new infrastructure

The new infrastructure is distributed, fairly fault-tolerant, and should be easy enough to scale: all those cloudy terms that get overused. Over the next few posts, I will go over my reasoning for laying it out this way and the problems I have encountered while creating the infrastructure. I hope to do that in approximately this order.
  • Changes to URL scheme.
  • FTP server and REST service configuration.
  • Instance configuration, the Monitoring project and the Azure Traffic Manager.
  • SQL Azure and using Data Sync.

Tuesday 12 February 2013

Going Cloudy Part 1 - The Beginning

If you believe the hype, "Cloud" is the future and everything will end up there eventually.
It's easy to think that "Cloud" is just another term for deploying your applications and services on virtual servers hosted in someone else's data center; mainstream media make this mistake all the time.

Making full use of the cloud, in my opinion anyway, is architecting and designing your system to take advantage of the availability and scalability that the cloud gives you.

Designing for the cloud is one thing; migrating an existing application to the cloud and taking advantage of all it has to offer is another entirely. Recently, I have been tasked with planning and carrying out such a migration.

The application in question is currently hosted on a VM running with just 2GB of RAM. This VM runs IIS, which hosts three individual web-based components of the application. It also hosts the database on SQL Server and a number of other additional tools and services. The user base is currently fairly small, limited to a dozen or so users located in the UK, China, USA, and South Africa.

The customer base is expected to expand massively this year, with the number of users from the US pushing over a hundred and a number of users coming on board from other countries around the world.

Clearly, the current server won't handle this, and Cloud is the way forward for this customer.

Amazon or Azure?

Amazon has a myriad of services which fill different niches and markets but don't seem very joined up. In my opinion, while Amazon's cloud is much more fully featured and mature than Azure, it is a big bucket of disparate services trying to be everything to everyone.
Azure, on the other hand, while newer and not as feature-complete as AWS, does what it does very well. All of Azure's services are aimed at allowing developers to build scalable and highly available services; the developer experience is first class, and you can get started in minutes.
Microsoft finally recognizes that enticing developers is the key to success, and they have embraced other languages in Azure, creating SDKs for Node, PHP, Java, and Python. Azure is a great platform to develop for, and not just for .NET developers.

So Azure it is.

Next time, we'll study the current architecture of the application and look at how it needs to change in order to make the move and meet the customer's requirements.

Disclaimer

I'm not a cloud expert so don't take what I am doing as best practice. This series is intended to help others learn from my experiences. If something can be done better, please use the comments to impart your wisdom.

Tuesday 1 January 2013

My experience of Windows 8

I think it's fair to say that Windows 8 has been the most controversial and negatively received OS since Vista. Every mention of Windows 8 invokes cries about the Metro UI and how the OS is a schizophrenic nightmare. Occasionally, amongst these voices will be a quiet statement of "it's not that bad once you get used to it".

That little voice isn't far wrong.

That's not to say that, after using Windows 8 for a while, I have grown to love the Metro UI and think it's the best thing since sliced bread. No, it's just that I've learned to ignore it to the extent that I barely notice it is there.
It's worth noting that, despite it allegedly only being a code name, I will be using the term Metro repeatedly in this post; it's just easier to type than "Windows 8 Store app".

Start screen

It wasn't until I had used Windows 8 for a few weeks that I realized how little I used the old Start Menu. Since the ability to pin applications to the taskbar was added in Windows 7, and with the addition of a fantastic little application called Bins which allows me to have four apps in one 'slot' of the taskbar, everything I need is in the taskbar. This relegates the Start Menu to two things: Search and shutdown.

Search

One thing that Microsoft have gotten right in Windows 8 is the search experience, which is superior to that of previous Windows versions. I get a big, full-page list of search results covering files, settings, and applications, as well as the ability to search things like eBay from the same screen (not a feature I see myself using, but a nice touch anyway).

Shutdown

Shutdown/Restart is the biggest thing that has frustrated me about Windows 8. There are a couple of ways to do it:
1. When using a mouse, hover the mouse over a mystical hotspot in the bottom right-hand corner, click on Settings, click on Power, and then select from shutdown/restart/update. This is such a massive pain in the ass when using a mouse that it is not even really an option.
2. Same as above but the charm bar is invoked using Win+C instead of the mystical hotspot.
3. Alt + F4 when on the Desktop - the preferred option and not a problem for a keyboard/mouse user, but absolutely impossible if you're using a tablet, which is supposed to be Metro's raison d'être.
My wife, who is fairly technically literate, could not figure out how to turn it off. I had to Google it, expecting that there would be some ridiculously easy method that I was completely overlooking. There was not. In order to make things easy for myself, I have added a shortcut to shutdown.exe on the Metro start screen, so I have the option of a one-click shutdown if I'm in Metro or Alt+F4 if I'm on the classic desktop. I can't see your average Joe coming up with this solution, though, and this will be one of the biggest transition headaches in my opinion.

Metro applications

The other biggest issue with Windows 8 is Metro apps. I work with Windows and C# on a daily basis and I must admit that I have found it really difficult trying to work with Windows' new paradigm. Local database usage is incredibly difficult, with portable libraries not allowing you to use some of the standard data libraries. Microsoft has tried really hard with their samples, but some of them fall short, such as the settings sample which shows how to use the settings flyout but does not show how to retrieve or save settings the "Metro" way.
The expectation is very much that Metro apps will be consuming cloud data sources and so will not need local database access. But cloud does not work for everything, such as business applications or little apps I write to make my life easier. Microsoft have limited the usefulness of Metro apps by not doing the extra leg work to make the development experience equal or superior to writing a standard Windows app. For something so radical to be picked up, they need to make the community want to use it, not try to force developers down a certain path while unnecessarily removing the tools they are used to using.

From my perspective, I have almost completely ignored the Metro apps, as the vast majority are useless to me. I suspect that, unless major improvements are made to the support for standard .NET libraries in Windows apps, Metro apps will be relegated to time-killing games and media consumption, something that the Metro UI is perfect for.

Those are the biggest issues Windows 8 has, which, for your average user, are absolutely massive hurdles to overcome, and they are the reason many businesses will be completely skipping Windows 8 and hoping that a future service pack or Windows 9 corrects some of Windows 8's mistakes.

But of the three main issues described above (Start screen, shutdown, Metro apps), two are largely ignorable and the other is easily worked around.

But it's not all bad...

Those notwithstanding, Windows 8 has some plus points.

Upgrade experience

I upgraded from Windows 98 to ME and regretted it for weeks afterwards, until I finally bit the bullet and did a complete reinstall. The upgrade experience from Windows 7 to 8 was as smooth as it could be. The upgrade adviser told me which applications would be incompatible ahead of time and uninstalled them for me as part of the upgrade, and the upgrade itself was paid for without ever leaving the upgrade application.
Post-upgrade, everything worked, and the only issue I have had is LINQPad not showing up as an installed application in the search screen, which was easily solved by a quick reinstall.

Faster than Windows 7

This is owing largely to the removal of superfluous visual effects, but I suspect there has also been some serious optimization work done under the hood. The improved Task Manager is also a nice touch.

Hyper-V built in

I haven't made massive use of this yet, owing to having an almost-one-year-old in the house and generally having less time to tinker than I used to, but when I have done some work requiring a VM, it has been more than adequate. My virtual server hasn't been fired up in almost a year now.

Great upgrade offer

I haven't paid for a Microsoft OS in a very long time, apart from as part of a new machine purchase. Despite the controversy over Windows 8, the cheap upgrade price of £15, plus the possibility of writing applications and publishing them through a store that would gain them exposure and possibly even make me a few quid, was enough for me to bite. The ecosystem possibility seems a little less likely now, but I still think the upgrade was worth it.