Friday, 5 June 2015

Windows Phone 10 - First Impressions

After running the Windows 10 Technical Preview on my laptop for the last few months and being generally happy with it, bar the obvious problems you get running a Technical Preview, I've been itching to get my hands on Windows Phone 10 (officially referred to as Windows 10 Mobile, but I'm going to refer to it as WP10). As a very happy owner of a Lumia 1020, I'll be getting it eventually, so I thought it was worth taking a look at. Fortunately, a colleague has recently switched from WP to Apple, so he kindly lent me his Lumia 635 to use as a sacrifice to the Technical Preview gods.

If you want to run the Technical Preview, details are here. But seriously, heed Microsoft's warning. You don't want to apply this to your main phone. I haven't hit any bugs yet, but the many interaction problems mean I'd be really annoyed if I had installed this on my main phone.

The Good

The settings app is vastly improved over the one on WP8.1, which is essentially a really long list of very narrowly scoped categories. The new app is well thought out, and the categorisations seem to broadly match those on Windows 10 on the desktop, which means your experience with one should transfer pretty well to the other.

The quick access to commonly used settings that has appeared in the notifications flyout on Windows 10 is also present here. It's a welcome addition that should reduce the need to venture in to the Settings app.

The Bad

Yes, that good section whizzed by rather fast, didn't it? While I generally like Windows 10, even though it does remove some of the minor features of Windows 8.1 that I quite like, WP10 takes this a step further and removes pretty much everything I like about WP in one fell swoop. Let's go through the list, shall we?

Command placement

WP was very predictable: all commands were at the bottom of the screen, easily reached by your thumb. They were large and full width, so you could easily trigger them with the hand that is holding the phone without too much stretching. The overall one-handed experience with WP8.1 is very comfortable.
Many of the commands seem to have moved to the new "hamburger" menus in WP10, which are situated in the top-left, about as far away from your phone-holding hand as you can get (unless you're left-handed, of course).

Top, bottom, ellipsis, where do I go?


As part of their march to look like everyone else, MS have removed the pivot control from their apps. This means that instead of just swiping, again with your phone-holding hand, to move between screens, you now have to stretch to either the hamburger menu or, in the case of the dialer, to the buttons situated in the middle-top of the screen OR the buttons situated in the middle-bottom of the screen. Add to that the fact that the dialer still has the older ellipsis menu from WP8.1, and this one app has about every interaction model available, except the one that suited it the most. I'm sure the ellipsis will go before release, but MS seem to have lost sight of the fact that the Pivot control was one of the greatest things about WP8.1. I can't help but think they could've got away with just making it a little smaller, keeping things familiar to Android/iOS users while still retaining the superior interaction.

Schizophrenic navigation in the UWP mail app.

I've noted this separately from the above as I genuinely don't know whether this is just because this is a TP or if this is how it's actually intended to work, but the interaction with the mail app is incredibly painful and I sincerely hope it is not a sign of what we can expect from the UWP platform. I generally don't like doing 1-to-1 comparisons against a preview build, but in this case it's necessary to show how much of a backward step this is.

Looking at account settings in WP8.1 goes like this: swipe up (or tap the ellipsis) to bring up the command bar at the bottom of the screen, then tap Settings in the menu which appears from the bottom of the screen.

In WP10: Tap hamburger (top-left), cog icon (bottom-right), tap options (in menu that appears from top of screen). My finger is going all over the phone here. It's quite a stretch and it's not even a big phone. On my Lumia 1020, I suspect this would be a 2-handed operation.

Ready to dropdown and nowhere to go.

Desktop input controls on a phone

Most of the criticism aimed at Windows 8 is that it uses a phone UI on the desktop. It would appear misplaced UI elements are a two-way street. In the calendar app, and presumably all UWP apps, the dropdowns, rather than being full screen as they have been previously, are dropdowns just as they would be on the desktop. Except now they've got to contend with fat fingers and an on-screen keyboard that gives them no vertical height to occupy.

Slow animation on apps view

Now I'm nitpicking a bit, but WP at the moment is a very snappy, fast OS. You can get things done pretty quickly and there are no slow animations getting in your way, everything happens almost instantaneously. Long pressing on the back button to bring up the running apps view triggers in what feels like about 100-200ms and the animation is quick.
On WP10, it feels more like 700-800ms and the animation is slow. I don't think this is down to the hardware as there is no visible lag, it just seems that they've slowed it down to make it smoother. Smooth === slow in this case.


Not good. WP has always been a very different beast to other mobile operating systems, maybe this is part of the reason it hasn't taken off. But, in my view, the interaction with WP is faster, easier, and more natural than interacting with Android or iOS. I haven't had a great deal of contact with iOS and even less with BlackBerry, but I had Android phones for about 6 years before my current Lumia and while I didn't hate using them, WP was the first mobile operating system that I can actually describe as being pleasant to use.
While there are far fewer WP users, those who stick with it tend to be much more satisfied with it than users of other operating systems. This survey, while admittedly a couple of years old, supports the point (at least among the users Reader's Choice surveyed).

In their attempts to appeal to Android and iOS users, MS seem to have forgotten the things that make WP so pleasant to use. I don't blame them at all for trying to make the experience more familiar so the move doesn't seem quite so scary to those used to Android or iOS, but you've got to think of your existing users while trying to appeal to new ones.

I suspect that, if the current preview is anything to go by, WP10 is just going to look like a pale imitation of Android or iOS, and that's a damn shame.

Thursday, 21 May 2015

Getting started with Azure Application Insights

Now that pricing information has been released for Application Insights, I decided to take the dive and deploy it on a few of the web applications I work on. The documentation is pretty good for your basic scenarios, but it glosses over what I think are probably common use cases.
  • The instrumentation key (commonly referred to as the iKey) is located in an applicationinsights.config file, which is not very helpful if you want to change it when deploying to Live or Staging environments.
  • If you use any monitoring system, such as Traffic Manager, Web Apps AlwaysOn, or any Web Testing application, this all gets included as "real traffic", which is fair enough as AppInsights has no way of knowing that it isn't real traffic. You may want to see it, but I personally do not so I wanted a way to filter it out.
There is a really great blog post by Victor Mushkatin which covers changing where AppInsights looks for the instrumentation key, adding your version number and your own tags to the telemetry so you can filter by version number or any tag you provide. I've added a siteIdentifier tag which contains an id for the particular instance of the application, identifying where in the world it is hosted.
My particular variation of his code adds the SiteIdentifier and the assembly version to the telemetry.
    public class AppInfoApplicationInsightsConfigInitializer : IContextInitializer
    {
        public void Initialize(TelemetryContext context)
        {
            context.Properties["SiteIdentifier"] = System.Configuration.ConfigurationManager.AppSettings["SiteIdentifier"];

            try
            {
                var verAtt = (AssemblyInformationalVersionAttribute)Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute), false)[0];
                context.Component.Version = verAtt.InformationalVersion;
            }
            catch (Exception)
            {
                context.Component.Version = "Application Version Unknown";
            }
        }
    }
I also have a TelemetryInitializer that finds traffic from load balancers and web testers as explained at the top of this post, and reclassifies it as synthetic traffic, making it easier to exclude from charts and reports. I found that this traffic shows up as a different user every time, making my user count orders of magnitude higher than it should be.
    public class SyntheticSourceInitializer : ITelemetryInitializer
    {
        public void Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry telemetry)
        {
            if (HttpContext.Current == null)
                return;

            //Set traffic manager checks and web test requests to synthetic.
            if (HttpContext.Current.Request.Url.ToString().EndsWith("/monitoring"))
                telemetry.Context.Operation.SyntheticSource = "AzureAliveCheck";

            //Set Azure Web Apps AlwaysOn pings to synthetic.
            if (HttpContext.Current.Request.UserAgent == "AlwaysOn")
                telemetry.Context.Operation.SyntheticSource = "AzureAliveCheck";
        }
    }
You've got full access to the current request, so you can identify the traffic however you need to (referrer, headers, request url, etc.).
I then tell AppInsights to use these classes with the below entries in Application_Start() in global.asax.cs:
            //Configure application insights.
            TelemetryConfiguration.Active.InstrumentationKey = System.Configuration.ConfigurationManager.AppSettings["iKey"];

            TelemetryConfiguration.Active.ContextInitializers.Add(new AppInfoApplicationInsightsConfigInitializer());
            TelemetryConfiguration.Active.TelemetryInitializers.Add(new SyntheticSourceInitializer());

As I dig further in to Application Insights, if I find more examples of useful overrides for default behaviour, I will add additional blog posts detailing these.

Wednesday, 15 April 2015

Close a window by title in C#

When you're developing for embedded systems that don't have a mouse or keyboard attached, a misbehaving program that decides to pop up windows at random is suddenly a lot more inconvenient.

Cue the below code snippet, which takes in a window title and sets the matching window's state to minimised, maximised, or normal depending on the value you pass in. As usual, this is a Linqpad script. You just need to import System.Runtime.InteropServices, which is part of .Net 4 and above.

void Main()
{
    var windowTitle = "Untitled - Notepad";

    IntPtr hWnd = FindWindow(null, windowTitle);
    if (!hWnd.Equals(IntPtr.Zero))
    {
        ShowWindowAsync(hWnd, SW_SHOWMINIMIZED);
    }
}

// Define other methods and classes here
private const int SW_SHOWNORMAL = 1;
private const int SW_SHOWMINIMIZED = 2;
private const int SW_SHOWMAXIMIZED = 3;

[DllImport("user32.dll", EntryPoint = "ShowWindowAsync")]
private static extern bool ShowWindowAsync(IntPtr hWnd, int nCmdShow);

[DllImport("user32.dll", EntryPoint = "FindWindow")]
private static extern IntPtr FindWindow(string lp1, string lp2);

This just imports the FindWindow and ShowWindowAsync functions from user32.dll. FindWindow is used to find the window we want to manipulate. It returns a handle to the window, which we then pass to ShowWindowAsync along with an int indicating what we want to do to the window.

In the above example, I already know the window title, but this is unlikely to be the case in the real world. You can get this with the below snippet which will select out the window title and the process name. You can obviously add a where clause and modify the results based on your needs.

Process.GetProcesses().Select (p => new{p.ProcessName, p.MainWindowTitle})
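For example, a variation of that snippet which filters down to processes that actually have a main window (the where clause here is just illustrative; adjust it to your needs):

    Process.GetProcesses()
        .Where (p => !string.IsNullOrEmpty(p.MainWindowTitle))
        .Select (p => new { p.ProcessName, p.MainWindowTitle })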

Friday, 16 January 2015

ProTip: Open Powershell as admin from Powershell

Just a quick one as I find I have been using this trick a lot lately.
If I am in a standard Powershell prompt and need to get an admin one open, I used to search for Powershell, right-click, run as admin. I'd do this even if I was already in a Powershell prompt as I can never remember the syntax for runas.exe.
A much easier way, especially if you are already in a Powershell prompt is:

Start-Process powershell -verb runas

This works for any executable and will pop up UAC appropriately to allow you to enter credentials if you need to, or just run as admin if you are already an Administrator.

Hope this helps someone.


This can be further shortened to

start powershell -verb runas

Thanks anonymous user!

Thursday, 9 October 2014

Copying records between tables in different Azure accounts : The Next Generation

A few months back I posted a small piece of code for copying objects between Azure Table Storage Accounts. I have 2 bug-bears with the code I previously posted.

  1. It's all very custom: you need to give it the type that is in the table and add lambdas to make sure it doesn't try to select the wrong object.
  2. Due to a change in the Azure Storage Library, it no longer works.

I hadn't used this script in a while, but I now have an impending need for something like it, only better: something that will let me copy the contents of all the tables, or a defined subset of tables, in a given account. Enter the DynamicTableEntity, which allows you to get an entry from a table as a dynamic object.
To run the code, open up Linqpad and add a reference to the Windows Azure Storage library; the best way to do this is via NuGet.

void Main()
{
  var srcClient = CreateClient("source account connection string");
  var destClient = CreateClient("destination account connection string");
  var mappings = new List<Tuple<string,string>>();

  //Manually set up mappings.
  //mappings.Add(new Tuple<string,string>("table1","table1copy"));
  //mappings.Add(new Tuple<string,string>("table2","table2copy"));

  //Copy all tables from the src account in to identically named tables in the destination account.
  var tables = srcClient.ListTables(null, new TableRequestOptions(){ PayloadFormat = TablePayloadFormat.JsonNoMetadata });
  mappings = tables.Select (t => new Tuple<string,string>(t.Name, t.Name)).ToList();

  Copy(srcClient, destClient, mappings);
}

public void Copy(CloudTableClient src, CloudTableClient dest, List<Tuple<string,string>> mappings)
{
  foreach (var x in mappings)
  {
    var st = src.GetTableReference(x.Item1);
    var dt = dest.GetTableReference(x.Item2);
    dt.CreateIfNotExists();
    var query = new TableQuery<DynamicTableEntity>();

    //Insert each entity individually (see the note on efficiency below).
    foreach (var entity in st.ExecuteQuery(query))
    {
      dt.Execute(TableOperation.InsertOrReplace(entity));
    }
  }
}

public CloudTableClient CreateClient(string connString)
{
  var account = CloudStorageAccount.Parse(connString);
  return account.CreateCloudTableClient();
}

In the above code, we create CloudTableClients to represent the source and destination accounts, then we build a mapping of source and destination tables.
We can do this manually if we only want to copy some tables and/or we want the destination tables to have different names to the source tables.
Alternatively, we can get a list of all the tables from the source and use that to build a 1:1 map, this will have the result of copying all items in all tables from the source account to the destination.
The Copy method simply takes the clients and mapping and does some iteration to get the items from each table from the source account and save them to the destination account.

Note: The code above is horribly inefficient for copying large amounts of data as it inserts each request individually. In a follow up post, I'll make this more efficient by making use of the TableBatchOperation.
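As a rough, untested sketch of that follow-up idea: batch operations are limited to a single partition key and 100 operations per batch, so the inner loop of Copy could be reworked along these lines.

    foreach (var partition in st.ExecuteQuery(query).GroupBy(e => e.PartitionKey))
    {
        var batch = new TableBatchOperation();
        foreach (var entity in partition)
        {
            batch.Add(TableOperation.InsertOrReplace(entity));
            if (batch.Count == 100)
            {
                //Flush a full batch and start a new one for the same partition.
                dt.ExecuteBatch(batch);
                batch = new TableBatchOperation();
            }
        }
        if (batch.Count > 0)
        {
            dt.ExecuteBatch(batch);
        }
    }

This trades one round trip per entity for one per 100 entities at most, which is where the bulk of the speed-up comes from.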

Sunday, 31 August 2014

Using Custom Model Binders in ASP.Net MVC

I answered a question on Reddit this week from someone starting out in MVC who had read an article about model binding which was mostly correct, but made using custom binders look like they require more code than they actually do, so I thought it was worth a post to clear that up.

What is (Custom) Model Binding?

Model Binding is the process through which MVC takes a form post and maps all of the form values in to a custom object, allowing you to have a POST action method which takes in a ViewModel and have it automagically populated for you. Custom Model Binders allow you to insert your own binders for particular scenarios where the default binding won't quite cut it.

Creating our custom binder

We have the following typical example ViewModel:
    public class MyViewModel
    {
        public string MyStringProperty { get; set; }
    }
It's just a class, nothing special about it at all. Now we want to manually handle the binding of this model because we want to add some text to the end of MyStringProperty when it gets bound. This is unlikely to be something you would want to do in real life, but we're just proving the point here.
This is our binder:
    public class MyViewModelBinder : IModelBinder
    {
        protected System.Collections.Specialized.NameValueCollection Form { get; set; }

        private void Initialise(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            Form = controllerContext.HttpContext.Request.Form;
        }

        public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            Initialise(controllerContext, bindingContext);
            var msp = Form["MyStringProperty"];
            return new MyViewModel { MyStringProperty = msp + " from my custom binder" };
        }
    }
Model Binders need to implement IModelBinder and have a BindModel method. This gives you access to the controllerContext from which you can access HttpContext and the bindingContext, which admittedly I have never had to use.
In our binder, we just manually pick up the MyStringProperty value from the form, add it to a new instance of our object and return it, adding our incredibly important piece of text to the end of the retrieved value.

Using our Custom Binder

There are 2 ways we can use our custom binder, which one we use depends on each scenario. If we need to override the binding of a class for a particular Action method, we can use the ModelBinder attribute on the relevant parameter of the Action Method:
        public ActionResult Index([ModelBinder(typeof(MyViewModelBinder))]MyViewModel model)
        {
            return View(model);
        }
This will apply our Custom Binder to this property (MyViewModel) for this action only, no other actions or controllers will be affected.
Alternatively, if we want to apply our custom binder to MyViewModel globally within the application, we can add the following line to Application_Start in global.asax.cs:
        protected void Application_Start()
        {
            ModelBinders.Binders[typeof(MyViewModel)] = new MyViewModelBinder();
        }
Using this method, everywhere a parameter of type MyViewModel is encountered on an ActionResult, our custom binder will be invoked instead of the standard one. Because this applies globally, we do not need the ModelBinder attribute on our Action Method so the overridden behaviour is completely transparent to the controller, promoting code reuse and keeping model binding logic where it belongs.

Wednesday, 6 August 2014

API Head-to-head: AWS S3 Vs Windows Azure Table Storage

Recently, I was experimenting with using S3 as a tertiary backup for my photos, an honour which eventually went to Azure as it was cheaper and I am more familiar with the Azure APIs as I use them in my day job.

I thought I’d take a deeper look at both APIs and see how they compare. I’ll go through some standard operations, comparing the amount of code required to perform the operation.

If you want a comparison of features, there are plenty of blog posts on the subject; just Bingle it.

All the code in this test is being run in Linqpad, using the AWS SDK for .Net and Windows Azure Storage Nuget packages.

Create the client

Both Azure and S3 have the concept of a client, this represents the service itself and is where you provide credentials for accessing the service.


Azure:

var account = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse("connectionstring");
var client = account.CreateCloudBlobClient();

S3:

var client = AWSClientFactory.CreateAmazonS3Client("accessKey", "secret", RegionEndpoint.EUWest1);
S3 wins on lines of code but I don’t like having to declare the datacenter the account is in. In my opinion, the application shouldn’t be aware of this. 1 point to Azure.

Creating a container

This is a folder; Azure refers to it as a container, S3 calls it a bucket.


Azure:

var container = client.GetContainerReference("test-container");
container.CreateIfNotExists();

S3:

try
{
    client.PutBucket(new PutBucketRequest { BucketName = "my-testing-bucket-123456", UseClientRegion = true });
}
catch (AmazonS3Exception ex)
{
    if (ex.ErrorCode != "BucketAlreadyOwnedByYou")
    {
        throw;
    }
}

S3 loses big time on simplicity here. To my knowledge, this is the only way to do a blind create of a container, that is creating it without knowing up front if it already exists. Azure makes this trivial with CreateIfNotExists. 2 points to Azure.

Uploading a file


Azure:

var container = client.GetContainerReference("test-container");
var blob = container.GetBlockBlobReference("testfile");
blob.UploadFromStream(File.OpenRead(@"M:\testfile.txt"));

S3:

var putObjectRequest = new PutObjectRequest { BucketName = "my-testing-bucket-123456", FilePath = @"M:\testfile.txt", Key = "testfile", GenerateMD5Digest = true, Timeout = -1 };
var upload = client.PutObject(putObjectRequest);
They’re pretty much equal here, but the S3 code is more verbose. I like the idea of getting a reference to a blob while not knowing if it actually exists or not.

List Blobs


Azure:

var container = client.GetContainerReference("test-container");
var blobs = container.ListBlobs(null, true, BlobListingDetails.Metadata);
blobs.OfType<CloudBlockBlob>().Select (cbb => cbb.Name).Dump();

S3:

var listRequest = new ListObjectsRequest(){ BucketName = "my-testing-bucket-123456" };
client.ListObjects(listRequest).S3Objects.Select (so => so.Key).Dump();

In terms of complexity, they’re pretty even here too. Azure has one more line but it’s not a difficult one. Notice that whereas with Azure, we get a reference to a container and then perform operations against that reference, with AWS all requests are individual so you end up having to explicitly tell the client for every operation what the bucket name is. Point to Azure.

Deleting a Blob


Azure:

var dblob = container.GetBlockBlobReference("testfile");
dblob.DeleteIfExists();

S3:

var delRequest = new DeleteObjectRequest(){ BucketName = "my-testing-bucket-123456", Key = "testfile" };
client.DeleteObject(delRequest);
Neither code is particularly complicated here, but I prefer Azure’s simplicity with the container and blob reference model so point Azure.

Delete a Container


Azure:

var container = client.GetContainerReference("test-container");
container.DeleteIfExists();

S3:

var delBucket = new DeleteBucketRequest(){ BucketName = "my-testing-bucket-123456" };
client.DeleteBucket(delBucket);
Again, pretty equal. To micro-analyse the lines, you could say that for Azure, you’ve got one potentially reusable line, and one throw-away line. With S3, they’re both throw away. But in reality, unless you’re doing thousands of consecutive operations, it doesn’t really matter.


In terms of complexity, Azure’s and S3’s APIs are pretty much equal, but it’s easy to see where they each have their uses. Azure’s API is a much thicker abstraction over REST, whereas the S3 API is such a thin-veneer that you could imagine a home-grown API not turning out that differently (but most likely not as reliable).

In my mind, if you’re doing lots of operations against lots of different blobs and containers then S3’s API is more suitable as each operation is self-contained and there are no references to containers or blobs hanging around.

If you’re doing operations which share common elements, such as performing numerous operations on a blob or working with lots of blobs within a few containers, Azure’s API seems better suited as you create the references and then reuse them, reducing the amount of repeated code.

Bonus Section

If you could be bothered to read past my conclusion, congratulations on your determination! The comparative speed of Azure and AWS has been done to death, but I couldn’t resist getting my own stats.

These are ridiculously simple stats, essentially Stopwatch calls wrapped around the code in this post. The file I am uploading is only 6k. The simple reason for this is that everyone tests how these services handle lots of large objects, but no one seems to cover the probably more common scenario of users uploading very small files. The average size is probably higher than 6kb, but this is what I’ve got hanging around so this is what I’m using.

So here are my extremely simple and probably not at all reliable benchmarks.

Operation           S3     Azure
Create Container    573    279
Upload 6Kb file     99     55
List Blobs (1)      41     103
Delete Blob         55     45
Delete Container    221    38
All times are in milliseconds. I’ve got to admit; I was expecting a more even spread here. Azure is significantly faster creating and deleting containers and uploading the file. It is also faster at deleting a blob, but the difference is insignificant. S3 wins significantly listing blobs.

Not covered in this post: both APIs also have the Begin/End style of async operations, and Azure has the bonus of async operations based on the async/await pattern. I may do another post on that in the future.
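As a taster, here's a minimal, hedged sketch of the async/await flavour on the Azure side (this needs to run inside an async method, and the member names assume the storage SDK of the time):

    var container = client.GetContainerReference("test-container");
    await container.CreateIfNotExistsAsync();
    using (var stream = File.OpenRead(@"M:\testfile.txt"))
    {
        var blob = container.GetBlockBlobReference("testfile");
        await blob.UploadFromStreamAsync(stream);
    }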

TL;DR; Azure's API is in my opinion a better abstraction and it's faster for most operations.