The Likert scale, online reviews and the ‘average’ campaign

I’ve seen an increase in the number of online retailers using customer feedback platforms to solicit reviews and build trust in their brand. I think it’s a very good idea and, like many others, I rely on reviews when deciding where to purchase. But I’ve noticed a pattern that introduces bias into these review systems, and I thought I’d mention it just for the sake of variety!

Online retailing is a volume operation. Vendors can offer low prices because they either specialise in a particular category of product, and therefore stock every possible variant, or they move so many of a particular item that they can operate on very small margins. As a consequence, the vast majority of transactions are completed successfully. We order a widget, the vendor ships it, the courier delivers it. We use the widget for its stated purpose and all is well in the world. A few days later the retailer asks for our feedback and we say ‘you’re fabulous’ because we’re happy with our new widget. The result is that in most cases the majority of the feedback for a vendor is so shamelessly positive that it would make a triple-espresso-addicted life coach blush.

But is this a true picture of the quality of a vendor? No. Here’s why: unless the review platform is completely corrupt, you’ll find a few negative reviews for any retailer. And they’ll often be very negative. When things go wrong for online retailers, they very often go very wrong. The whole razor-thin-margins, heavily-automated approach doesn’t leave much room for human interaction, and that’s usually what’s needed when things go wrong. These retailers are simply not geared up to deliver customer service. They drive us nuts with their ham-fisted attempts, and when we’re asked to provide feedback we’re so incensed that we poke our finger in their eye in the only way we can – by giving them 1 star! All to no avail though; the caffeine-imbibing life coaches will swamp our meagre protest with their happy-flow positivity.

So anyway, is there a point to this? Yes, there are a few:

Firstly, the Likert scale that these reviews are based on doesn’t work. Why have 2, 3 and 4 stars when everybody gives either 1 or 5? It’s a swizz, people – you heard it here first. Look for the 1-star reviews and find out how these people truly treat their customers.
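Being of a numerical bent, here’s a toy illustration (the ratings are entirely invented) of why the average hides that 1-or-5 split:

```javascript
// Invented ratings for a hypothetical vendor: mostly delighted customers, a few furious ones
var ratings = [5, 5, 5, 5, 5, 5, 5, 5, 1, 1];

// The headline figure: sum the ratings and divide by the count
var average = ratings.reduce(function (a, b) { return a + b; }, 0) / ratings.length;
// average works out at 4.2 stars - looks great on the listing page

// The figure that actually matters: what share of customers had a terrible time?
var oneStarShare = ratings.filter(function (r) { return r === 1; }).length / ratings.length;
// ...yet 1 in 5 customers gave 1 star, which the average hides completely
```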

Secondly, the next time you’re asked to comment on the service provided by a vendor, ask yourself whether they actually deserve anything other than an ‘average’ rating. I don’t know about you, but when I walk into a supermarket, pick up a can of beans, pay and leave, I don’t feel any sense of jubilation that I managed to complete my transaction successfully. Why should it be any different for online retailers?

Finally, and this is almost relevant to the general theme of my blog, I wonder if we could adopt a similar approach to determining the success of a software product. Every time it does what it’s supposed to, we all dance around in jubilation – so much so that when the wheel comes off every now and again, as it surely will, we’re all so busy partying that we don’t even notice. No? You mean you wouldn’t want to fly on the plane that was powered by that software? Man, it’s a cruel world indeed. We give life to these processes that exploit human psychology to influence our behaviour. Is this how the machines will finally take over – one algorithmic oversight after another?

All you ever wanted to know about Client-Side Rendering in SharePoint 2013 – Part 1

Part 1 – Introduction to Client Side Rendering

Part 2 – Using TypeScript to build a client side rendered web control

Part 3 – Using KnockoutJS with Client Side Rendering and TypeScript to provide client side data binding.

Part 4 – Making use of the Design Manager to customize your web parts

There have been a few pretty major changes in SharePoint 2013. In my opinion one of the most significant, in terms of its impact on SharePoint application developers, is the prevalence of client side rendering. You’d be forgiven for looking at a SharePoint 2013 page and thinking “ok, so there’s a new master page and some metro-style CSS”, but that really is the tip of the iceberg when it comes to UI changes in the new version. Client side rendering is everywhere. Unfortunately, although not surprisingly given the hype surrounding the app development model, there isn’t much technical content on client side rendering at the moment. In this post, I’m hoping to change that. At least a little bit!

So what’s client-side rendering?

SharePoint is designed to be configurable. We can change the data structure by adding custom content types and columns, we can change the business logic by adding event receivers and workflows, and we can change the UI by adding and configuring web parts. But that’s not enough. To allow us to really customize the UI we need a templating mechanism. Web part configuration is great for changing how something works but can only go so far when it comes to changing what it looks like. In previous versions of SharePoint we used XSLT as a templating mechanism. Caught up in the hubris of the day, we all worshipped at the XML altar and put our angle bracket skills to the test. And it worked! But there was a catch (possibly on one of those sharp edges), and the catch was performance. Since templates were rendered on the server, there was a performance hit for each web part on each page render. Maybe not a significant one, but when a lot of users start hitting those pages it makes a difference.

Enter client-side rendering (CSR). As the name suggests, CSR offloads the rendering responsibility to the client. Ta dah, instant performance boost! Thank you very much and good night.

That’s all very good but how does it work?

In short, it works using JavaScript – the future of rock and roll. Remember back in the day when we had classic ASP? It was basically an HTML page with some instructions for adding content dynamically. The server picked up the page, dug up the appropriate data, and sent the generated content down the wire. Well, CSR works in a similar way, except the templates are processed on the client and the data to be added is either retrieved from a web service or passed to the client as an object. With CSR we have some server side code that builds a view model and then sends that view model down to the client to work its magic. In effect, we’re delegating UI work to the client and leaving the server to deal with data and business logic.
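To make the idea concrete, here’s a toy version of client side templating in plain JavaScript. This is purely an illustration of the concept, not SharePoint’s actual implementation – the renderTemplate function and the {token} syntax are invented for this sketch:

```javascript
// A minimal sketch of client-side templating: the server sends a view model
// as a plain object, and a template on the client turns it into HTML.
function renderTemplate(template, viewModel) {
  // Replace each {token} in the template with the matching view model property
  return template.replace(/\{(\w+)\}/g, function (match, key) {
    return viewModel[key] !== undefined ? viewModel[key] : match;
  });
}

// The "view model" built on the server and sent down to the client
var viewModel = { Title: "My Widget", Count: 3 };

// The template lives on the client; the server never renders any HTML
var html = renderTemplate("<h2>{Title}</h2><span>{Count} items</span>", viewModel);
// html is now "<h2>My Widget</h2><span>3 items</span>"
```

The server’s only job is to produce the object; all the string building happens in the browser, which is exactly the cost that moves off the server.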

Why should I care?

Web parts can still work as they always have done. There’s nothing to stop us from building the UI programmatically on the server or using a visual web part and bringing in the UI from an ASCX file. So is it worth the effort to switch to CSR? I suppose the honest answer is: in some cases, no. I’d argue that those cases are in the minority though, and here’s why. Styling SharePoint was always a bit of a pain. It could be done, but it wasn’t like downloading a WordPress theme – some time and money had to be spent. As a result, a lot of organizations didn’t bother. With 2013 it’s gotten way easier – still not ‘downloading a theme’ easy, but pretty close. As a result, the turtleneck department in many organizations won’t tolerate ugly web parts corrupting the feng shui of their branded 2013 deployment. If all you have in reserve is XSLT, you’re in for fun fun fun!

How can I use it?

So we can see why we really need to jump on the bandwagon here. Let’s look at how we get on board this thing.

CSR is built on top of a relatively simple script that can be found at {sharepoint root}/template/layouts/clientrenderer.js. I say “built on top of” because some of the cool stuff like the templating that’s used by the OOTB search web parts uses a pile of other scripts as well (but that’s a story for another day).

At the time of writing there is no documentation for clientrenderer on MSDN. Let’s start by remedying that.

The easiest way to explain what’s going on is via these two constructs:

(Of course JavaScript doesn’t have classes and interfaces in the C# sense, but I’ll use object oriented concepts such as classes and interfaces to illustrate what’s going on.)

Class:   SPClientRenderer


void Render (node, context)

The Render method is the main entry point for the client renderer. It accepts two parameters:

node – an HTML DOM object that will contain the rendered content.

context – an object that implements the IViewModel interface (described below); this is the view model that will be passed to the template to be rendered.

void RenderReplace (node, context)

Works in the same way as Render except node is replaced with the result of the rendering operation.

Interface: IViewModel

delegate ResolveTemplate(context, component, level)

This delegate is used to determine which template should be used for rendering content. It should return either a string, representing a static template, or a delegate, representing a dynamic template.

delegate OnPreRender(context)

This delegate is called before the rendering process begins. It allows for further enrichment of the context object before processing begins.

delegate OnPostRender(context)

This delegate is called after the rendering process completes. Since template-generated content has been added to the DOM at this point, it’s possible for it to be acted upon by additional client scripts (for example, initializing a jQuery plugin).

dictionary(string, delegate)  Templates

If no ResolveTemplate delegate is referenced by the context object the rendering template is determined by examining the Templates dictionary for a template with the key ‘View’. As with the ResolveTemplate delegate, the associated value can be a string in the case of a static template or a delegate in the case of a dynamic template.

The rendering process works as follows:

  1. The Render method is called.
  2. A callback is made to the OnPreRender delegate defined on context (if one exists).
  3. The ResolveTemplate delegate defined on context is called (if one exists), returning either a template delegate to be executed or a string to be returned.
  4. If ResolveTemplate is not defined, an attempt is made to locate an appropriate template within the context object’s Templates dictionary.
  5. If no template can be found, an empty string is returned.
  6. If a template delegate has been found, it is called with the context object as its only parameter.
  7. The delegate returns a string, which is returned as the result of the rendering operation.
  8. The Render method injects the generated HTML as a child of the node DOM object that was passed in.
  9. A callback is made to the OnPostRender delegate defined on context (if one exists).

(If RenderReplace was called, the generated HTML is added as a child of the node DOM object’s parent and then node is removed.)
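Putting the pieces together, a context object might look like the following sketch. The member names (OnPreRender, Templates, OnPostRender) follow the interface described above; the Title and Items properties are invented for illustration:

```javascript
// Hypothetical context object for SPClientRenderer, following the interface
// described above. Title and Items are made-up view model data.
var context = {
  Title: "Tasks",
  Items: [{ Name: "First" }, { Name: "Second" }],
  OnPreRender: function (ctx) {
    // Enrich the view model before the template runs (step 2)
    ctx.ItemCount = ctx.Items.length;
  },
  Templates: {
    // A dynamic template: a delegate that receives the context and returns HTML (steps 4 and 6)
    View: function (ctx) {
      var rows = ctx.Items.map(function (item) {
        return "<li>" + item.Name + "</li>";
      }).join("");
      return "<h2>" + ctx.Title + " (" + ctx.ItemCount + ")</h2><ul>" + rows + "</ul>";
    }
  },
  OnPostRender: function (ctx) {
    // The generated HTML is in the DOM by now (step 9) - e.g. wire up a jQuery plugin here
  }
};

// In the page, rendering would then be kicked off with something like:
//   SPClientRenderer.Render(document.getElementById("target"), context);
```

Since no ResolveTemplate delegate is defined, step 4 finds the ‘View’ entry in the Templates dictionary, executes it with the context, and injects the resulting HTML into the target node.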


In this post I’ve covered the basics of Client-Side Rendering – what it is and why it’s worth knowing about. In the next post in this series I’ll look at how we can build on this knowledge to develop a client side rendered web control using TypeScript.

ISO 639-2 to Windows LCID mapping

If you’ve ever had the misfortune to have to map three-character ISO language codes to Windows LCIDs then you’ll know it’s a bit of a pain.

The ISO codes can be found here and the LCIDs can be found here. Creating a mapping between the two is largely a manual process, and you’d think that having these two lists would be enough, but sadly it isn’t. Although there are culture-neutral LCIDs in the list, pretty much nothing will work with those – if you intend to use the LCIDs for any UI-type work (in my case I’m mapping to language pack codes for SharePoint), you’ll need the default specific cultures. For example, the default culture for English is US English – so the language pack has the LCID 1033 as opposed to 0009 for the culture-neutral LCID.

Anyways, to save fellow travellers some time, here’s the code for a dictionary object that contains the mappings you need.

internal static Dictionary<string, int> IsoCodeMappings = new Dictionary<string, int>
{
    {"afr", 1076},
    {"ara", 1025},
    {"aze", 1068},
    {"bel", 1059},
    {"bul", 1026},
    {"cat", 1027},
    {"zho", 2052},
    {"hrv", 1050},
    {"ces", 1029},
    {"dan", 1030},
    {"div", 1125},
    {"nld", 1043},
    {"eng", 1033},
    {"est", 1061},
    {"fao", 1080},
    {"fin", 1035},
    {"fra", 1036},
    {"glg", 1110},
    {"kat", 1079},
    {"deu", 1031},
    {"ell", 1032},
    {"guj", 1095},
    {"heb", 1037},
    {"hin", 1081},
    {"hun", 1038},
    {"isl", 1039},
    {"ind", 1057},
    {"ita", 1040},
    {"jpn", 1041},
    {"kan", 1099},
    {"kaz", 1087},
    {"swa", 1089},
    {"kor", 1042},
    {"kir", 1088},
    {"lav", 1062},
    {"lit", 1063},
    {"mkd", 1071},
    {"msa", 1086},
    {"mar", 1102},
    {"mon", 1104},
    {"nor", 1044},
    {"pol", 1045},
    {"por", 1046},
    {"pan", 1094},
    {"ron", 1048},
    {"rus", 1049},
    {"san", 1103},
    {"srp", 2074},
    {"slk", 1051},
    {"slv", 1060},
    {"spa", 1034},
    {"swe", 1053},
    {"tam", 1097},
    {"tat", 1092},
    {"tel", 1098},
    {"tha", 1054},
    {"tur", 1055},
    {"ukr", 1058},
    {"urd", 1056},
    {"uzb", 1091},
    {"vie", 1066}
};

PowerShell Primer for SharePoint Developers: Part 2

In my last post in this series I covered the basics of PowerShell, explaining what it is and how it works and introducing ways to find your way around the various facilities that are available. In this post I want to move on a bit and look at how we can perform some common SharePoint activities.

PowerShell for SharePoint

There are over 530 cmdlets in the Microsoft.SharePoint.Powershell snap-in so we won’t cover all of them but hopefully we now have the tools to be able to find the correct command for a particular task.

First things first, where can we find PowerShell? There are two possibilities when running on a SharePoint server, either select the SharePoint 2010 Management Shell from the start menu or open up a command prompt and enter:

powershell
When using the SharePoint management shell the SharePoint snap-in will already be installed. When using a standard PowerShell console we can install the snap-in by entering the following command:

Add-PSSnapIn Microsoft.SharePoint.PowerShell

We can check the list of installed snap-ins by using the command:

Get-PSSnapin
Connecting to SharePoint remotely

One of the real benefits of PowerShell is the ability to use it to connect to remote machines. We can open a PowerShell session on a client machine and then use remoting to connect to a SharePoint server. To enable remoting on the server enter the following command:

Enable-PSRemoting
This command will enable the WinRM service and setup the firewall to allow incoming sessions.

Once the server has been configured, we can connect from any client machine by entering the following command:

Enter-PSSession “Server Name” -Credential (Get-Credential)

Note: If the client machine is running on a domain and your SharePoint server is running as a standalone server, there are a few other steps that are necessary to enable remote connectivity, such as configuring SSL connectivity on the server. Further information can be found in the PowerShell remoting documentation.

Once a remote connection has been established, the SharePoint snap-in can be added with the command:

Add-PSSnapin Microsoft.SharePoint.Powershell

PowerShell Permissions

In order to use SharePoint cmdlets, users must be members of the SharePoint_Shell_Access role for the farm configuration database as well as a member of the WSS_ADMIN_WPG group on the SharePoint front-end server. To grant users the appropriate permissions use the following command:

Add-SPShellAdmin -Username domain\username `
	-database (Get-SPContentDatabase -webapplication http://Web app name)

Users must be explicitly granted permissions to every database that they need access to. By default, only the account used to set up SharePoint will have permission to execute this command.

Working with Site Collections and Sites

Most of the cmdlets commonly used in the management of site collections or sites end in SPSite or SPWeb. To pick up a reference to a site collection we can use:

$site=Get-SPSite -Identity http://siteurl

Or we can return a list of all site collections by using:

Get-SPSite
When it comes to managing site objects (SPWeb), we can pick up a specific web using:

$web=Get-SPWeb -Identity http://weburl/

However to return a list of sites we need to either use the Site parameter or an SPSite object:

Get-SPWeb -Site http://SiteUrl


Get-SPWeb -Site $site

Creating Site Collections and Sites

We can create a new site collection using the New-SPSite cmdlet:

New-SPSite -Url http://localhost/Sites/NewSiteCollection -OwnerAlias username

We can also add new sites using the New-SPWeb cmdlet:

New-SPWeb -Url http://localhost/Sites/NewSiteCollection/NewWeb -Name MyNewWeb

Deleting Site Collections and Sites

We can delete site collections and sites by using the Remove-SPSite or the Remove-SPWeb cmdlets.

Remove-SPWeb -Identity http://localhost/Sites/NewSiteCollection/NewWeb


Remove-SPSite -Identity http://localhost/Sites/NewSiteCollection

Setting properties on SharePoint objects

When setting properties on the objects returned by SharePoint management cmdlets we need to call the Update method in the same manner as when updating properties using the Server Object Model. For example:

$web=Get-SPWeb -Identity http://myweburl
$web.Title="My New Title"
$web.Update()

Working with Lists and Libraries

In the same way as in the server object model, lists and libraries are accessed via SPWeb objects. For example, we can enumerate the lists on a site using:

Get-SPWeb -Identity http://myweburl | Select -Expand Lists | Select Title

We can add new lists using the Add method of the Lists property:

Get-SPWeb -Identity http://myweburl | ForEach {$_.Lists.Add("My Task List", "", [Microsoft.SharePoint.SPListTemplateType]::Tasks)}

Working with Content

We can retrieve a list of all items in a site using:

Get-SPWeb -Identity http://myweburl | Select -Expand Lists | Select -Expand Items | select Name, Url

Or we can apply a filter to show only documents:

Get-SPWeb -Identity http://myweburl | Select -Expand Lists | Where {$_.BaseType -eq "DocumentLibrary"} | Select -Expand Items | select Name, Url

We can also make use of filters to search for a specific item:

Get-SPWeb -Identity http://myweburl | Select -Expand Lists | Select -Expand Items | Where {$_.Name -like "foo*"} | select Name, Url

Creating new documents

To create a new document in a document library:

function New-SPFile($WebUrl, $ListName, $DocumentName, $Content)
{
    $stream = New-Object System.IO.MemoryStream
    $writer = New-Object System.IO.StreamWriter($stream)
    $writer.Write($Content)
    $writer.Flush()
    $stream.Position = 0
    $list = (Get-SPWeb $WebUrl).Lists.TryGetList($ListName)
    $file = $list.RootFolder.Files.Add($DocumentName, $stream, $true)
}

New-SPFile -WebUrl "http://myweburl" -ListName "Shared Documents" `
    -DocumentName "PowerShellDocument.txt" `
    -Content "Document Content"


In this post we’ve covered common activities and introduced the cmdlets that are used to pick up references to the familiar entry points into the SharePoint server object model, such as SPWeb, SPSite and SPList. Since PowerShell exposes objects in a similar manner to C#, developers can build on this knowledge to script many tasks that would otherwise require managed code.

PowerShell primer for SharePoint developers: Part 1

With SharePoint 2010, PowerShell is the preferred command line interface, and in SharePoint 2013 a working knowledge of it is pretty much essential to get things like search up and running. Despite that, I’m often asked for assistance in preparing PowerShell scripts by SharePoint admin folks who really should know better. Bearing that in mind, it seems that there’s scope for a few posts on how to use PowerShell to manage SharePoint.

Posts in this series:
  1. Getting started (this post)
  2. Using PowerShell with SharePoint

What is PowerShell?

PowerShell is a powerful scripting environment that leverages the flexibility of the .NET Framework to provide command-line users with the capability to develop scripts and utilities that automate administrative tasks. Unlike many command-line tools, PowerShell has been designed to deal with objects rather than plain text output. Most command-line tools are effectively executables, and as such can only read text-based input from the command line and return text-based output to the console. PowerShell introduces the concept of a cmdlet (pronounced ‘command-let’). Cmdlets are PowerShell-specific commands which are derived from the System.Management.Automation.Cmdlet class and are created using the .NET Framework. PowerShell uses an object pipeline to pipe the output of one cmdlet to the next cmdlet in a chain. This mechanism allows objects to be passed between functions simply.

There are many cmdlets available for use with PowerShell, and users are free to create their own using tools such as Visual Studio. To make it easier to manage cmdlets, they are commonly packaged together as snap-ins. As a general rule a snap-in contains all of the cmdlets for managing a particular product or service. For example, the Microsoft.SharePoint.PowerShell snap-in contains the out-of-the-box cmdlets for SharePoint 2010.

Getting help

When using legacy command-line tools, it can be difficult to remember the names of the various tools. There is no common standard for naming or passing parameters. For PowerShell cmdlets a verb-noun naming convention has been adopted. This makes it easier for users to guess the name of a command. By using the get-command cmdlet, it’s possible to get a list of the commands that are available. This command also accepts -verb or -noun as parameters for filtering the output. For example, we could enter the following command to retrieve a list of commands relating to the SPWeb object:

get-command -noun spweb

As well as a standard naming convention for cmdlets, PowerShell also imposes a standard convention for passing parameters. Parameters are always preceded with a hyphen; we can see this in the example above. It’s possible to view help for a particular cmdlet by passing the -? parameter.

Tab expansion (or Intellisense if you’re a developer)

One really useful feature of the PowerShell command-line interface is the ability to use tab expansion. As developers, we’ve become used to having tools such as IntelliSense to remind us of our options when entering code. The same idea works with PowerShell. When entering a command name, if we enter part of the name and then repeatedly press the tab key we can cycle through the available commands matching our input. When entering parameters for a command if we enter the preceding hyphen we can also cycle through the list of available parameters. These two features combined with a standard naming convention make it relatively straightforward to pick up PowerShell scripting.

Using Objects

As mentioned, PowerShell deals with objects rather than text. This means that we can often set or query properties or execute methods on the object that is returned by a particular cmdlet. For example, the following command returns an SPWeb object:

Get-SPWeb -Identity http://localhost

If we execute this command we’ll find that a URL is returned. This is the default output where no specific property has been called.

We can get a list of the available members for the resultant SPWeb object by passing the output of this command to the get-member command using the pipe character as follows:

Get-SPWeb -Identity http://localhost|get-member

Once we’ve found the property that we’re interested in, we can either retrieve the value by placing parentheses around the command or by assigning the output of the command to a variable and then querying the variable.

(Get-SPWeb -Identity http://localhost).Title


$web=Get-SPWeb -Identity http://localhost
$web.Title

PowerShell variables are always prefixed with $ and persist for the duration of a session. We can examine the type of a variable by using the GetType method as shown:

$web.GetType()
If we need to view more than one property from an object or collection of objects we can use the select-object cmdlet to specify the properties that we require.

(Get-SPFarm).Services|Select-Object -Property TypeName,Status

This command will return a list of services on a farm along with their status. Another way to write the same command is:

(Get-SPFarm).Services|select TypeName,Status

This shortened command uses a technique known as aliasing. Many commonly used commands have simpler aliases and a full list can be retrieved using the following command:

Get-Alias
As well as being able to specify which properties are shown when displaying a collection of objects, we can also filter which objects appear in the collection by using the Where-Object cmdlet. Again this cmdlet has an alias: where. Before we see an example of this, we need to consider the comparison operators that are available for use with this cmdlet:

Comparison Operator                          Example (returns true)

-eq          (is equal to)                   1 -eq 1
-ne          (is not equal to)               1 -ne 2
-lt          (is less than)                  1 -lt 2
-le          (is less than or equal to)      1 -le 2
-gt          (is greater than)               2 -gt 1
-ge          (is greater than or equal to)   2 -ge 1
-like        (wildcard comparison for text)  "file.doc" -like "f*.do?"
-notlike     (negated wildcard comparison)   "file.doc" -notlike "p*.doc"
-contains    (contains)                      1,2,3 -contains 1
-notcontains (does not contain)              1,2,3 -notcontains 4

As well as comparison operators, we can combine comparisons by using the following logical operators:

Logical Operator                                  Example (returns true)

-and  (logical and; true if both sides are true)  (1 -eq 1) -and (2 -eq 2)
-or   (logical or; true if either side is true)   (1 -eq 1) -or (1 -eq 2)
-not  (logical not; reverses true and false)      -not (1 -eq 2)
!     (logical not; reverses true and false)      !(1 -eq 2)

Using these two techniques we can create queries such as:

(Get-SPFarm).Services|Where {$_.TypeName -Like "*data*"}|Select TypeName, Status

Note the use of the $_ variable. This is a system defined variable which evaluates to the current object in the object pipeline, or in other words the output of the previous command. In situations where the previous command returns an enumerable collection the where command will iterate through the collection, therefore $_ will evaluate to an instance of an object in the collection rather than the entire collection.
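If the object pipeline still feels unfamiliar, it may help to think of it as analogous to chained array methods in JavaScript. This is an analogy only – PowerShell is doing this with live .NET objects, and the data here is invented for illustration:

```javascript
// A rough analogy for: (Get-SPFarm).Services | Where {$_.TypeName -Like "*data*"} | Select TypeName
// Each stage of the chain receives whole objects, so properties survive
// between stages without any text parsing.
var services = [
  { TypeName: "Search Service", Status: "Online" },
  { TypeName: "Timer Service", Status: "Offline" },
  { TypeName: "Database Service", Status: "Online" }
];

// filter plays the role of Where-Object; the callback parameter plays the role of $_
var onlineNames = services
  .filter(function (s) { return s.Status === "Online"; })
  .map(function (s) { return s.TypeName; }); // map plays the role of Select
```

Just as with `$_`, the callback parameter refers to one object in the collection at a time, not the collection as a whole.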

Using Functions

As well as being able to execute command chains and utilize variables, we can also define functions using PowerShell. Functions work in the same way as in any other programming language; the only minor difference is that all uncaptured output within a function is returned to the caller. For example, we can create the following simple function:

function addNumbers($first,$second)
{
    "Adding numbers"
    return $first + $second
}

We can call this function by entering the command (note the method of passing named parameters):

addNumbers -first 1 -second 2

The resultant output will be:

Adding numbers
3

This is expected. However, what isn’t expected is that if we examine the data type of the return value by piping the output to Get-Member we find that there are two return types, string and int32. In situations where we want to use our function in a chain this is not ideal. The reason this has happened is that the Adding Numbers message is uncaptured output, that is to say it isn’t assigned to a variable or passed to a cmdlet, and as a result it forms part of the output. We can prevent this from occurring by modifying the function as follows:

function addNumbers($first,$second)
{
    Write-Host "Adding numbers"
    return $first + $second
}


This post is a brief introduction to PowerShell, covering the fundamental differences between PowerShell and other scripting techniques. In the next post I’ll build on this basic knowledge to look at how we can perform specific SharePoint tasks using PowerShell.

A collection of TypeScript Definition files for SharePoint 2013 et al.

I’ve been doing a lot of work recently with TypeScript and have created a few definition files for stuff that I use often such as:

I’m planning to maintain these on GitHub as I add to them. Hopefully they’ll come in useful. You can find them here:

Quick tip: Auto copy SharePoint 2013 .Net 4 DLL to GAC with VS2012

When working on complex SharePoint projects I often have a number of projects, some of which produce DLLs that are deployed to the GAC when the completed solution is deployed to SharePoint. Since life is short, I don’t want to be redeploying the solution every time I update one of these DLLs. Instead I just want to copy the DLL to the GAC and maybe recycle an app pool.

While we wait eagerly for the SP2013 version of CKSDev, here’s a quick post-build script that’ll do the job:

  1. Select a project then Properties > Build Events.
  2. In the Post-build event command line box add the following:
if "$(ConfigurationName)"=="Debug" (
"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\gacutil.exe" -i $(TargetPath)
"C:\Windows\System32\inetsrv\appcmd.exe" recycle apppool "19311de9d6b64338a982be8c9af45345"
)

Note: Change the app pool name to something appropriate to your project. If your DLL is being used in a front end process you probably want:

"C:\Windows\System32\inetsrv\appcmd.exe" recycle apppool "SharePoint - 80"

Now when you build your project in debug mode (i.e. on your dev box), you’ll see the following in the output window:

1>------ Build started: Project: MyCompany.MyWidget, Configuration: Debug Any CPU ------
1>  MyCompany.MyWidget -> C:\Code\SharePoint2013\MyCompany.MyWidget.dll
1>  Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.17929
1>  Copyright (c) Microsoft Corporation.  All rights reserved.
1>  Assembly successfully added to the cache
1>  "19311de9d6b64338a982be8c9af45345" successfully recycled
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

A circular reference was detected while serializing an object of type ‘System.Reflection.RuntimeModule’

When building web services that return JSON you’ll probably come up against a variant of this error when trying to serialize exceptions. (It also occurs with many other types but my solution only deals with exceptions so we’ll stick with that for now)

The reason for the problem, as the error suggests, is a circular reference in the object graph that prevents the serialization from completing. Thankfully this can be easily resolved by implementing a converter to return only the relevant parts of the exception (or any other object):

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Web.Script.Serialization;

namespace MyCompany.MyWidget
{
    internal class ExceptionConverter : JavaScriptConverter
    {
        public override IEnumerable<Type> SupportedTypes
        {
            //Add a list of the types you want to convert here
            get { return new ReadOnlyCollection<Type>(new List<Type>(new[] { typeof(Exception) })); }
        }

        public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
        {
            if (dictionary == null) throw new ArgumentNullException("dictionary");
            //Check that the passed object is of the correct type
            if (type == typeof(Exception))
            {
                //Since we've lost most of the data of the original exception
                //we'll deserialize to a plain Exception object
                var message = dictionary[@"Message"].ToString();
                return new Exception(message);
            }
            return null;
        }

        public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
        {
            var exception = obj as Exception;
            if (exception != null)
            {
                var result = new Dictionary<string, object>
                    {
                        //Make sure anything added here is serializable!
                        {@"Type", exception.GetType().ToString()},
                        {@"Message", exception.Message},
                        {@"Source", exception.Source}
                        //Add whatever other properties you're interested in seeing on the client
                    };
                return result;
            }
            return new Dictionary<string, object>();
        }
    }
}
This code converts an exception into a dictionary of serializable objects and outputs the dictionary as JSON. You’d use this in your code like this:

public Stream MyWebServiceCall(Stream value)
{
    try
    {
        //Web service foo goes here
        return value;
    }
    catch (Exception ex)
    {
        //It's all gone wrong. Send an exception to the caller
        var serializer = new JavaScriptSerializer();
        serializer.RegisterConverters(new JavaScriptConverter[] { new ExceptionConverter() });
        string json = serializer.Serialize(ex);
        return new MemoryStream(Encoding.UTF8.GetBytes(json));
    }
}

For all your effort, you’ll now see a neatly serialized exception object in your JSON output.
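On the client side, the serialized exception can then be picked up with ordinary JSON handling. A sketch – the Type/Message/Source property names match those emitted by the converter above, while handleServiceResponse is a made-up helper:

```javascript
// Made-up helper: decide whether a service response is a serialized exception.
// Type, Message and Source are the keys written by the ExceptionConverter.
function handleServiceResponse(responseText) {
  var data = JSON.parse(responseText);
  if (data && data.Type && data.Message) {
    // Looks like a serialized exception - surface it rather than treating it as data
    return "Error (" + data.Type + "): " + data.Message;
  }
  return data;
}

var result = handleServiceResponse(
  '{"Type":"System.Exception","Message":"Something failed","Source":"MyWidget"}'
);
// result is "Error (System.Exception): Something failed"
```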

Fabulous eh?

Quick Tip: SPRoot shortcut

How many times a day do you find yourself typing C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\<whatever>?

Save yourself some trouble – define an environment variable called SPROOT that points to the folder and you can use it everywhere. Here’s how to do it:

  1. Open a command prompt.
  2. Enter the command:
setx SPROOT "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\

Hit return, close the command prompt and you’re done. (Note: The lack of a closing quotation mark is intentional.)

You can now navigate to the SharePoint hive by simply typing %SPROOT%. From the search box in the start menu it’ll show you the contents of the folder for easy access.

Another 2 seconds saved!