
My rambling thoughts on exploring the .NET framework and related technologies

# Tuesday, 06 December 2011

"I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone."

Danish computer scientist Bjarne Stroustrup

Tuesday, 06 December 2011 14:00:00 (GMT Standard Time, UTC+00:00)

A problem with anonymous types

One of my first posts on this blog was about the problem of using anonymous types when you want to send the data outside of your current class. At the time, the only way I knew to solve this was to create a simple class that had the structure of the anonymous type, and create one of these instead of the anonymous type. I do this regularly, although I have taken to naming such classes CustomerOverview, OrderOverview, etc, instead of CustomerTmp as I did in that blog post, as they aren't really temporary classes at all, they are slim, flattened overviews of the class.

This approach works well, but it can have its downsides. One, as mentioned in that post, is that it is easy to end up with a proliferation of classes, many of which may only be used in one location in the code. If the classes represent lightweight views of entities in the model (such as the two examples shown above), then I don't worry about this, as it is clear what they represent, and it's fairly certain that I'll end up using them somewhere else at some point.
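For illustration (the class and property names here are my own, not from any real model), such an overview class is usually nothing more than a couple of auto-properties:

```csharp
// A slim, flattened overview of the Customer entity, holding only
// the data that the consuming code actually needs.
public class CustomerOverview {
  public int ID { get; set; }
  public string Name { get; set; }
}
```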

However, the problem does become more apparent when the classes are very specific to one part of the code. I feel uncomfortable having classes that don't really represent anything in the object model as a whole. Granted, this isn't a very common scenario, as if your code is structured sensibly, you will probably find that you don't have these artificial classes, but it is something that happens every now and then.

For some odd reason, I was struck with an idea whilst I was washing my face the other morning! Now why this would pop into my head at 6am, when I hadn't even thought about the issue for a year or so is beyond me, but I don't claim to understand the workings of the human mind, especially one that is bleary from lack of sleep!

A little-known, but intriguing class

One of the less-known and more interesting (but ultimately useless in my experience) additions to the .NET framework version 4 was the Tuple class. This generic class allows you to create an object that consists of up to eight pieces of information of a specified type (Note: the slightly arbitrary reason why they picked eight as the maximum can be found in this MSDN Magazine article).
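A minimal sketch of the idea (the values are illustrative):

```csharp
// Tuple.Create infers the generic arguments, so this creates a
// Tuple<int, string> without spelling the types out.
var customer = Tuple.Create(1, "Fred");

// The components are exposed as read-only Item1, Item2, ... properties.
int id = customer.Item1;      // 1
string name = customer.Item2; // "Fred"
```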

As an example, suppose you want to get a list of customer names and database IDs, but don't need any of the other information in your customer entity. Instead of creating a CustomerOverview class, you could just create a Tuple<int, string> which would hold the two pieces of information. Thus, you may have a method with a signature like...

  public List<Tuple<int, string>> GetCustomerIdAndNames() {
    List<Tuple<int, string>> customers = new List<Tuple<int, string>>();
    // In reality, this would be populated from a database...
    customers.Add(new Tuple<int, string>(1, "Fred"));
    // etc...
    return customers;
  }

So far so good. All of this looks jolly useful eh? So why did I describe it as ultimately useless? Only because my few experiences with using the Tuple class have ended up with changes in the requirements, which meant that every bit of code that used the Tuple had to be changed, at which point it became clear that it would have been much better design to have used a WhateverOverview class in the first place. So, whilst it's an interesting addition to the framework, it's one that I have never actually used (for very long anyway).

All that changed at 6am the other day. I had a flash of inspiration, which can be quite unnerving at that time of the morning!

Binding ASP.NET Repeaters to collections of Tuples

The issue I had that prompted that old blog post was sending data to an ASP.NET page, where it would be rendered using a Repeater control. For those of you not familiar with ASP.NET, this server-side control allows you to display a collection of items on a web page, without having to write all the code to loop through the collection and render it yourself. You just set up a simple template, bind the collection to the control, and the ASP.NET rendering engine does it for you. Now, those masochists who like writing everything themselves (such as those who use ASP.NET MVC!) might turn their noses up at such ideas, but as a professional ASP.NET developer for many years, I rely on controls like this to help me work more quickly. If Microsoft provide a robust and efficient way of doing a job, why should I ignore it and do the whole thing by hand?

Anyway, the problem comes when you want to pass a collection of anonymous types to a Repeater. As the rendering engine needs to know the type of each item in the collection, so that it can handle it appropriately, you can't bind a collection of anonymous types to a Repeater. So, to continue my earlier example, if you want to use just the customer ID and name in the Repeater, you either have to send across the whole Customer entity (which may involve a large amount of unneeded data), or create a CustomerOverview class, and pass a collection of those.

In this case, a customer is a logical enough entity that you might want to create the CustomerOverview, but there are many cases where you wouldn't. The specific collection of data items you want may be unique to this view, and creating a custom class for it would pollute the overall object model.

The simple answer to the problem is to generate a collection of Tuples, and pass that to the Repeater. As the Tuple is a known class, and the instance of it is strongly-typed, the ASP.NET engine won't have any problem with it at all, and can happily render the data without the need for an overview class.
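Assuming the GetCustomerIdAndNames() method shown earlier, a sketch might look like this (the control ID is illustrative); the template binds to the Tuple's Item1 and Item2 properties just like any other strongly-typed properties:

```aspx
<%-- Illustrative markup: Item1 is the customer ID, Item2 the name --%>
<asp:Repeater ID="CustomersRepeater" runat="server">
  <ItemTemplate>
    <li><%# Eval("Item2") %> (ID: <%# Eval("Item1") %>)</li>
  </ItemTemplate>
</asp:Repeater>
```

...with the binding done in the code-behind as usual:

```csharp
CustomersRepeater.DataSource = GetCustomerIdAndNames();
CustomersRepeater.DataBind();
```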

A fairly simple answer, using a class that turned out not to be so useless after all!

A caveat

I would be remiss if I didn't end off with a comment about this approach. Although it works, and neatly solves the problem, I have to say that I'm still not a huge fan of using Tuples, especially when they are going to be passed outside of the class in which they are created. The reason is simply as I stated above, that requirements change, and if you find the need to change the length of the Tuple, or change the type of one of the items, you end up having to modify code in multiple classes. This gives the very strong feeling that you're working with strongly-coupled code, which is never a good idea.

Having said that, many times when you pass a collection to a Repeater, you don't actually need any code in the ASP.NET code-behind, as most of the data binding can be done automatically, so you probably don't need any code in the code-behind that is specific to the Tuple in question. Therefore, if you change the Tuple where it is generated, you won't need to change anything in the ASP.NET for it to continue working. This is especially true if you are using MVP, where the presenter can take care of any formatting that you want on the data items. In a case like this, I would be happy to use Tuples.

Phew, that was quite a lot of thought for first thing in the morning!

Friday, 04 November 2011 12:09:00 (GMT Standard Time, UTC+00:00)

I previously blogged about a seemingly innocent LINQ problem that had us baffled for ages, which was how you sort or filter an entity’s child collection in Linq. For example, if you want to pull a collection of customers, and include all of their orders from this year, but none from earlier, then there doesn’t seem to be a simple way to do this in Linq. You need to do the query in two stages, the first of which builds an anonymous type, and the second of which links the two parts of it together. See that post for more details.

I also blogged about the problem of Linq not including child entities when doing joins, which requires you to cast the query to an ObjectQuery<> so you can use the Include() method on it.

The problem comes when you want to combine the two methods, meaning that your query needs to be constructed in two stages to ensure that the sorting or filtering of the child collection is done correctly, but you also need to cast the final result to an ObjectQuery<> before you send it out over WCF. The problem arises because you need to enumerate the query before doing the Include(), as that is the only way to ensure that the sorting is done, but calling AsEnumerable() gives you an IEnumerable<> (reasonably enough), which can't be cast to an ObjectQuery! So what's a fellow to do? Good question, and one that had me going for ages.

The only way I have found to do this is to enumerate the collection manually. I added a foreach loop that went through the parent collection and enumerated the children as it went along. I used Debug.WriteLine() to dump the results to the Output window, which is one of my favourite debugging techniques. Anyone looking at the code would logically assume that this loop could be removed for the production code (which is I think what happened when I first wrote it), but this would cause the sorting to fail.

I ended up with code like that shown below (read to the end of this blog post to see an improved version of this code). You don’t need to know what the entities represent, just that I wanted a collection of DhrTemplates, each of which has a number of DhrTemplatesPart entities, and these had to be sorted on the DisplayPosition property.

   1:  var dhrTemplatesVar = from dt in getContext().DhrTemplates
   2:                        where dt.CurrentTemplate
   3:                        select new {
   4:                          currentTemplate = dt,
   5:                          templateParts = dt.DhrTemplatesParts.OrderBy(dtp => dtp.DisplayPosition)
   6:                        };
   7:  Debug.WriteLine("Enumerating the collections...");
   8:  foreach (var dtmpl in dhrTemplatesVar) {
   9:    DhrTemplate dt = dtmpl.currentTemplate;
  10:    Debug.WriteLine("PartDefinitionID: " + dt.PartDefinitionID);
  11:    IOrderedEnumerable<DhrTemplatesPart> parts = dtmpl.templateParts;
  12:    foreach (DhrTemplatesPart dtp in parts) {
  13:      Debug.WriteLine("  " + dtp.Description);
  14:    }
  15:  }
  16:  Debug.WriteLine(" ");
  17:  ObjectQuery<DhrTemplate> dhrTemplatesQry = (from dt in dhrTemplatesVar
  18:                                              select dt.currentTemplate) as ObjectQuery<DhrTemplate>;
  19:  if (dhrTemplatesQry != null) {
  20:    ObjectQuery<DhrTemplate> dhrTemplates = dhrTemplatesQry
  21:      .Include("User")
  22:      .Include("PartDefinition")
  23:      .Include("DhrTemplatesParts.PartDefinition.PartInformationTypes");
  24:    List<DhrTemplate> dhrTemplatesCurrent = dhrTemplates.ToList();
  25:    return dhrTemplatesCurrent;
  26:  }
  27:  return null;

The OrderBy clause on line 5 orders the parts correctly, but isn’t in effect until the query is enumerated, which is what happens between lines 8 and 15. I don’t actually need the Debug.WriteLine statements any more, but I left them in case I need to come back to this again. By the time you get to line 17, you still have the anonymous type for the query, but now it has been enumerated, so the sorting is done. Now we can cast it to an ObjectQuery<> and use Include() to include the child entities.

Quite an insidious problem, but obvious when you see the solution. I still have this feeling that there should be a better way to do it though.

Update some time later...

Well of course, there was a much better way to do it! Sadly, I was so stuck in the problem that I missed the blindingly obvious answer. As mentioned, calling AsEnumerable() enumerates the query, but gives you an IEnumerable<>, which can't be cast to an ObjectQuery. However, you don’t have to take the returned value from AsEnumerable(), you can just call it, and carry on using your original query, which has now been enumerated.

This makes the resulting code much simpler...

   1:  var dhrTemplatesVar = from dt in getContext().DhrTemplates
   2:                        where dt.CurrentTemplate
   3:                        select new {
   4:                          currentTemplate = dt,
   5:                          templateParts = dt.DhrTemplatesParts.OrderBy(dtp => dtp.DisplayPosition)
   6:                        };
   7:  dhrTemplatesVar.AsEnumerable();
   8:  ObjectQuery<DhrTemplate> dhrTemplatesQry = (from dt in dhrTemplatesVar
   9:                                              select dt.currentTemplate) as ObjectQuery<DhrTemplate>;
  10:  // rest of the code omitted as it's identical

Notice that lines 7 to 16 in the previous listing have been replaced with the single line 7 in this listing. I’m calling AsEnumerable(), but ignoring the return value. This enumerates the query, but doesn’t leave me with an ObjectQuery.

Pretty obvious really!

Wednesday, 05 October 2011 18:14:00 (GMT Daylight Time, UTC+01:00)

Geographical searches on postcode

I am currently working on an application where I want to have a search feature that allows people to search for businesses within a certain distance of their home (or anywhere else they care to choose).

I have some old UK postcode data knocking around, and was going to use that. For those not familiar with them, UK postcodes are made up of two parts, a major (also known as “outward”) part, and a minor (or “inward”) part. The major part is one or two letters followed by one or two digits, and the minor part is a digit, followed by two letters. Examples of valid postcode formats are M25 0LE, NW11 3ER and L2 3WE (no idea if these are genuine postcodes though).
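As a rough sketch, the format described above can be checked with a regular expression. This deliberately ignores real-world quirks (such as major parts with a trailing letter, like EC1A), so don't treat it as a full validator:

```csharp
using System.Text.RegularExpressions;

public static class PostcodeFormat {
  // Checks the shape described above: 1-2 letters, 1-2 digits,
  // a space, then a digit and 2 letters.
  public static bool LooksLikePostcode(string postcode) {
    return Regex.IsMatch(postcode.Trim(),
                         @"^[A-Za-z]{1,2}\d{1,2} \d[A-Za-z]{2}$");
  }
}

// LooksLikePostcode("M25 0LE"), ("NW11 3ER") and ("L2 3WE") all return true.
```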

Coupled with the postcodes are northings and eastings. These odd-sounding beasties are simply the number of metres north and east from a designated origin, which is somewhere west of Land’s End. See the Ordnance Survey web site for more details. If you have any two postcodes, you can calculate the distance between them (as the crow flies) by a simple application of Pythagoras’ Theorem. My intention was to use all of this in my search code.
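The calculation itself is a one-liner; a sketch with illustrative parameter names, working in metres:

```csharp
// Straight-line distance between two points given as eastings and
// northings (metres from the OS origin), by Pythagoras' Theorem.
static double DistanceInMetres(double easting1, double northing1,
                               double easting2, double northing2) {
  double dx = easting2 - easting1;
  double dy = northing2 - northing1;
  return Math.Sqrt(dx * dx + dy * dy);
}
```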

Whilst doing some ferreting around the web, looking for more up-to-date data, I found out that you can now get the UK postcode data absolutely free! When I last looked, which was some years ago admittedly, they charged an arm and a leg for this. Now all you need to do is order it from their downloads page, and you get sent a link to download the data. They have all sorts of interesting data sets there (including all sorts of maps, street views and so on), but the one I wanted was the Code-Point Open set. This was far more up-to-date than the data I had, and was a welcome discovery. Now all I had to do was import it, and write some calculation code.

Before I got any further though, I further discovered that SQL Server 2008 has a new data type known as geography data, which is used to represent real-world (ie represented on a round Earth) co-ordinates. It also has the geometry data type, which is a similar idea, but uses a Euclidean (ie flat) co-ordinate system. Given that I am only dealing with the UK, the curvature of the Earth isn’t significant in distance calculations, and so either data type would do.

However, converting northings and eastings to geography data isn’t as simple as you might hope, but I found a post on Adrian Hill’s blog, where he described how to convert OS data into geometry data, provided a C# console program that does just that, and then showed how easy it is to query the data for distances between points. In other words, exactly what I wanted!

I won’t describe the details here, because you can follow that link and read it for yourself, but basically all you need to do is get the Code-Point Open data, download the converter and away you go. Truth be told, it wasn’t quite that simple as the format of the data has changed slightly since he wrote the code, so I needed to make a small change. I left a comment on the post, and Adrian updated the code, so you shouldn’t need to worry about that. It doesn’t use hard-coded column numbers anymore, so should be future-proof, in case the OS people change the format again.

Testing the distance calculation, and an important point to improve performance

Once I had the data in SQL Server, I wanted to see how to use it. In the blog post, Adrian showed a simple piece of SQL that did a sample query. You can see the full code on his blog, but the main part of it was a call to the STDistance() method on the GeoLocation column, which did the calculation. He commented that when he tested it, he searched for postcodes within five miles of his home, and got 8354 rows returned in around 500ms. I was somewhat surprised when I tried it to discover that it took around 14 seconds to do a similar calculation! Not exactly what you’d call snappy, and certainly too slow for a (hopefully) busy application.

I was a bit disappointed with this, but decided that for my purposes, it would be accurate enough to do the calculation based on the major part of the postcode only. One of the things Adrian’s code did when importing the data to SQL Server was create rows where the minor part was blank, and the geography value was an average for the whole postcode major area. I adjusted my SQL to do this, and I got results quickly enough that the status bar in SQL Server 2008 Management Studio reported the elapsed time as zero. OK, so the results won’t be quite as accurate, but at least they will be fast.

Whilst I was writing this blog post, I looked back at Adrian’s description again, and noticed one small bullet point that I had missed. When describing what his code did, he mentioned that it created a spatial index on the geography column. Huh? What on earth (pardon the pun) is a spatial index? No, I had no idea either, so off I went to MSDN, and found an article describing how to create a spatial index. I created one, which took a few minutes (presumably because there were over 17 million postcodes in the table), and tried again. This time, I got the performance that Adrian reported. I don’t know if his code really should have created the spatial index, but it didn’t. However, once it was created, the execution speed was fast enough for me to use a full postcode search in my application.
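For reference, creating the index is a single statement. The table and column names below follow the imported schema used in the rest of this post, so treat them as assumptions:

```sql
-- Assumes the PostCodeData table and GeoLocation column used above.
-- Note that a spatial index requires the table to have a primary key.
CREATE SPATIAL INDEX IX_PostCodeData_GeoLocation
    ON dbo.PostCodeData (GeoLocation);
```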

Bringing the code into the Entity Framework model

So, all armed with working code, my next job was to code the search in my application. This seemed simple enough, just update the Entity Framework model with the new format of the postcodes table, write some Linq to do the query and we’re done… or not! Sadly, Entity Framework doesn’t support the geography data type, so it looked like I couldn’t use my new data! This was a let-down to say the least. Still, not to be put off, I went off and did some more research, and realised that it was time to learn how to use stored procedures with Entity Framework. I’d never done this before, simply because when I discovered Entity Framework, I was so excited by it that I gave up writing SQL altogether. All my data access code went into the repository classes, and was written in Linq.

Creating a stored procedure was pretty easy, and was based on Adrian’s sample SQL:

create procedure GetPostcodesWithinDistance
@OutwardCode varchar(4),
@InwardCode varchar(3),
@Miles int
as
  declare @home geography
  select @home = GeoLocation from PostCodeData
      where OutwardCode = @OutwardCode and InwardCode = @InwardCode
  select OutwardCode, InwardCode from dbo.PostCodeData
  where GeoLocation.STDistance(@home) <= (@Miles * 1609)
    and InwardCode <> ''

Using stored procedures in Entity Framework

Having coded the stored procedure, I now had to bring it into the Entity Framework model, and work out how to use it. The first part seemed straightforward enough (doesn’t it always?). You just refresh your model, open the “Stored Procedures” node in the tree on the Add tab, select the stored procedure, click OK and you’re done. Or not. The slight problem was that my fantastic new stored procedure wasn’t actually listed in the tree on the Add tab. It was a fairly simple case of non-presence (Vic wasn’t there either).

After some frustration, someone pointed out to me that it was probably due to the database user not having execute permission on the stored procedure. Always gets me that one! Once I had granted execute permission, the stored procedure appeared (although Vic still wasn’t there), and I was able to bring it into the model. Then I right-clicked the design surface, chose Add –> Function Import and added the stored procedure as a function in the model context. Finally I had access to the code in my model, and could begin to write the search.
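Once imported, the function appears as just another method on the object context, returning the complex type that the Function Import dialog generated for the result set. A hedged sketch (the property names on the result type are illustrative):

```csharp
// jbd is the object context; the generated method wraps the stored
// procedure and returns an ObjectResult of the generated complex type.
var nearby = jbd.GetPostcodesWithinDistance("M25", "0LE", 5);
foreach (var p in nearby) {
  Console.WriteLine(p.OutwardCode + " " + p.InwardCode);
}
```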

Just to ensure that I didn’t get too confident, Microsoft threw another curve-ball at me at this point. My first attempt at the query looked like this:

IEnumerable<string> majors = from p in jbd.GetPostcodeWithinDistance("M7", 45)
                             select p.Major;
IQueryable<Business> localBusinesses = from b in ObjSet
          where (from p in majors
                 where p == b.PostcodeMajor
                 select p).Count() > 0
          select b;

The first query grabs all the postcodes within 45 miles of the average location of the major postcode M7. Bear in mind that this query was written before I discovered the spatial index, so it only uses the major part of the postcode. A later revision uses the full postcode. The variable jbd is the context, and ObjSet is an ObjectSet<Business> which is created by my base generic repository.

When this query was executed, I got a delightfully descriptive exception, “Unable to create a constant value of type 'JBD.Entities.Postcode'. Only primitive types ('such as Int32, String, and Guid') are supported in this context.” Somewhat baffled, I turned to the Ultimate Source Of All Baffling Problems known as Google, and discovered that this wasn’t actually the best way to code the query, even if it had worked. I had forgotten about the Contains() extension method, which was written specifically for cases like this.

The revised code looked like this (first query omitted as it didn’t change):

IQueryable<Business> businesses = from b in ObjSet
                                  where majors.Contains(b.PostcodeMajor)
                                  select b;

Somewhat neater, clearer and (more to the point) working!

So, with working code, I now had only one more hurdle to jump before I could claim this one had been cracked.

Copying geography or geometry data to another database

All of the above was going on on my development machine. Now I had working code, I wanted to put the data on the production server, so that the application could use it. This turned out to be harder than I thought. I tried the SQL Server Import/Export wizard, which usually works cleanly enough, but got an error telling me that it couldn’t convert geography data to geography data. Huh? Searching around for advice, I found someone who suggested trying to do the copy as NTEXT, IMAGE, and various other data types, but none of these worked either.

After some more searching, I discovered an article explaining that SQL Server has an XML file that contains definitions for how to convert various data types. Unfortunately, Microsoft seem to have forgotten to include the geography and geometry data types in there! I found some code to copy and paste and, lo and behold, it gave the same error message! Ho hum.

After some more messing around and frustrated attempts, I mentioned the problem to a friend of mine who came up with the rather simple suggestion of backing up the development database and attaching it to the SQL Server instance on the production machine. I had thought of this, but didn’t want to do it as the production database has live data in it, which would be lost. He pointed out to me that if I attached it under a different name, I could then copy the data from the newly-attached database to the production one (which would be very easy now that they were both on the same instance of SQL Server), and then delete the copy of the development database. Thank you Yaakov!
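In SQL terms, the suggestion boils down to something like this. All of the database names, table names and file paths below are illustrative, and the logical file names in your backup will differ:

```sql
-- Restore the development backup alongside production under a new name...
RESTORE DATABASE DevCopy
    FROM DISK = 'C:\Backups\Dev.bak'
    WITH MOVE 'Dev' TO 'C:\Data\DevCopy.mdf',
         MOVE 'Dev_log' TO 'C:\Data\DevCopy_log.ldf';

-- ...copy the postcode data into the production database...
INSERT INTO Production.dbo.PostCodeData (OutwardCode, InwardCode, GeoLocation)
    SELECT OutwardCode, InwardCode, GeoLocation
    FROM DevCopy.dbo.PostCodeData;

-- ...and remove the temporary copy when done.
DROP DATABASE DevCopy;
```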

Thankfully, this all worked fine, and I finally have a working geographic search. As usual, it was a rough and frustrating ride, but I learned quite a few new things along the way.

Tuesday, 20 September 2011 14:12:00 (GMT Daylight Time, UTC+01:00)
# Wednesday, 07 September 2011

Of course, we professional programmers never make mistakes, ahem. That’s why we never need to use debuggers, ahem.

Well, suspend disbelief for a moment, and assume that I had a bug in the code I was developing. You know the feeling, you stare at it, you write unit tests, you stare at it some more, and still can’t work out why on earth Visual Studio is claiming that there is an error in your code, when it’s so obvious that there isn’t. You even get to the point of talking to your computer, pointing out the error of its ways.

Eventually, you spot the mistake. Once you’ve seen it, it was so blindingly obvious that you can only offer a silent prayer of thanks that no-one else was in the room at the time. You change that one tiny typo, and suddenly Visual Studio stops complaining about your code and it all runs correctly.

Just as you sit back relieved, you notice your computer smirking. I’m certain mine just laughed at me. It did it quietly, but I noticed. It hates me.

Wednesday, 07 September 2011 16:42:00 (GMT Daylight Time, UTC+01:00)

I was trying to have a custom screen that you can use for adding and editing a product, and set it as the default screen for the Product entity. This means that however the user ends up at a product screen, the custom screen will be shown, not the one Lightswitch generates for you.

What I also wanted to do was make it so that if the user clicked the "Add product" button on the category screen, the new product's category would automatically be set to the one they were viewing when they clicked the button.

This is easy if you are only allowing your users to add and edit entities by clicking command bar buttons, but if you use links, which are one of Lightswitch's better features if you ask me, then you hit a major problem.

When you click a link, Lightswitch uses the default screen for the entity, so setting the custom product screen to be the default screen for the Product entity should do the trick. Except you can’t! I discovered that Lightswitch only allows you to use screens with exactly one parameter as the default entity screen.

This problem generated some discussion in the Lightswitch forum, and some partial answers. Late last night, I hit upon a simple and elegant answer. This blog post explains it in excruciating detail.

Thursday, 01 September 2011 14:44:00 (GMT Daylight Time, UTC+01:00)

Recognition #1

A few weeks ago, I made the grave mistake of accepting Skype’s offer to upgrade me to the latest version. I won’t go through the whole sorry saga, but the end result was that after a few posts in the Skype community forum, complaining about some of the obnoxious new “features,” I managed to find Old Apps, which offers downloads of previous versions of various bits of software. Interestingly enough, Skype was the #1 most downloaded old version, which means that I wasn’t the only one who didn’t like the new version!

Anyway, as a result of my posts in the Skype forum, I received an e-mail, informing me that I had been awarded a new rank in the community (stupidly enough, the subject line of the e-mail didn’t mention Skype at all, and it nearly got deleted as spam!). Almost too excited to click the links (OK, ever so slight exaggeration there), I went to my private messages on the Skype site, and was greeted with the overwhelming news that I had been awarded the rank of “Occasional Visitor” – a real honour that fills me with pride. OK, so that was also a slight exaggeration. My actual reaction was to laugh out loud at the blatant stupidity of awarding such a rank. Who on earth is going to be encouraged by such a title?

A few days later, after I had posted some fairly scathing remarks in the forum about Skype’s total lack of understanding of how to write a user interface, I was delighted (ahem) to receive a second message, telling me that I had now been awarded the rank of “Occasional Advisor.” Not quite as underwhelming as the previous one, but not far off!

The joke of it all is, that I was awarded these dubious honours due to my posts in the forum. I have absolutely no doubts that had any Skype employee actually read what I had written about their product, they would never have awarded me anything! On second thoughts, maybe they did read them, which is why I was given such pathetic titles!

Recognition #2

By contrast, I received an e-mail from Microsoft yesterday, telling me that my contributions to Microsoft online technical communities have been recognized with the “Microsoft Community Contributor Award.” Apparently, this was due to the number of posts I have made in various Microsoft forums, and promised me “important benefits.” Now call me greedy, but that was enough bait to interest me, so I clicked the link to see what it was all about. I landed on the Microsoft Community Contributor web site, which invited me to register.

As Microsoft already know pretty much everything about me anyway (they are about as snoopy as Google, and track you everywhere online), I figured I didn’t have a lot to lose. It turned out to be a clever way of getting me all excited, by generating a certificate of achievement that looked like something someone had knocked up in PowerPoint in their dinner time, and some badges to use on my web site, just so I can show everyone how amazing I am! Well, I didn’t print and frame the certificate (sorry Microsoft, it was just a little bit too cheesy), but I succumbed to the temptation to add one of the badges to my blog. You should be able to see it on the left, just below the picture of my knobbly knees!

Cynicism aside, the one genuine benefit that this award gave me was a year’s free subscription to an online library that (apparently) contains hundreds of Microsoft Press books. I haven’t got the details yet, as it takes their computers a few days to process this part (duh), but this alone was worth the award.

So there you go, my social status has been raised, or not, depending on how you look at it! Better go and do some work now! Need to keep up the image you know. We Community Contributors can’t just hang around all day, writing pointless blog posts. We have, erm, well something important to do. When I’ve worked out what it is, I’ll get on with it!

Wednesday, 31 August 2011 14:44:00 (GMT Daylight Time, UTC+01:00)
# Tuesday, 02 August 2011

Sadly, whilst building a solution yesterday, my machine started behaving in a very weird manner, with applications not responding, the taskbar disappearing and so on, followed by the dreaded blue screen of death. When I checked the event log after pulling the plug out (I hate doing that!) and rebooting, I found lots of errors, which led me to a Microsoft Connect article where someone was reporting a very similar problem.

To my amazement, the very last comment by a Microsoft employee in response to this bug report was “This is known issue, this bug was resolved by mistake, we are already addressing this issue.”

Surely they didn’t mean that did they? Someone tell me I read that wrong!

Tuesday, 02 August 2011 13:47:00 (GMT Daylight Time, UTC+01:00)
# Thursday, 28 July 2011

If you’ve read my recent diatribes, you will be relieved to know that this will be a very short post! It wasn’t going to be, but thankfully all the problems I was going to describe have been solved very simply.

Whilst tinkering around with my first Lightswitch application, I wanted to move some code into a separate class library, so it could be reused around the application. Naively, I added a C# class library to the solution, moved the code over and then couldn’t add a reference to it from my Lightswitch project.

Whilst wondering what was going on, it dawned on me that Lightswitch is really just Silverlight underneath, so needs a Silverlight class library, not a normal .NET class library. I deleted the one I had just created, and added a new Silverlight C# class library. This time, I was able to add a reference and use the code from my Lightswitch application. Phew, one problem solved.

I then decided to write some unit tests for the class. That’s where I ran into the next problem. Normally, I just right-click a method, and choose “Create unit tests” from the context menu. Trouble was, there wasn’t a “Create unit tests” option there.

I spent rather longer than I should trying to work out how to do this, and failed. I even tried adding my own project and making it into a test project, but that failed as I couldn’t add references to the appropriate test libraries. This is one of those occasions when you really wonder why Microsoft split Silverlight off from the rest of the .NET framework.

Anyway, the good news is that I just discovered that if you install the Silverlight Toolkit April 2010, you get new Visual Studio templates for unit testing Silverlight applications. They don’t work in quite the same way as normal unit tests, in that the tests themselves run in a Silverlight web application, but the basic principles are the same. You can even use the same code, and the same test attributes as you do in your normal tests.
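The test code itself looks just like regular MSTest code. A sketch, assuming a hypothetical StringHelpers class in the shared Silverlight class library:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringHelpersTests {
  [TestMethod]
  public void Reverse_ReversesTheCharacters() {
    // StringHelpers is an illustrative class from the class library.
    Assert.AreEqual("cba", StringHelpers.Reverse("abc"));
  }
}
```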

Apparently, you can even test the UI with this framework, but I haven’t tried that. Needless to say, the fact that I could test my class library was enough to make me happy, and keep this blog post a lot shorter than it would have been - although it’s still a lot longer than it should have been, given the actual amount of useful information it contained! I must learn to be more concise.

Thursday, 28 July 2011 20:55:00 (GMT Daylight Time, UTC+01:00)

Laugh and cry with me as I describe my attempts to deploy my first Lightswitch application.

Read how it all went wrong, then started to go right, then went wrong again, then went wrong some more, then... well, just read the full blog post and you'll get the picture!

Thursday, 28 July 2011 18:29:00 (GMT Daylight Time, UTC+01:00)