Pixata Custom Controls
For Lightswitch


My rambling thoughts on exploring the .NET framework and related technologies
Home of the surprisingly popular Pixata Custom Controls For Lightswitch (well, it was a surprise to me!)

I previously blogged about a seemingly innocent LINQ problem that had us baffled for ages: how to sort or filter an entity’s child collection in Linq. For example, if you want to pull a collection of customers, and include all of their orders from this year, but none from earlier, there doesn’t seem to be a simple way to do this in Linq. You need to do the query in two stages, the first of which builds an anonymous type, and the second of which links the two parts together. See that post for more details.

I also blogged about the problem of Linq not including child entities when doing joins, which requires you to cast the query to an ObjectQuery<> so you can use the Include() method on it.

The problem comes when you want to combine the two methods, meaning that your query needs to be constructed in two stages to ensure that the sorting or filtering of the child collection is done correctly, but you also need to cast the final result to an ObjectQuery<> before you send it out over WCF. The difficulty is that you need to enumerate the query before doing the Include(), as that is the only way to ensure that the sorting is done, but calling AsEnumerable() gives you an IEnumerable<> (reasonably enough), which can't be cast to an ObjectQuery! So what's a fellow to do? Good question, and one that had me going for ages.

The only way I have found to do this is to enumerate the collection manually. I added a foreach loop that went through the parent collection and enumerated the children as it went along. I used Debug.WriteLine() to dump the results to the Output window, which is one of my favourite debugging techniques. Anyone looking at the code would logically assume that this loop could be removed for the production code (which I think is what happened when I first wrote it), but this would cause the sorting to fail.

I ended up with code like that shown below (read to the end of this blog post to see an improved version of this code). You don’t need to know what the entities represent, just that I wanted a collection of DhrTemplates, each of which has a number of DhrTemplatesPart entities, and these had to be sorted on the DisplayPosition property.

   1:  var dhrTemplatesVar = from dt in getContext().DhrTemplates
   2:                        where dt.CurrentTemplate
   3:                        select new {
   4:                          currentTemplate = dt,
   5:                          templateParts = dt.DhrTemplatesParts.OrderBy(dtp => dtp.DisplayPosition)
   6:                        };
   7:  Debug.WriteLine("Enumerating the collections...");
   8:  foreach (var dtmpl in dhrTemplatesVar) {
   9:    DhrTemplate dt = dtmpl.currentTemplate;
  10:    Debug.WriteLine("PartDefinitionID: " + dt.PartDefinitionID);
  11:    IOrderedEnumerable<DhrTemplatesPart> parts = dtmpl.templateParts;
  12:    foreach (DhrTemplatesPart dtp in parts) {
  13:      Debug.WriteLine("  " + dtp.Description);
  14:    }
  15:  }
  16:  Debug.WriteLine(" ");
  17:  ObjectQuery<DhrTemplate> dhrTemplatesQry = (from dt in dhrTemplatesVar
  18:                                              select dt.currentTemplate) as ObjectQuery<DhrTemplate>;
  19:  if (dhrTemplatesQry != null) {
  20:    ObjectQuery<DhrTemplate> dhrTemplates = dhrTemplatesQry
  21:      .Include("User")
  22:      .Include("PartDefinition")
  23:      .Include("DhrTemplatesParts.PartDefinition.PartInformationTypes");
  24:    List<DhrTemplate> dhrTemplatesCurrent = dhrTemplates.ToList();
  25:    return dhrTemplatesCurrent;
  26:  }
  27:  return null;

The OrderBy clause on line 5 orders the parts correctly, but isn’t in effect until the query is enumerated, which is what happens between lines 8 and 15. I don’t actually need the Debug.WriteLine statements any more, but I left them in case I need to come back to this again. By the time you get to line 17, you still have the anonymous type for the query, but now it has been enumerated, so the sorting is done. Now we can cast it to an ObjectQuery<> and use Include() to include the child entities.

Quite an insidious problem, but obvious when you see the solution. I still have this feeling that there should be a better way to do it though.

Update some time later...

Well of course, there was a much better way to do it! Sadly, I was so stuck in the problem that I missed the blindingly obvious answer. As mentioned, calling AsEnumerable() enumerates the query, but gives you an IEnumerable<>, which can't be cast to an ObjectQuery. However, you don’t have to use the value returned from AsEnumerable(); you can just call it and carry on using your original query, which has now been enumerated.

This makes the resulting code much simpler...

   1:  var dhrTemplatesVar = from dt in getContext().DhrTemplates
   2:                        where dt.CurrentTemplate
   3:                        select new {
   4:                          currentTemplate = dt,
   5:                          templateParts = dt.DhrTemplatesParts.OrderBy(dtp => dtp.DisplayPosition)
   6:                        };
   7:  dhrTemplatesVar.AsEnumerable();
   8:  ObjectQuery<DhrTemplate> dhrTemplatesQry = (from dt in dhrTemplatesVar
   9:                                              select dt.currentTemplate) as ObjectQuery<DhrTemplate>;
  10:  // rest of the code omitted as it's identical

Notice that lines 7 to 16 in the previous listing have been replaced with the single line 7 in this listing. I’m calling AsEnumerable(), but ignoring the return value. This enumerates the query, but doesn’t leave me with an ObjectQuery.

Pretty obvious really!

Geographical searches on postcode

I am currently working on an application where I want to have a search feature that allows people to search for businesses within a certain distance of their home (or anywhere else they care to choose).

I have some old UK postcode data knocking around, and was going to use that. For those not familiar with them, UK postcodes are made up of two parts, a major (also known as “outward”) part, and a minor (or “inward”) part. The major part is one or two letters followed by one or two digits, and the minor part is a digit, followed by two letters. Examples of valid postcode formats are M25 0LE, NW11 3ER and L2 3WE (no idea if these are genuine postcodes though).
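
If you need to pull a user-entered postcode apart before looking it up, a regular expression based on the format described above does the job. This is just a quick sketch of the idea (the PostcodeHelper class and TrySplit method are names I’ve made up for illustration, and real postcodes have a couple of extra variations, such as a trailing letter in the major part), but it shows the sort of thing I mean:

using System.Text.RegularExpressions;

public static class PostcodeHelper {
  // Matches the simplified format described above: one or two letters and one or two digits
  // for the major (outward) part, then a digit and two letters for the minor (inward) part,
  // e.g. "M25 0LE" or "NW11 3ER".
  private static readonly Regex PostcodePattern =
    new Regex(@"^(?<major>[A-Z]{1,2}\d{1,2})\s*(?<minor>\d[A-Z]{2})$", RegexOptions.IgnoreCase);

  public static bool TrySplit(string postcode, out string major, out string minor) {
    major = minor = null;
    if (string.IsNullOrWhiteSpace(postcode))
      return false;
    Match m = PostcodePattern.Match(postcode.Trim());
    if (!m.Success)
      return false;
    major = m.Groups["major"].Value.ToUpperInvariant();
    minor = m.Groups["minor"].Value.ToUpperInvariant();
    return true;
  }
}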

Coupled with the postcodes are northings and eastings. These odd-sounding beasties are simply the number of metres north and east from a designated origin, which is somewhere west of Land's End. See the Ordnance Survey web site for more details. If you have any two postcodes, you can calculate the distance between them (as the crow flies) by a simple application of Pythagoras’ Theorem. My intention was to use all of this in my search code.
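
To make that concrete, here is a minimal sketch of the Pythagoras calculation, assuming you already have the eastings and northings (in metres) for the two postcodes to hand. The class and method names are just made up for illustration:

public static class PostcodeMaths {
  // Straight-line ("as the crow flies") distance between two points given as Ordnance Survey
  // eastings and northings, which are simply metres east and north of the origin.
  public static double DistanceInMetres(double easting1, double northing1,
                                        double easting2, double northing2) {
    double dx = easting2 - easting1;   // difference in metres east
    double dy = northing2 - northing1; // difference in metres north
    return System.Math.Sqrt(dx * dx + dy * dy);
  }
}

// Example usage: divide by 1609.344 to get miles
// double miles = PostcodeMaths.DistanceInMetres(e1, n1, e2, n2) / 1609.344;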

Whilst doing some ferreting around the web, looking for more up-to-date data, I found out that you can now get the UK postcode data absolutely free! When I last looked, which was some years ago admittedly, they charged an arm and a leg for this. Now all you need to do is order it from their downloads page, and you get sent a link to download the data. They have all sorts of interesting data sets there (including all sorts of maps, street views and so on), but the one I wanted was the Code-Point Open set. This was far more up-to-date than the data I had, and was a welcome discovery. Now all I had to do was import it, and write some calculation code.

Before I got any further though, I further discovered that SQL Server 2008 has a new data type known as geography data, which is used to represent real-world (ie represented on a round Earth) co-ordinates. It also has the geometry data type, which is a similar idea, but uses a Euclidean (ie flat) co-ordinate system. Given that I am only dealing with the UK, the curvature of the Earth isn’t significant in distance calculations, and so either data type would do.
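
As a quick illustration of what the geography type gives you, the Microsoft.SqlServer.Types assembly that ships with SQL Server exposes the same type to .NET code, so you can experiment with it outside the database. The coordinates below are rough values I’ve plucked for Manchester and London, so treat this purely as a sketch of the STDistance() method that does the real work:

using Microsoft.SqlServer.Types; // installed with SQL Server / the SQL Server feature pack

class GeographyDemo {
  static void Main() {
    // SqlGeography.Point takes latitude, longitude and an SRID; 4326 is the standard
    // WGS84 system used for geography data. The coordinates here are approximate.
    SqlGeography manchester = SqlGeography.Point(53.483, -2.244, 4326);
    SqlGeography london     = SqlGeography.Point(51.507, -0.128, 4326);

    // STDistance returns the distance in metres along the Earth's surface.
    double metres = manchester.STDistance(london).Value;
    System.Console.WriteLine("Distance: {0:N0} metres ({1:N0} miles)", metres, metres / 1609.344);
  }
}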

However, converting northings and eastings to geography data isn’t as simple as you might hope. Fortunately, I found a post on Adrian Hill’s blog, where he described how to convert OS data into geography data, provided a C# console program that does just that, and then showed how easy it is to query the data for distances between points. In other words, exactly what I wanted!

I won’t describe the details here, because you can follow that link and read it for yourself, but basically all you need to do is get the Code-Point Open data, download the converter and away you go. Truth be told, it wasn’t quite that simple, as the format of the data has changed slightly since he wrote the code, so I needed to make a small change. I left a comment on the post, and Adrian updated the code, so you shouldn’t need to worry about that. It doesn’t use hard-coded column numbers any more, so it should be future-proof, in case the OS people change the format again.

Testing the distance calculation, and an important point to improve performance

Once I had the data in SQL Server, I wanted to see how to use it. In the blog post, Adrian showed a simple piece of SQL that did a sample query. You can see the full code on his blog, but the main part of it was a call to the STDistance() method on the GeoLocation column, which does the distance calculation. He commented that when he tested it, he searched for postcodes within five miles of his home, and got 8354 rows returned in around 500ms. I was somewhat surprised to discover that when I tried a similar calculation, it took around 14 seconds! Not exactly what you’d call snappy, and certainly too slow for a (hopefully) busy application.

I was a bit disappointed with this, but decided that for my purposes, it would be accurate enough to do the calculation based on the major part of the postcode only. One of the things Adrian’s code did when importing the data into SQL Server was create rows where the minor part was blank, and the geography value was an average for the whole postcode major area. I adjusted my SQL to use these rows, and got results quickly enough that the status bar in SQL Server 2008 Management Studio reported the elapsed time as zero. OK, so the results wouldn’t be quite as accurate, but at least they would be fast.

Whilst I was writing this blog post, I looked back at Adrian’s description again, and noticed one small bullet point that I had missed. When describing what his code did, he mentioned that it created a spatial index on the geography column. Huh? What on earth (pardon the pun) is a spatial index? No, I had no idea either, so off I went to MSDN, and found an article describing how to create a spatial index. I created one, which took a few minutes (presumably because there were over 1.7 million postcodes in the table), and tried again. This time, I got the performance that Adrian reported. I don’t know if his code really should have created the spatial index, but it didn’t. However, once it was created, the execution speed was fast enough for me to use a full postcode search in my application.
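
For reference, creating the index itself boils down to a single CREATE SPATIAL INDEX statement. The index name below is one I’ve invented, the table and column names are the ones used later in this post, and note that the table needs a clustered primary key before the index can be created. I’ve wrapped the SQL in a quick bit of ADO.NET so it can be run from code, although running it in Management Studio is just as easy:

using System.Data.SqlClient;

public static class PostcodeIndexing {
  public static void CreateSpatialIndex(string connectionString) {
    // Builds a spatial index on the geography column; the table must already have
    // a clustered primary key. The index name here is made up.
    const string sql =
      @"CREATE SPATIAL INDEX IX_PostCodeData_GeoLocation
        ON dbo.PostCodeData(GeoLocation);";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn)) {
      cmd.CommandTimeout = 0; // indexing a table this size can take a few minutes
      conn.Open();
      cmd.ExecuteNonQuery();
    }
  }
}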

Bringing the code into the Entity Framework model

So, all armed with working code, my next job was to code the search in my application. This seemed simple enough, just update the Entity Framework model with the new format of the postcodes table, write some Linq to do the query and we’re done… or not! Sadly, Entity Framework doesn’t support the geography data type, so it looked like I couldn’t use my new data! This was a let-down to say the least. Still, not to be put off, I went off and did some more research, and realised that it was time to learn how to use stored procedures with Entity Framework. I’d never done this before, simply because when I discovered Entity Framework, I was so excited by it that I gave up writing SQL altogether. All my data access code went into the repository classes, and was written in Linq.

Creating a stored procedure was pretty easy, and was based on Adrian’s sample SQL:

create procedure GetPostcodesWithinDistance
@OutwardCode varchar(4),
@InwardCode varchar(3),
@Miles int
as
begin
  declare @home geography
  select @home = GeoLocation from PostCodeData
      where OutwardCode = @OutwardCode and InwardCode = @InwardCode
  select OutwardCode, InwardCode from dbo.PostCodeData
  where GeoLocation.STDistance(@home) <= (@Miles * 1609)
    and InwardCode <> ''
end

Using stored procedures in Entity Framework

Having coded the stored procedure, I now had to bring it into the Entity Framework model, and work out how to use it. The first part seemed straightforward enough (doesn’t it always?). You just refresh your model, open the “Stored Procedures” node in the tree on the Add tab, select the stored procedure, click OK and you’re done. Or not. The slight problem was that my fantastic new stored procedure wasn’t actually listed in the tree on the Add tab. It was a fairly simple case of non-presence (Vic wasn’t there either).

After some frustration, someone pointed out to me that it was probably due to the database user not having execute permission on the stored procedure. Always gets me that one! Once I had granted execute permission, the stored procedure appeared (although Vic still wasn’t there), and I was able to bring it into the model. Then I right-clicked the design surface, chose Add –> Function Import and added the stored procedure as a function in the model context. Finally I had access to the code in my model, and could begin to write the search.
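
In case anyone else hits the same thing, the fix is a one-line GRANT against whichever database user the application (and the Entity Framework wizard’s connection) logs in as. The user name below is a placeholder; you can run the statement straight from Management Studio, or from code along these lines:

using System.Data.SqlClient;

public static class StoredProcPermissions {
  public static void GrantExecuteOnSearchProc(string connectionString) {
    // "MyAppUser" is a made-up name; grant to whichever user the application connects as.
    const string sql = "GRANT EXECUTE ON OBJECT::dbo.GetPostcodesWithinDistance TO MyAppUser;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn)) {
      conn.Open();
      cmd.ExecuteNonQuery();
    }
  }
}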

Just to ensure that I didn’t get too confident, Microsoft threw another curve-ball at me at this point. My first attempt at the query looked like this:

IEnumerable<Postcode> majors = from p in jbd.GetPostcodeWithinDistance("M7", 45)
                               select p;

IQueryable<Business> localBusinesses = from b in ObjSet
                                       where (from p in majors
                                              where p.Major == b.PostcodeMajor
                                              select p).Count() > 0
                                       select b;

The first query grabs all the postcodes within 45 miles of the average location of the major postcode M7. Bear in mind that this query was written before I discovered the spatial index, so it only uses the major part of the postcode; a later revision uses the full postcode. The variable jbd is the context, and ObjSet is an ObjectSet<Business>, which is created by my base generic repository.

When this query was executed, I got a delightfully descriptive exception, “Unable to create a constant value of type 'JBD.Entities.Postcode'. Only primitive types ('such as Int32, String, and Guid') are supported in this context.” Somewhat baffled, I turned to the Ultimate Source Of All Baffling Problems known as Google, and discovered that this wasn’t actually the best way to code the query, even if it had worked. I had forgotten about the Contains() extension method, which was written specifically for cases like this.

The revised code looked like this (first query omitted as it didn’t change):

List<string> majorCodes = majors.Select(p => p.Major).ToList();

IQueryable<Business> businesses = from b in ObjSet
                                  where majorCodes.Contains(b.PostcodeMajor)
                                  select b;

Somewhat neater, clearer and (more to the point) working!

So, with working code, I now had only one more hurdle to jump before I could claim this one had been cracked.

Copying geography or geometry data to another database

All of the above took place on my development machine. Now that I had working code, I wanted to put the data on the production server, so that the application could use it. This turned out to be harder than I thought. I tried the SQL Server Import/Export wizard, which usually works cleanly enough, but got an error telling me that it couldn’t convert geography data to geography data. Huh? Searching around for advice, I found someone who suggested trying to do the copy as NTEXT, IMAGE and various other data types, but none of these worked either.

After some more searching, I discovered an article explaining that SQL Server has an XML file containing definitions of how to convert between various data types. Unfortunately, Microsoft seem to have forgotten to include the geography and geometry data types in there! I found some code to copy and paste and, lo and behold, it gave the same error message! Ho hum.

After some more messing around and frustrated attempts, I mentioned the problem to a friend of mine, who came up with the rather simple suggestion of backing up the development database and restoring it on the SQL Server instance on the production machine. I had thought of this, but didn’t want to do it, as the production database has live data in it, which would be lost. He pointed out that if I restored it under a different name, I could then copy the data from the newly-restored database to the production one (which would be very easy now that they were both on the same instance of SQL Server), and then delete the copy of the development database. Thank you Yaakov!
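
Once both databases were sitting on the same instance, the copy itself was just a cross-database INSERT ... SELECT; since both columns are already geography, there is no conversion for anything to trip over. The database name for the restored copy below is made up, and I’m assuming the production PostCodeData table already exists with the same schema; something along these lines:

using System.Data.SqlClient;

public static class PostcodeDataCopy {
  public static void CopyPostcodeData(string productionConnectionString) {
    // "DevPostcodesCopy" is a placeholder for the development database after it has been
    // restored onto the production instance under a different name.
    const string sql =
      @"INSERT INTO dbo.PostCodeData (OutwardCode, InwardCode, GeoLocation)
        SELECT OutwardCode, InwardCode, GeoLocation
        FROM DevPostcodesCopy.dbo.PostCodeData;";

    using (var conn = new SqlConnection(productionConnectionString))
    using (var cmd = new SqlCommand(sql, conn)) {
      cmd.CommandTimeout = 0; // copying a large table can take a while
      conn.Open();
      cmd.ExecuteNonQuery();
    }
  }
}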

Thankfully, this all worked fine, and I finally have a working geographic search. As usual, it was a rough and frustrating ride, but I learned quite a few new things along the way.

# My computer just laughed at me (Wednesday, September 07, 2011)
 

Of course, we professional programmers never make mistakes, ahem. That’s why we never need to use debuggers, ahem.

Well, suspend disbelief for a moment, and assume that I had a bug in the code I was developing. You know the feeling: you stare at it, you write unit tests, you stare at it some more, and still can’t work out why on earth Visual Studio is claiming that there is an error in your code, when it’s so obvious that there isn’t. You even get to the point of talking to your computer, pointing out the error of its ways.

Eventually, you spot the mistake. Once you’ve seen it, it was so blindingly obvious that you can only offer a silent prayer of thanks that no-one else was in the room at the time. You change that one tiny typo, and suddenly Visual Studio stops complaining about your code and it all runs correctly.

Just as you sit back relieved, you notice your computer smirking. I’m certain mine just laughed at me. It did it quietly, but I noticed. It hates me.

Categories: Computers

I was trying to have a custom screen that you can use for adding and editing a product, and set it as the default screen for the Product entity. This means that however the user ends up at a product screen, the custom screen will be shown, not the one Lightswitch generates for you.

What I also wanted to do was make it so that if the user clicked the "Add product" button on the category screen, the new product's category would automatically be set to the one they were viewing when they clicked the button.

This is easy if you are only allowing your users to add and edit entities by clicking command bar buttons, but if you use links, which are one of Lightswitch's better features if you ask me, then you hit a major problem.

When you click a link, Lightswitch uses the default screen for the entity, so setting the custom product screen to be the default screen for the Product entity should do the trick. Except you can’t! I discovered that Lightswitch only allows you to use screens with exactly one parameter as the default entity screen.

This problem generated some discussion in the Lightswitch forum, and some partial answers. Late last night, I hit upon a simple and elegant answer. This blog post explains it in excruciating detail.

Categories: LightSwitch

Recognition #1

A few weeks ago, I made the grave mistake of accepting Skype’s offer to upgrade me to the latest version. I won’t go through the whole sorry saga, but the end result was that after a few posts in the Skype community forum, complaining about some of the obnoxious new “features,” I managed to find Old Apps, which offers downloads of previous versions of various bits of software. Interestingly enough, Skype was the #1 most downloaded old version, which means that I wasn’t the only one who didn’t like the new version!

Anyway, as a result of my posts in the Skype forum, I received an e-mail, informing me that I had been awarded a new rank in the community (stupidly enough, the subject line of the e-mail didn’t mention Skype at all, and it nearly got deleted as spam!). Almost too excited to click the links (OK, ever so slight exaggeration there), I went to my private messages on the Skype site, and was greeted with the overwhelming news that I had been awarded the rank of “Occasional Visitor” – a real honour that fills me with pride. OK, so that was also a slight exaggeration. My actual reaction was to laugh out loud at the blatant stupidity of awarding such a rank. Who on earth is going to be encouraged by such a title?

A few days later, after I had posted some fairly scathing remarks in the forum about Skype’s total lack of understanding of how to write a user interface, I was delighted (ahem) to receive a second message, telling me that I had now been awarded the rank of “Occasional Advisor.” Not quite as underwhelming as the previous one, but not far off!

The joke of it all is that I was awarded these dubious honours due to my posts in the forum. I have absolutely no doubt that had any Skype employee actually read what I had written about their product, they would never have awarded me anything! On second thoughts, maybe they did read them, which is why I was given such pathetic titles!

Recognition #2

By contrast, I received an e-mail from Microsoft yesterday, telling me that my contributions to Microsoft online technical communities have been recognized with the “Microsoft Community Contributor Award.” Apparently, this was due to the number of posts I have made in various Microsoft forums, and the e-mail promised me “important benefits.” Now call me greedy, but that was enough bait to interest me, so I clicked the link to see what it was all about. I landed on the Microsoft Community Contributor web site, which invited me to register.

As Microsoft already know pretty much everything about me anyway (they are about as snoopy as Google, and track you everywhere online), I figured I didn’t have a lot to lose. It turned out to be a clever way of getting me all excited, by generating a certificate of achievement that looked like something someone had knocked up in PowerPoint in their dinner time, and some badges to use on my web site, just so I can show everyone how amazing I am! Well, I didn’t print and frame the certificate (sorry Microsoft, it was just a little bit too cheesy), but I succumbed to the temptation to add one of the badges to my blog. You should be able to see it on the left, just below the picture of my knobbly knees!

Cynicism aside, the one genuine benefit that this award gave me was a year’s free subscription to an online library that (apparently) contains hundreds of Microsoft Press books. I haven’t got the details yet, as it takes their computers a few days to process this part (duh), but this alone was worth the award.

So there you go, my social status has been raised, or not, depending on how you look at it! Better go and do some work now! Need to keep up the image, you know. We Community Contributors can’t just hang around all day, writing pointless blog posts. We have, erm, well, something important to do. When I’ve worked out what it is, I’ll get on with it!

Categories: Microsoft
# They didn’t really mean that did they? (Tuesday, August 02, 2011)
 

Sadly, whilst building a solution yesterday, my machine started behaving in a very weird manner, with applications not responding, the taskbar disappearing and so on, followed by the dreaded blue screen of death. When I checked the event log after pulling the plug out (I hate doing that!) and rebooting, I found lots of errors, which led me to a Microsoft Connect article where someone was reporting a very similar problem.

To my amazement, the very last comment by a Microsoft employee in response to this bug report was “This is known issue, this bug was resolved by mistake, we are already addressing this issue.”

Surely they didn’t mean that, did they? Someone tell me I read that wrong!

Categories: Visual Studio
# Unit testing a Lightswitch application (Thursday, July 28, 2011)
 

If you’ve read my recent diatribes, you will be relieved to know that this will be a very short post! It wasn’t going to be, but thankfully all the problems I was going to describe have been solved very simply.

Whilst tinkering around with my first Lightswitch application, I wanted to move some code into a separate class library, so it could be reused around the application. Naively, I added a C# class library to the solution, moved the code over, and then couldn’t add a reference to it from my Lightswitch project.

Whilst wondering what was going on, it dawned on me that Lightswitch is really just Silverlight underneath, so needs a Silverlight class library, not a normal .NET class library. I deleted the one I had just created, and added a new Silverlight C# class library. This time, I was able to add a reference and use the code from my Lightswitch application. Phew, one problem solved.

I then decided to write some unit tests for the class. That’s where I ran into the next problem. Normally, I just right-click a method, and choose “Create unit tests” from the context menu. Trouble was, there wasn’t a “Create unit tests” option there.

I spent rather longer than I should have trying to work out how to do this, and failed. I even tried adding my own project and making it into a test project, but that failed, as I couldn’t add references to the appropriate test libraries. This is one of those occasions when you really wonder why Microsoft split Silverlight off from the rest of the .NET framework.

Anyway, the good news is that I just discovered that if you install the Silverlight Toolkit April 2010, you get new Visual Studio templates for unit testing Silverlight applications. They don’t work in quite the same way as normal unit tests, in that the tests themselves run in a Silverlight web application, but the basic principles are the same. You can even use the same code, and the same test attributes as you do in your normal tests.
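
To give you a flavour, here is a minimal sketch of what such a test looks like. The OrderTotals class is just a stand-in I’ve invented for whatever lives in your real Silverlight class library; the point is that the attributes and assertions are exactly the ones you already know from MSTest:

using Microsoft.VisualStudio.TestTools.UnitTesting; // the Silverlight flavour of the MSTest attributes

// A made-up stand-in for the code in the real Silverlight class library.
public static class OrderTotals {
  public static decimal AddVat(decimal net, decimal rate) {
    return net + (net * rate);
  }
}

[TestClass]
public class OrderTotalsTests {
  [TestMethod]
  public void AddVat_TwentyPercent_AddsOneFifth() {
    Assert.AreEqual(120m, OrderTotals.AddVat(100m, 0.2m));
  }
}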

Apparently, you can even test the UI with this framework, but I haven’t tried that. Needless to say, the fact that I could test my class library was enough to make me happy, and keep this blog post a lot shorter than it would have been - although it’s still a lot longer than it should have been, given the actual amount of useful information it contained! I must learn to be more concise.

 

Laugh and cry with me as I describe my attempts to deploy my first Lightswitch application.

Read how it all went wrong, then started to go right, then went wrong again, then went wrong some more, then... well, just read the full blog post and you'll get the picture!

Categories: LightSwitch
# Lightswitch launches today! (Tuesday, July 26, 2011)
 

Having mentioned a few weeks ago that I was quite impressed with my first impressions of Microsoft Visual Studio Lightswitch, I have since had a chance to play with it some more, and I am still very impressed – perhaps even more than I was. My initial fears about it being another ASP.NET Dynamic Data fiasco seem to have been unfounded.

As promised, it provides a very fast way to produce high-quality data-driven Silverlight applications (web or out-of-browser), but seems flexible enough not to constrain you to the way the Lightswitch developers want you to work. Admittedly, I am still a rank newbie with it, but I have already produced most of the administrative back-end for a web site I’m developing, and have done so in a mere few hours, as opposed to the few weeks it would have taken otherwise.

What’s even better (or maybe that’s only by comparison) is that it doesn’t have some of the stupid design “features” that plagued Dynamic Data. For example, one of my peeves with DD was that it did not attempt to stop you trying to delete a customer who had orders in the database. However, when you tried it, the delete failed due to the obvious referential constraints, ie you can’t delete the customer if there are still orders that reference that customer. Despite having jumped through quite a lot of hoops, I never found a satisfactory way to disable the delete link. I found one way, but it was so convoluted as to be pretty useless. As if that wasn’t bad enough, DD didn’t give you any graceful way to handle the inevitable exception, so the end-user was presented with a rather ugly (and very unprofessional) unhandled exception. You couldn't even tell it to do a cascading delete for those occasions when you really did want to delete the customer and everything associated with it. It was a great idea, but really badly implemented.

Not so Lightswitch! By default, the delete button is always enabled, so you can try to delete something that you shouldn’t, but (and this is a huge “but”) instead of an exception, you get a very neat validation error shown on the screen, telling you that the record couldn’t be deleted. The next bit of friendliness is that it’s really easy to disable the delete button when you want to. You simply override the code and handle the CanExecute method, which sets a Boolean indicating whether or not the current record can be deleted. As you have full access to the current entities on the screen, this is no more complex than something like this…

partial void CustomerListDeleteSelected_CanExecute(ref bool result) {
  // Only allow the delete if a customer is selected, and that customer has no orders
  Customer c = Customers.SelectedItem;
  result = (c != null && (c.Orders == null || c.Orders.Count() == 0));
}

Pretty simple stuff, eh? You are dealing with strongly-typed entities, and simply check the properties that interest you, setting the result as appropriate. Because Lightswitch applications are built on MVVM (basically a WPF or Silverlight-specific implementation of MVP/MVC), the framework handles the enabling and disabling of the button for you. All you need to do is set the Boolean result in CanExecute. This makes it very easy to prevent the user from trying to delete something that they shouldn't be able to.

The other nice touch in this area is that you can specify that cascading deletes are allowed for an entity, so if you do allow them to delete a customer, all the customer’s orders can be deleted at the same time. Whilst this isn’t the sort of thing you’d want to do too often, it’s nice to know that the functionality is there if you want it.

Just last night I was reading about using custom controls in Lightswitch. This allows you to extend the functionality in pretty much any way you want, simply by using normal Silverlight controls. I haven’t tried this yet, but it looks fairly simple, and means you aren’t restricted to the controls included in the Lightswitch framework. I even saw an example of a game someone had written in Lightswitch which, whilst not being the most logical use of the framework, served well to show how easy it is to extend.

Anyway, I intend to post more technical details when I’ve played some more. In the meantime, Lightswitch launches today, although as I’m writing this, it hasn’t actually launched yet. This is probably because the day has only just started in Redmond, and the Microsoft people are all still in bed!

# Easing web development with WebFormsMvp (Monday, July 11, 2011)
 

Always on the lookout for ways to improve my coding efficiency, I was very interested to hear about WebFormsMvp, which is a framework for writing ASP.NET web sites, based on the MVP design pattern. This post describes my initial frustrations, and eventual delight in using the framework.

In a future post, I intend to describe how to get going with WebFormsMvp, as well as how I extended it to reduce the boilerplate code even more.

Categories: ASP.NET | Design patterns | MVP | Unit testing