Pixata Custom Controls For Lightswitch


My rambling thoughts on exploring the .NET framework and related technologies

# Tuesday, 16 February 2016

I came across a situation recently in which we needed an object in the client. Due to the cost of creating this object, we didn't want to create it when loading up the window, as it might not be needed (if the user didn't click the appropriate button); we didn't want to store it on the window's main object graph, but equally, we wanted to keep hold of it once it had been created.

This situation is not uncommon; you often have an object that is expensive (in database or bandwidth terms) to create.

The obvious solution to this was to have a private variable for the instance of Expensive, and a second private bool variable to indicate if the Expensive object had been created. We could check the bool before accessing the Expensive variable. Even better, we could create a property that wrapped this up. Imagine our class is wittily named Expensive...

public class Expensive {
  // For our purposes, this will just be a simple class that writes a message to the output
  // window when it is created (so we can see when that happens)
  public Expensive() {
    Debug.WriteLine("Created new instance of Expensive, hash code is " + GetHashCode());
  }
  // Add a property, so we can reference something
  public int AnInteger { get; set; }
}
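
Putting that together, the manual wrapper would look something like this (a minimal sketch; the property name TheExpensive is mine, purely for illustration)...

private Expensive _expensive;
private bool _expensiveCreated;

public Expensive TheExpensive {
  get {
    // Only create the instance the first time it's asked for
    if (!_expensiveCreated) {
      _expensive = new Expensive();
      _expensiveCreated = true;
    }
    return _expensive;
  }
}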

Note that in many cases you can't simply rely on checking if _expensive is not null, as null may well be a valid value (or lack thereof) for the object. Therefore, you need two variables.

This works, but has a few drawbacks. Firstly, it's a lot of code for a simple requirement. Imagine you needed to do this for several types, you would end up with a lot of boilerplate code. Secondly, this method involves multiple variables for each expensive object you want. Finally, and most importantly, another developer coming along at some future date might not realise how it's supposed to work, and assume that _expensive will always be available, bypassing the bool and causing a null reference exception. Even worse, when they discover the null reference, they might add extra code to create the object, ignoring the code you previously added to do just the same thing. This is not good.

A more elegant solution

Drum roll please... enter Lazy<T>. This neat little class (introduced in .NET 4.0) allows you to wrap all this up in one variable, and not have to worry about the details.

To illustrate, let's define our Expensive class as follows...

public class Expensive {
  // For our purposes, this will just be a simple class that writes a message to the output
  // window when it is created (so we can see when that happens)
  public Expensive() {
    Debug.WriteLine("Created new instance of Expensive, hash code is " + GetHashCode());
  }
  // Add a property, so we can reference something
  public int AnInteger { get; set; }
}

Instead of having the two private variables mentioned before, you just have one of type Lazy<Expensive>. When declaring it, you add the code that creates the instance as a constructor parameter. As a simple example, you could have something like this...

Lazy<Expensive> exLazy = new Lazy<Expensive>(() => new Expensive());

That's about as simple as it gets, but there's no reason why it couldn't include more code that uses (say) a WCF service, database, EF model or whatever in there.
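
Just as an illustrative sketch (FetchExpensive here is a made-up helper, standing in for whatever slow call you actually need), the factory can have a full lambda body...

Lazy<Expensive> exLazy = new Lazy<Expensive>(() => {
  // FetchExpensive is hypothetical - imagine a WCF call, an EF query, etc
  Expensive e = FetchExpensive();
  e.AnInteger = 42; // any extra set-up can go in here as well
  return e;
});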

Now, the clever thing is that the class doesn't get created until you actually want it. For example, see the following (rather over-the-top) code...

Debug.WriteLine("About to declare, but not use our lazy value");
Lazy<Expensive> exLazy = new Lazy<Expensive>(() => new Expensive());
Debug.WriteLine("Created the lazy value, but not used yet");
for (int i = 0; i < 10; i++) {
  Debug.WriteLine("  i: " + i);
  if (i > 2) {
    Debug.WriteLine("  Setting exLazy.Value.AnInteger to " + i);
    exLazy.Value.AnInteger = i;
  }
}
Debug.WriteLine("Ended loop, exLazy.Value.AnInteger is " + exLazy.Value.AnInteger);

Debug.WriteLine("Ended loop, exLazy.Value.AnInteger is " + exLazy.Value.AnInteger);

This uses a liberal amount of writing to the Output window so we can see what's happening. When you run it, you see the following...

About to declare, but not use our lazy value
Created the lazy value, but not used yet
  i: 0
  i: 1
  i: 2
  i: 3
  Setting exLazy.Value.AnInteger to 3
Created new instance of Expensive, hash code is 1707556
  i: 4
  Setting exLazy.Value.AnInteger to 4
  i: 5
  Setting exLazy.Value.AnInteger to 5
  i: 6
  Setting exLazy.Value.AnInteger to 6
  i: 7
  Setting exLazy.Value.AnInteger to 7
  i: 8
  Setting exLazy.Value.AnInteger to 8
  i: 9
  Setting exLazy.Value.AnInteger to 9
Ended loop, exLazy.Value.AnInteger is 9

As you can see, the Expensive class isn't actually created until you want it, and from then on, it exists without needing to check it, or create it again. This means that your code can just refer to exLazy.Value all the way through, and rely on the fact that if it doesn't exist, it will be created for you.

The Lazy<T> class also has a bool property, IsValueCreated, which (oddly enough) allows you to check if the value has been created. I'm not sure quite how useful that is, but it's there if you need it.
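
For example, something like this would let you check (or log) without accidentally triggering creation, which reading Value would do...

if (!exLazy.IsValueCreated) {
  Debug.WriteLine("No Expensive instance has been created yet");
}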

Not the sort of thing you need to use every day, but when you do, it provides a more elegant and low-risk approach than the first method.

C#
Tuesday, 16 February 2016 18:00:00 (GMT Standard Time, UTC+00:00)

What should have been a relatively easy job turned into a very complex one. How I did battle with SQL Server and (mostly) won.

By the way, in case you're wondering about the title (and the subheadings below if you've looked further on), then you obviously haven't read The Hunting Of The Snark by Lewis Carroll. If that's the case, I strongly recommend you do, partly because it's a wonderful piece of sophisticated nonsense, and mainly because it's far more entertaining than this article! It was subtitled "An Agony In Eight Fits", with each section being described as a Fit. It seemed appropriate for what happened here...

For the Snark/Carroll aficionados, it's worth noting that most of the illustrations on this page are from Henry Holiday's illustrations for the classic edition of The Hunting Of The Snark. There were plenty of other illustrators, but his illustrations best fit Carroll's surreal poem.

Fit The First - Background

The project involved has a set of database tables that store information about support tickets raised by users. The desktop application that the support engineers use already had some searching facilities, but these were limited to things like open/close date, priority, status, etc.

I had a request to allow the user to enter some keywords, and we would show all support tickets that used the keywords in any of a number of fields. As SQL Server provides a full-text search facility, this seemed ideal for the task. Little did I know how complex it would be! I'm documenting it here, in case anyone needs to do something like this again. Hopefully this will save them a few hours of pain.

Fit The Second - The database structure

In order to avoid mucking around with the database that was under source control, I created a test database, and created some tables that included only the relevant fields...

The main table is SupportTickets, which contains two fields to be indexed. Each ticket can have any number of SupportTicketNotes, and for various boring historical reasons, each of those has an associated Actions entry which contains the Details field that is to be included in the search. In addition, a ticket may have a ProblemsAndSolutions entry linked, in which case the two fields shown there are also to be included.

OK, all ready, I thought this would be a doddle. Ha, should have known better!

Fit The Third - Setting up the full-text catalogue

Creating full-text catalogues isn't too hard. You find the table you want to index, right-click it, and from the "Full-Text index" sub-menu, choose "Define Full-Text Index" (note the inconsistent capitalisation, naughty Microsoft!)...

When the wizard appears, click past the first step (which just checks you have a primary key defined, and allows you to choose which to use), and you'll see the fields in the table...

Check off the one(s) that you want to be indexed, and click Next. Click Next again to go past the change tracking step (you want to leave it as Automatically).

You now get to choose or create the full-text catalogue that will be used for the fields you chose. If this is the first time you've created a catalogue for this table, you'll want to check the "Create a new catalogue" checkbox and enter a name for the catalogue. I use the table name with FT appended, but I can't claim this is any standard.

Make sure to click the Insensitive radio button. You can leave the two dropdowns at the bottom as they are and click Next.

Click Next again to go past the population schedules step as we don't need it.

You will then be shown a summary of the catalogue, and if you click Finish, it will be created and populated for you. Depending on how much data is in the table, it may take a few minutes for the catalogue to be fully populated.
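
If you prefer scripts to wizards, the same thing can be done in T-SQL. This is a sketch rather than exactly what the wizard generates - in particular, key index needs the name of your primary key index, and PK_SupportTickets below is an assumption...

create fulltext catalog SupportTicketsFT with accent_sensitivity = off
go
create fulltext index on dbo.SupportTickets (ShortSummary, Description)
  key index PK_SupportTickets -- assumed name of the primary key index
  on SupportTicketsFT
  with change_tracking auto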

So now we are ready to query the data...

Fit The Fourth - Single-table queries

There are various ways of querying full-text catalogues, and you can easily search to find out the benefits of each one. I used freetexttable, as it has a very powerful algorithm for matching text, including variant word forms, proximity searching, ignoring stop words, etc.

In order to search the data, you can use a query something like the following...

select ID, k.rank, ShortSummary, Description
from SupportTickets st
inner join freetexttable(SupportTickets, (ShortSummary, Description), 'computer') as k on st.ID=k.[key]
order by rank desc

This uses freetexttable to create a full-text indexed view of the table, using the two columns we included in the catalogue. You can use any combination of indexed columns here, so you can query a subset of the fields if you wish. You can enter as many words as you like, and SQL Server will attempt to match as many as it can, taking frequency into account.

This gives results that look like this...

The first column is the ticket ID. The second column shows the rank, which is SQL Server's indication of how well the row matched the keyword(s) entered. Having searched around, I have come to the conclusion that the actual algorithm for calculating those ranks is based in some very deep Kabbalistic literature, and is partially based on how many buckets of goat's entrails you've sacrificed to the Great God Of Database Servers that day. Microsoft seem to be pretty quiet on how the algorithm works, leaving us rather in the dark. This is a little embarrassing when you demo the feature to the client, and they, knowing their data a lot better than you, query for some significant keyword, and get the wrong results. You have to try and explain why your fab code, that you thought was working wonderfully well, came up with the wrong answers! Ho hum.

Note that, as you can see from the query, we are selecting from the SupportTickets table itself, so we can include any fields we like, not just those that were indexed.

If you only want to do full-text searching on one table, then you can skip down to Fit The Ninth, and avoid four fits of madness. Otherwise, take a deep breath and read on.

At this stage, I was smiling to myself, thinking that the job was nearly over. All I need to do is join the tables, and I'm done. Ha, that'll teach me a lesson, eh?

Fit The Fifth - Joining the tables

Armed with the above query, I set about writing one that would use all of the tables, and give me results that searched the tickets, their notes and the associated problems and solutions.

I'll spare you the gory details, but it turns out that you can't do this. Well, you can write some pretty scary queries that involve joining various freetexttables, but it doesn't work the way you expect, and you do your head in trying to work out how to combine the ranks.

So, off you go back to your favourite search engine, and you find out that what you really need is a view...

Fit The Sixth - Creating a view for the query, and the discovery of stuff

Views are very useful in situations like this, as they enable you to write a complex query, wrap it in a view, and pretend it's a single table. They tend to be read-only, but that's not an issue here.

So, all I needed to do was write a query that joined the various tables, and wrap it in a view. Simple, eh?

Again, sparing you a few hours' worth of unpleasant details, the end result was that this wasn't so easy, as joining the tables resulted in multiple rows for each support ticket (ie one for each note). This made querying across all the fields impossible.

What I really needed was a way to pull all of the note information into one field, leaving me with one row per ticket, but with all of the note data present. If you've used the Linq Aggregate() method, you'll know what I mean.
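
(In C# terms, and purely as an illustration with made-up data, I mean something like this...)

// Stand-ins for the Details of a ticket's notes
string[] noteDetails = { "First note", "Second note", "Third note" };
// Squash the collection into a single string, which is exactly
// what I wanted SQL to do with the note rows
string notes = noteDetails.Aggregate("", (acc, d) => acc + d + " ");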

After some searching, I came across the delightfully named stuff T-SQL keyword. When used in conjunction with its close friend xml, it enables you to do just what I described above. The syntax is a bit hairy at first, but it works...

select st.ID, st.ShortSummary, st.Description, isnull(ps.Problem, '') Problem, isnull(ps.Solution, '') Solution,
    (select isnull(stuff((select cast(Details as varchar(max)) + ' ' from SupportTicketNotes stn
    inner join Actions a on a.ID=stn.ActionID where stn.SupportTicketID=st.ID for xml path('')), 1, 0, ''), '')) as Notes
from SupportTickets st
left join ProblemsAndSolutions ps on ps.ID=st.ProblemAndSolutionID

The first line of this is fairly simple, it's just selecting various fields. The second and third lines are where the clever stuff (sic) happens. I'm not going to give a full explanation of the stuff/xml syntax, partly because I don't fully understand it myself, and partly because it's dead dull and boring! If you're really interested, you can cure your insomnia by having a look at the MSDN pages about it. The query above is a little more complex than it needed to be, as the Details field of the Actions table was declared as type "text" which is incompatible with varchar, so I had to cast the one unto the other.

Anyway, the end result was that the data returned by this query included a Notes column that was an aggregation of all of the ticket's note details. I wrapped this query in a view, tested it out and was happy (briefly).

Ah, I must have cracked it this time. Well, no...

Fit The Seventh - Of unique keys, unique clustered indexes, schema binding and left joins

So there I was, ready to create a full-text catalogue on my shiny new view. You would think they would have the decency to leave me happy, but no, it all went wrong again.

When I tried to create the catalogue, I got an error message telling me it couldn't be created, as I needed a unique key on the view. Well, I thought I had one. After all, the primary table in the view is the SupportTickets table, and that has a unique key, which was included in the query, so what's the problem? The problem is that SQL Server is the boss, and it doesn't recognise that field as a unique key.

OK, so off to Google I went, and discovered that all you need to do is create a unique clustered index for the primary key, and away you go. Or not...

I tried the following fabulous piece of SQL...

create unique clustered index IX_SupportTicketsWithExtraInfo on SupportTicketsWithExtraInfo(ID)

...where SupportTicketsWithExtraInfo was the snappy name of my view. I was somewhat annoyed to be told that the index couldn't be created due to some weird error about schema binding. I can't remember the exact text, but it was something weird like that.

So back off to Google to find out what that was all about, and it seems that all you need to do is alter the view to include schema binding...

create view SupportTicketsWithExtraInfo with schemabinding
as
...rest of the SQL here...

You know what's coming don't you. Another classic Microsoft "what the heck does that mean" error message...

Cannot schema bind view 'dbo.SupportTicketsWithExtraInfo' because name 'SupportTicketsWithExtraInfo' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.

Huh? It turns out that in order to use schema binding you have to use table names that consist of two parts, such as Furry.Ferrets and the like. Well, there was no way I was going to rename the tables, and having tried to use aliases and views to get around the problem, I realised that I wasn't going to be able to do it that way.

I don't remember the exact sequence of events at this point, as I think hysteria was setting in, but I eventually got to the stage where I had something that (theoretically at least) could be used in the full-text catalogue. Sadly, it wasn't to be.

The final insult in this sorry story was when I found out that you can't use full-text indexing on a view that uses a left or right join. No, I'm not making this up, honest!

I came across a forum post where someone tantalisingly mentioned a trick where you could simulate an inner join, which would have been just what I wanted, except that I couldn't find any hint of it anywhere.

So that was it. After all that, I was scuppered at the last stage. Back to the drawing board.

Note that if you want to create a full-text index on a view that doesn't include left or right joins, you will be OK, and can gleefully skip down to Fit The Ninth. As for me, I had to endure Fit The Eighth first.

Fit The Eighth - The summary table and the triggers

At this point, I had to rethink the whole thing from the beginning. This was quite a strain on the brain cell, and involved a strong cup of herbal brew, but I eventually came up with the earth-shatteringly obvious idea of creating a new table to hold summaries of the support ticket information, and creating a full-text catalogue against that.

Remember stuff and xml from Fit The Sixth? Well that had enabled me to aggregate the note data into a single row in the view. All I needed to do was use this to aggregate the note data into the summary table. It was messy, but I ended up with triggers on the main tables that looked like this...

create trigger SupportTicketNotes_Update on SupportTicketNotes
after update
as
-- Note: this assumes a single-row update; the inserted pseudo-table can
-- contain multiple rows, which a production version would need to handle
declare @ID int
select @ID = (select SupportTicketID from inserted)
declare @Notes varchar(max)
select @Notes = (select isnull(stuff((select cast(Details as varchar(max)) + ' '
  from SupportTicketNotes stn inner join Actions a on a.ID=stn.ActionID
  where stn.Active=1 and stn.SupportTicketID=@ID for xml path('')), 1, 0, ''), ''))
update SupportTicketsSummaries set Notes=@Notes where ID=@ID

With similar triggers on the other tables, I ended up with a SupportTicketsSummaries table that contained the fields I needed, and was a plain old single table. Hurray, back to Fit The Fourth!

I created a full-text catalogue against the summary table, and was finally able to write a query that would query all the support ticket data...

select ID, k.rank, ShortSummary + ' ' + Description + ' ' + Problem + ' ' + Solution + ' ' + Notes Info
from SupportTicketsSummaries st
inner join freetexttable(SupportTicketsSummaries, (ShortSummary, Description, Problem, Solution, Notes), @SearchText) as k on st.ID=k.[key]
order by rank desc

I wrapped this up in a stored procedure, with a parameter for the search text, and breathed a sigh of relief. I'd done it, I was finally ready to import the stored procedure into the Entity Framework model, and get on with implementing the code.

You know what's coming don't you? You should by now.

Fit The Ninth - Trying to find the stored procedure

I went into Visual Studio, opened the .edmx file, right-clicked on the designer surface and chose "Update model from database" as I'd done so many times before. I opened the Stored Procedures section, and scrolled down to my fab new stored procedure... that wasn't there! Huh? Wh'appen?

Back to SSMS to double-check. No, it really is there. Try updating the model again, no joy.

It turns out that you have to grant EXEC permissions on the stored procedure before you can see it in the Entity Framework model. I checked the connection string to see which user was being used to access the database, and granted EXEC permissions.
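
For the record, the grant itself is a one-liner (the stored procedure name is the one you'll meet shortly; the user name below is made up, so substitute whatever your connection string uses)...

grant execute on dbo.SupportTicketsSearch to MyAppUser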

After a few false starts, I managed to get past this hurdle, only to hit the next one...

Fit The Tenth - Where are my parameters?

One of the nice things about importing a stored procedure into Entity Framework is that the wizard will check what columns are returned, and either create or map a complex entity to match the results. Except it didn't. When I clicked the button to get the schema, I got the grammatically awful message “The selected stored procedure or function returns no columns.” I know they are American, but they could make some effort to write decent English! I know, it's probably decent American, but I wasn't in the mood to argue the point at this stage!

After a lot of searching, I didn't get very far. I found a lot of people reporting the same problem, but they were all using dynamic SQL or temporary tables. I wasn't using either, so didn't get any benefit from the suggestions.

I never did find an answer to this bit. I wondered if maybe the freetexttable was seen as a temporary table, or maybe it actually is a temporary table. I couldn't find an answer, but I did come across a wonderfully simple way around it.

The facility to detect the schema and generate a complex entity is a nice extra, but you don't actually have to use it. All you need to do is open the Model Browser, right-click on the Complex Types node, and add a new complex type that has properties that match the columns returned by your stored procedure. Make sure the names match the columns returned by your stored procedure exactly.

Then start the function import wizard, choose complex type, and pick the one you just created from the drop-down list.

Fit The Eleventh - Using the new function in the code

This bit turned out to be surprisingly easy, once I had remembered to regenerate the entities. The following code was all that was needed to get the results of the full-text search...

List<SupportTicketsFullTextSearchResults> rankedIDs = ((SalesTrackerCRMEntities)getContext()).SupportTicketsSearch(searchText).ToList();

Armed with this, I was able to pull out the full tickets for the IDs returned by the query, and deliver them in ranked order.

As a final touch, I normalised the ranks, so that the highest one was always 100, and the others decreased, then used this to give the user an indication of the relative relevance of each ticket...
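
The normalisation itself is trivial. A minimal sketch, assuming the imported complex type exposes the rank column as a property called Rank (the name is illustrative)...

int maxRank = rankedIDs.Max(r => r.Rank);
foreach (var r in rankedIDs) {
  r.Rank = r.Rank * 100 / maxRank; // integer maths: the top ticket becomes 100
}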

I showed this to the client, and got a heart-warming enthusiastic response. Made it all worthwhile.

Well, almost!

Fit The Twelfth - Finally reached dry land, none the worse for my adventures

Reminiscent in many ways of the late Mr Ulysses, I had finally made it back to dry land in one piece.

Tuesday, 16 February 2016 17:00:00 (GMT Standard Time, UTC+00:00)

As my regular reader will doubtless remember, I recently blogged about the important lesson I learnt while solving problem 8. I prophetically commented there…

"So, am I going to contemplate my problem domain before diving in and coding next time? Probably not, but at least if I don’t, I might have some idea where to look when the bug reports come in!"

Hmm, I thought I was joking! Well, sadly I wasn’t.

Faced with problem 14, I jumped right in and began coding as usual. My first attempt looked like this…

let collatz n =
  Seq.unfold (fun i -> if i % 2 = 0 then Some(i, i / 2) else Some(i, 3 * i + 1)) n
let collatzLength n =
  (collatz n |> Seq.takeWhile (fun n -> n <> 1) |> Seq.length) + 1

I had tested this on a smaller range, and it worked fine. Remembering what I thought I had learnt from problem 8, I did a quick scan of the range of numbers generated, and satisfied that an int would cope with it, set off to solve the problem…

[1..1000000] |> Seq.map (fun n -> n, collatzLength n) |> Seq.maxBy snd

Whereas this had only taken a second or two on small ranges, it churned away for a very long time when given the full range, ie up to one million. Although a million is quite a lot, it shouldn’t have taken that long to solve.

I tried the problem in C#, and had the same issue…

  int maxColl = 0;
  int maxLen = 0;
  for (int i = 2; i < 1000000; i++) {
    int coll = i;
    int len = 1;
    while (coll != 1) {
      if (coll % 2 == 0) {
        coll = coll / 2;
      } else {
        coll = 3 * coll + 1;
      }
      len++;
    }
    if (len > maxLen) {
      maxLen = len;
      maxColl = i;
    }
  }

Somewhat frustrated and baffled, I gave up and started searching around for other people’s code. I came across a C# solution that looked remarkably like the one above, that ran in about 2.4s. This was even more frustrating.

Eventually, it was pointed out to me that when you run it with the full range of starting points, some of the intermediate numbers generated in the sequence grow larger than the limits of an int, which causes the number to overflow. Under normal circumstances, this doesn’t cause an exception, but means that the number goes negative. Once that happens, the Collatz sequence will never go positive again, so will never terminate (assuming we consider the value 1 as the end of the sequence). This was easily confirmed by adding a “checked” block around the C# code, and seeing the exception thrown. Changing the “int” to “long” in the code above allowed it to give the correct answer in about 2.3s.
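
For completeness, here is the corrected version of the loop. The only real change is the type of coll...

int maxColl = 0;
int maxLen = 0;
for (int i = 2; i < 1000000; i++) {
  long coll = i; // long is the crucial change: 3 * coll + 1 can exceed int.MaxValue
  int len = 1;
  while (coll != 1) {
    if (coll % 2 == 0) {
      coll = coll / 2;
    } else {
      coll = 3 * coll + 1;
    }
    len++;
  }
  if (len > maxLen) {
    maxLen = len;
    maxColl = i;
  }
}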

So what should I have learnt?

Well, I should have taken more care over my number ranges, just like in problem 8. The sad thing is that I thought I had, but I obviously didn’t check carefully enough.

Thinking about it, when the code took so long, I should have put some logging in there to show where it was up to. That would have shown the problem immediately, as I would have seen the negative values in the sequence. Strike One.

The other point is that it raises the issue of validating your input. If my function had done this, I would have found the problem very quickly. For example, changing my collatz function as follows would have raised the issue as soon as I tried to run it…

let collatz n =
  Seq.unfold (fun i -> 
    if i <= 0 then failwith "The input must be at least 1"
    if i % 2 = 0 then Some(i, i / 2) else Some(i, 3 * i + 1)) n

This sort of issue comes up more often than you might think. As developers, we (and I use the plural deliberately, I’ve seen plenty of others make the same mistakes) bravely assume that the values sent into our functions/methods are within acceptable ranges. When they aren’t, we get exceptions that are often very hard to debug.

Microsoft began to address this issue with Code Contracts. In theory, these are an excellent and easy way to address exactly this problem. In practice, I never found them to work, and gave up. Maybe it’s time to revisit them and try again.

Another day, another lesson ignored!

C# | F# | Project Euler
Thursday, 11 February 2016 17:00:00 (GMT Standard Time, UTC+00:00)
# Sunday, 07 February 2016

Up until now, I have been using Manoli’s code formatter to add code to this site. I’ve had a couple of issues with this, mainly that the line numbers it includes are part of the text, so if you copy the code and paste it into Visual Studio, LinqPad, etc, you get the line numbers as well.

I recently discovered that Google have a code formatter, which seems to do the job nicely, doesn’t require you to paste the code into a formatter before adding it to the site, and uses an ordered list HTML tag to create the line numbers. It also has a few extra themes, meaning I can colour the code to look more like the dark theme I use in VS.

Here is some C# formatted with the new method…

public long MaxProduct (string s, int n) {
  return Enumerable.Range (0, s.Length - n + 1)
      .Select (i => s.Substring (i, n))
      .Max (s1 => s1.ToCharArray().Aggregate (1, (long a, char c) => a * (c - '0')));
}

Hmm, judging by the preview in Live Writer, it looks pretty awful! Shame, it looked fine in a plain HTML file. Have to post it to the blog and see how it looks there.

Edit: Looked OK. Shame I can't read the code when editing! Let's try some F# and see...

// Find the sum of all primes below two million
let primes max = 
    let array = new BitArray(max, true);
    let lastp = Math.Sqrt(float max) |> int
    for p in 2..lastp+1 do
        if array.Get(p) then
            for pm in p*2..p..max-1 do
                array.Set(pm, false);
    seq { for i in 2..max-1 do if array.Get(i) then yield i }

primes 2000000 |> Seq.map (fun n -> bigint n) |> Seq.sum

OK, I’m convinced!

Sunday, 07 February 2016 17:00:00 (GMT Standard Time, UTC+00:00)

The problem

As I briefly mentioned in my rant about the F# fanboy lies, I have been using Project Euler to help me learn F#. I have got as far as problem 8, which was to find the largest product in a series of digits. To save you the bother of clicking the link, here is the description…

The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.

73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450

Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?

Apart from the fact that this was an interesting problem, I learnt a very important lesson from this one, and thought it worth sharing.

Solving the problem - or not!

My initial stab at this looked like this…

let chop n (s : string) =
  [ for i in [0..(s.Length - n)] do yield s.[i..(i + n - 1)]]
let product (s : string) =
  s |> Seq.fold (fun p c -> p * (int (string c))) 1
let lgstProduct n (s : string) =
  s |> chop n |> Seq.map product |> Seq.max

The chop function chops the string into chunks of length n, the product function calculates the product of the digits (assuming that the string passed in only contains numbers of course), and the lgstProduct function sticks these together to find the maximum product.

I tried this with the 1000 digit number passed as a string, and using 4 for the chunk size, and it produced the right answer, 5832. Wanting to make the code shorter and neater, I included the two helper functions in the main one, and managed to come up with this...

let largestProductInt64 (n : int64) (s : string) =
  [ for i in [0L..((int64 s.Length) - n)] do yield s.[(int i)..int(i + n - 1L)]]
  |> Seq.map (fun s -> s, s |> Seq.fold (fun p c -> p * (int64 (int (string c)))) 1L)
  |> Seq.maxBy snd

Note that I changed the code to give me a tuple, containing both the highest product, and the n-character chunk that produced it. Chuffed to bits, I threw the number 13 at it, and got the answer ("9781797784617", 2091059712), which I duly plugged into the answer box on the Project Euler site, only to be told that it was wrong! What a chutzpah! Of course it’s right, my code works!

Or does it?

So what went wrong?

Having spent quite a bit of time testing my code, and convincing myself that it was right, I resorted to searching for other people’s answers to the same problem. Along the way, I came across someone who had had exactly the same problem as me, albeit in C++, and had come up with the same (wrong) answer.

It turns out that the issue was simple. When multiplying 13 digits together, you could potentially end up with 9^13, ie 2,541,865,828,329. Given that the maximum number that can be stored in the .NET int type is 2,147,483,647, the problem becomes apparent.
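
You can see the overflow for yourself by multiplying thirteen nines in a checked block (a throwaway illustration)...

int p = 1;
checked {
  foreach (char c in "9999999999999") {
    p *= c - '0'; // throws OverflowException once the product passes int.MaxValue
  }
}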

I changed my code to use int64, which is the F# equivalent of the .NET “long” type, and can hold numbers up to 9,223,372,036,854,775,807. Lo and behold, project Euler acquiesced, and accepted my answer.

In order to make my code even more general, I actually changed it to use bigint, which can hold any size of integer, but the point I want to take away from this remains the same…

What I learnt in school today

I think there is a very important lesson here. Like many of us, I piled in and started coding without really thinking about the problem. What I should have done is take a look at the problem domain, and think it through. It should have been obvious that the eventual product was going to be too large to fit into a 32-bit integer, which is probably why the Project Euler people chose such a large number in the first place. Had I done that, I would probably have got the right answer first time.

Now, I don’t know about you, but I almost never get these sorts of interesting problems in my day job. I usually get “Pull the data from the database, display it on a window, wait for the user to make changes and then save it,” which is significantly less interesting. However, I think the basic point remains valid. Without thinking through the scope of the problem, and the bounds of the domain, it’s very easy to pile in and get coding, whilst introducing all sorts of subtle bugs. My tests worked absolutely fine, simply because I was testing on small numbers. How many times do we developers test our code against a Noddy database, mainly to save development time? No need to put your hands up, we’re all guilty.

Had my largest product function been production code, I would have released a bug that would only have been spotted some time down the line. Depending on how easy/hard it would be to predict the right numbers, it’s possible that it might not have been spotted for a long time. People would just assume that the number produced was correct.

So, am I going to contemplate my problem domain before diving in and coding next time? Probably not, but at least if I don’t, I might have some idea where to look when the bug reports come in!

Improving the code

Having sorted all that out, I asked for a code review, and came across a really useful F# function that I hadn’t seen before. My chop function, included as the first line of my slimline largestProduct function, split the input string into a sequence of chunks of length n. It turns out that F# has the Seq.windowed function that does exactly the same thing, but is more readable.

I also got a slightly better understanding of function composition, and saw how to reduce the number of brackets needed to convert the character to a bigint. I ended up with…

let largestProduct n (s : string) =
  Seq.windowed n s
  |> Seq.map (fun s -> s, s |> Seq.fold (fun p c -> p * (string >> bigint.Parse) c) 1I)
  |> Seq.maxBy snd

I was quite pleased with this. A lot of functionality in four lines.

Solving the problem in C#

I was interested to see if I could solve the problem in C# as well, so I fired up LinqPad and jumped in. My initial version (including the extra bits need to run it in LinqPad, and the line to write out the result) looked like this…

void Main() {
  string s = "7316...3450"; // NOTE: Snipped for brevity!!
  int n = 13;

  var maxProduct = MaxProduct(s, n);
  Console.WriteLine ("1) Max product is " + maxProduct.Item2 + " from " + maxProduct.Item1);
}

public Tuple<string, long> MaxProduct(string s, int n) {
  return Chop (s, n)
    .Select (s1 => new Tuple<string, long> (s1, Product (s1)))
    .OrderByDescending (t => t.Item2)
    .First();
}

public long Product (string s) {
  long res = 1;
  for (int i = 0; i < s.Length; i++) {
    res *= Convert.ToInt32 (s [i].ToString());
  }
  return res;
}

public IEnumerable<string> Chop (string s, int n) {
  for (int i = 0; i < s.Length - n + 1; i++) {
    yield return s.Substring (i, n);
  }
}

Hmm, quite a lot of code there. Looks like F# really is shorter and cleaner!

There must be a way to improve this. A few moments’ thought made me realise that the Product() method is really doing what the Linq Aggregate() extension method does. Also, the Chop() method could easily be done with Linq if I fed in a range of numbers for the starting positions of the substring (like I did in my original F# code).

After a short bit of fiddling, I came up with this rather improved C# version…

public long MaxProduct (string s, int n) {
  return Enumerable.Range (0, s.Length - n + 1)
      .Select (i => s.Substring (i, n))
      .Max (s1 => s1.ToCharArray().Aggregate (1, (long a, char c) => a * Convert.ToInt64 (c.ToString())));
}

That's much better! Once you ignore the extraneous bits, the body of the actual method is only three lines, a mere one line longer than the F# version. The F# is definitely cleaner, but as I’ve mentioned before, that’s not always an advantage.

After passing this problem around the team, one of the brighter sparks came up with an even shorter version that runs faster…

public long MaxProductEC (string s, int n) {
  return Enumerable.Range (0, s.Length - n + 1)
    .Max (i => s.Substring (i, n).Aggregate ((long)1, (a, c) => a * (c - '0')));
}

I defy anyone to tell me that C# is verbose! Don’t get me wrong, I’m really enjoying F#, but the lies are getting on my nerves!

All in all, an interesting exercise.

C# | F# | Project Euler
Monday, 01 February 2016 02:53:00 (GMT Standard Time, UTC+00:00)
# Monday, 25 January 2016

As my regular reader will know, I love anything shiny and new, and so was intrigued when I came across F#. When I was an undergraduate (some 25 years ago, ulp!), I played around with Prolog, and enjoyed the different approach to programming it offered.

However, as my time in front of the computer is at a premium, my F# learning started off with reading books and blog posts away from the computer, usually last thing at night, and not actually doing any coding. Like most C# developers who come across F#, my initial reaction (other than “Hey, that’s shiny and new, I want to learn it!”) was “Why?” If you search around for terms like “why f#” and “c# vs f#” you’ll find that this is a very common question.

The problem is that the answers aren’t very good. Well, let’s be blunt, there are a lot of lies out there. Now before anyone starts flaming me in the comments (not that anyone ever reads this blog, much less leaves comments), I should point out that the criticisms that I can feel coming are not aimed at the F# community in general, nor are they aimed at F# itself. It’s the way people try to sell it that annoys me. I’ve been brooding over them for months, and have written this blog post in my head many times. I feel less strongly about it than I used to, so this will probably be a watered-down version of the original rant.

Wild exaggerations

As an example, I recently came across the following quote in an F# book… “No other .NET language is as easy to use and as flexible as F#!” (the exclamation mark was part of the quote, not my addition). Now this sort of comment really annoys me. Firstly, it’s completely unsubstantiated, and secondly, it’s totally subjective. Any language is easy to use and flexible if you learn it well enough. OK, maybe not any, Lisp is clearly only for weirdos (like my brother who loves it!), but let’s not go down that route. F# is not inherently easier than any other languages, and in some ways is quite a bit harder to learn, as it involves a significant shift in thinking to do it properly. Comments like that make me feel they are trying to cover something up, and have a negative effect – apart from the obvious feelings of antipathy towards the author.

Another idiocy was included in a popular book by a very well-respected member of the F# community, who attempted to sell F# by showing some pseudo-C# code, claiming that you can’t do this in C#, but you can in F#. Hey, isn’t F# great? My immediate reaction was to shout at the book “You can do that in C# if you bother to write it properly!” He had deliberately lied about what you can do in C# to try and show how F# is so much better. Given that the book was aimed at C# developers who wanted to learn F#, I thought this was a pretty dumb thing to do. A lot of the book went like this, and I shouted at it a lot!

The blogs weren’t any better. I read no end of them, telling me all the reasons why F# was better, and remained unconvinced. This was partly because some of the reasons given were simply not F# at all. For example, one of the most common reasons given for why we should all be using F# is F# Interactive. The point they all ignored is that FSI is not F#, it’s a tool that Microsoft happened to have implemented for F#, and hadn’t implemented for C#. As it happens, Visual Studio 2015 now includes a C# Interactive window, so this supposed benefit is no longer valid anyway. I doubt they’ll stop their bleating though.

Type inference

Another reason given for F# being the best thing since sliced bread is the type inference. Well, despite having struggled to understand the benefits, I’m still very doubtful about this one. Yes, it makes the code shorter, but it makes it a whole lot harder to read. When I look at a piece of C#, I can see straight away the variable types and the method return types, so I can concentrate my mind on what the code is doing. I can’t do that in F#. Having said that, when writing F#, the type inference is a joy, as you can concentrate on what the code is doing, and ignore the technicalities. It’s a two-edged sword, and not an undiluted benefit.

Well, it's so clean, sir!

The final claim I want to address is that F# code is so much cleaner than C#. Well, there is something in that, but clean isn’t necessarily better. I suspect most people will recognise the following snippet from what is arguably one of the funniest moments in comedy...

Customer (John Cleese): It's not much of a cheese shop, is it?
Owner (Michael Palin): Finest in the district sir!
C: (annoyed) Explain the logic underlying that conclusion, please.
O: Well, it's so clean, sir!
C: It's certainly uncontaminated by cheese.

Being clean can also be a disadvantage.

Apart from the specification of types (see above for why I’m not convinced that this is a bad thing), the only other part of C# that’s not clean is the fact that you have brackets knocking around, whereas F# is totally devoid of them. Again, this might make the code slightly shorter, but has its disadvantages. For example, if I have a few lines of C# code, and want to surround them with an “if” statement, I can just add the “if” statement on the line before the block, and a closing brace on the line after, then reformat the code, and all the indenting is done for me. I know the insertion will be right, as it can’t fail. If I want to do the same in F#, I have to concentrate a lot harder on adding the right amount of extra indentation to the lines I want inside the “if” block. It’s significantly more likely to go wrong.

If the few extra lines added by braces really bother you that much, go buy a bigger monitor! I can see around 70 lines of code on mine, so the few extra lines added by the braces rarely cause me any problems.

To make it even worse, when comparing evil C# code against the pure and sainted F# equivalent, they somehow always manage to stick as much extra code into the C# version as they can. F# snippets would be a method, and nothing else. The equivalent C# snippet would be a whole class, including usings. This is a false comparison.

I |> ❤ F#

By this point, you probably think I hate F#. Well, it’s not true. My point here is not anything against F#, it’s against the lies and exaggerations bandied around by the F# fan boys. I probably would have given up on F# in annoyance at these lies (as I suspect many, many others have done) were it not for the fact that I am fascinated by shiny new things. I decided to carry on anyway.

Thankfully, I discovered Project Euler. For those who haven’t discovered this wonderful site yet, it’s an ever-growing collection of mathematical and computer programming problems. Apart from my inherent love of such things, this was the perfect opportunity to learn F#.

Now you can use PE by simply solving the problems, entering your answer to check you got it right, and carrying on to the next one. I decided not to do that. I solved the problem, then went back and tried to find ways of improving my code. I posted my samples up for code review, and played around with the suggestions I got. I searched around for other people’s solutions (only after I had already solved it myself) to see if I could find better code. When I did, I copied it, played with it and made sure I understood it.

In time (not much actually), an amazing thing happened. I got really excited about F#! When you get into it, it’s a really nice language to use. I’m still not convinced that there are many major benefits over C# for my line of business (pull data from a database, display it on a window, wait for the user to make changes, save the data back to the database, etc), but for the sort of work that I would love to do, like machine learning, data analysis and so on, it seems wonderful. Sadly, I reckon my use of it will be largely recreational, unless I can find some justification for using it in my day job.

If I get around to it, I’ll post some of my Project Euler code here, so others can laugh at it, and feel better about their own code.

F#
Monday, 25 January 2016 20:00:00 (GMT Standard Time, UTC+00:00)
# Thursday, 18 June 2015

Having blogged recently about a stupid error message, I came across another one! Microsoft must employ people to craft these, they just couldn’t be there by accident. I reckon they are put there to amuse developers, and distract them from the fact that the software is misbehaving!

Anyway, I was using the SQL Server Import and Export Wizard, and it threw up the following error message:

Error 0xc004706b: Data Flow Task 3: "Destination 14 - EntityHistory" failed validation and returned validation status "VS_ISBROKEN"

I just love that last bit!

Thursday, 18 June 2015 21:16:00 (GMT Daylight Time, UTC+01:00)

And I’ve seen a few!

I was just trying to back up an SQL database, and got this truly wonderful error message...

Not much you can say to that is there?

Wednesday, 18 March 2015 03:14:00 (GMT Standard Time, UTC+00:00)
# Tuesday, 13 January 2015

My regular reader may remember that I wrote a while ago that my computer laughs at me. Well, it lies to me as well.

Bored of the usual methods of inflicting torture upon myself, I thought I would upgrade SQL Server 2008 R2 to SQL Server 2014 (stop laughing at the back!), because, erm, well it seemed like a good idea at the time.

First problem was that in order to upgrade, I had to apply service pack 2 to my existing SQL Server 2008 R2 installation. Why? No idea. I would have thought that if I were about to upgrade, it should have all of the later goodness in the upgrade, but who am I to question the ways of the wise?

So, I downloaded SP2 and set it going. Part way through, I got an unhandled .NET exception (nice one Microsoft, very professional), and when I clicked Quit, it continued with the installation! Still, as it seemed to work, I didn’t complain, but I was a bit annoyed that it lied to me. Computers aren’t supposed to do that.

Once I had SQL Server 2014 installed, I happened to notice that SQL Server 2008 R2 was still installed. Huh, I thought we upgraded to 2014? Shouldn't it have uninstalled 2008 R2? Well, I would have thought so too, but that just shows how much we know eh? If you have Visual Studio 2013 installed, you may well have SQL Server 2012 installed as well! Good job disk space is cheap nowadays.

Having asked about this in an SQL Server forum, I was told it was safe to uninstall anything related to SQL Server 2008 R2. Whilst doing so, I came across one component that gave me a message saying that SSMS 2014 depended on it, and if I uninstalled it, that would stop working. I clicked "No" to say I didn't want to uninstall this component, but it uninstalled it anyway! Didn't seem to matter, as SQL Server worked fine afterwards, so it just shows what liars they are!

I’m very disillusioned. No wonder the world is sinking into moral decay if you can’t even rely on your computer to tell you the truth!

Tuesday, 13 January 2015 20:49:00 (GMT Standard Time, UTC+00:00)

As my regular reader will know, when Lightswitch came out, I was quickly hooked. Like many people, I saw an answer to the hassle of creating n-tier desktop applications. It was a dream. Development time was slashed, and the few annoyances that there were paled into insignificance compared to the speed of development, and the quality of the end product.

I dived in head-first, and quickly came up against some of the limitations of a first version. That’s not a criticism, it’s more a testimony to the amazing power the first version had, that we could produce real LOB applications, and were frustrated by the missing bits.

Thankfully, Microsoft did a sterling job of adding extensibility points to Lightswitch, and despite the poor documentation (an ongoing feature of Lightswitch that caused a lot of wasted hours), the potential was there to fill in the gaps.

Inspired by early experimenters, like Yann Duran’s excellent Luminous Controls (which I still use in every Lightswitch project), I had a go at producing a custom control for Lightswitch, more for the fun of tinkering than anything else. Having worked out how to get going, I added a few more controls, until the first version of the Pixata Custom Controls For Lightswitch was born. Feeling that I had something that might be of use to others, I added it to the Visual Studio Gallery, and posted about it in the (now extinct) forum. I got so excited when it had 100 downloads!

Spurred on by the great feedback, I added more controls, and the control suite hit the top three for Lightswitch extensions. It varied between the first, second and third places, but has always been in the top three. To date, it’s had over 25,000 downloads, despite the fact that I haven’t updated the controls for over a year.

Roll on a few years, and the mobile market exploded. Smartphones and tablets gained more power, and were able to browse the web almost as well as their desktop cousins. However, probably due to political more than technical reasons, the manufacturers chose not to support Flash or Silverlight on these devices, leaving Lightswitch out in the cold.

Microsoft, fairly understandably, decided to concentrate their efforts on producing a version of Lightswitch that would support mobile devices. That's where the HTML client came in.

This was fine, except for two huge problems...

1) The HTML client was (and still is, from what I can see) woefully short of being ready for the job. It's a good start, but not good enough to create LOB applications. Specifically, despite the massive growth of the mobile market, Microsoft seem to have forgotten that the desktop world still exists, and still accounts for a very significant number of users, especially in the business world. Until the HTML client supports the desktop, it can't be considered ready for serious use.

I have other misgivings about the HTML client, but they are minor in comparison to this one. I know the mobile market is important, but in business-to-business work, which is a significant part of the software development world, you just can’t afford to ignore the desktop.

2) Whilst I don't think we can criticise Microsoft for moving away from Silverlight, which must have been hard for them, having put so much effort into producing a first-class product, what is totally unforgiveable (in my opinion at least, and I know I'm not alone) is the way they just dumped the Silverlight developers without any support. Their support for the more advanced side of Lightswitch was always poor, but when the HTML client came along, Silverlight client developers were just ignored completely. The extensibility forum died a sudden death, with unanswered questions making up the sum total of all posts.

Now, before I get criticised for this, I understand that they have limited resources, and want to concentrate their efforts on the latest and greatest, but many, many developers sunk a large amount of time and energy into Silverlight-related development, and to have all support cut off in an instant was a really dirty trick. I think we can all understand why they don’t want to invest more time in active development, but at least support what’s already there.

The problem became more acute as time went on, and Microsoft were pushing the HTML client with an evangelical fervour. It got to the stage where I knew that if I had a problem, there was little point in posting in a forum, as my post would be swallowed in a sea of HTML-related questions, and wouldn’t be answered by anyone at Microsoft anyway. The only way I could get any support was peer-to-peer support from some of the Lightswitch rock stars.

It came to a head for me recently when, having invested a significant amount of time in developing a complex application, it suddenly stopped working. The only clue I had was the rather generic error message “Cannot communicate with the database” when the application started. It worked fine on my development machine, and the database connections were all correct (I could even change the connection strings in Visual Studio, and run the application against the remote production database, so I know that wasn’t the problem), but the deployed version refused to work.

After struggling to get the attention of any Microsoft support person, I eventually had to give up and rewrite the entire application in WPF and ASP.NET WebAPI2. Surprisingly, it didn’t actually take that long, and left me wondering if Lightswitch is really as RAD as I had thought, once you get away from the basic functionality.

So, I’m left with a sour taste in my mouth, and some deployed Lightswitch applications that I’m terrified to touch, in case they mysteriously stop working as well. I can’t afford to rewrite them from scratch, and know that there is no way I’ll get any support if they stop working. I’m just hoping the customers don’t want any changes.

Sad end to a promising product. I have little interest in investigating the HTML client, as ASP.NET MVC makes it so easy to produce high-quality web sites that work in both desktop and mobile environments, and I can’t see any benefits in Lightswitch for this.

As for the controls? Well, I’ve made them open source, so if anyone wants to develop them further, they can.

Sunday, 14 December 2014 15:01:00 (GMT Standard Time, UTC+00:00)