A random spillage of programming (and other) thoughts

Archive for the ‘.NET’ Category

Debugging into Compiled Assemblies, tracking down an apparent bug in Microsoft SQL SMO

Posted by Michael Bray on February 6, 2015

For my day job, I had written an application which would capture all changes made to our development SQL database, script those changes using Microsoft SQL SMO, and automatically check in/out the files as needed so that changes to our database could be tracked efficiently and cleanly in TFS. 

But I had a problem – Microsoft SQL SMO was failing to script a trigger.  Specifically, it was generating the following error:


Microsoft.SqlServer.Management.Smo.FailedOperationException: Script failed for Trigger ‘as_invoice_insertToDocuments’.  —> Microsoft.SqlServer.Management.Smo.FailedOperationException: Syntax error in TextHeader of Trigger ‘as_invoice_insertToDocuments’.

When I first googled this error, I found a couple of references and suggestions…   Most of them said that this could be caused by nested comments, but my trigger had no comments at all.  I tried changing the contents and header of the trigger, but nothing worked.  When I tried debugging my program, I was stymied because I was unable to debug into the SMO DLL – all I could tell was that it was generating the exception.

My first attempt at diagnosing the error was to try to use dotPeek to disassemble the Microsoft.SqlServer.Management.Smo.DLL library to figure out why the code was throwing an error.  This is what I found:


This narrowed it down a bit, but still, there were two possible lines that could have been generating the exception, with no way to distinguish between them (grrr!). 

And that’s when I decided to try to debug into the SMO library.  I reasoned that since I knew it was possible to debug into the .NET Framework, I might be able to do the same with this library, and I was right.  Doing it was a bit tricky, though…   First, I decided to use dotPeek’s symbol server.  It’s possible that I could have used Microsoft’s symbol server too, since it was a Microsoft DLL, but I haven’t tried.  So I started up dotPeek, turned on the symbol server, and selected the option to generate PDBs for all assemblies (just to be safe).  Then I went back to Visual Studio (2010 in my case), configured the symbol server with the HTTP address provided by dotPeek, turned off the option “Enable Just My Code”, and tried debugging my program by F11-stepping into the SMO function call that was failing.  On my first try, it didn’t work. 

After a bit of research, I went back into dotPeek and opened the “Project/PDB Generation Log” window and saw that it was apparently still generating some PDBs and/or source files.  So I waited for that to finish, then went back to Visual Studio.  Before attempting to debug again, I tried a few more changes to Visual Studio options – specifically I checked the box “Enable .NET Framework source stepping”, which seemed to do something – Visual Studio seemed to load and cache the PDB/source files.  I’m not sure if that was a necessary step or coincidental to the fact that dotPeek was now finished generating its PDB files, but it worked – now when I F11-stepped into the SMO function call, I was viewing the SMO library source code.  WIN #1!

As I started stepping thru the code to try to get to the line that was causing the Exception to be thrown, however, I realized it was going to be difficult to get what I needed.  When I got to the CheckTextCorrectness method indicated by the stack trace, I found that it was in fact the FIRST throw that was executing, in response to the function call to CheckDdlHeader apparently returning false!  I tried debugging into that function to figure out what was going on, but I kept having difficulty because when I tried to analyze variables, the Visual Studio Watch window would just say “Cannot obtain value…as it is not available at this instruction pointer, possibly because it has been optimized away”, even though I was running in DEBUG mode and had “Optimize Code” unchecked in my project file settings.  Actually that made sense, since those settings would only affect code that was compiled on my machine, which the SMO dll was not.  It was actually the .NET JIT compiler that was optimizing the MSIL.  But I really needed to see those variables to figure out what was going on.

Then I found a web page with the information to avoid the optimizations.  WIN #2!   Just in case that page ever goes away, here’s the answer, lifted almost verbatim from that page (sorry!):

Ok, you think you’re cool when you get the capability to debug the .NET Framework source code all set up. You’re like, “I am all powerful!” Then you start noticing the oddities.

“Wait, why can’t I get the value of that variable?!”

“Why did it step there? It should have stepped here?!”

The problem is that ‘you’re debugging against retail-optimized code’. Fortunately, someone at Microsoft handed out the trick to disable these optimizations. Check out this link [EDIT: this link appears to be broken, the correct URL might be this one!] for more info, but basically there are only a few steps:

  • Create a .cmd file that sets an environment variable and then launches Visual Studio. Name it whatever you want (e.g. DisableOptimizationsInVisualStudio.cmd). Its contents should be:
set COMPLUS_ZapDisable=1
cd /d "%ProgramFiles%\Microsoft Visual Studio 9.0\Common7\ide\"
start devenv.exe
  • Launch Visual Studio with this .cmd file.
  • Once in Visual Studio, disable the Visual Studio hosting process:

Right click on your project and choose “Properties”.

Choose the “Debug” tab and uncheck “Enable the Visual Studio Hosting Process”.

  • Launch your application in the debugger.


NOTE: This page refers to “Microsoft Visual Studio 9.0” but your exe path might be different – for Visual Studio 2010 it’s “Microsoft Visual Studio 10.0”.

Once I did the steps above, it was like manna from heaven – I could now see the contents of all the variables that I couldn’t before, and I was finally able to diagnose (and work around) the problem in SMO.  Since doing this, I’ve found several other references to things I could have checked, such as the Advanced Build settings, but I’m not sure if they would have worked for an externally-linked DLL being debugged thru the dotPeek symbol server.  I’ve also found a method using an INI file [Microsoft reference] that controls the JIT optimization on a per-dll basis, although I’m not sure if that works independently every time the executable runs.  For me, setting the environment variable was easier.

In case you are wondering, the workaround for my actual problem was simple.  All I had to do was turn off the option “DdlBodyOnly” in the ScriptingOptions parameter passed to the Script() method.  When this option is set to true, CheckTextCorrectness attempts to ensure that the DDL text starts with CREATE or ALTER, but I’m not sure why (a) it would do this when DdlBodyOnly is set to true, since it should only check the body and not the header, or (b) it doesn’t fail for the other object types, or (c) it doesn’t just generate the body without the header, which would have prevented me from ever setting it to true in the first place.  I would consider this a bug in SMO one way or another.  The result of setting this option to false is that the trigger script passes validation, but it also includes “SET ANSI_NULLS” and “SET QUOTED_IDENTIFIER” lines at the top of the script, which is what I was generally trying to avoid by setting DdlBodyOnly to true.
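For reference, here is a minimal sketch of the scripting call with that option turned off.  This is not my original application code – the server, database, table, and trigger names are placeholders – but it shows where DdlBodyOnly lives in the SMO object model:

```csharp
using System;
using Microsoft.SqlServer.Management.Smo;

class ScriptTriggerSketch
{
    static void Main()
    {
        // Placeholder names -- substitute your own server/database/table.
        var server = new Server("localhost");
        var table = server.Databases["MyDatabase"].Tables["as_invoice"];
        var trigger = table.Triggers["as_invoice_insertToDocuments"];

        var options = new ScriptingOptions
        {
            // true is what provoked "Syntax error in TextHeader"; leaving it
            // false scripts the full CREATE TRIGGER (including the SET lines).
            DdlBodyOnly = false
        };

        foreach (string line in trigger.Script(options))
            Console.WriteLine(line);
    }
}
```

Of course, running this requires the SMO assemblies and a reachable SQL Server, so treat it as a sketch rather than a drop-in.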


Posted in .NET, .NET Framework, Bugs, Debugging, SMO, SQL, Visual Studio | Tagged: | Leave a Comment »

Linq2Sql Caching with SqlDependency

Posted by Shaun McDonnell on November 25, 2009

It took me a while to figure this one out.  I almost thought it was impossible to implement it in a ‘good way’.  However, extension methods saved the day.  So, if you want to use Sql Service Broker dependencies in your Linq2Sql code, you’ll need an extension method like this:

public static List<T> LinqCache<T>(this IQueryable<T> q, DataContext dc, string key)
{
    SqlDependency.Start(dc.Connection.ConnectionString);
    var cachedItem = (List<T>)HttpContext.Current.Cache.Get(key);
    if (cachedItem == null)
    {
        TracingHelper.Write("No cache key [{0}] found for query: {1}", key, dc.GetCommand(q).CommandText);
        string connectionString = dc.Connection.ConnectionString;
        string command = dc.GetCommand(q).CommandText;
        using (var sql = new SqlConnection(connectionString))
        {
            sql.Open();
            using (var cmd = new SqlCommand(command, sql))
            {
                foreach (DbParameter dbp in dc.GetCommand(q).Parameters)
                {
                    cmd.Parameters.Add(new SqlParameter(dbp.ParameterName, dbp.Value));
                }
                SqlCacheDependencyAdmin.EnableNotifications(connectionString);
                string notificationTable = q.ElementType.Name;
                if (!SqlCacheDependencyAdmin.GetTablesEnabledForNotifications(connectionString).Contains(notificationTable))
                    SqlCacheDependencyAdmin.EnableTableForNotifications(connectionString, notificationTable);
                var sqldep = new SqlCacheDependency(cmd);
                cmd.ExecuteNonQuery();
                cachedItem = q.ToList();
                HttpContext.Current.Cache.Insert(key, cachedItem, sqldep);
            }
        }
    }
    else
    {
        TracingHelper.Write("Cache key [{0}] FOUND for query: {1}", key, dc.GetCommand(q).CommandText);
    }
    SqlDependency.Stop(dc.Connection.ConnectionString);
    return cachedItem;
}

Now, in order to use this extension method in your queries, you’ll need to do something like this:

public List<Company> Get()
{
    using (var dataContext = new RoutePointRepositoryDataContext(Db.ConnectionString))
    {
        var items = dataContext.Companies.LinqCache(dataContext, "Companies.All");
        return items;
    }
}

The downside is that I haven’t figured out a good way to implement this when retrieving one record from the database.  So, you’ll have to do this:

public Company Get(int id)
{
    using (var dataContext = new RoutePointRepositoryDataContext(Db.ConnectionString))
    {
        var company = dataContext.Companies.Where(c => c.Id == id).LinqCache(dataContext, "Company.Get." + id.ToString()).Single();
        return company;
    }
}


MSDN did have some code similar to this, but it did not work and there were no examples.  This works, and the examples should be enough to get you started.


Posted in .NET, LINQ2SQL, SQL | Leave a Comment »

A Generic run-time LINQ-based multi-level object sorter

Posted by Michael Bray on October 26, 2009

Assume you have a list of objects that has a set of properties.  These properties are stored in a StringCollection or other similar lookup, and you want to sort the objects based on some of these properties, but you don’t know at compile-time which properties to sort on or in what order (that information will be supplied at run-time, perhaps in configuration).  How do you sort this list, in a manner that honors ascending / descending as well as multi-level sorting rules?  You can’t simply sort the list by each property, since each time you sort, it will wipe out the previous sorting operation.  Of course, LINQ provides sorting thru OrderBy(…) and ThenBy(…) functions that handle the multi-level sort issue.  But it’s a bit more complicated than that, since you don’t know the properties you want to sort on.

Here, I demonstrate a relatively simple generic object sorter that correctly handles multi-level sorting and ascending/descending at each level.

private IEnumerable<T> MultiLevelSort<T, SK>(IEnumerable<T> list, List<SK> sortKeys, Func<T, SK, string> keySelector, Func<SK, bool> ascendingSelector)
{
    if (sortKeys.Count == 0) return list;

    IOrderedEnumerable<T> res = null;
    for (int i = 0; i < sortKeys.Count; i++)
    {
        SK sk = sortKeys[i];
        bool ascending = ascendingSelector(sk);
        if (i == 0)
        {
            if (ascending) res = list.OrderBy(r => keySelector(r, sk));
            else res = list.OrderByDescending(r => keySelector(r, sk));
        }
        else
        {
            if (ascending) res = res.ThenBy(r => keySelector(r, sk));
            else res = res.ThenByDescending(r => keySelector(r, sk));
        }
    }
    return res;
}

This function takes 4 parameters:

  1. An IEnumerable<T> of objects to sort
  2. A List<SK> of objects that contain sorting order information (note that this list itself is expected to already be in the correct sort order)
  3. A Func<T, SK, string> to extract the value from T based on information in SK to actually sort on
  4. A Func<SK, bool> to extract the ascending/descending information from SK

…and it returns the list correctly sorted as an IEnumerable<T>.  Note that the object returned is actually an IOrderedEnumerable<T> as long as there is at least one valid sort key.

This code could then be used as such:

List<MyProperty> sortProps = AllProperties.Where(sp => sp.Sort != string.Empty).OrderBy(sp => sp.SortOrder).ToList();
IEnumerable<MyObject> sortedResults = MultiLevelSort<MyObject, MyProperty>(
    results, sortProps,
    (r, pe) => r.Properties.ContainsKey(pe.Name) ? r.Properties[pe.Name] : string.Empty,
    pe => pe.Sort == "Ascending"
);

Where MyObject is an object that contains a StringCollection called ‘Properties’, and MyProperty is an object that contains properties called ‘Sort’ (“Ascending/Descending”), ‘SortOrder’ (an integer), and ‘Name’ (the name of the property within the MyObject.Properties collection that we want to sort on).
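To make the mechanics concrete, here is a small self-contained version you can compile and run.  The Key class is a hypothetical stand-in for MyProperty, and plain dictionaries stand in for MyObject.Properties; the sorter itself is the same function, just exercised end-to-end:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

IEnumerable<T> MultiLevelSort<T, SK>(
    IEnumerable<T> list, List<SK> sortKeys,
    Func<T, SK, string> keySelector, Func<SK, bool> ascendingSelector)
{
    if (sortKeys.Count == 0) return list;
    IOrderedEnumerable<T> res = null;
    for (int i = 0; i < sortKeys.Count; i++)
    {
        SK sk = sortKeys[i];
        bool ascending = ascendingSelector(sk);
        if (i == 0)
            res = ascending ? list.OrderBy(r => keySelector(r, sk))
                            : list.OrderByDescending(r => keySelector(r, sk));
        else
            res = ascending ? res.ThenBy(r => keySelector(r, sk))
                            : res.ThenByDescending(r => keySelector(r, sk));
    }
    return res;
}

var rows = new List<Dictionary<string, string>>
{
    new Dictionary<string, string> { { "City", "Boston" }, { "Name", "Ann" } },
    new Dictionary<string, string> { { "City", "Austin" }, { "Name", "Bob" } },
    new Dictionary<string, string> { { "City", "Austin" }, { "Name", "Ann" } },
};
var keys = new List<Key>
{
    new Key { Name = "City", Ascending = true },   // level 1: City ascending
    new Key { Name = "Name", Ascending = false },  // level 2: Name descending
};

var sorted = MultiLevelSort(rows, keys,
    (row, k) => row.TryGetValue(k.Name, out var v) ? v : string.Empty,
    k => k.Ascending).ToList();

foreach (var r in sorted)
    Console.WriteLine(r["City"] + " " + r["Name"]);
// Austin Bob / Austin Ann / Boston Ann

class Key { public string Name; public bool Ascending; }
```

Note how the second-level descending sort (Name) is applied within the first-level ascending groups (City), which is exactly the behavior a naive sort-per-property loop would destroy.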

Posted in .NET | Tagged: | Leave a Comment »

A mathematical diversion with LINQ

Posted by Michael Bray on July 18, 2009

Every now and then I’ll come across a math problem as part of my work…   It’s not what I do; it’s just something that happens.  When it does happen, I either become obsessed with it until I solve it (or at least THINK I solve it, one way or another) or I eventually give up with the blunt realization that I just spent hours thinking about something that is likely to cause psychosis in old age.  Fortunately, the problem I ran across today seems to have fallen into the first category… 

Here’s the question, simplified…   Imagine you have a collection of at least 10 items, of which you are going to choose 5 at random.  One of those 5 is then taken away and replaced with a different NEW item (pulled from some other pile of items).  Next you choose another 5, again randomly.  The questions I asked myself in this scenario were:

  1. What are the chances that you WON’T see any of the same 5 that you originally picked? 
  2. What if the collection is 20 items instead of 10 items? 
  3. How many items have to be in the larger collection so that you have at least a 90% chance of NOT seeing any one of the originally chosen items if you are choosing 5 at a time?
The Math…

This actually is a fairly simple problem to solve…  but I ran thru an interesting programming journey on the way to the answers, and it was the programming aspect that prompted this post.  To reiterate the scenario, the collection starts with 10 items of which 5 are chosen and one is taken away; then another NEW item is added, again giving you 10 items from which to choose 5, and of those 10, 4 are ones you chose before. 

When working problems of this kind (because I’m not so good at probabilities) I prefer to back up to a simpler case.  So let’s consider what happens when you have 6 items in the collection, and you choose 3 at a time.  This means that on the second draw, there will be 2 that must be avoided.  If the items in the second draw are A, B, C, D, E, and F, and we presume that E and F must be avoided, then we have the following:

  • “Good” choices: ABC, ABD, ACD, BCD

So there are 20 possible choices, of which only 4 yield a good result.  Each of these numbers is a binomial coefficient.  The number 20 is the total number of 3-letter combinations, or 6C3.  The number 4 is the number of 3-letter combinations of only the first 4 “good” letters, or 4C3.  So it should be clear that in general, the number of total choices is NCr, where N is the number of items in the collection and r is the number of items we are picking, and the number of “good” choices is MCr, where M = N − r + 1.
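Those two coefficients are easy to sanity-check in code.  This little standalone snippet (not from the original derivation) uses the multiplicative form of nCr, which stays exact at every step of the loop:

```csharp
using System;

// Multiplicative nCr: after step i, result == C(n-r+i, i), which is always
// an integer, so the division never truncates.
long C(long n, long r)
{
    long result = 1;
    for (long i = 1; i <= r; i++)
        result = result * (n - r + i) / i;
    return result;
}

Console.WriteLine(C(6, 3));   // 20 possible 3-letter draws from 6 items
Console.WriteLine(C(4, 3));   // 4 "good" draws that avoid E and F
```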

So now the answer to the first question becomes easy…  If you only have 10 items to choose from at a time, this means that 5 were chosen from the original 10, and then one of those 5 goes away and will be replaced by a new (unseen) item, so there are 4 out of 10 that need to be avoided.  Thus in this case, N=10, r=5, and M=6.  And thus, the likelihood that you WON’T see any of the original 5 (but really 4) items is 6C5 / 10C5 = 6 / 252 = 1/42 ≈ 2.38%.

Wow… the answer really is 42!!!  (Sort of.)

Ok, so what about the second question…   well the answer is still easy, because it is the same formula…  except now the numbers get bigger…   N=20, r=5, M=16: 16C5 / 20C5 = 4368 / 15504 ≈ 28.17%.


Ok so what about the third problem?  Well it’s just a matter of plugging numbers in, but calculating binomial coefficients over and over for such large numbers really gets tiring…  and besides, like any good programmer, I wanted confirmation that I was doing my math right…   And that’s where we move to the fun part of this story.

Where’s the LINQ?

Ok so at this point I have two of the three answers, but I wanted to make sure I wasn’t screwing up the “easy” part.  I figured that this type of problem isn’t exactly custom-tailored for a LINQ query, but I figured I could have some fun and learn a bit more about LINQ by using “Enumerable.Range” and some LINQ extension methods.  So it’s just a matter of getting the code working…    But one thing that I really hate is using Visual Studio to create some dinky little project to do this sort of stuff – I’d much prefer a “snippet compiler” that will let me just type a bit of code and run it.  There are some programs around that do this, such as…  (wait for it…) SnippetCompiler.  It is a neat program because it basically lets you write a fully functional program without having to open VS.net.  But I found its IntelliSense somewhat clunky, and something about it always made me not like it too much, but it was still better than the alternative.  However, recently I have found a FANTASTIC application called LinqPad.  LinqPad can also compile snippets, but it is much more than just that…  It can dynamically evaluate LINQ expressions (or entire .NET programs), translate between LINQ syntax and lambda syntax, show you the generated IL, and can access databases on the fly so you are working with real data.  The author even touts it as a replacement for SSMS, and it certainly could be, but I think that’s going a bit far, since the results display isn’t quite tailored to that type of data, and I don’t think you can edit data in place (but I might be wrong).  It includes a lot of examples, and apparently you can download more.  BUT… one of the really nice things is that it can compile and execute single statements – they don’t even have to be full programs as they do in SnippetCompiler.  For example, you can just type in “Enumerable.Range(10,10).ToList().ConvertAll(x => new { X=x, XSquared = x*x })” and here’s what you get out:


Amazing!  And trust me… that’s only the tip of the iceberg.  (Note that the shaded line is a “total sum” of the column – I don’t know why they do that, but I don’t see a way to turn it off.)  LinqPad also provides IntelliSense, although that is a feature you have to purchase.

Now comes the crazy part…

OK so I now started writing up some LINQ, and I quickly realized that there was no Factorial function in Math or any of the other standard libraries, and there was no way I was getting by without it.  So my first hunt was to find a Factorial function.  Not just any factorial function, but one written in LINQ, of course.  (Note that this was only because I was having fun; LinqPad is fully capable of compiling and executing non-LINQ C# code, so I could have just written a function to do it.)  My hunt took an odd turn, though, because it seems that writing a recursive function in LINQ isn’t as easy as it is in good old-fashioned code.  But with a bit of magic and wizardry, it can be done.  I won’t try to explain it (because I don’t even come close to understanding it) but you can find the solution here.  They use something called a “YCombinator” to do it, but to keep things simple and clear, here is just the lambda that results from its use:

Func<int, int> factorial = Extensions.YCombinator<int, int>(fact => n => n < 2 ? 1 : n * fact(n - 1));

And a quick test of this shows that it works:

var N = Enumerable.Range(1,10).ToList().ConvertAll(x => new { x = x, factorial = factorial(x) });
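(For what it’s worth, plain C# also offers a less exotic route to a recursive lambda than the YCombinator: declare the delegate variable first, then assign it a lambda that refers back to that variable.  A standalone sketch, not from the original post:)

```csharp
using System;

// Declare the delegate first so the lambda body can legally refer to it.
Func<int, int> factorial = null;
factorial = n => n < 2 ? 1 : n * factorial(n - 1);

Console.WriteLine(factorial(10));   // 3628800
```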


Ok so now instead of building up a whole lot of factorial expressions, I’ll also define a lambda that handles the Combination function, like so:

Func<int, int, int> C = (n, r) => factorial(n) / factorial(r) / factorial(n - r);

var N = Enumerable.Range(3,10).ToList().ConvertAll(x => new { x = x, factorial = factorial(x), nC3 = C(x, 3) });

Coolio!!!  Alright now we are talking, because now I can verify my answers.  In order to do this, I’ll just calculate each of the values from N=10 to N=20 (why not!), along with the corresponding percentage calculation:

var N = Enumerable.Range(10,10).ToList();
int r = 5;
var y = N.ConvertAll(x => new {
    M = x-r+1,
    mCr = C(x-r+1, r),
    N = x,
    nCr = C(x, r),
    Frac = ((decimal)C(x-r+1, r) / C(x, r)).ToString("P2")
});

Oops!!  Divide by zero???  How in the heck could that happen??  So I commented out the only place there could be a divide by zero, the Frac calculation:

var N = Enumerable.Range(10,11).ToList();
int r = 5;
var y = N.ConvertAll(x => new {
    M = x-r+1,
    mCr = C(x-r+1, r),
    N = x,
    nCr = C(x, r),
    //Frac = ((decimal)C(x-r+1, r) / C(x, r)).ToString("P2")
});

Sure enough – there are the zeros…   but WHY – there’s no way that a binomial coefficient can produce zero!  …And the first several answers are all correct.  Then I realized the problem you always have when dealing with factorials – the numbers get big really fast.  Maybe int wasn’t the best choice…  so I switched to long:

static Func<long, long> factorial = Extensions.YCombinator<long, long>(fact => n => n < 2 ? 1 : n * fact(n - 1));
static Func<long, long, long> C = (n, r) => factorial(n) / factorial(r) / factorial(n - r);
var N = Enumerable.Range(10,11).ToList();
int r = 5;
var y = N.ConvertAll(x => new {
    M = x-r+1,
    mCr = C(x-r+1, r),
    N = x,
    nCr = C(x, r),
    Frac = ((decimal)C(x-r+1, r) / C(x, r)).ToString("P2")
});

Now that is more like it…  and so now I had the answers to TWO of the questions…   With 10 in the collection and choosing 5 at a time, you have only a 2.38% chance of not seeing one of the original items, and with 20 items you have a 28.17% chance of not seeing one of the original items.  So what would it take to get a 90% chance of not seeing one of the original items?  OK no big deal, I’ll just keep going…  Uhhhh…  wait a minute.  The long type ALSO has a maximum value…   I wondered how “long” would it be before I started getting mangled numbers?   Not very long it turns out.  In fact, the calculation goes bonkers on the very next value of N, N=21:

var N = Enumerable.Range(20,2).ToList();
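The breaking point is easy to confirm directly – 20! still fits in a signed 64-bit long, but 21! does not.  A quick standalone check (not from the original post), using checked arithmetic to surface the overflow instead of silently wrapping:

```csharp
using System;

long f = 1;
for (int n = 2; n <= 20; n++) f *= n;
Console.WriteLine(f);                 // 2432902008176640000 -- 20! still fits

try
{
    checked { f *= 21; }              // 21! exceeds long.MaxValue (~9.2e18)
    Console.WriteLine("no overflow");
}
catch (OverflowException)
{
    Console.WriteLine("21! overflows a long");
}
```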


So now what?  Double??  At first it appeared that it would work…   but RIGHT BEFORE I got to my answer of 90%, KABOOM

static Func<double, double> factorial = Extensions.YCombinator<double, double>(fact => n => n < 2 ? 1 : n * fact(n - 1));
static Func<double, double, double> C = (n, r) => factorial(n) / factorial(r) / factorial(n - r);
var N = Enumerable.Range(160,14).ToList();
int r = 5;
var y = N.ConvertAll(x => new {
    M = x-r+1,
    mCr = C(x-r+1, r),
    N = x,
    nCr = C(x, r),
    Frac = ((double)C(x-r+1, r) / C(x, r)).ToString("P2")
});


At this point, I want you to take notice of the fact that I’m already up to 160+ items in the collection…  even with this many, when choosing 5, and replacing 1 of those 5 with a new item, and then picking 5 more from the full collection, you still don’t quite have a 90% chance of NOT seeing one of the original 5.  REALLY?!?!  Out of 160+ items I’m only picking 5, trying to avoid 4 of them, and I don’t have a 90% chance of doing that!?!?!  No wonder I always lost at Battleship.  I could pick 5 items from the collection 10 times over and still not have picked one-third of the available items…   odd… but ok, I trust the math.

Anyway…  Still without an answer to the question, and coming SO close, I couldn’t drop it now.  So what is out there that’s larger than a double and will operate on my measly 32 bit machine?  Ahhh yes…  BigInt!  So that started my quest for a BigInt class for C#, and I was surprised how hard it was to find one.  Apparently, there used to be a BigInt included in the System.Core library in the pre-release versions of .NET 3.5, but it was taken out before the official release.  Too bad – that would have been the easiest way to go.  I did eventually find a C# BigInt here.

So now I plugged in my BigInt, and after a few tweaks to handle the implementation of the BigInt class and some tweaks for efficiency and display, I arrived at my answer:

var N = Enumerable.Range(190,11).ToList();
int r = 5;
var y = N.ConvertAll(x => new {
    M = x-r+1,
    mCr = C(x-r+1, r),
    N = x,
    nCr = C(x, r)
});
var z = y.ConvertAll(yy => new {
    M = yy.M,
    mCr = yy.mCr.ToString(),
    N = yy.N,
    nCr = yy.nCr.ToString(),
    Frac = (double.Parse(yy.mCr.ToString()) / double.Parse(yy.nCr.ToString())).ToString("P2")
});


Thus…   I calculate that you would need a collection of 194 items from which to pull 5 items in order to have a >90% chance that you wouldn’t see one of the original 5 (really 4, since one is taken away).  That seems totally counter-intuitive to me, but this is apparently a good example of diminishing returns! 
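That crossover is easy to double-check today with System.Numerics.BigInteger (which shipped in .NET 4, after this post went hunting for a third-party BigInt).  A standalone sketch using the multiplicative form of nCr, which sidesteps the huge intermediate factorials entirely:

```csharp
using System;
using System.Numerics;

// Multiplicative nCr in BigInteger -- exact at every step.
BigInteger C(int n, int r)
{
    BigInteger result = 1;
    for (int i = 1; i <= r; i++)
        result = result * (n - r + i) / i;
    return result;
}

// Probability of avoiding all previously-seen items: MCr / NCr, M = N-r+1.
double Frac(int n)
{
    const int r = 5;
    return (double)C(n - r + 1, r) / (double)C(n, r);
}

Console.WriteLine(Frac(193));   // just under 0.90
Console.WriteLine(Frac(194));   // just over 0.90
```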

My next quest…   to find Factorial/Combination classes (probably utilizing BigInt) that can efficiently exploit the fact that although calculations of this kind involve HUGE numbers if you work them out directly as I have here, they actually represent relatively small numbers, since factors often cancel out.  In fact, I probably could have calculated the answer on paper faster than it took me to write the code (assuming I knew what the answer was!).  The above query didn’t take that long to run on my machine (almost 25 seconds) but it took me a while to figure out where the right value was.  The point is, with specially coded Factorial and Combination classes, I think this could be much faster and much more efficient.

Finally, along the course of this, I also found a nice LaTeX Equation Image Generator on the web that I used for the math images… actually I found 3 or 4, but this was the only one that seemed to be working, so thanks to Roger!


~ Michael D. Bray

Posted in .NET, Math, Snippets, Uncategorized | Leave a Comment »

Anonymous Workflow Execution in Sharepoint 2007

Posted by Shaun McDonnell on December 24, 2008

A colleague of mine had a simple task:

Create an InfoPath form that will be uploaded to Sharepoint 2007 and then allow anonymous internet users to submit that form.  Upon submission, send the respective anonymous user an email detailing out the form they filled out.

Not so simple.

Well, that’s not entirely true.  Allowing an anonymous user to submit is easy.  It was the automated Sharepoint Workflow that executed after the form was submitted that was a problem.  Why?  Workflows, by default and by design, cannot be executed anonymously.

Enter .NET, C# and me.

The requirement:

Add some custom code to Sharepoint 2007 that will allow the anonymous execution of Sharepoint Workflows.  Furthermore, create a WebPart that will give an administrative user the ability to assign anonymous workflow execution to a Sharepoint List.

Begin interesting learning process.

If you create a workflow in Sharepoint 2007 and then assign the workflow to be executed upon the ‘ItemAdded’ event of the Document Library, the workflow will not fire.  What is even better is that you will get no immediate error from Sharepoint.  You’ll just come to the realization that your workflow never executed.  But never fear!  The Microsoft Windows Event Log is here!  Here’s a snapshot of what the event log will give you:


Yes, do not adjust your monitor.  The text actually reads “Attempted to read or write protected memory.  This is often an indication that other memory is corrupt.”

Before you decide that your server needs new memory because a stick has gone bad, realize that what this means is a process tried to execute a workflow but that process did not have the rights to do so.  Duh.

What has to be done at this point is the creation of a custom Sharepoint EventReceiver.  An EventReceiver can be assigned to any type of Sharepoint List, and when the appropriate event is executed the EventReceiver kicks in.  To start the process of developing a custom EventReceiver, you create a class (i.e. ‘WorkflowReceiver’) that inherits from Microsoft.SharePoint.SPItemEventReceiver:

public class WorkflowReceiver : SPItemEventReceiver

When inheriting from this class there are a number of virtual methods you can override and take advantage of.  For this article, we’re just going to look at the ItemAdded method.  This method executes after a new document has been added to the document library that we will eventually assign this custom EventReceiver to.  Here is an example:

public override void ItemAdded(SPItemEventProperties properties)
{
    base.ItemAdded(properties);
    StartWorkflowAnon(properties);
}

The reason I call the respective base method is so that any additional custom processing that might be assigned to this document library can continue before I do my custom work.  This is not required but recommended for lifecycle maintainability.

As the code shows, I then call a method named ‘StartWorkflowAnon’.  This is a custom method that looks like this:

void StartWorkflowAnon(SPItemEventProperties properties)
{
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite site = new SPSite(properties.WebUrl))
        using (SPWeb web = site.OpenWeb())
        {
            web.AllowUnsafeUpdates = true;

            SPList list = web.Lists[properties.ListId];
            Guid wfId = new Guid(web.Properties[list.ID.ToString() + ";" + properties.EventType.ToString()]);
            SPWorkflowAssociation assoc = list.WorkflowAssociations[wfId];

            site.WorkflowManager.StartWorkflow(properties.ListItem, assoc, assoc.AssociationData);
            web.AllowUnsafeUpdates = false;
        }
    });
}

The critical code here is SPSecurity.RunWithElevatedPrivileges as this allows whatever code executing within its delegate to have the same execution rights as an authenticated user – exactly what we need to execute a workflow as an anonymous user.

A few items that we gather from the SPItemEventProperties help us determine which Sharepoint List executed the event, its URL, the event executed, and all workflows associated with the list.  Notice that when creating the wfId (workflowId) Guid we are pulling some custom properties from the current Sharepoint Website.  These properties are created by the WebPart that we’ll look into next.  Using the wfId we are able to pull the Sharepoint Workflow that we want to execute from the current Sharepoint List.  Then, we execute that respective workflow with elevated rights. 

Boom!  We now have anonymous workflow execution.  But how did it get assigned to the ItemAdded event in this list?

The Administrative WebPart

I found this to be the most difficult task to complete because creating even a basic WebPart is not easy in Sharepoint.  All html must be hardcoded and rendered via C#, which can be time-consuming.  Nevertheless, it has to be done.

Custom WebParts can be created using the Microsoft.SharePoint.WebPartPages.WebPart as a base object for your custom WebPart class.  Let’s call this WebPart ‘ReceiverManagement’ and thus our class definition would look like this:

public class ReceiverManagementWebPart : Microsoft.SharePoint.WebPartPages.WebPart

For an administrative user to have the ability to assign anonymous workflow execution to an event on a Sharepoint List, as well as the ability to remove workflow execution from lists, we are going to need the following controls in the custom WebPart:

  • DropDownList of Sharepoint Lists
  • DropDownList of Sharepoint List Events
  • DropDownList of Sharepoint Workflows
  • Add Button
  • DataGrid of current assignments

You need to override the ‘CreateChildControls’ method of the base class to create the UI.  Here is what I have:

protected DropDownList lstSiteLists = null;
protected DropDownList lstWorkflows = null;
protected DropDownList lstListEvents = null;
protected Button btnAddReceiver = null;
protected Label lblEventName = null;
protected Label lblWorkflowName = null;
protected DataGrid gvCurrentReceivers = null;
protected Table tblReceiverManagement = null;

protected override void CreateChildControls()
{

    lstSiteLists = new DropDownList();
    lstSiteLists.ID = Strings.SiteListsControlID;
    lstSiteLists.AutoPostBack = true;
    lstSiteLists.DataTextField = Strings.SiteListsTitle;
    lstSiteLists.DataValueField = Strings.SiteListsID;
    lstSiteLists.DataSource = CurrentWeb.Lists;
    lstSiteLists.SelectedIndexChanged += new EventHandler(lstSiteLists_SelectedIndexChanged);

    lstListEvents = new DropDownList();
    lstListEvents.ID = Strings.ListEventControlID;

    lstWorkflows = new DropDownList();
    lstWorkflows.ID = Strings.WorkflowsControlID;
    lstWorkflows.DataTextField = Strings.WorkflowsListDataTextField;
    lstWorkflows.DataValueField = Strings.WorkflowsListDataValueField;

    gvCurrentReceivers = new DataGrid();
    gvCurrentReceivers.ID = Strings.CurrentReceiversControlID;
    gvCurrentReceivers.CellPadding = 1;
    gvCurrentReceivers.CellSpacing = 0;
    gvCurrentReceivers.AutoGenerateColumns = false;
    gvCurrentReceivers.DeleteCommand += new DataGridCommandEventHandler(gvCurrentReceivers_DeleteCommand);
    ButtonColumn btnRemoveColumn = new ButtonColumn();
    btnRemoveColumn.ButtonType = ButtonColumnType.PushButton;
    btnRemoveColumn.CommandName = Strings.DeleteCommandName;
    btnRemoveColumn.Text = Strings.DeleteCommandName;
    BoundColumn bcListId = new BoundColumn();
    bcListId.DataField = Strings.CurrentReceiversListIdDataField;
    bcListId.Visible = false;
    BoundColumn bcWorkflowId = new BoundColumn();
    bcWorkflowId.DataField = Strings.CurrentReceiversWorkflowIdDataField;
    bcWorkflowId.Visible = false;

    BoundColumn bcReceiverId = new BoundColumn();
    bcReceiverId.DataField = Strings.CurrentReceiversReceiverIdDataField;
    bcReceiverId.Visible = false;
    BoundColumn bcListName = new BoundColumn();
    bcListName.DataField = Strings.CurrentReceiversListTitleDataField;
    bcListName.HeaderText = Strings.CurrentReceiversListTitleHeaderText;
    bcListName.HeaderStyle.HorizontalAlign = HorizontalAlign.Center;
    bcListName.HeaderStyle.VerticalAlign = VerticalAlign.Middle;
    bcListName.HeaderStyle.Font.Bold = true;
    bcListName.ItemStyle.HorizontalAlign = HorizontalAlign.Center;
    bcListName.ItemStyle.VerticalAlign = VerticalAlign.Middle;
    BoundColumn bcReceiverType = new BoundColumn();
    bcReceiverType.DataField = Strings.CurrentReceiversEventTypeDataField;
    bcReceiverType.HeaderText = Strings.CurrentReceiversEventTypeHeaderText;
    bcReceiverType.HeaderStyle.HorizontalAlign = HorizontalAlign.Center;
    bcReceiverType.HeaderStyle.Font.Bold = true;
    bcReceiverType.HeaderStyle.VerticalAlign = VerticalAlign.Middle;
    bcReceiverType.ItemStyle.HorizontalAlign = HorizontalAlign.Center;
    bcReceiverType.ItemStyle.VerticalAlign = VerticalAlign.Middle;

    BoundColumn bcWorkflowName = new BoundColumn();
    bcWorkflowName.DataField = Strings.CurrentReceiversWorkflowNameDataField;
    bcWorkflowName.HeaderText = Strings.CurrentReceiversWorkflowNameHeaderText;
    bcWorkflowName.HeaderStyle.HorizontalAlign = HorizontalAlign.Center;
    bcWorkflowName.HeaderStyle.VerticalAlign = VerticalAlign.Middle;
    bcWorkflowName.HeaderStyle.Font.Bold = true;
    bcWorkflowName.ItemStyle.HorizontalAlign = HorizontalAlign.Center;
    bcWorkflowName.ItemStyle.VerticalAlign = VerticalAlign.Middle;


    tblReceiverManagement = new Table();
    tblReceiverManagement.ID = Strings.ReceiverManagementTableControlID;
    tblReceiverManagement.CellPadding = 0;
    tblReceiverManagement.CellSpacing = 0;
    tblReceiverManagement.BorderWidth = new Unit(0);

    TableCell tdListLabel = new TableCell();
    tdListLabel.Text = Strings.ListsLabelText;

    TableCell tdLists = new TableCell();

    TableCell tdEventsLabel = new TableCell();
    tdEventsLabel.Text = Strings.EventsLabelText;

    TableCell tdEvents = new TableCell();

    TableCell tdWorkflowsLabel = new TableCell();
    tdWorkflowsLabel.Text = Strings.WorkflowsLabelText;

    TableCell tdWorkflows = new TableCell();

    TableCell tdReceivers = new TableCell();
    tdReceivers.RowSpan = 2;

    btnAddReceiver = new Button();
    btnAddReceiver.Text = "Add";
    btnAddReceiver.Click += new EventHandler(btnAddReceiver_Click);

    TableCell tdButton = new TableCell();
    tdButton.RowSpan = 2;

    TableRow trListsRow = new TableRow();

    TableRow trEventsRow = new TableRow();

    TableRow trWorkflowsRow = new TableRow();

    TableRow trButtonRow = new TableRow();

    TableRow trReceiversRow = new TableRow();

    // ...the cells are added to their rows and the rows to the table here
    // (the full assembly code is in the download linked at the end of this article)...
    Controls.Add(tblReceiverManagement);
}
All of this code successfully creates the UI of the WebPart.  Here is the BindData method:

protected void BindData()
{
    LogHelper.WriteDebug("BindData() called.");
    // ...
}

Within the BindData method I am binding the respective lists, events, workflows and existing assignments to their controls.  You can download that code using the link at the end of this article.  For now, let’s take a look at what happens when the ‘Add’ button is clicked:
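For reference, here is a rough sketch of what BindData does.  The hard-coded event names and the exact binding calls are my assumptions for illustration only – the downloadable code is the authoritative version:

```csharp
protected void BindData()
{
    LogHelper.WriteDebug("BindData() called.");

    // Sharepoint lists on the current web (DataTextField/DataValueField
    // were configured in CreateChildControls)
    lstSiteLists.DataSource = CurrentWeb.Lists;
    lstSiteLists.DataBind();

    // Event types the administrator can hook; the names must match
    // SPEventReceiverType values so Enum.Parse works later
    lstListEvents.DataSource = new string[] { "ItemAdded", "ItemUpdated", "ItemDeleting" };
    lstListEvents.DataBind();

    // Workflow associations on the currently selected list
    SPList selected = CurrentWeb.Lists[new Guid(lstSiteLists.SelectedValue)];
    lstWorkflows.DataSource = selected.WorkflowAssociations;
    lstWorkflows.DataBind();
}
```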

protected void btnAddReceiver_Click(object sender, EventArgs e)
{
    SPList list = CurrentWeb.Lists[new Guid(lstSiteLists.SelectedValue)];

    string key = String.Format("{0};{1}", lstSiteLists.SelectedValue, lstListEvents.SelectedValue);
    if (CurrentWeb.Properties.ContainsKey(key))
    {
        CurrentWeb.Properties.Remove(key);
    }

    CurrentWeb.Properties.Add(key, lstWorkflows.SelectedValue);

    SPEventReceiverType receiverType = (SPEventReceiverType)Enum.Parse(typeof(SPEventReceiverType), lstListEvents.SelectedValue);
    list.EventReceivers.Add(receiverType, Assembly.GetExecutingAssembly().FullName, CustomEventReceiverClassName);
}

It is here where we create the information needed by the WorkflowReceiver (see class above) to execute the anonymous workflow.  Let’s step through the above method.

The CurrentWeb object is a local property that just retrieves the current SPWeb object where the WebPart resides.  So, first we are retrieving the list the user selected in the DropDownList that they wish to assign the workflow to.  Then, I create a unique key that is formed by the unique Guid of the list they selected, a semicolon delimiter and then the name of the event they chose. 

I then check the custom properties of the CurrentWeb to see if that key already exists.  If so, I remove that property from the web because this is a duplicate event assignment and then I proceed to re-add the same property to the CurrentWeb.  I could have chosen to inform the user that the assignment they are creating already exists but this was easier.

Using the SPEventReceiverType enumeration, I am able to parse the text-based name of the event the user chose from the dropdown into its enumeration value.  Then, I add a new receiver to the user-selected list, passing it the type of event, the fully-qualified name of the assembly that contains our custom EventReceiver (in this case, the same assembly where the current code is executing), and finally a constant string that contains the full name of the class (or type) to be executed when the event fires.

Our custom EventReceiver (WorkflowReceiver) is now wired up correctly and ready for production.

Deploying this code to a Sharepoint 2007 Server is a little tricky.  First, all of your assemblies need to be strongly signed and uniquely versioned.  These assemblies then need to be installed into the GAC (Global Assembly Cache) on the same server from which Sharepoint 2007 is running.  Finally, you need to edit the main web.config for Sharepoint and add the following SafeControl entry:

<SafeControl Assembly="Cti.SharepointServices.ReceiverManagement, Version=, Culture=neutral, PublicKeyToken=e59deda35424cfa7" Safe="True" />

Adding your new assembly as a SafeControl ensures that Sharepoint can trust this assembly and therefore allow its execution.

Lastly, you need to create a dwp file so you can import your new WebPart into Sharepoint.  Your dwp file should contain something like this:

<?xml version="1.0" encoding="utf-8"?>
<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2">
    <Assembly>Cti.SharepointServices.ReceiverManagement, Version=, Culture=neutral, PublicKeyToken=e59deda35424cfa7</Assembly>
    <Title>Workflow Customization</Title>
</WebPart>
You can then import this DWP file from the Sharepoint interface and then drag your WebPart onto the page.  It should look something like this:


Once assigned, your workflows will now execute when submitted by anonymous users.

To download the code that relates to this post, go here.



Posted in .NET, Sharepoint, Sharepoint Workflows, Snippets, Tips | 13 Comments »


Posted by Shaun McDonnell on December 9, 2008

I am in the process of re-installing my operating system here at work and so I decided to post a list of all the different tools I always install on every computer that I use.  Here it is:

That’s it for now, I will post more if I can remember them.


Posted in .NET, Tips | 1 Comment »

Object Inheritance in Entity Framework

Posted by Michael Bray on August 16, 2008

I started playing with the Entity Framework that was officially released a few days ago along with VS2008 SP1 and .NET 3.5 SP1.  After playing with Linq to SQL, there was one particular feature that I was interested in – the multi-table object inheritance capabilities.  Linq to SQL had the ability to provide inherited types, but the underlying data source stored all of the different types in the same table – so-called Table-Per-Hierarchy (TPH) storage.  In this mechanism, a discriminator field in the database distinguishes the type that each row represents.  This leads to artifacts such as a lot of NULL fields in the database – not such a bad thing, as NULL values don’t take up much (if any) space, but the table structure is very flat, and not so accommodating if your object hierarchy is more than a few levels deep.

Linq to Entities provides a new type of storage, Table-Per-Type (TPT) that provides a much higher fidelity mapping to the underlying tables as compared to the business classes that are being worked with.  In this mode, there is a separate table for each level of the class hierarchy.  For deep class hierarchies this seems to make more sense.  The down side is that at the very root of the class hierarchy you end up with a table that is gi-normous.

One of the things that I plan to do to mitigate the size of that base table is to link the objects with Guids (uniqueidentifier) instead of ints.  By doing this, I have a root object that has a Guid, and two other properties that I want to be common on all objects – DateCreated and DateModified.  To maximize efficiency, the Guid column is the primary key, and is also the RowGUID.  The derived tables have the same characteristic Guid RowGuid column, and this is the value that I use to link the tables together in a 1-to-1 relationship.  This way I really have a true hierarchy without any additional data storage overhead for the key field. 

Another nice benefit that this provides is that any object in any table can make reference to any other object, and I know that I can find that object.  I also plan to control some of the portions of the Guid, and I’ll use that to identify the type of object (eg Person, Account, Order, etc) so I can know exactly what type of object it is and can load it accordingly.
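As an illustration of that idea (a sketch of my own, not code from this project), a type marker can be stamped into a couple of bytes of each new Guid and read back later:

```csharp
// Hypothetical: reserve the first two bytes of the Guid as a type code
// (e.g. 1 = Person, 2 = Account, 3 = Order).
static Guid NewTypedGuid(ushort typeCode)
{
    byte[] bytes = Guid.NewGuid().ToByteArray();
    bytes[0] = (byte)(typeCode & 0xFF);   // low byte of the type code
    bytes[1] = (byte)(typeCode >> 8);     // high byte of the type code
    return new Guid(bytes);
}

static ushort TypeCodeOf(Guid id)
{
    byte[] bytes = id.ToByteArray();
    return (ushort)(bytes[0] | (bytes[1] << 8));
}
```

Note that reserving bytes this way removes some of the randomness in the Guid, and patterned Guids can influence index fragmentation – a trade-off to weigh before committing to the scheme.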

Here’s an example hierarchy:


Note that DbObject is the root of the hierarchy, and assuming that I maintain this structure for all objects in the database, it will grow VERY large.  Is this a problem?   Possibly, but I expect SQL can deal with it.  The main concern is that any time I load one of these objects using Linq to Entities, the SQL query will include a JOIN to the DbObject table, which could be a killer, but possibly not as bad as you might think.  Initial testing indicates that I can load the entire Cisco parts database (which is more than 330,000 items and includes about 15 properties per item) in about 23 seconds.  Not great if the user is waiting, but how often am I going to load the entire database?  Not too often, I hope.  But it isn’t too bad either…  I suspect the use of the ObjectId as the joining field (and the fact that it is the primary key and RowGuid) are significantly helping that result time.

The other idea that I have to help mitigate that huge JOIN is that I may generate two nearly identical models – one with the DbObject and one without it.  This of course doesn’t affect the underlying database – it just affects how Linq to Entities interacts with it.  In that model, it will never attempt to JOIN the DbObject table.  If I’m only looking at the data (not adding a new object or updating it) then I have no need for the data contained in DbObject.  Of course, it also means that I’ll need to have two different models for every object (eg Person and PersonNRO (NRO = NoRootObject)) but it might be worth the gain. This will of course double the work to maintain the structure when changes are made, but I think that I could use a tool or write one myself that would generate the .edmx file for both objects based on a common definition. That’s a project for a rainy day, though.

Anyway, the one odd thing I’ve found as I’ve started working with this stuff is that in the normal model (with the DbObject), the Entity Container object ONLY provides access to root objects (in this case, a member called DbObjectSet).  So you don’t get a direct reference to any of the derived types at all:


Note that there is no ‘PersonObjectSet’, ‘EmployeeObjectSet’, ‘ContactObjectSet’ or ‘AccountObjectSet’.  So how do you load those types of entities?  You have to use a function that is provided on DbObjectSet:

Model1Container c = new Model1Container(); 
var people = from p in c.DbObjectSet.OfType<Person>() 
             select p; 

This will return a list of all the objects in the People table (along with the relevant data from DbObject) as one single class ‘Person’.  Amazing!
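OfType<T>() also composes with the rest of a Linq to Entities query, so you can filter and sort on both inherited and derived properties.  In this sketch, DateModified comes from DbObject as described above, while ‘HireDate’ is an assumed Employee property for illustration:

```csharp
Model1Container c = new Model1Container();

// Employees (a derived type) modified in the last week,
// ordered by an assumed 'HireDate' property on Employee.
var recentEmployees = from emp in c.DbObjectSet.OfType<Employee>()
                      where emp.DateModified > DateTime.Now.AddDays(-7)
                      orderby emp.HireDate
                      select emp;
```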

Posted in .NET | Tagged: , | 2 Comments »

Annoying <Script> Tag Bug When Using Client-Side Web Services

Posted by Shaun McDonnell on August 11, 2008

This one took me hours to figure out and there is no rhyme or reason to it.  Yet, I have been able to reproduce it consistently so I thought I would share the details here.

Microsoft .NET 3.5 introduced a new Attribute called [ScriptService] that we can put at the top of our WCF Services or our ASMX Web Services.  The addition of this attribute to a service allows for that service to be called from the client-side of a web application using javascript.

[ScriptService]
public class Service : System.Web.Services.WebService

Behind the scenes, .NET creates some javascript and automatically embeds it in your page.  You can actually see the javascript it generates by adding ‘/js’ to the end of the ASMX url like this:


If you want to use the client-side accessible web service from a pure HTML page (or something else that isn’t .NET) you can actually make a direct reference to that javascript path like this:

 <script language="javascript" src="http://services.customers.ctiusa.com/cisco/ipphoneservice.asmx/js" type="text/javascript" />

However, this won’t work and the browser won’t be able to find the javascript reference.  Why?  Because it doesn’t like ‘/>’ for ending the script tag.  It wants the full ‘</script>’ closing tag, like this:

 <script language="javascript" src="http://services.customers.ctiusa.com/cisco/ipphoneservice.asmx/js" type="text/javascript"></script>

Once you do that, you will be able to access your web service through pure asynchronous javascript calls.



Posted in .NET, Bugs, Tips | 1 Comment »