Thursday, December 12, 2013

C# Puzzle No.23 (intermediate)

Closures are interesting and helpful. In short, a closure is a function, exposed to the outside world, that captures part of its enclosing environment, even though that environment may be private and inaccessible from the outside.
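To make the definition concrete, here is a minimal sketch (the names are mine and are not part of the puzzle) of a C# closure capturing a local variable that is otherwise invisible to the caller:

```csharp
using System;

public static class ClosureDemo
{
    // The count variable is private to CreateCounter, yet the returned
    // delegate captures it, keeps it alive after the method returns,
    // and can still read and modify it.
    public static Func<int> CreateCounter()
    {
        int count = 0;          // captured by the lambda below
        return () => ++count;
    }
}
```

Calling `var next = ClosureDemo.CreateCounter();` and then `next()` three times yields 1, 2, 3 – the captured count survives between calls even though CreateCounter has long returned.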

This puzzle involves the following short snippet, inspired by the book JavaScript: The Definitive Guide

// create an array of 10 functions
static Func<int>[] constfuncs()
{
    Func<int>[] funcs = new Func<int>[10];
    for ( var i = 0; i < 10; i++ )
        funcs[i] = () => i;
    return funcs;
}

var funcs = constfuncs();
for ( int i = 0; i < 10; i++ )
    Console.WriteLine( funcs[i]() );

// output:
// 10
// 10
// ...
// 10

Side note: closures in JavaScript work very similarly, and the output of the corresponding JavaScript snippet would be the same

function constfuncs() {
  var funcs = [];
  for(var i = 0; i < 10; i++)
    funcs[i] = function() { return i; };
  return funcs;
}

var funcs = constfuncs();
for ( var i = 0; i < 10; i++ )
    console.log( funcs[i]() );

End of side note

Your task here is not only to explain the behavior (the easy part) but also to correct the inner loop of the constfuncs method so that the code outputs consecutive values, 0 through 9.
More specifically, you are allowed to modify only this line of code

funcs[i] = () => i;

You are free to propose a solution for the JavaScript version as well, if you are interested. The book doesn’t provide one.

Monday, December 2, 2013

Basic tests in Apache JMeter part 2/2

In the previous post we recorded a basic JMeter test. This time we start by adding an assertion to validate test results.

Multiple assertions can be added to each request. However, in our simple demo we need a single assertion on the last request, just to check whether or not the user is successfully logged into the application.

Right click the last request and add a Response Assertion. In my particular case, the last page renders a message with the logged-in user name, so in the Response Assertion panel I add a Pattern to test: the message I expect to see on the last page of the session. If you run the test now, it will most probably fail, because the controller lacks a cookie manager. Add one (Add/Config Element/HTTP Cookie Manager) and check the “Clear cookies at each iteration?” checkbox on the cookie manager configuration tab.

The screenshot shows the Response Assertion configuration tab with the pattern to test set to my custom expected message (it says “The logged in user is <username>” in Polish).

Response Assertion with custom text

If you run the test now and check the results tree, you will most probably see a green icon beside all requests, which indicates success. Play with the assertion pattern to verify that the assertion really works – a failed assertion marks the request with a red icon in the results tree view.

The last element of the tutorial is the modification of POST requests. In my particular scenario, one of the responses returns a SAML token which should be POSTed in the very next request. However, the recorded session replays the same token every time – the token that was returned in the response body while my session was being recorded. This is because JMeter only records and replays requests; it has no knowledge that one of the returned parameters should be POSTed in the next request. Because of that, my recorded session works correctly for a few hours, after which the SAML token is no longer accepted by the target server (the server complains that the token is too old).

I start by locating the request that returns the SAML token. I can use the View Results Tree which shows detailed requests and responses. Here it is, the SAML returned in the response of one of my Default.aspx pages:

SAML token response

I right click the node corresponding to the request under my controller and add a post processor (Add/Postprocessors/Regular Expression Extractor). The postprocessor allows me to provide a regular expression and assign its match value to a variable I can use in consecutive requests. At the postprocessor configuration tab I provide necessary parameters: name – SAMLToken, regular expression – name=”wresult” value=”(.*)” /><input (to capture the whole SAML token), template – $1$, match – 1, default value – NOVALUE (more on the extractor here).

Extractor configuration
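Outside JMeter, the extractor’s regular expression can be sanity-checked in a few lines of C#; the response fragment used below is a made-up stand-in for the real response body:

```csharp
using System.Text.RegularExpressions;

public static class ExtractorCheck
{
    // The same pattern as configured in the JMeter extractor.
    const string Pattern = "name=\"wresult\" value=\"(.*)\" /><input";

    // Returns the captured token, or the extractor's default value
    // when the pattern does not match (mirroring JMeter's behavior).
    public static string ExtractToken( string responseBody )
    {
        var match = Regex.Match( responseBody, Pattern );
        return match.Success ? match.Groups[1].Value : "NOVALUE";
    }
}
```

Feeding it a fragment like `name="wresult" value="TOKEN" /><input ...` returns `TOKEN`, while a non-matching body returns the configured default, `NOVALUE`.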

Then I go to the very next request (LoginPage.aspx) and at the Parameters tab I inspect the list of posted parameters. In my example there are three parameters sent to the server, wa, wresult and wctx, all three have fixed values taken from the recorded session. I am going to modify the wresult parameter to refer to the newly created variable, SAMLToken.

I have two options: I can provide the bare value (${SAMLToken}) or an unescaped value (${__unescapeHtml(${SAMLToken})}); I choose the latter.

Referencing the parameter

And this is it: the token value read from the response is correctly referenced in the consecutive request. There are other JMeter functions that can be used there.

I can now configure the thread group to simulate more concurrent users and verify the correctness and the performance under different load conditions.

Basic tests in Apache JMeter part 1/2

Apache JMeter is a fine tool for different types of web tests – compatibility, performance, integration. This post is about recording, running and parametrizing basic tests.

First, run JMeter; you begin with an empty Test Plan. Give it a name and add a Thread Group to the Test Plan (right click the Test Plan node and select Add/Threads/Thread Group).

Thread Group

A Thread Group represents a set of “users”. As you can see, you can modify the number of threads (users) and the ramp-up period. This is handy for testing your application with multiple concurrent clients.

Next step would be to set up an HTTP Proxy Server and a Recording Controller. These two will allow me to “record” the browser’s session instead of creating all requests manually.

Right-click the WorkBench node and add a Non-Test Element, HTTP Proxy Server. Add a Recording Controller to the Thread Group. In the HTTP Proxy’s settings tab, select the newly created Recording Controller as the target controller.

Target Controller

Now start the proxy (the Start button at the bottom of the settings tab), open up your web browser (any browser will do) and set the browser’s proxy to localhost:8080. If you are on Firefox, remember to set the proxy for each protocol (including HTTPS) (Tools/Options/Advanced/Network):

Firefox network setup

Now navigate to a web site you want to test. Each action is recorded under the recording controller.

To be able to replay the sequence of tests and see the results, add a listener to the thread group, the View Results Tree listener. Now you can click the Start button (or select Run/Start from the menu) and click your listener to see the replayed session. Use the Clear All button (Run/Clear All) to clear the listener.

Recorded session in the tree listener

In the next post we will create basic assertions and also modify the flow so that values returned in responses are used in consecutive POST requests.

Thursday, November 21, 2013

X509Certificate2 certificate conversions

X509 certificates are useful for many common tasks. Some time ago I’ve blogged on how to create certificates programmatically and how to sign and verify XML data in an interoperable way.

There are a few common tasks involving certificates. Let’s go through them.

To access a system store and enumerate it:

X509Store store = new X509Store( StoreName.My, StoreLocation.CurrentUser );
store.Open( OpenFlags.ReadOnly );
foreach ( var cert in store.Certificates )
    Console.WriteLine( cert.ToString() );

To access a file system store (*.pfx) and enumerate certificates:

X509Certificate2Collection store = new X509Certificate2Collection();
store.Import( @"c:\filename.pfx", "password", X509KeyStorageFlags.DefaultKeySet );
foreach ( var cert in store.OfType<X509Certificate2>() )
    Console.WriteLine( cert.ToString() );

To load a single certificate from a file system store (*.pfx):

X509Certificate2 cert = 
  new X509Certificate2( @"c:\filename.pfx", "password", X509KeyStorageFlags.MachineKeySet );
Console.WriteLine( cert.ToString() );

To export an X509Certificate2 object to a file store (*.pfx), with the private key, protected with a password:

X509Certificate2 cert = ...;
File.WriteAllBytes( "cert.pfx", cert.Export( X509ContentType.Pkcs12, "foo" ) );

To export only the public key of an X509Certificate2 to a file (*.cer):

X509Certificate2 cert = ...;
File.WriteAllText( "cert.cer", Convert.ToBase64String( cert.Export( X509ContentType.Cert ) ) );

And last but not least, if you have a certificate in Base64 form (for example from ADFS2 federation metadata), just create a blank text file with the *.cer extension, paste in the Base64 certificate and save it. The file can then be used from within the Windows shell.

(a side note here: although most web sources claim that Base64-encoded certificates in text form need the -----BEGIN CERTIFICATE----- preamble and the -----END CERTIFICATE----- trailer, this is not necessary).
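The same observation holds in code; as a quick sketch (the helper name is mine), an X509Certificate2 can be built straight from the bare Base64 text:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

public static class CertFromBase64
{
    // Builds a certificate directly from the bare Base64 text -
    // no BEGIN/END CERTIFICATE markers are involved, since the
    // decoded bytes are simply the DER-encoded certificate.
    public static X509Certificate2 Load( string base64Text )
    {
        byte[] raw = Convert.FromBase64String( base64Text );
        return new X509Certificate2( raw );
    }
}
```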

For example, the first googled certificate from here, stored in a cert.cer text file

and double clicked from the OS shell opens up as

Monday, November 18, 2013

MSB3147: Could not find required file 'setup.bin' on a x64 machine [.NET 4.5, cont.]

Over two years ago I’ve blogged on how to work around the problem with building ClickOnce applications on a x64 build server. The solution was to manually create a registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\GenericBootstrapper\4.0 and add the Path value c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bootstrapper\ to it.

This worked for two years, and last week we were hit with the issue once again. The reason? Well, .NET 4.5 had been installed on the server.

It seems that .NET 4.5 changes the way applications are built (a new version of MSBuild?). It no longer expects the old v7.0A SDK; instead it expects the v8.0A SDK.

Unfortunately, according to this table, there is no official installer for the v8.0A SDK. It is installed with VS2012 Update 2.

The solution was to:

1) Manually copy the contents of the c:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\Bootstrapper\Engine folder from a dev machine to the build server

2) Manually create a registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\GenericBootstrapper\11.0 with the Path value set to C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\Bootstrapper\

Important note! Although the setup.bin is physically inside the Bootstrapper\Engine subfolder, the Path points to Bootstrapper rather than Bootstrapper\Engine.

Important note 2! For some reason, the bootstrapper key corresponding to the v7.0A SDK has the 4.0 suffix, while the key corresponding to the v8.0A SDK ends with 11.0. A lot of unnecessary confusion, and I am glad this works despite it.

Thursday, November 14, 2013

Basic Authentication Module with custom Membership Provider

The goal of this post is to document the “not-quite-practical” possibility of replacing the Forms Authentication Module with a “401 Challenge” authentication module while still being able to use a custom membership provider.

This clarification could possibly dispel a common confusion – in ASP.NET, the 401 Challenge authentication is often confused with the Windows authentication scheme. The confusion stems from the fact that the authentication module and the membership provider are two distinct responsibilities:

- your authentication could be “401 Challenge” based or cookie based (or even anything-else-based)

- your users’ credentials can be validated against AD or against a custom user store

Now, if you think about it, 4 combinations seem to make sense:

                          Windows membership                    Custom membership
    302 Redirect (cookie) ActiveDirectoryMembershipProvider,    any custom membership provider,
                          <authentication mode="Forms">         <authentication mode="Forms">
    401 Challenge         Windows authentication,               ? (no built-in support)
                          <authentication mode="Windows">

You can have cookie based (forms) authentication with any membership provider, including the built-in ActiveDirectoryMembershipProvider. You can have “401 Challenge” based authentication with Windows accounts.

But what about “401 Challenge”-based authentication with a custom membership provider? ASP.NET has no built-in solution for this. Out of the box, you can have 401 Challenge-based authentication only for Windows accounts, in Windows authentication mode.

This is why some people think that 401 Challenge basic authentication is only possible with Windows accounts.

Does it make sense?

Good question. There are many drawbacks to 401 Challenge authentication, just to name a few:

- the login window is not customizable, it is built into your web browser

- there is no easy way to signal “wrong credentials” (a new 401 is returned)

There are also pros:

- 401 Challenge can be used in an “active” scenario, where a client (AJAX? a WebAPI client?) can carry credentials in the very first request, without the need to redirect-with-a-cookie and then carry-the-cookie-with-each-request. In fact, the Authorization header is automatically handled by all web browsers.
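To illustrate the active scenario, here is a hypothetical client-side sketch (the names and credentials are placeholders); it attaches the Basic credentials to every request, including the very first one:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

public static class BasicAuthClient
{
    // Builds the Basic authentication header value:
    // "Basic " + base64("user:password").
    // The iso-8859-1 encoding matches what the server-side module decodes.
    public static AuthenticationHeaderValue BuildHeader( string user, string password )
    {
        byte[] raw = Encoding.GetEncoding( "iso-8859-1" )
                             .GetBytes( user + ":" + password );
        return new AuthenticationHeaderValue( "Basic", Convert.ToBase64String( raw ) );
    }

    // Every request sent with this client carries the credentials up front,
    // so no 302/cookie round trip is needed.
    public static HttpClient CreateClient( string user, string password )
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = BuildHeader( user, password );
        return client;
    }
}
```

A WebAPI client would then simply call, say, `CreateClient( "user", "secret" ).GetAsync( ... )` against a protected URL.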

How to do it?

The code below is based mostly on the WebApi authentication module by Mike Wasson from his entry on WebApi. Minor modifications introduce authorization handling (authentication is required only when authorization fails) and a call to a membership provider.

/// <summary>
/// Based on the WebApi authentication module by Mike Wasson
/// </summary>
public class BasicAuthHttpModule : IHttpModule
{
    public void Init( HttpApplication context )
    {
        // Register event handlers
        context.AuthenticateRequest += OnApplicationAuthenticateRequest;
        context.EndRequest          += OnApplicationEndRequest;
    }

    private static void SetPrincipal( IPrincipal principal )
    {
        Thread.CurrentPrincipal = principal;
        if ( HttpContext.Current != null )
            HttpContext.Current.User = principal;
    }

    private static bool AuthenticateUser( string credentials )
    {
        bool validated = false;
        try
        {
            var encoding = Encoding.GetEncoding( "iso-8859-1" );
            credentials = encoding.GetString( Convert.FromBase64String( credentials ) );

            int separator   = credentials.IndexOf( ':' );
            string name     = credentials.Substring( 0, separator );
            string password = credentials.Substring( separator + 1 );

            if ( Membership.ValidateUser( name, password ) )
            {
                var identity = new GenericIdentity( name );
                SetPrincipal( new GenericPrincipal( identity, null ) );
                validated = true;
            }
        }
        catch ( FormatException )
        {
            // Credentials were not formatted correctly.
            validated = false;
        }
        return validated;
    }

    private static void OnApplicationAuthenticateRequest( object sender, EventArgs e )
    {
        var request = HttpContext.Current.Request;
        var user    = Thread.CurrentPrincipal;

        if ( !UrlAuthorizationModule.CheckUrlAccessForPrincipal(
                 request.Path, user, request.HttpMethod ) )
        {
            var authHeader = request.Headers["Authorization"];
            if ( authHeader != null )
            {
                var authHeaderVal = AuthenticationHeaderValue.Parse( authHeader );

                // RFC 2617 sec 1.2, "scheme" name is case-insensitive
                if ( authHeaderVal.Scheme.Equals( "basic",
                         StringComparison.OrdinalIgnoreCase ) &&
                     authHeaderVal.Parameter != null )
                {
                    AuthenticateUser( authHeaderVal.Parameter );
                }
            }
        }
    }

    private static void OnApplicationEndRequest( object sender, EventArgs e )
    {
        var response = HttpContext.Current.Response;
        if ( response.StatusCode == 401 )
        {
            response.Headers.Add( "WWW-Authenticate",
                string.Format( "Basic realm=\"{0}\"",
                    HttpContext.Current.Request.Url.Host ) );
        }
    }

    public void Dispose()
    {
    }
}
How to configure it?

There are two steps to configure the module. First, you register it for the processing pipeline:

    <modules runAllManagedModulesForAllRequests="true">
        <add name="BasicAuthModule" type="The.Namespace.Here.BasicAuthHttpModule"/>
    </modules>

The other important part of the configuration is turning off all the other 401 Challenge handling modules in the application configuration in IIS:

As you can see, the only active authentication method is “Anonymous” (Włączone = Enabled, Wyłączone = Disabled).

What happens if you don’t turn off the other 401 Challenge handling methods? Well, the pipeline just uses them, and since all built-in modules tie the 401 Challenge to Windows authentication, the custom membership provider will not even fire – Windows authentication will most probably reject the provided credentials.

What now?

You can test the implementation with an HTTP debugger to see how the 401 is returned, what the browser does when it sees the status code, and what goes to the server with the next request.

The Basic 401 Challenge authentication scheme is just one of the possible 401 Challenge flows; other possibilities are the Digest, NTLM and Negotiate flows. Google for more details.

Friday, October 11, 2013

Soft Delete pattern for Entity Framework Code First

Some time ago I’ve blogged on how to implement the Soft Delete pattern with NHibernate. This time I am going to show how to do the same with Entity Framework Code First.

(a side note: I really like EFCF, both its simplicity and the migrations infrastructure. I tend to favor EFCF over other ORMs lately)

I’ve spent some time looking for a working solution and/or trying to come up with something on my own. There are solutions that almost work, like the one by Zoran Maksimovic from his post “Entity Framework – Applying Global Filters”. Zoran’s approach involves cleverly replacing DbSets with FilteredDbSets internally in the DbContext. These FilteredDbSets have filtering predicates attached, so that filtering occurs upon data retrieval. Unfortunately, this approach misses the fact that filtering should also be applied to navigation properties. Specifically, this works correctly in Zoran’s approach

// both loop correctly over non-deleted entities only
foreach ( var child in context.Child ) ...
foreach ( var parent in context.Parent )...

but this fails miserably

foreach ( var parent in ctx.Parent )       // ok
  foreach ( var child in parent.Children ) // oops, deleted entities are included!

However, another solution has been proposed by Colin, a StackOverflow user. This solution involves a discriminator column, which is normally used to mark different types of entities mapped to the same table when mapping class hierarchies. Here is the link to the original entry.

My job here is merely:

  • cleaning this up so that it compiles
  • making it a little bit more general as the original approach makes some assumptions (a common base class for all entities where the primary key is always called “ID”)
  • adding a cache for the metadata so that all the metadata searching doesn’t have to be repeated over and over

All the credit goes to Colin, though.

Let’s start with entities:

public class Child
{
    public long ID { get; set; }
    public string ChildName { get; set; }
    public bool IsDeleted { get; set; }

    public virtual Parent Parent { get; set; }
}

public class Parent
{
    public long ID { get; set; }
    public string ParentName { get; set; }
    public bool IsDeleted { get; set; }

    public virtual ICollection<Child> Children { get; set; }
}

Nothing unusual as all the Soft Delete stuff is in the DbContext:

/// <summary>
/// The context with the Soft Delete behavior
/// </summary>
public class Context : DbContext
{
    public DbSet<Child>  Child  { get; set; }
    public DbSet<Parent> Parent { get; set; }

    public Context()
    {
        Database.SetInitializer<Context>(
            new MigrateDatabaseToLatestVersion<Context, Configuration>() );
    }

    protected override void OnModelCreating( DbModelBuilder modelBuilder )
    {
        modelBuilder.Entity<Child>()
            .Map( m => m.Requires( "IsDeleted" ).HasValue( false ) )
            .Ignore( m => m.IsDeleted );
        modelBuilder.Entity<Parent>()
            .Map( m => m.Requires( "IsDeleted" ).HasValue( false ) )
            .Ignore( m => m.IsDeleted );
    }

    public override int SaveChanges()
    {
        foreach ( var entry in ChangeTracker.Entries()
                  .Where( p => p.State == EntityState.Deleted ) )
            SoftDelete( entry );

        return base.SaveChanges();
    }

    private void SoftDelete( DbEntityEntry entry )
    {
        Type entryEntityType = entry.Entity.GetType();

        string tableName      = GetTableName( entryEntityType );
        string primaryKeyName = GetPrimaryKeyName( entryEntityType );

        string deletequery = string.Format(
                "UPDATE {0} SET IsDeleted = 1 WHERE {1} = @id",
                tableName, primaryKeyName );

        Database.ExecuteSqlCommand( deletequery,
            new SqlParameter( "@id", entry.OriginalValues[primaryKeyName] ) );

        //Marking it Unchanged prevents the hard delete
        //entry.State = EntityState.Unchanged;
        //So does setting it to Detached:
        //And that is what EF does when it deletes an item
        entry.State = EntityState.Detached;
    }

    private static Dictionary<Type, EntitySetBase> _mappingCache =
        new Dictionary<Type, EntitySetBase>();

    private EntitySetBase GetEntitySet( Type type )
    {
        if ( !_mappingCache.ContainsKey( type ) )
        {
            ObjectContext octx = ( (IObjectContextAdapter)this ).ObjectContext;

            string typeName = ObjectContext.GetObjectType( type ).Name;

            var es = octx.MetadataWorkspace
                         .GetItemCollection( DataSpace.SSpace )
                         .GetItems<EntityContainer>()
                         .SelectMany( c => c.BaseEntitySets
                                            .Where( e => e.Name == typeName ) )
                         .SingleOrDefault();

            if ( es == null )
                throw new ArgumentException( "Entity type not found in GetTableName", typeName );

            _mappingCache.Add( type, es );
        }

        return _mappingCache[type];
    }

    private string GetTableName( Type type )
    {
        EntitySetBase es = GetEntitySet( type );

        return string.Format( "[{0}].[{1}]",
            es.MetadataProperties["Schema"].Value,
            es.MetadataProperties["Table"].Value );
    }

    private string GetPrimaryKeyName( Type type )
    {
        EntitySetBase es = GetEntitySet( type );

        return es.ElementType.KeyMembers[0].Name;
    }
}
A couple of explanations.

First, the mapping. Note that the discriminator column is used to force EF to focus on undeleted entities. This adds the filtering predicate to all queries, including queries involving navigation properties.

    .Map( m => m.Requires( "IsDeleted" ).HasValue( false ) )

But then the discriminator column has to be removed from the mapping:

    .Ignore( m => m.IsDeleted );

This is enough to make EF generate correct queries; you can ignore the following stuff for a moment and just try it.

Second, the data saving. It is not enough to be able to filter the data; Soft Delete also requires that deleting only marks the data as deleted. This is done in the overridden SaveChanges method. Each entity that is internally marked as deleted in EF’s object cache is manually updated in the database and then marked as detached (just like EF’s own SaveChanges does).

Third, the caching stuff: GetEntitySet/GetTableName/GetPrimaryKeyName. These read the metadata so that the query that marks the data can use the correct table name and primary key name for the given entity type.

And this is it. Deleting the data

var child = ctx.Child.FirstOrDefault( c => c.ID == 123 );
ctx.Child.Remove( child );
ctx.SaveChanges();

correctly updates its state to deleted (IsDeleted=1) instead of physically deleting it from the database.