Friday, December 16, 2016

Combining sync/async WCF client/server that share an interface-based contract

Having a shared interface between the WCF client and server is handy: you put the contract in a shared assembly, build your service out of the contract and then use the ChannelFactory to create clients automatically.

// shared contract
[ServiceContract]
public interface IService1
{
  [OperationContract]
  string DoWork( string s );
}
// server
public class Service1 : IService1
{
  public string DoWork( string s )
  {
     return s + " from wcf";
  }
}
// client
var binding  = new BasicHttpBinding();
var endpoint = new EndpointAddress( "http://localhost:55748/Service1.svc" );
 
var factory = new ChannelFactory<IService1>( binding, endpoint );
var channel = factory.CreateChannel();
 
var result  = channel.DoWork("foo");

The problem starts when you decide to go async at the server or the client but at the same time keep the other side sync. In MVC or in Windows.Forms you just change the method from Foo Method() to async Task<Foo> Method(); here in WCF you can’t simply change the signature of the interface method, because it affects both the client and the server.

For example, say you want to be async at the server side so that you can switch to the async interface of your data provider, and you change the interface to

[ServiceContract]
public interface IService1
{
  [OperationContract]
  Task<string> DoWork( string s );
}

// oops, the client doesn't compile anymore as the supposed contract has just changed
// the client is not "smart enough" to notice that in fact this is still the same contract 
// at the conceptual level

Fortunately, a simple solution exists, mentioned here. The solution is to have two interfaces, a sync one and an async one. Both interfaces have to share the name, but they can live in the very same assembly, in adjacent namespaces. It turns out that both the client and the server can implement either of the two interfaces and the wiring still succeeds, because WCF matches contracts by their name rather than by the CLR type.

namespace Sync
{
    // shared contract
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string DoWork( string s );
    }
}

namespace Async
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        Task<string> DoWork( string s );
    }
}
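
If you want to be explicit about that name matching instead of relying on the two interfaces keeping identical names, the contract identity can be pinned down with attributes on both interfaces – a minimal sketch, assuming the default http://tempuri.org/ contract namespace:

// optional: pin the wire-level contract identity explicitly (do the same on the Sync flavour),
// so renaming either CLR interface later cannot break the match
[ServiceContract( Name = "IService1", Namespace = "http://tempuri.org/" )]
public interface IService1
{
    [OperationContract]
    Task<string> DoWork( string s );
}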

An example async client, using the async interface, would be

  
private async void button1_Click( object sender, EventArgs e )
{
    var binding  = new BasicHttpBinding();
    var endpoint = new EndpointAddress( "http://localhost:55748/Service1.svc" );

    var factory = new ChannelFactory<Async.IService1>( binding, endpoint );
    var channel = factory.CreateChannel();

    var result = await channel.DoWork( "foo" );

    MessageBox.Show( result );
}

An example async server would be

public class Service1 : Async.IService1   // the server picks the async flavour of the contract
{
    public async Task<string> DoWork( string s )
    {
        await Task.Delay( 5000 );
        return s + " from wcf";
    }
}

The conclusion is: the shared assembly should just provide both interfaces, the sync and the async one, which lets both the client and the server freely decide which interface they implement. In a sense, the client and the server still share the same contract although technically these are two different interfaces.
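
For completeness, a plain sync client can keep using the Sync flavour against the very same async server – a minimal sketch that simply reuses the first client snippet from this post:

// sync client against the async server - the call blocks until the server's Task completes
var binding  = new BasicHttpBinding();
var endpoint = new EndpointAddress( "http://localhost:55748/Service1.svc" );

var factory = new ChannelFactory<Sync.IService1>( binding, endpoint );
var channel = factory.CreateChannel();

var result  = channel.DoWork( "foo" );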

Thursday, August 25, 2016

SignedXml::CheckSignature and dreadful “The profile for the user is a temporary profile” on .NET 4.6.2

Sometimes, out of nowhere, your well-tested and working-for-years code just stops working because something changes somewhere you can’t really control.

This post documents one of such cases. The case is about checking digital signatures of signed XML documents. For years we’ve been creating and validating documents with self-contained signatures; I’ve blogged on that in a three part tutorial. In our scenario, IIS-hosted ASP.NET applications were receiving signed XML documents and processing them.

This worked like a charm until .NET 4.6.2. In .NET 4.6.2 the SignedXml::CheckSignature( X509Certificate2 ) method we’ve been using for years no longer works in our scenario; it throws a “The profile for the user is a temporary profile” exception.

What is a temporary profile? You see, when an app pool is configured in IIS to use a custom account (a domain account in our case) and Load User Profile is turned on, the ASP.NET worker process is supposed to load the profile of the app pool account. The existence of the profile changes some details of how temporary paths are resolved and how keys are imported to cert stores. IIS has an unfortunate way of handling this requirement – instead of creating a regular profile, it creates a temporary profile. People often ask what these profiles are and how to turn them off.

Unfortunately, there is no easy way to change this behavior of creating temporary profiles rather than regular ones (or we don’t know how to do it). A workaround exists – you can log into the server as the app pool account with an interactive session (a direct login or a remote desktop session). A regular local profile is then created on the server, and when ASP.NET later loads the profile it uses the newly created, valid profile rather than a temporary one (the workaround is mentioned, for example, at Paul Stovell’s blog). But suppose you have a farm where servers are cloned on demand (we use the Web Farm Framework). Suppose also you have multiple apps. We have around ~150 servers and ~50 apps on each, which would mean logging in 150*50 times to different servers only to have the profiles created correctly.

Unfortunately also, to be able to use the cryptographic API you often just have to turn on loading user profiles, otherwise the crypto subsystem sometimes doesn’t work at all (private keys cannot be accessed for certs that are loaded from local files). In our case, turning off the profile raises an instant “Object is in invalid state” exception.

What all this means is that .NET 4.6.2 changes the way SignedXml::CheckSignature( X509Certificate2 ) works, and changes it in a fundamental way. Before 4.6.2 the method always works, regardless of whether the profile is a temporary profile or not, as long as the profile is loaded (Load User Profile = true). In .NET 4.6.2 the method doesn’t work if the profile is a temporary profile.

One of our first attempts was to follow the previously mentioned workaround and somehow automate the profile creation so that profiles are there when IIS requests them. This however doesn’t solve the issue on already existing servers.

But then another approach was tested. Because we check signatures on self-contained signed documents, the X509Certificate2 instances we pass to the method are retrieved from the SignedXml:

// Create a new SignedXml object and pass it
// the XML document class.
SignedXml signedXml = new SignedXml( xd );

// Load the first <Signature> node.
signedXml.LoadXml( (XmlElement)messageSignatureNodeList[0] );

// load the certificate embedded in the KeyInfo
X509Certificate2 certificate = null;

foreach ( KeyInfoClause clause in signedXml.KeyInfo )
{
    if ( clause is KeyInfoX509Data )
    {
        if ( ( (KeyInfoX509Data)clause ).Certificates.Count > 0 )
        {
            certificate = (X509Certificate2)( (KeyInfoX509Data)clause ).Certificates[0];
        }
    }
}

if ( certificate != null )
{
    // Check the signature and return the result. Throws the “user profile is a temporary profile” exception.
    return signedXml.CheckSignature( certificate, true );
}
else
{
    return false;
}

The exception is possibly related to a potential leakage of private keys when temporary profiles are involved. Maybe one of the .NET BCL developers was just oversensitive here while implementing a bunch of new features.

There are no private keys in our certificates, however! Since the certificates are only there to verify signatures, private keys are not included – we only have public keys. So how about using the overload of CheckSignature that expects just an AsymmetricAlgorithm?

// Create a new SignedXml object and pass it
// the XML document class.
SignedXml signedXml = new SignedXml( xd );

// Load the first <Signature> node.
signedXml.LoadXml( (XmlElement)messageSignatureNodeList[0] );

AsymmetricAlgorithm rsa = null;
X509Certificate2 certificate = null;

// load the certificate embedded in the KeyInfo
foreach ( KeyInfoClause clause in signedXml.KeyInfo )
{
    if ( clause is KeyInfoX509Data )
    {
        if ( ( (KeyInfoX509Data)clause ).Certificates.Count > 0 )
        {
            certificate = ( (KeyInfoX509Data)clause ).Certificates[0] as X509Certificate2;
        }
    }
}

if ( certificate == null )
{
    Message = "No KeyInfoX509Data clause in the signature";
    return false;
}

if ( certificate.PublicKey == null || certificate.PublicKey.Key == null )
{
    Message = "The KeyInfoX509 clause doesn't contain any valid public key";
    return false;
}

rsa = certificate.PublicKey.Key;

// Check the signature using only the public key and return the result.
return signedXml.CheckSignature( rsa );

Yes, as you might expect, this works correctly, again regardless of whether the profile is a regular one or a temporary one.

Monday, April 18, 2016

Yet another short async/await example

The async/await pattern has been around for some time now, and this is going to be yet another short example that could possibly make it easier for someone to grasp the idea.

First, since we talk about possibly async operations, let’s start with a Windows.Forms/WPF application where there is at least a chance to see that an operation is async – we will start a long-running operation and, while it lasts, the GUI will remain responsive. Trying anything async from a console app, on the other hand, doesn’t make much sense.

So let’s have a Windows.Forms app with two buttons. Assign something instant to the second button, something like MessageBox.Show so that you can press it and make sure the app remains responsive. We will then make the first button run a long operation.

Let’s start with this long running operation:

private int SlowOperation()
{    
    Thread.Sleep( 5000 );    
    return 17;
}

It is obvious that running this from a click handler would freeze the application for 5 seconds.
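
To see the freeze for yourself, a naive handler calls it directly – a minimal sketch, assuming the first button is button1:

// blocks the UI thread for 5 seconds - the window stops repainting and responding to clicks
private void button1_Click( object sender, EventArgs e )
{
    var result = SlowOperation();
    MessageBox.Show( result.ToString() );
}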

Without async/await you can still run this asynchronously. There are numerous ways to do it; possibly the most basic one is to create a delegate and run it asynchronously, something delegates have supported since .NET was first released:

Func<int> f = () => SlowOperation();
f.BeginInvoke(     
    ar =>    
    {        
        Func<int> s = (Func<int>)ar.AsyncState;        
        var result = s.EndInvoke( ar );        
        MessageBox.Show( result.ToString() );    
    },     
    f );

This looks ugly. Not only do I need the pair of BeginInvoke/EndInvoke methods, I also need to feed both with valid arguments. In particular, if the delegate returns a value, a common practice is to pass the delegate itself as the “async state” so that it can later be picked up and used to end the async call. This basic pattern of Begin…/End… is called the Asynchronous Programming Model (APM).

However, this works. You can run the app and see that it stays responsive, and after 5 seconds it just shows a message box with the result of the long call.

As .NET evolved, this basic pattern evolved too. We had background workers, we had the so-called Event-based Asynchronous Pattern (EAP) which is obsolete nowadays. The newest addition to the toolbox is the Task-based Asynchronous Pattern (TAP), built around the Task class which represents an asynchronous operation from its initiation to its completion.

And this is what this tutorial aims at – showing how the ugly code from above can be rewritten in a more modern style. We will refactor the code in three steps.

Step one. This is where we introduce the Task class. It also evolves, but at the time I write this you can easily convert pretty much anything to a task with the Task.Run facade. It queues the async operation on the thread pool and starts it immediately.

The problem with bare Task.Run is that it returns a task and I still need to supply a delegate with the code that runs when the task completes:

Task<int> task = Task.Run( () => SlowOperation() );
task.ContinueWith( t => MessageBox.Show( t.Result.ToString() ) );

This still doesn’t look great, but it’s definitely better than the APM version.

Btw. Stephen Cleary has a nice explanation of how and why Task.Run should be used only to invoke async methods rather than inside their implementations.

Step two. In this step I rewrite my original SlowOperation so that it supports the async model itself. This way I won’t need the Task.Run anymore:

private Task<int> SlowOperation2()
{    
    return Task.Delay( 5000 )        
               .ContinueWith( t => 17 );
}

Note that I also had to change the blocking Thread.Sleep to a non-blocking Task.Delay.

Now I go back to the client code and I rewrite it to

SlowOperation2()
    .ContinueWith( t => MessageBox.Show( t.Result.ToString() ) );

Step three. This is where async/await is finally introduced – it is merely syntactic sugar over .ContinueWith.

First, I rewrite SlowOperation2. Instead of a lambda with the continuation, the code now reads cleanly.

private async Task<int> SlowOperation2()
{    
    await Task.Delay( 5000 );    
    return 17;
}

Then I rewrite the client code to also replace the ContinueWith with async/await. This time it is not quite as easy, because to be able to use await, my method has to be marked async.

private async void button1_Click( object sender, EventArgs e )
{    
    var result = await SlowOperation2();    
    MessageBox.Show( result.ToString() );
}

Another interesting issue is how to create custom awaitable code. One possible approach involves the TaskCompletionSource class. Consider the following example:

    public static class TaskExtensions
    {
        public static Task FromMiliseconds( int miliseconds )
        {
            TaskCompletionSource<object> tcs = new TaskCompletionSource<object>();
            Timer timer                      = new Timer();    // System.Timers.Timer

            timer.Interval  = miliseconds;
            timer.AutoReset = false;   // fire once; otherwise a second tick would call SetResult again and throw
            timer.Elapsed  += (s,e) =>
            {
                // complete the task when the timer fires
                tcs.SetResult( null );
            };
            timer.Start();

            return tcs.Task;
        }
    }

    ...

    public class MainClass
    {
        public static void Main()
        {
            CustomAwaiter().ContinueWith(t => Console.WriteLine("end"));

            Console.ReadLine();
        }

        public static async Task CustomAwaiter()
        {
            Console.WriteLine("before");
            await TaskExtensions.FromMiliseconds(2000);
            Console.WriteLine("after");
        }
    }

As you can see, the TaskCompletionSource can be combined with anything that signals completion in some other way – a timer in this example – and it provides a nice way to wrap any other interface into the async/await pattern. Once that first step is made, existing awaitables can be wrapped further, e.g.

    public static class TaskExtensions
    {
        public static TaskAwaiter GetAwaiter( this int miliseconds )
        {
            return FromMiliseconds( miliseconds ).GetAwaiter();
        }

        public static Task FromMiliseconds( int miliseconds )
        {
            TaskCompletionSource<object> tcs = new TaskCompletionSource<object>();
            Timer timer                      = new Timer();    // System.Timers.Timer

            timer.Interval  = miliseconds;
            timer.AutoReset = false;   // fire once
            timer.Elapsed  += (s,e) =>
            {
                tcs.SetResult( null );
            };
            timer.Start();

            return tcs.Task;
        }
    }

    public class MainClass
    {
        public static void Main()
        {
            CustomAwaiter().ContinueWith(t => Console.WriteLine("end"));

            Console.ReadLine();
        }

        public static async Task CustomAwaiter()
        {
            Console.WriteLine("before");
            await 2000;                  // awaiting an int32? no problem
            Console.WriteLine("after");
        }
    }

Happy asyncing/awaiting your code.

Friday, January 15, 2016

DI, Factories and the Composition Root Revisited

Some time ago I published an entry on Local Factories and what role they play in a clean architecture stack where the Composition Root configures dependencies between abstractions and implementations. The article can be found here.

There have been dozens of questions about this article, so I decided to refresh the concept by simplifying the code a little bit.

The general idea of the Local Factory pattern is to hide the implementation details of a factory but at the same time make the factory the only legitimate source of instances. The goal the pattern tries to achieve is to create a stable API for creating instances that allows possible DI usage but does not depend on any specific implementation (including the DI container).

Let’s start with the service:

// service contract
// no ioc here
public interface IService
{
    void Foo();
}

Then, the factory:

// the Local Factory
// still no ioc here
public class ServiceFactory
{
    private static Func<IService> _provider;
   
    // the factory is the only legal provider of service instances
    // but in fact it delegates the creation elsewhere
    public IService CreateService()
    {
        if ( _provider != null )
            return _provider();
        else
            throw new InvalidOperationException( "The service provider has not been configured" );
    }
   
    public static void SetProvider( Func<IService> provider )
    {
        _provider = provider;
    }
}

Note how smart the factory is.

Until an actual provider is configured (and it will be configured in the Composition Root), the factory doesn’t even know how to create instances. This approach is a simplification compared to the one I presented last time – back then I had a factory provider that created a factory that created instances. That redundancy can be easily eliminated by moving the volatile part of the implementation into the factory itself and dropping the factory provider from the big picture.

The client code relies on the factory:

// the client uses the factory
// no ioc here in the client
public class ServiceClient
{
    public void ServiceUsageExample()
    {
        var sf = new ServiceFactory();
        var service = sf.CreateService();
       
        service.Foo();
    }
}

Note that up to this point there is no DI, just a simple dependency from the client to the factory that returns instances. In a real world application you would probably have multiple local factories; each factory belongs to a specific subsystem in a specific layer and doesn’t interfere with other layers and other subsystems.

This is where such a local factory differs from the Service Locator, a God-like factory that creates instances of multiple classes from multiple layers but at the same time introduces an unwanted association to itself. A local factory, on the other hand, is part of its local ecosystem: the factory together with the abstraction (the interface) constitutes the API for their future clients.
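
For contrast, a Service Locator style client would look roughly like the sketch below – the ServiceLocator class is hypothetical, shown only to illustrate the unwanted, application-wide association:

// hypothetical Service Locator usage - the client now depends on a God-like, global locator
// instead of on a factory that lives in its own subsystem
public class ServiceClient
{
    public void ServiceUsageExample()
    {
        var service = ServiceLocator.Resolve<IService>();
        service.Foo();
    }
}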

Now it is time, however, to configure the factory from the Composition Root. The CR should:

  • know what actual implementation of the interface should be used
  • configure the factory provider somehow, for example to return an instance of a fixed, known type or maybe to use a DI container to return one

First, a concrete implementation of the service. Note that the factory is unaware of what actual type will be used, which of course means that the type can be defined anywhere in the solution stack, for example in a different assembly than the one the interface and the factory are defined in.

public class ServiceImpl : IService
{
    public void Foo()
    {
        Console.WriteLine( "serviceimpl:foo" );
    }
}

And last but not least, the Composition Root:

public class Program
{
    public static void Main()
    {
        CompositionRoot();
       
        var client = new ServiceClient();
        client.ServiceUsageExample();       
    }
   
    public static void CompositionRoot()
    {
        // this is the only place in the code that is aware of the ioc       
        // but could be as well configured to not to use ioc at all
       
        // no-ioc:
        ServiceFactory.SetProvider( () => new ServiceImpl() );
       
        // ioc:
        /*
        var container = new UnityContainer();
        container.RegisterType<IService, ServiceImpl>();
        ServiceFactory.SetProvider( () => container.Resolve<IService>() );
        */
    }                                                           
}

Note that the Composition Root is the only spot in the code that is aware of the possible DI container. We could say that the CR delegates the implementation down the application stack to the Local Factory which otherwise would not know how to create instances. And whether or not a DI container is used – this fact is known only to the CR.

Note also that the ability to set a simple provider that returns an instance of a known type comes in handy for unit testing, as sketched below.
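
A minimal sketch of such a test, assuming an xUnit-style test framework and a hand-written stub (both are just illustrative, not part of the pattern itself):

// hypothetical stub implementation used only by the test
public class StubService : IService
{
    public bool FooCalled;
    public void Foo() { FooCalled = true; }
}

[Fact]
public void ServiceClient_uses_whatever_the_provider_returns()
{
    var stub = new StubService();
    ServiceFactory.SetProvider( () => stub );

    new ServiceClient().ServiceUsageExample();

    // the client received the stub through the Local Factory, no container involved
    Assert.True( stub.FooCalled );
}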

Hope the idea is clearer this time.

Specified value has invalid Control characters

One of our sites started to report this exception from a DotNetOpenAuth-powered OAuth2 token endpoint:

Specified value has invalid Control characters.
Parameter name: value
   at System.Net.WebHeaderCollection.CheckBadChars(String name, Boolean isHeaderValue)
   at System.Net.WebHeaderCollection.Set(String name, String value)
   at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestBase httpRequest)
   at DotNetOpenAuth.Messaging.Channel.TryReadFromRequest[TRequest](HttpRequestBase httpRequest, TRequest& request)
   at DotNetOpenAuth.OAuth2.AuthorizationServer.HandleTokenRequest(HttpRequestBase request)

Upon further investigation it turned out that the following snippet from the ReadFromRequest method fails:

foreach (string name in httpRequest.Headers)
{
    httpDirectRequest.Headers[name] = httpRequest.Headers[name];
}

This goes through the string indexer on the Headers collection which, when setting a value, validates both the header name and the value in CheckBadChars.
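
To find out which header is the offender, you can scan the incoming collection for control characters yourself – a minimal diagnostic sketch (hypothetical helper code, not part of DotNetOpenAuth; assumes System.Linq and System.Diagnostics):

// hypothetical diagnostic snippet: log headers whose values contain control characters
foreach ( string name in httpRequest.Headers )
{
    var value = httpRequest.Headers[ name ];
    if ( value != null && value.Any( char.IsControl ) )
    {
        Trace.WriteLine( string.Format( "Header '{0}' contains control characters", name ) );
    }
}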

ARR turned out to be the culprit: when it acts as a reverse proxy, it adds a couple of headers, including the X-ARR-SSL header that contains information on the actual SSL cert used by ARR. And one of our development sites apparently used a certificate generated by hand by an internal cert authority, with an invalid character in the cert’s Subject name.

Lesson learned: the cert should never contain any non-ASCII chars in the subject name.