Category Archives: C#

New open source project – DevOpsFlex

It’s been a while since I last posted. Last November I decided to take on a project run in pure Waterfall, and from a developer’s point of view the problem with Waterfall is that, because of the nature of the cycles, you never really build something cool or good: you’re always trying to deliver instead of building. You get stuck in that delivery cycle and never really accomplish anything good and worth writing about.

That is behind me now: I’m back to Agile, working in a DevOps team doing automation for a .Net programme. The development work I will be doing will be fully open sourced.
So far I have been working on a single TFS build workflow activity that scales Azure VMs up and down, depending on what you want to do with them. For us, the goal is to scale down development environments during the night and over the weekend, without completely shutting them down, so that we can still do continuous deployments during nightly builds. Scaling the VMs down to A1s, or even A0s, saves a lot of money as environments ramp up during the development cycle.

The home for these TFS build activities is:
https://github.com/sfa-gov-uk/devopsflex

And they are already published to NuGet:
https://www.nuget.org/packages/DevOpsFlex.Activities/

I have a couple more things I want to do with this activity:

  • Add a nice WPF designer to the activity.
  • Add the ability to shut down and start VMs, instead of just scaling them up and down.
  • Add the ability to wait for the VMs to be back up before exiting the activity execution cycle. This lets developers track the TFS build to know when the environment is back up and fully functional, and if they are tracking TFS builds they will get notifications for it.

Testing that all Fault Exceptions are being handled in a WCF client

One of the things the .Net compiler won’t warn developers about is when another developer adds a new FaultException type and the client code isn’t updated to handle this new type of exception. The solution I’m demonstrating here is a generic way to check for this, but it implies that the client is going through a ChannelFactory and not a ClientBase implementation.

ChannelFactory implementations are usually better when the institution has full ownership of both service and clients. Sharing the service contracts allows Continuous Integration builds to fail if a breaking change made on the service broke one or more of the consuming clients. You may argue that ChannelFactory implementations have the issue that if you change the service with a non-breaking change, you need to re-test and re-deploy all your client code: this isn’t exactly true, because if it really is a non-breaking change, all the clients will continue to work even after a re-deploy of the service.
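For context, a shared contract in this style might look like the following sketch (hypothetical, not the actual contract from the project; the CALFault detail type matches the client example later in the post):

```csharp
// Hypothetical shared service contract: both the service and its clients compile
// against this assembly, so a breaking change here fails the consuming clients'
// CI builds instead of surfacing at run-time.
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    [FaultContract(typeof(CALFault))] // fault detail type declared on the operation
    string InsertDocument(string documentClass, string filePath);
}
```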

Default ChannelFactory Wrapper

The generic implementation depends on our default WcfService wrapper for a ChannelFactory. This could be abstracted through an interface exposing the Channel getter, making the generic method depend on the interface instead of the concrete implementation.
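The abstraction mentioned above could be as simple as this sketch (not part of the original code):

```csharp
// Possible abstraction: the generic fault checker could depend on this
// interface instead of the concrete WcfService<T> wrapper.
public interface IWcfService<T> : IDisposable where T : class
{
    T Channel { get; }
}
```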

I will provide here a simple implementation of the ChannelFactory wrapper:


public class WcfService<T> : IDisposable where T : class
{
    private readonly object _lockObject = new object();
    private bool _disposed;
    private ChannelFactory<T> _factory;
    private T _channel;

    internal WcfService()
    {
        _disposed = false;
    }

    internal virtual T Channel
    {
        get
        {
            if (_disposed)
            {
                throw new ObjectDisposedException("Resource WcfService<" + typeof(T) + "> has been disposed");
            }

            lock (_lockObject)
            {
                if (_factory == null)
                {
                    _factory = new ChannelFactory<T>("*"); // First qualifying endpoint from the config file
                    _channel = _factory.CreateChannel();
                }
            }

            return _channel;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    internal void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }

        if (!disposing)
        {
            return;
        }

        lock (_lockObject)
        {
            if (_channel != null)
            {
                try
                {
                    ((IClientChannel)_channel).Close();
                }
                catch (Exception)
                {
                    ((IClientChannel)_channel).Abort();
                }
            }

            if (_factory != null)
            {
                try
                {
                    _factory.Close();
                }
                catch (Exception)
                {
                    _factory.Abort();
                }
            }

            _channel = null;
            _factory = null;
            _disposed = true;
        }
    }
}


Example of a client using the Wrapper

Here’s an example of the code we want to test: a client that uses the WcfService wrapper. The separation between the method that creates the WcfService (wrapped in a using clause) and the internal static one is purely for testing purposes, so we can inject a WcfService mock and assert against it. The client successfully wraps a FaultException into something meaningful for the consuming application.


public class DocumentClient : IDocumentService
{
    public string InsertDocument(string documentClass, string filePath)
    {
        using (var service = new WcfService<IDocumentService>())
        {
            return InsertDocument(documentClass, filePath, service);
        }
    }

    internal static string InsertDocument(string documentClass, string filePath, WcfService<IDocumentService> service)
    {
        try
        {
            return service.Channel.InsertDocument(documentClass, filePath);
        }
        catch (FaultException<CALFault> ex)
        {
            throw new DocumentCALException(ex);
        }
        catch (Exception ex)
        {
            throw new ServiceUnavailableException(ex.Message, ex);
        }
    }
}


The generic Fault contract checker

This implementation uses Moq as the mocking framework and the code depends on it. It provides signatures for up to 4 expected exceptions, using a top-down approach: the signature with the most type parameters has the full implementation, and the others just call the one that’s one level higher in the signature chain. To support this, a special empty DummyException is declared to fill the gaps between type parameters in the different signatures.

Breaking down the code: it creates a dynamic expression tree that we wire into the Setup method of the client mock, so it intercepts calls with any type of parameter (It.IsAny). Then, for each FaultContractAttribute decorating the service operation, it instantiates the fault detail and wires everything so that the service method is set up to throw it. Finally it invokes the client and checks whether the fault was caught and wrapped, or whether we get the original FaultException back.


public static class ContractCheckerExtension
{
    public static string CheckFaultContractMapping<TContract, TEx1>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, TEx4>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
        where TEx4 : Exception
    {
        // we're creating a lambda on the fly that will call the target method
        // with all parameters set to It.IsAny<[the type of the param]>.
        // The same ParameterExpression instance is used in the call and in the
        // lambda signature, so the parameter is properly bound.
        var instance = Expression.Parameter(typeof(TContract));
        var lambda = Expression.Lambda<Action<TContract>>(
            Expression.Call(
                instance,
                method,
                CreateAnyParameters(method)),
            instance);

        // for all the fault contract attributes that are decorating the method
        foreach (var faultAttr in method.GetCustomAttributes(typeof(FaultContractAttribute), false).Cast<FaultContractAttribute>())
        {
            // create the specific exception that gets thrown by the fault contract
            var faultDetail = Activator.CreateInstance(faultAttr.DetailType);
            var faultExceptionType = typeof(FaultException<>).MakeGenericType(new[] { faultAttr.DetailType });
            var exception = (FaultException)Activator.CreateInstance(faultExceptionType, faultDetail);

            // mock the WCF pipeline objects, channel and client
            var mockChannel = new Mock<WcfService<TContract>>();
            var mockClient = new Mock<TContract>();

            // set up the mocks
            mockChannel.Setup(x => x.Channel)
                       .Returns(mockClient.Object);
            mockClient.Setup(lambda)
                      .Throws(exception);

            try
            {
                // invoke the client, wrapped in an Action delegate
                action(mockChannel);
            }
            catch (Exception ex)
            {
                // if we get a targeted exception it's because the fault isn't being handled,
                // so we return the full name of the fault contract detail type that was caught
                if (ex is TEx1 || ex is TEx2 || ex is TEx3 || ex is TEx4)
                    return faultAttr.DetailType.FullName;

                // else soak all other exceptions because we are expecting them
            }
        }

        return null;
    }

    private static IEnumerable<Expression> CreateAnyParameters(MethodInfo method)
    {
        return method.GetParameters()
                     .Select(p => typeof(It).GetMethod("IsAny").MakeGenericMethod(p.ParameterType))
                     .Select(a => Expression.Call(null, a));
    }
}

[Serializable]
public class DummyException : Exception
{
}


Here’s a sample of a unit test using the ContractChecker for the example client shown previously in the post:


[TestMethod]
public void Ensure_InsertDocument_FaultContracts_AreAllMapped()
{
    var targetOperation = typeof(IDocumentService).GetMethod(
        "InsertDocument",
        new[]
        {
            typeof(string),
            typeof(string)
        });

    var result = targetOperation.CheckFaultContractMapping<IDocumentService, ServiceUnavailableException>(
        m => DocumentClient.InsertDocument(string.Empty, string.Empty, m.Object));

    Assert.IsNull(result, "The type {0} used to detail a FaultContract isn't being properly handled on the Service client", result);
}


Unit Testing IBM WebSphere MQ with Fakes

.Net projects that target the IBM WebSphere MQ objects are often hard to unit test. Even with some effort put into isolating all the MQ objects through Dependency Injection, and some tweaks around stubbing the common MQ classes, it’s easy to get into trouble with NullReferenceExceptions being thrown.

When targeting IBM MQ there are two separate options: the native libraries (amqmdnet.dll) or the IBM.XMS library. I have found the JMS .Net implementation very problematic, and it hides important queue options from the consuming classes, so I mostly use the native libraries and those are the focus of this post.

I won’t cover the basic principles of starting to use fakes, many people have covered that already and MSDN has a very nice series of articles on that. I will just highlight some common utility code and tricks I have learned along the way.

IBM MQ Design considerations

When I’m targeting IBM MQ, there’s a common set of design choices I make, and some of the code samples will reflect these options:

  • I always browse messages first. Only after I have actually done what I need to do with a message do I do a destructive read on it.
  • I use Rx right after the queues; this is why I always browse first. Once a message is browsed I push it through an IObservable, so that later I can do things like filter, sort, throttle, etc.
  • I use System.Threading timers to poll the MQ queues. They make very good use of threads and they also allow me to change the polling frequency at run-time.
  • Although I use several mocking frameworks, I tend to use only one per test class. In the test examples everything goes through Fakes, but I can easily argue that Fakes isn’t as mature as Moq or Rhino Mocks when it comes to fluent writing and API productivity.
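The browse-then-push pattern above can be sketched roughly like this (the PollingQueueListener class and the BrowseMessages helper are illustrative names, not the real production code):

```csharp
// Illustrative sketch only: a System.Threading.Timer polls the queue and pushes
// browsed messages into an Rx Subject. BrowseMessages is a hypothetical helper
// that browses (does not destructively read) whatever is currently on the queue.
public class PollingQueueListener
{
    private readonly Subject<IServiceMessage> _stream = new Subject<IServiceMessage>();
    private Timer _timer;

    public IObservable<IServiceMessage> Stream
    {
        get { return _stream.AsObservable(); }
    }

    public void StartListening(TimeSpan frequency)
    {
        _timer = new Timer(_ => PollQueue(), null, TimeSpan.Zero, frequency);
    }

    public void ChangeFrequency(TimeSpan frequency)
    {
        // System.Threading.Timer lets us change the polling frequency at run-time.
        _timer.Change(TimeSpan.Zero, frequency);
    }

    private void PollQueue()
    {
        // Browse first; the destructive read only happens later, once the
        // consumers downstream of the IObservable are done with the message.
        foreach (var message in BrowseMessages())
        {
            _stream.OnNext(message);
        }
    }

    private IEnumerable<IServiceMessage> BrowseMessages()
    {
        // Hypothetical: browse the MQ queue with the browse open/get options here.
        yield break;
    }
}
```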

IBM MQ id fields and properties

One common task when testing MQ objects is playing around with their Ids, either the object Id or other Ids like the correlation Id. These are always byte arrays, and most of the time they are of fixed size, so I wrote a small utility method for creating fixed-size arrays:


private static byte[] CreateByteArray(int size, byte b)
{
    var array = new byte[size];
    for (var i = 0; i < array.Length; i++)
    {
        array[i] = b;
    }

    return array;
}


To assert the different Id fields, just use CollectionAssert and call ToList on both byte arrays.

CollectionAssert.AreEqual(messageId.ToList(), message.MessageId.ToList());

IBM MQ Shims tips and tricks

One of the problems you will see when you start using IBM.WMQ Shims is null references when instantiating some objects. This is easily fixed by overriding the constructors on the Shims:

ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
ShimMQMessage.ConstructorMQMessage = (_, __) => { };

Some of the objects in IBM.WMQ have a long inheritance chain, and Shims don’t follow it. For example, the Get overload that takes an MQMessage and an MQGetMessageOptions lives on MQDestination, which MQQueue inherits from, so to stub this method you need to write something like this:

ShimMQDestination.AllInstances.GetMQMessageMQGetMessageOptions = (_, message, options) =>
{
    Assert.AreEqual(MQC.MQGMO_NONE, options.Options);
    Assert.AreEqual(MQC.MQMO_MATCH_MSG_ID, options.MatchOptions);
    CollectionAssert.AreEqual(messageId.ToList(), message.MessageId.ToList());
};

Open Queue example with the corresponding tests

Here’s a full example of a method that opens a queue for reading and/or writing:


/// <summary>
/// Opens this queue. Supports listening and writing options; if set up
/// for listening it will browse, instead of really reading from the queue,
/// as messages arrive.
/// </summary>
/// <param name="reading">True if we want to read from this queue, false otherwise.</param>
/// <param name="writing">True if we want to write to this queue, false otherwise.</param>
/// <returns>The Observable where all the messages read from this queue will appear.</returns>
public IObservable<IServiceMessage> OpenConnection(bool reading = true, bool writing = false)
{
    // create the properties Hashtable
    var mqProperties = new Hashtable
    {
        { MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED },
        { MQC.HOST_NAME_PROPERTY, ConfigurationProvider.MQMessageListenerHostName },
        { MQC.PORT_PROPERTY, ConfigurationProvider.MQMessageListenerPortNumeric },
        { MQC.CHANNEL_PROPERTY, ConfigurationProvider.MQMessageListenerChannelName }
    };

    // create the queue manager
    _queueManager = new MQQueueManager(ConfigurationProvider.MQMessageListenerQueueManagerName, mqProperties);

    // deal with the queue open options
    var openOptions = MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING;
    if (reading)
    {
        openOptions += MQC.MQOO_BROWSE;
    }

    if (writing)
    {
        openOptions += MQC.MQOO_OUTPUT;
    }

    // create and start the queue, check for potential bad queue names
    try
    {
        Queue = _queueManager.AccessQueue(QueueName, openOptions);
    }
    catch (MQException ex)
    {
        if (ex.ReasonCode == 2085) // MQRC_UNKNOWN_OBJECT_NAME
        {
            throw new ConfigurationErrorsException(string.Format(CultureInfo.InvariantCulture, "Wrong Queue name: {0}", QueueName));
        }

        throw;
    }

    if (reading)
    {
        StartListening();
    }

    return Stream.AsObservable();
}


And the unit tests that cover it:


/// <summary>
/// Tests that OpenConnection creates the proper MQQueueManager and accesses the
/// queue with the right set of options.
/// </summary>
[TestMethod]
public void Test_OpenConnection_GoesThrough()
{
    const string host = "some random host";
    const string channel = "some random channel";
    const string manager = "some random manager";
    const string queueName = "some random queue";
    const int port = 1234;
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        var configShim = new ShimConfigurationProvider
        {
            MQMessageListenerChannelNameGet = () => channel,
            MQMessageListenerHostNameGet = () => host,
            MQMessageListenerPortNumericGet = () => port,
            MQMessageListenerQueueManagerNameGet = () => manager
        };

        ShimMQQueueManager.ConstructorStringHashtable = (_, s, options) =>
        {
            Assert.AreEqual(manager, s);
            Assert.AreEqual(host, options[MQC.HOST_NAME_PROPERTY]);
            Assert.AreEqual(channel, options[MQC.CHANNEL_PROPERTY]);
            Assert.AreEqual(port, options[MQC.PORT_PROPERTY]);
            Assert.AreEqual(MQC.TRANSPORT_MQSERIES_MANAGED, options[MQC.TRANSPORT_PROPERTY]);
        };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => configShim.Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(false);
        Assert.IsFalse(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to read.
/// It also ensures that StartListening is called if the queue is opened for reading.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForReading()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_BROWSE, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection();
        Assert.IsTrue(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to write.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForWriting()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_OUTPUT, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(false, true);
        Assert.IsFalse(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to
/// read and write at the same time.
/// It also ensures that StartListening is called if the queue is opened for reading.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForReadingAndWriting()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_BROWSE + MQC.MQOO_OUTPUT, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(true, true);
        Assert.IsTrue(startListeningCall);
    }
}

/// <summary>
/// Ensures that opening a connection with a bad queue name will throw a proper
/// <see cref="ConfigurationErrorsException"/>.
/// </summary>
[TestMethod]
[ExpectedException(typeof(ConfigurationErrorsException))]
public void Ensure_OpenConnection_ThrowsBadQueueName()
{
    const int nameReasonCode = 2085;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, __, ___) =>
        {
            throw new MQException(1, nameReasonCode);
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => "something",
            StreamGet = () => new Subject<IServiceMessage>(),
        };

        mqShim.Instance.OpenConnection();
    }
}

/// <summary>
/// Ensures that any exception besides bad queue name will be re-thrown
/// and bubble out of the OpenConnection method.
/// </summary>
[TestMethod]
[ExpectedException(typeof(MQException))]
public void Ensure_OpenConnection_ThrowsOthersExceptBadName()
{
    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, __, ___) =>
        {
            throw new MQException(1, 1);
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => "something",
            StreamGet = () => new Subject<IServiceMessage>(),
        };

        mqShim.Instance.OpenConnection();
    }
}


Recording a Web Performance test from a CodedUI test

On a well-supported project it is very common to have a good suite of automated tests. The two most common frameworks for test automation in the .Net stack are CodedUI and WatiN. This article covers utility code that improves recording a Web Performance test from a CodedUI test, automating the initial recording of the performance test. While it is possible to do the same with WatiN, there is less control over the recording process, so I won’t cover WatiN in this post.

There are two common tasks when going from a CodedUI test to a Web Performance test:

  • Find the Browser with the recorder.
  • Control the recording process. Often part of the CodedUI test is just getting to where we want to perform the action, and that navigation shouldn’t be part of the recording phase.

Finding a browser that is ready for recording

Finding a browser that is able to record means going through the open browsers and looking for the recording toolbar and the recording buttons. If we find them, we have a recording browser and can use it; otherwise we just open a new browser and run the test normally.

Some things to note here:

  • Make sure that you wrap all the code that looks for recording controls in compiler directives. If the CodedUI test looks for these controls and can’t find them, it takes a lot longer to run; doing this as part of a build process would increase the build time by a great amount.
  • While we are looking for the controls, keep track of the main buttons, Record and Resume, because we may want to click them later on as part of scoping the recording process.
  • The method that launches the browser takes a Boolean parameter that allows the browser recorder to be paused at the start of the CodedUI test, instead of the default recording behavior.

The code that handles this:


public static class CodedUIExtensions
{
#if !DO_NOT_FIND_WEBRECORD
    private static bool _recording;
    private static WinButton _recordButton;
    private static WinButton _pauseButton;
#endif

    public static BrowserWindow Launch(bool pauseRecording = false)
    {
        return Launch("main.aspx", pauseRecording);
    }

    public static BrowserWindow Launch(string path, bool pauseRecording = false)
    {
#if !DO_NOT_FIND_WEBRECORD
        // Try to find an open browser that is recording to do a web performance recording session
        try
        {
            var recordingBrowser = new BrowserWindow();
            recordingBrowser.SearchProperties[UITestControl.PropertyNames.Name] = "Blank Page";
            recordingBrowser.SearchProperties[UITestControl.PropertyNames.ClassName] = "IEFrame";
            recordingBrowser.Find();

            var recordWindow = new WinWindow(recordingBrowser);
            recordWindow.SearchProperties[WinControl.PropertyNames.ControlName] = "toolStrip1";
            recordWindow.Find();

            var toolbar = new WinToolBar(recordWindow);
            toolbar.SearchProperties[UITestControl.PropertyNames.Name] = "toolStrip1";
            toolbar.Find();

            _recordButton = new WinButton(toolbar);
            _recordButton.SearchProperties[UITestControl.PropertyNames.Name] = "Record";
            _recordButton.Find();

            _pauseButton = new WinButton(toolbar);
            _pauseButton.SearchProperties[UITestControl.PropertyNames.Name] = "Pause";
            _pauseButton.Find();

            if (pauseRecording)
            {
                Mouse.Click(_pauseButton);
                recordingBrowser.WaitForControlReady();
            }

            recordingBrowser.NavigateToUrl(new Uri(path));
            _recording = true;
            return recordingBrowser;
        }
        catch
        {
            // soak the failure and fall through to launching a new browser below
        }
#endif
        // A browser with a session ready to record couldn't be found, so open a new one
        var browserWindow = BrowserWindow.Launch(path);
        browserWindow.WaitForControlReady();
        return browserWindow;
    }
}


Controlling the recording process

Besides finding the browser, there are 3 common things we want as part of controlling the recording process:

  • Be able to pause the recording process.
  • Be able to resume the recording process.
  • Some applications will spawn multiple windows, so at the end of the test an ALT+F4 is sent to the target app. In the scope of recording a performance test, however, we want the browser to stay open, so we can make final adjustments or just stop the recording and generate the test.

To accomplish this, just add 3 more methods to the utility class (also wrapped in compiler directives to keep test runs fast during builds):


public static void PauseRecording()
{
#if !DO_NOT_FIND_WEBRECORD
    if (!_recording) return;

    Mouse.Click(_pauseButton);
    _pauseButton.WaitForControlReady();
#endif
}

public static void ResumeRecording()
{
#if !DO_NOT_FIND_WEBRECORD
    if (!_recording) return;

    Mouse.Click(_recordButton);
    _recordButton.WaitForControlReady();
#endif
}

public static void CloseWindows()
{
#if !DO_NOT_FIND_WEBRECORD
    if (!_recording)
    {
        Keyboard.SendKeys("%{F4}");
    }
#else
    Keyboard.SendKeys("%{F4}");
#endif
}


Validation in WebForms with Data Annotations

Some very old projects get enough development time that certain parts of them move forward, but too often that time isn’t enough for a full re-write to modern-day technologies.

In most scenarios the persistence layer will move forward before the Web UI. This is because legacy WebForms projects relied on patterns like the Supervising Controller that don’t translate directly to modern implementations like ASP.NET MVC. Migrating from these legacy patterns to ASP.NET MVC usually starts with the persistence layer, because presenters wrap and transform models into what the view needs instead of providing something the view can bind to directly. The usual way to go is to change the persistence layer into something that can be bound directly, refactor the presenters first, and so pave the way to replacing presenters with controllers.

In a scenario where the legacy persistence layer is left intact and the change from WebForms to ASP.NET MVC is done first instead, the migration usually takes longer, because model wrappers need to be written around the old persistence layer so the views can bind directly to them, on top of the usual view re-writing and refactoring of presenters into controllers. These wrappers also add obscurity to the overall solution, so anyone maintaining the solution between the two changes will have a hard time with it.

Doing ASP.NET WebForms validation with Data Annotations

ASP.NET WebForms does validation through a series of ASP.NET Validation Server Controls; what they all have in common is that they inherit from BaseValidator. The strategy is to start from this inheritance and expose two additional properties: one for the name of the property we want to validate, and another for the type of the class where this property exists.

/// <summary>
/// Exposes the Property Name that we want to validate against.
/// </summary>
public string PropertyName { get; set; }
        
/// <summary>
/// Exposes the SourceType for Data Annotation lookup that we want to validate against.
/// </summary>
public string SourceType { get; set; }

The BaseValidator class has an abstract method, EvaluateIsValid, that is the main override point for creating our own validator. By getting the PropertyName and the SourceType you can use reflection to get the Data Annotations, then use these to check validity and to properly create and format the error message.

/// <summary>
/// Performs the real Validation process, sets the isValid flag on the
/// BaseValidator class.
/// </summary>
/// <returns>True if the property is valid; false otherwise.</returns>
protected override bool EvaluateIsValid()
{
    var objectType = Type.GetType(SourceType, true, true);
    var property = objectType.GetProperty(PropertyName);

    var control = base.FindControl(ControlToValidate) as TextBox;

    if(control == null)
        throw new InvalidOperationException("This implementation can only be used to validate Textbox controls, attempting to validate something else will fail!");

    foreach (var attr in property.GetCustomAttributes(typeof (ValidationAttribute), true)
                                    .OfType<ValidationAttribute>()
                                    .Where(attr => !attr.IsValid(control.Text)))
    {
        // This implementation will break on the first attribute fail and will only return the first error found.
        // I kept the foreach and the where clause to allow for easier transition into an implementation that
        // tracks and displays all the errors found and not just the first one!
        var displayNameAttr = property.GetCustomAttributes(typeof (DisplayNameAttribute), true)
                                        .OfType<DisplayNameAttribute>()
                                        .FirstOrDefault();

        var displayName = displayNameAttr == null ? property.Name : displayNameAttr.DisplayName;
        ErrorMessage = attr.FormatErrorMessage(displayName);
        return false; 
    }

    return true;
}

This is a very naive implementation: it only works with the TextBox control, explicitly throwing otherwise:

var control = base.FindControl(ControlToValidate) as TextBox;

if(control == null)
    throw new InvalidOperationException("This implementation can only be used to validate Textbox controls, attempting to validate something else will fail!");

And it doesn’t do any proper logging or trapping of the reflection bits in the code: if there’s any problem with the SourceType or PropertyName values, like a typo, it just blows up without any exception handling:

var objectType = Type.GetType(SourceType, true, true);
var property = objectType.GetProperty(PropertyName);

Usage examples

To use the DataAnnotationValidator, simply add it where you want the validation text to appear, for example:

<asp:Label ID="CardFirstNameTextLabel" runat="server" CssClass="FormLabel" AssociatedControlID="CardFirstNameText">First Name</asp:Label>
<asp:TextBox ID="CardFirstNameText" runat="server" AutoCompleteType="firstname" />

<val:DataAnnotationValidator ID="FirstNameValidator" runat="server"
    ControlToValidate="CardFirstNameText" Text="**" PropertyName="FirstName" SourceType="InnerWorkings.Model.CardDetails, InnerWorkings.Model" />

<span class="Notes">(as it appears on the card)</span>

You can also use the built-in ASP.NET ValidationSummary control to display a summary of the validation errors:

<asp:ValidationSummary runat="server" ID="vSumAll" DisplayMode="BulletList" CssClass="validation-errors" HeaderText="<span>Oops! Please fix the following errors:</span>" />

The full source Code for the DataAnnotationValidator

namespace ValidationWithDataAnnotations
{
    using System;
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;
    using System.Linq;
    using System.Web.UI.WebControls;

    /// <summary>
    /// Reasonable wrapper for performing Validation using Data Annotations.
    /// With the inclusion of EntityFramework in the solution, all model elements are properly
    /// Data Annotated, so the logical path is to perform UI validation using the same set
    /// of annotations used by EF.
    /// The Validator still requires the setting of PropertyName and SourceType, this is
    /// where this class could be improved, as both these things can be looked up instead of
    /// just set.
    /// </summary>
    public class DataAnnotationValidator : BaseValidator
    {
        /// <summary>
        /// Exposes the Property Name that we want to validate against.
        /// </summary>
        public string PropertyName { get; set; }
        
        /// <summary>
        /// Exposes the SourceType for Data Annotation lookup that we want to validate against.
        /// </summary>
        public string SourceType { get; set; }

        /// <summary>
        /// Performs the real Validation process, sets the isValid flag on the
        /// BaseValidator class.
        /// </summary>
        /// <returns>Whether the property is valid or not.</returns>
        protected override bool EvaluateIsValid()
        {
            var objectType = Type.GetType(SourceType, true, true);
            var property = objectType.GetProperty(PropertyName);

            var control = base.FindControl(ControlToValidate) as TextBox;

            if(control == null)
                throw new InvalidOperationException("This implementation can only be used to validate Textbox controls, attempting to validate something else will fail!");

            foreach (var attr in property.GetCustomAttributes(typeof (ValidationAttribute), true)
                                         .OfType<ValidationAttribute>()
                                         .Where(attr => !attr.IsValid(control.Text)))
            {
                // This implementation will break on the first attribute fail and will only return the first error found.
                // I kept the foreach and the where clause to allow for easier transition into an implementation that
                // tracks and displays all the errors found and not just the first one!
                var displayNameAttr = property.GetCustomAttributes(typeof (DisplayNameAttribute), true)
                                              .OfType<DisplayNameAttribute>()
                                              .FirstOrDefault();

                var displayName = displayNameAttr == null ? property.Name : displayNameAttr.DisplayName;
                ErrorMessage = attr.FormatErrorMessage(displayName);
                return false; 
            }

            return true;
        }
    }
}

Formatting email HTML with T4 templates

There are several techniques available to a .Net developer for properly formatting HTML outside web pages. One of them is using an HTML view render engine like Razor.

The one I find the cleanest and easiest to maintain is using T4 templates. Since this is a post about T4, I suggest you take a look at my T4 Templates page before moving on, to get used to the code reading.

The full demo project can be downloaded here

Project Structure

blog17

The simple demo project is composed of two T4 Runtime Templates:

  • EmailTemplate defines and transforms the default HTML template.
  • The BodyTemplate defines and transforms the HTML that composes the “Body” of the email.

The entry point for this template transformation is in the MailExtensions.cs file, written as extensions to MailMessage. The Program.cs file just contains enough code to set up an email message and call the template entry point:

class Program
{
    static void Main(string[] args)
    {
        var mail = new MailMessage
        {
            From = new MailAddress("me@mycompany.com", "Me AndMe"),
            Subject = "Me poking You",
            Body = string.Empty
        };

        mail.To.Add("someemail@somecompany.com");

        var template = new BodyTemplate
                            {
                                FirstName = "You",
                                LastName = "AndYou"
                            };

        mail.CreateHtmlBody(template);

        using (var client = new SmtpClient())
        {
            client.SendAsync(mail, null);
        }
    }
}

The email is just set up to be delivered to a static folder in the app.config file:

<system.net>
    <mailSettings>
        <smtp deliveryMethod="SpecifiedPickupDirectory">
            <specifiedPickupDirectory pickupDirectoryLocation="D:\Mail" />
        </smtp>
    </mailSettings>
</system.net>

T4 Runtime Templates

To create a T4 Runtime Template all you have to do is Add New Item, then select the Runtime Text Template Item.

blog18

What this template does is generate a C# class that you can use at run-time to transform and generate its output. You can pass parameters to these templates either by using the built-in T4 Parameter directive or simply by extending the generated partial class. I prefer extending the generated class, as it makes things more unit-testable when required, so I used this approach in the demo code.

This type of template ignores some of the T4 directives; however, some of them can still be put to good use to trick the editor into proper syntax highlighting. I use the T4 Output directive to make the tangible T4 editor syntax-highlight my HTML; for some reason, at the time of this post, tangible didn’t highlight it with “.html”, so “.xml” had to do the trick.

<#@ output extension=".xml" #>

The EmailTemplate

The template itself is very simple, containing minimal HTML. It should hold the default style template for your emails, with a proper Header, Footer and the sections common to all your emails.

<#@ template language="C#" #>
<#@ output extension=".xml" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<html>
  <body>
    <h1>This is a header</h1>
    <div>
		<#= GetBodyText() #>
    </div>
    <h1>This is a footer</h1>
</body>
</html>

The generated class’s partial definition contains the method used by the template to generate the Body. This method supports both a Body string and a BodyTemplate object that is verified as being another valid template: if it gets a template it will attempt to render it, while if it gets a string it will just dump it.

public partial class EmailTemplate
{
    public string Body { get; set; }

    private object _bodyTemplate;
    public object BodyTemplate
    {
        get { return _bodyTemplate; }
        set
        {
            // Get the type and the TransformText method using reflection
            var type = value.GetType();
            var method = type.GetMethod("TransformText");

            // Reflection signature checks
            if (method == null) throw new ArgumentException("BodyTemplate needs to be a RunTimeTemplate with a TransformText method");
            if (method.ReturnType != typeof(string) || method.GetParameters().Any()) throw new ArgumentException("Wrong TransformText signature on the BodyTemplate");

            // If everything is ok, assign the value
            _bodyTemplate = value;
        }
    }

    private string GetBodyText()
    {
        var result = string.Empty;

        // Use the BodyTemplate if available
        if(BodyTemplate != null)
        {
            dynamic castTemplate = BodyTemplate;
            result = castTemplate.TransformText();
        }
        // Otherwise use the Body string if it's not null or empty
        else if(!string.IsNullOrEmpty(Body))
        {
            result = Body;
        }

        return result;
    }
}

The BodyTemplate

The BodyTemplate is a very simple template just to show the linear transformation of both templates and the inclusion of email specific fields for email customization.

<#@ template language="C#" #>
<#@ output extension=".xml" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<p>Hi there <#= FirstName #> <#= LastName #></p>
<p>This is an Example of a Body.</p>

Its extension only contains properties so that we can configure the email Body.

public partial class BodyTemplate
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

The Entry Point – MailExtensions

The wrapper methods that manage the main template transformations are in the MailExtensions.cs file and are written as extension methods: one for using a Body string and another for using a Body template. This demo, however, doesn’t make any use of the Body string; it only uses a Body template.

public static class MailExtensions
{
    public static void CreateHtmlBody(this MailMessage message, string body)
    {
        var mailTemplate = new EmailTemplate { Body = body };
        var html = AlternateView.CreateAlternateViewFromString(mailTemplate.TransformText(), Encoding.UTF8, "text/html");
        message.AlternateViews.Add(html);
    }

    public static void CreateHtmlBody(this MailMessage message, object template)
    {
        var mailTemplate = new EmailTemplate { BodyTemplate = template };
        var html = AlternateView.CreateAlternateViewFromString(mailTemplate.TransformText(), Encoding.UTF8, "text/html");
        message.AlternateViews.Add(html);
    }
}

Async integration with SalesForce Leads web service

The SalesForce Leads web service is used mainly to register new leads from other applications, for example a trial page where a user registers for a trial and a Lead in SalesForce is created to be followed by commercial teams.

The integration itself is very simple and straightforward, this post is about doing it in an async way, and also presents a solution for integrations with several different Models.

The web service is called through a normal HttpWebRequest object, but its setup is done in 3 stages:

  • The creation of the HttpWebRequest with the initial configuration.
  • Writing the RequestStream from a string – This is the service’s parameters that will be sent in the HTTP POST.
  • Submitting the request.

The RequestStream writing and the request submission are written as async-ready methods (they can be awaited on), but the HttpWebRequest creation isn’t, as it executes very quickly.

Calling the service through the HttpWebRequest in an async manner

The integration is written in a static class. The entry point is the static method SubmitRequest; it takes as a parameter an object that is basically a Model object whose metadata will be parsed for Param attributes to get the SalesForce common parameters.

public static async Task SubmitRequest(object info)
{
    var request = CreateSalesForceRequest();
    var message = GetSalesForceParams(info) + "&" + GetSalesForceStaticParams();

    await request.WriteRequestStream(message);

    try
    {
        await request.SubmitSalesForce();
    }
    catch (Exception)
    {
        // This doesn't seem to do anything, as the servlet always returns HTTP 200 OK.
        // Use the debug email param and debug param on the servlet Parameter list instead.
        Trace.TraceError("Error registering lead in salesforce. Encoded message string was: {0}", message);
    }
}

Besides the GetSalesForceParams, we can see the 3 stages described earlier in the following methods:

private static HttpWebRequest CreateSalesForceRequest()
{
    var request = (HttpWebRequest)WebRequest.Create(ConfigurationManager.AppSettings["SalesforceWebToLeadUrl"]);
    request.Timeout = 60000;
    request.ContentType = "application/x-www-form-urlencoded";
    request.Method = WebRequestMethods.Http.Post;
    request.KeepAlive = true;
    request.ProtocolVersion = HttpVersion.Version11;
    request.UserAgent = "";

    return request;
}

This creates the HttpWebRequest properly configured to call the Leads web service. The specific web service URL is retrieved from the configuration file.

private static Task WriteRequestStream(this WebRequest request, string message)
{
    return Task.Factory.FromAsync<Stream>(request.BeginGetRequestStream, request.EndGetRequestStream, null)
        .ContinueWith(t =>
                            {
                                var stream = t.Result;
                                var data = Encoding.ASCII.GetBytes(message);
                                Task.Factory.FromAsync(stream.BeginWrite, stream.EndWrite, data, 0, data.Length,
                                                        null, TaskCreationOptions.AttachedToParent)
                                    .ContinueWith(x => stream.Close());
                            });
}

This is the first of the async-ready methods. It is written as an extension to the WebRequest object. It returns a Task so that it can be awaited, and uses the useful FromAsync method in the TPL, which wraps a pair of Begin/End methods.
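FromAsync is easier to see in isolation. A self-contained sketch wrapping a MemoryStream’s BeginWrite/EndWrite pair — the same overload WriteRequestStream uses on the real request stream:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

class FromAsyncDemo
{
    static void Main()
    {
        var data = Encoding.ASCII.GetBytes("first_name=Me&last_name=AndMe");

        using (var stream = new MemoryStream())
        {
            // FromAsync pairs BeginWrite/EndWrite into a single awaitable Task,
            // just like WriteRequestStream does on the HTTP request stream.
            Task.Factory.FromAsync(stream.BeginWrite, stream.EndWrite,
                                   data, 0, data.Length, null)
                .Wait();

            Console.WriteLine(stream.Length); // 29 bytes written
        }
    }
}
```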

private static Task SubmitSalesForce(this WebRequest request)
{
    return Task.Factory.FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, null)
        .ContinueWith(t =>
                            {
                                var response = t.Result;
                                if (response != null)
                                    response.Close();
                            });
}

The last stage of the request submission is the actual SubmitSalesForce method. It is written as an extension to the WebRequest object.

Formatting the service’s parameters

The request string is composed of 2 different blocks. The first one carries the static parameters: the Leads group, your company OID in SalesForce and anything else you might want to set up. These are saved in a static Dictionary:

private static readonly Dictionary<string, string> StaticParams = new Dictionary<string, string>
                                                                        {
                                                                            {"oid", "MYOID"},
                                                                            {"lead_source", "Web Trial"},
                                                                            {"debug", "1"},
                                                                            {
                                                                                "debugEmail",
                                                                                "me@mycompany.com"
                                                                            }
                                                                        };

And then transformed by the simple method

private static string GetSalesForceStaticParams()
{
    return string.Join("&", StaticParams.Select(p => p.Key + "=" + HttpUtility.UrlEncode(p.Value)));
}
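The join produces a standard form-encoded string. A self-contained sketch of the same transformation — note it uses Uri.EscapeDataString instead of HttpUtility.UrlEncode, to avoid the System.Web reference (it encodes spaces as %20 rather than +):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class StaticParamsDemo
{
    static readonly Dictionary<string, string> StaticParams = new Dictionary<string, string>
    {
        { "oid", "MYOID" },
        { "lead_source", "Web Trial" }
    };

    // Joins the key/value pairs into a form-encoded string such as
    // "oid=MYOID&lead_source=Web%20Trial".
    public static string Join()
    {
        return string.Join("&",
            StaticParams.Select(p => p.Key + "=" + Uri.EscapeDataString(p.Value)));
    }

    static void Main()
    {
        Console.WriteLine(Join());
    }
}
```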

The tricky part comes from the non-static parameters. In my scenario I have several projects using this integration with SalesForce, and these projects use different types of Models and Model architectures. To cope with these differences I used an attribute decoration pattern, much like ADO.NET does validation, so that Models could be decorated to mark certain properties as SalesForce parameters. An extra degree of complexity is added because I have to support EntityFramework Database First modelling, so property decoration is done through the MetadataType attribute instead of having the metadata on the properties themselves. An example of an extension of an EntityFramework Database First model object is given below:

[MetadataType(typeof(LandingPageUserMetadata))]
public partial class LandingPageUser
{
}

public class LandingPageUserMetadata
{
    [Required]
    [Display(Name = "First Name")]
    [SalesForceParam(Type = SalesForceParamType.FirstName)]
    public string FirstName { get; set; }

    [Required]
    [Display(Name = "Last Name")]
    [SalesForceParam(Type = SalesForceParamType.LastName)]
    public string LastName { get; set; }

    [Required]
    [Display(Name = "Email account")]
    [EmailValidation(ErrorMessage = "This has to be a valid email address.")]
    [SalesForceParam(Type = SalesForceParamType.Email)]
    public string EmailAddress { get; set; }
}

To support this type of decoration, additional code needs to be written to check for the MetadataType attribute and then parse its configured Type. Then the mapping between the actual metadata and the properties needs to be in place, so that the values are retrieved from the Model object and not from the object used to define the metadata.

The code that takes an object, parses its metadata or MetadataType attribute and returns a request message string is:

public static string GetSalesForceParams(object info)
{
    var sfProperties = info.GetType()
        .GetProperties()
        .Where(p => p.GetCustomAttributes(typeof(SalesForceParamAttribute), true).Any())
        .ToArray();

    if (!sfProperties.Any())
    {
        var metadataTypes = info.GetType()
            .GetCustomAttributes(typeof(MetadataTypeAttribute), true)
            .OfType<MetadataTypeAttribute>()
            .ToArray();

        var metadata = metadataTypes.FirstOrDefault();

        if (metadata != null)
        {
            sfProperties = metadata.MetadataClassType
                .GetProperties()
                .Where(p => p.GetCustomAttributes(typeof(SalesForceParamAttribute), true).Any())
                .ToArray();
        }
    }

    var sfParams =
        sfProperties
            .Where(p => info.GetType().GetProperty(p.Name).GetValue(info) != null)
            .Select(
                p =>
                ((SalesForceParamAttribute)p.GetCustomAttributes(typeof(SalesForceParamAttribute), false).First()).SalesForceParam +
                "=" +
                HttpUtility.UrlEncode(info.GetType().GetProperty(p.Name).GetValue(info).ToString()));

    return string.Join("&", sfParams);
}

The attribute definition is simple and straightforward:

namespace IW.Web.Common.SalesForce
{
    using System;

    [AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
    public class SalesForceParamAttribute : Attribute
    {
        public SalesForceParamType Type { get; set; }

        public string SalesForceParam
        {
            get
            {
                switch (Type)
                {
                    case SalesForceParamType.FirstName:
                        return "first_name";
                    case SalesForceParamType.LastName:
                        return "last_name";
                    case SalesForceParamType.Email:
                        return "email";
                    case SalesForceParamType.Company:
                        return "company";
                    case SalesForceParamType.Phone:
                        return "phone";
                    default:
                        return string.Empty;
                }
            }
        }
    }

    public enum SalesForceParamType
    {
        FirstName,
        LastName,
        Email,
        Company,
        Phone
    }
}
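Putting the attribute to work end-to-end is easiest to see on a model that carries the annotations directly, without the MetadataType indirection. A self-contained sketch — the attribute here is a simplified local copy that carries the raw parameter name instead of the enum, and Uri.EscapeDataString stands in for HttpUtility.UrlEncode so no System.Web reference is needed:

```csharp
using System;
using System.Linq;

// Simplified local copy of the attribute, hypothetical, just for the sketch.
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
class SalesForceParamAttribute : Attribute
{
    public string Name { get; set; }
}

// Hypothetical model annotated directly on its properties.
class TrialSignup
{
    [SalesForceParam(Name = "first_name")]
    public string FirstName { get; set; }

    [SalesForceParam(Name = "email")]
    public string EmailAddress { get; set; }
}

static class SalesForceDemo
{
    // Compact version of GetSalesForceParams: find the decorated properties,
    // skip null values, and join name=value pairs into the request string.
    public static string BuildParams(object info)
    {
        var pairs = info.GetType().GetProperties()
            .Select(p => new
            {
                Prop = p,
                Attr = p.GetCustomAttributes(typeof(SalesForceParamAttribute), false)
                        .OfType<SalesForceParamAttribute>()
                        .FirstOrDefault()
            })
            .Where(x => x.Attr != null && x.Prop.GetValue(info, null) != null)
            .OrderBy(x => x.Attr.Name) // deterministic order for the sketch
            .Select(x => x.Attr.Name + "=" +
                         Uri.EscapeDataString(x.Prop.GetValue(info, null).ToString()));

        return string.Join("&", pairs);
    }

    static void Main()
    {
        var qs = BuildParams(new TrialSignup
        {
            FirstName = "Me",
            EmailAddress = "me@mycompany.com"
        });
        Console.WriteLine(qs); // email=me%40mycompany.com&first_name=Me
    }
}
```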

Calling the Async methods from an ASP.NET MVC 4 controller

Whenever you’re calling async methods that are awaiting other calls, you need to make sure that the controller awaits on them, so that they run to completion within the lifetime scope of the controller, without tearing down the thread running the controller.

ASP.NET MVC 2 introduced the AsyncController, but back then the way we had to write async methods was too quirky. That changed with the async framework, and now writing an AsyncController is very clean:

public class HomeController : AsyncController
{
    public ActionResult Index()
    {
        return View(new LandingPageUser());
    }

    [HttpPost]
    public async Task<ActionResult> Index(LandingPageUser model)
    {
        // Check if the model is valid and try to Save it to the Database
        if (ModelState.IsValid && model.Save(ModelState))
        {
            // DO YOUR WORK
            // (...)

            // Integrate with SalesForce and send in the request
            if (Boolean.Parse(ConfigurationManager.AppSettings["EnableSalesforceRegistrations"]))
                await SalesForceExtensions.SubmitRequest(model);

            return View("Success");
        }

        return View(model);
    }
}

Entity Framework Code-First in a “semi-production” context

Lately I used Entity Framework Code First in a “semi-production” context — not in an application, but for managing Load Tests. The scenario: I had a Load Test based on a unit test (not a web test), and I had to manage a pool of users so that a single user wasn’t used at the same time by 2 instances of the unit test. Because the Load Test ran in a multi-agent scenario, the pool had to be accessible by all the agents running the unit test, thus the Database approach.

Preparing the Database in SQL Server

Entity Framework Code First will create and manage the Database for you. But in this context, I will be accessing the Database through a connection string with a SQL user, because not all the machines running agents are in the same domain, so going through a trusted connection isn’t an option. The SQL user will have access to the Load Test Database, but won’t have access to the master Database.

The first thing you need to do is create the Database yourself, because letting EF create it through this connection string would result in an authorization error.

After creating the Database, set up the SQL user so that it can connect to it, read, write and manage the schema.

The Entity Framework Model and Context

The first step is to add the Entity Framework NuGet package to your solution, either through the NuGet package manager:

blog12

Or just by opening the package manager console and typing in Install-Package EntityFramework

blog13

After that, create your model and your context. For our scenario we just need an object User that has a UserName key, a static password that doesn’t go to the Database, a boolean InUse, and a DateTime timestamp ReleasedOn so that we can ask for the users that have been released by the unit test for the longest time.

namespace EFCodeFirstStart.Model
{
    using System;
    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    public class User
    {
        [Key]
        public string UserName { get; set; }

        [NotMapped]
        public string Password { get { return "myStaticPassword"; } }

        public bool InUse { get; set; }

        public DateTime ReleasedOn { get; set; }
    }
}
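Given this model, the pool policy the Load Test needs — hand out the free user that was released the longest time ago, and mark it in use — can be sketched in memory. This is only an illustration of the selection logic: the real version runs against the shared Database, inside a transaction, so two agents can’t grab the same user:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Same shape as the User model above, redefined so the sketch compiles alone.
class User
{
    public string UserName { get; set; }
    public bool InUse { get; set; }
    public DateTime ReleasedOn { get; set; }
}

static class UserPool
{
    // Pick the free user released the longest time ago and mark it in use.
    public static User Acquire(List<User> users)
    {
        var user = users.Where(u => !u.InUse)
                        .OrderBy(u => u.ReleasedOn)
                        .FirstOrDefault();
        if (user != null) user.InUse = true;
        return user;
    }

    // Hand the user back and timestamp the release for future ordering.
    public static void Release(User user)
    {
        user.InUse = false;
        user.ReleasedOn = DateTime.UtcNow;
    }
}
```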

The context is really simple (we’ll get back to it later in the post). You inherit from DbContext and implement the default constructor to call the DbContext constructor that takes a connection string name, so that you can point to the Database previously created.

namespace EFCodeFirstStart.Model
{
    using System.Data.Entity;

    public class LoadTestDataContext : DbContext
    {
        public LoadTestDataContext() : base("name=EFConnectionString") { }

        public DbSet<User> Users { get; set; }
   }
}

Creating the Database from the Model – Code First approach

Make sure the connection string you’re using in the DbContext constructor is configured in the app.config or web.config of your application:

<connectionStrings>
  <add name="EFConnectionString" providerName="System.Data.SqlClient" connectionString="Server=localhost; Database=LoadTestData; User=efcodefirst; Password=mypwd1234" />
</connectionStrings>

The first step is enabling Code First migrations for your application. This must be done for every project with the Enable-Migrations command:

blog14

When this step is executed, a new Migrations folder and a Configuration.cs file will be created. The Configuration.cs file is one of the points where control is given back to the developer in the Code First approach.

namespace EFCodeFirstStart.Migrations
{
    using System;
    using System.Data.Entity;
    using System.Data.Entity.Migrations;
    using System.Linq;

    internal sealed class Configuration : DbMigrationsConfiguration<EFCodeFirstStart.Model.LoadTestDataContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(EFCodeFirstStart.Model.LoadTestDataContext context)
        {
            //  This method will be called after migrating to the latest version.

            //  You can use the DbSet<T>.AddOrUpdate() helper extension method 
            //  to avoid creating duplicate seed data. E.g.
            //
            //    context.People.AddOrUpdate(
            //      p => p.FullName,
            //      new Person { FullName = "Andrew Peters" },
            //      new Person { FullName = "Brice Lambson" },
            //      new Person { FullName = "Rowan Miller" }
            //    );
            //
        }
    }
}

You then need to add a migration every time you want to snapshot the Database schema, so let’s do one now and call it InitialSetup by running the command Add-Migration InitialSetup:

blog15

This will create another file in the Migrations folder, named with a timestamp followed by _InitialSetup (the name you gave the Migration):

namespace EFCodeFirstStart.Migrations
{
    using System;
    using System.Data.Entity.Migrations;
    
    public partial class InitialSetup : DbMigration
    {
        public override void Up()
        {
            CreateTable(
                "dbo.Users",
                c => new
                    {
                        UserName = c.String(nullable: false, maxLength: 128),
                        InUse = c.Boolean(nullable: false),
                        ReleasedOn = c.DateTime(nullable: false),
                    })
                .PrimaryKey(t => t.UserName);
            
        }
        
        public override void Down()
        {
            DropTable("dbo.Users");
        }
    }
}

In a normal application scenario we would be done, as Entity Framework handles the Database updates on every run; extra commands are only needed to revert or do other extra work on the migrations. However, because I had to run this from a Load Test project, the Database update has to be done manually, by calling Update-Database in the package manager console:

blog16

Where did the EDMX go?

If you’re like me, you do a lot of code generation based on the ADO.NET EDMX model. So far Code First looked really nice and I was liking it a lot, but without the EDMX I don’t have a good source to write templates against.

The folks on the Entity Framework team added the ability to save the EDMX file from the framework, so we just need to call it every time we change the model (before calling Update-Database). This is done by overriding the OnModelCreating method:

namespace EFCodeFirstStart.Model
{
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Text;
    using System.Xml;

    public class LoadTestDataContext : DbContext
    {
        public LoadTestDataContext() : base("name=EFConnectionString") { }

        public DbSet<User> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            var provider = new DbProviderInfo("System.Data.SqlClient", "2008");
            var model = modelBuilder.Build(provider);

            var writer = new XmlTextWriter(@"D:\TmpEdmx\my.edmx", Encoding.UTF8);
            EdmxWriter.WriteEdmx(model, writer);
        }
    }
}

Code First thoughts

So far I’ve liked using Code First: it feels smooth, flexible and lean, making model and schema change iterations a breeze. With the added support for generating the EDMX, everything is in place to do code generation like we used to with Model First approaches.

Playing around with the Ribbon and RichTextBox–4 of 4: Adding Hyperlink support to the RichTextBox

For my work, one of the things I had to support was Hyperlinks. I could do this the traditional way, where I would create a button in the Ribbon

blog10

And follow it up with a MessageDialogBox asking the user additional Hyperlink details

blog11

However, because my hyperlinks will always display the URI and never a user-typed name, detecting hyperlinks as the user types and creating them on the fly is a lot better than forcing the user to press a button and fill in the details.

Adding Hyperlink detection to the RichTextBox

I found a very good article on MSDN about this, written by Prajakta Joshi, which I modified to suit my needs. It all starts with detecting the preceding word in a FlowDocument Paragraph object:

private static string GetPreceedingWordInParagraph(TextPointer position, out TextPointer wordStartPosition)
{
    wordStartPosition = null;
    var word = String.Empty;
    var paragraph = position.Paragraph;

    if (paragraph != null)
    {
        var navigator = position;
        while (navigator != null && navigator.CompareTo(paragraph.ContentStart) > 0)
        {
            var runText = navigator.GetTextInRun(LogicalDirection.Backward);

            if (runText.Contains(" "))
            {
                var index = runText.LastIndexOf(" ", StringComparison.Ordinal);
                word = runText.Substring(index + 1, runText.Length - index - 1) + word;
                wordStartPosition = navigator.GetPositionAtOffset(-1 * (runText.Length - index - 1));
                break;
            }

            wordStartPosition = navigator;
            word = runText + word;
            navigator = navigator.GetNextContextPosition(LogicalDirection.Backward);
        }
    }

    return word;
}
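Stripped of the FlowDocument machinery, the backward scan amounts to “take the text between the previous space and the caret”. A plain-string sketch of the same idea:

```csharp
using System;

static class PrecedingWordDemo
{
    // Returns the word immediately before the given caret position,
    // scanning back to the previous space (or the start of the text).
    public static string GetPrecedingWord(string text, int caret)
    {
        if (caret <= 0) return string.Empty;
        var index = text.LastIndexOf(' ', caret - 1);
        return text.Substring(index + 1, caret - index - 1);
    }

    static void Main()
    {
        Console.WriteLine(GetPrecedingWord("visit http://foo.com", 20)); // http://foo.com
    }
}
```

The real implementation is more involved because a Paragraph can span several runs, which is what the GetNextContextPosition loop handles.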

I then hooked an event handler to the KeyDown event of the RichTextBox in the constructor of my UserControl:

public RibbonRichTextBox()
{
    InitializeComponent();
    _richTextBox.KeyDown += RibbonRichTextBoxKeyDown;
}

With the following implementation

private static void RibbonRichTextBoxKeyDown(object sender, KeyEventArgs e)
{
    var rtb = (RichTextBox) sender;
    if (e.Key != Key.Space && e.Key != Key.Return) return;

    var caretPosition = rtb.Selection.Start;
    TextPointer wordStartPosition;

    var word = GetPreceedingWordInParagraph(caretPosition, out wordStartPosition);
    if (!Uri.IsWellFormedUriString(word, UriKind.Absolute)) return;

    if (wordStartPosition == null || caretPosition == null) return;

    var tpStart = wordStartPosition.GetPositionAtOffset(0, LogicalDirection.Backward);
    var tpEnd = caretPosition.GetPositionAtOffset(0, LogicalDirection.Forward);

    if(tpStart != null && tpEnd != null)
    {
        var link = new Hyperlink(tpStart, tpEnd)
                        {
                            NavigateUri = new Uri(word)
                        };

        link.MouseLeftButtonDown += FollowHyperlink;
    }
}

Notice that I’m using the Uri class to check whether the word is a URI; this could be swapped for a Regex or another form of validation:

if (!Uri.IsWellFormedUriString(word, UriKind.Absolute)) return;
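For instance, a hypothetical Regex-based check restricted to http/https schemes — Uri.IsWellFormedUriString also accepts schemes like mailto: and ftp:, which you may not want to turn into links:

```csharp
using System;
using System.Text.RegularExpressions;

static class UriDetection
{
    // Stricter, hypothetical check: only http(s) URLs trigger link creation.
    static readonly Regex HttpUriRegex =
        new Regex(@"^https?://\S+$", RegexOptions.Compiled | RegexOptions.IgnoreCase);

    public static bool LooksLikeHttpUri(string word)
    {
        return HttpUriRegex.IsMatch(word);
    }

    static void Main()
    {
        Console.WriteLine(LooksLikeHttpUri("http://example.com/page")); // True
        Console.WriteLine(LooksLikeHttpUri("mailto:me@mycompany.com")); // False
    }
}
```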

Adding the ability to Ctrl+Click Hyperlinks and open them in IE

If you look at the RibbonRichTextBoxKeyDown implementation, there’s a line there, after creating the Hyperlink, where I add an event handler for the MouseLeftButtonDown event:

link.MouseLeftButtonDown += FollowHyperlink;

In the Handler implementation, I check whether Ctrl (left or right) is down, then start the browser (IE, in my case) with the URI present on the Link, and set the event’s Handled property to true to stop the routing

private static void FollowHyperlink(object sender, MouseButtonEventArgs e)
{
    if (!Keyboard.IsKeyDown(Key.LeftCtrl) && !Keyboard.IsKeyDown(Key.RightCtrl)) return;

    var link = (Hyperlink) sender;
    Process.Start(new ProcessStartInfo(link.NavigateUri.ToString()));
    e.Handled = true;
}
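A small caveat: passing a URL to Process.Start like this actually opens whatever browser is registered as the system default, which happened to be IE on my machine. If you wanted to force IE regardless of the default, a variant like the hypothetical sketch below would do it, assuming iexplore.exe resolves on the machine:

```csharp
// Hypothetical variant: force Internet Explorer specifically by passing the
// URI as an argument to iexplore.exe, instead of letting the shell pick the
// default browser. Assumes iexplore.exe is resolvable on this machine.
private static void FollowHyperlinkInIe(object sender, MouseButtonEventArgs e)
{
    if (!Keyboard.IsKeyDown(Key.LeftCtrl) && !Keyboard.IsKeyDown(Key.RightCtrl)) return;

    var link = (Hyperlink) sender;
    Process.Start("iexplore.exe", link.NavigateUri.ToString());
    e.Handled = true;
}
```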

The Full Series

The full solution can be downloaded from here.

This is the fourth article in the series Playing around with the Ribbon and RichTextBox.

Playing around with the Ribbon and RichTextBox–3 of 4: Creating an Insert Picture button in the Ribbon

For the work I was doing I had to design and create special buttons that extended the EditingCommands class: special text formatting for special text blocks, an Insert Image command and an Insert Video command. The special text formatting was very specific to my work, and the Video command is a lot trickier to implement, since its implementation is bound to a transformation performed later, where I convert the RichTextBox’s document XAML to XML.

So I chose to demonstrate the Insert Picture command.

[Screenshot: the Insert Picture button in the Ribbon’s Media group]

The button is a normal RibbonButton that’s defined inside a RibbonControlGroup inside a RibbonGroup

<ribbon:RibbonGroup Header="Media" x:Name="_mediaHeader">
    <ribbon:RibbonControlGroup>
        <ribbon:RibbonButton x:Name="_btnImage" Label="Image" LargeImageSource="/PlayingWithRibbon;component/Images/picture.png" Click="ButtonImageClick">
            <ribbon:RibbonButton.ControlSizeDefinition>
                <ribbon:RibbonControlSizeDefinition ImageSize="Large" />
            </ribbon:RibbonButton.ControlSizeDefinition>
        </ribbon:RibbonButton>
    </ribbon:RibbonControlGroup>
</ribbon:RibbonGroup>

Once the button is clicked, an OpenFileDialog is shown for the user to select the image. Since in my work I can only support the JPG and PNG file formats, these are the only ones being filtered

private static Image SelectImage()
{
    var dlg = new OpenFileDialog
    {
        // only JPG and PNG are supported, so filter out everything else
        Filter = "Image Files|*.png;*.jpg"
    };

    var result = dlg.ShowDialog();
    if (result.Value)
    {
        var bitmap = new BitmapImage(new Uri(dlg.FileName));
        return new Image
        {
            Source = bitmap,
            Height = bitmap.Height,
            Width = bitmap.Width
        };
    }

    return null;
}
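One thing to watch out for with this approach: a large photo comes in at its native size and can blow out the document width. A small variant of the sizing logic (a sketch, with a hypothetical maxWidth parameter) caps the width while preserving the aspect ratio:

```csharp
// Hypothetical variant: cap the image width at maxWidth while preserving
// the aspect ratio, so large photos don't overflow the document.
private static Image CreateSizedImage(BitmapImage bitmap, double maxWidth)
{
    var width = bitmap.Width;
    var height = bitmap.Height;

    if (width > maxWidth)
    {
        // scale both dimensions by the same factor
        var factor = maxWidth / width;
        width = maxWidth;
        height = height * factor;
    }

    return new Image
    {
        Source = bitmap,
        Width = width,
        Height = height
    };
}
```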

The Click Event Handler implementation, which inserts the actual picture into the Document

private void ButtonImageClick(object sender, RoutedEventArgs e)
{
    var image = SelectImage();
    if (image == null) return;

    var tp = _richTextBox.CaretPosition.GetInsertionPosition(LogicalDirection.Forward);

    // the InlineUIContainer(UIElement, TextPointer) constructor inserts
    // the container into the document at the given position
    new InlineUIContainer(image, tp);
}

Extra work could be done to create text flow around the image or other fancy tricks that the System.Windows.Documents namespace offers, but since my final output is XML, none of those were going to be supported anyway.
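For completeness, the text-flow idea would look roughly like this. It is a sketch under the assumption that you want text wrapping on both sides, using WPF’s Figure element instead of a bare InlineUIContainer; I never wired it into my XML transformation:

```csharp
// Sketch: a Figure anchored in the current paragraph lets the surrounding
// text wrap around the image, unlike a bare InlineUIContainer.
private void InsertFloatingImage(Image image)
{
    var figure = new Figure(new BlockUIContainer(image))
    {
        HorizontalAnchor = FigureHorizontalAnchor.ContentLeft,
        WrapDirection = WrapDirection.Both,
        Width = new FigureLength(image.Width)
    };

    var paragraph = _richTextBox.CaretPosition.Paragraph;
    if (paragraph != null)
    {
        paragraph.Inlines.Add(figure);
    }
}
```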

The Full Series

The full solution can be downloaded from here.

This is the third article in the series Playing around with the Ribbon and RichTextBox.