Roslyn: Package consolidation analyzer

Package consolidation is a very important factor in healthy code bases. When you have a single solution, Visual Studio's package manager is the only tool you need to stay consolidated across the solution. However, some teams decide to have multiple solutions, and consolidating packages across multiple solutions can be a difficult task.

The longer you move forward without consolidating, the harder consolidation becomes and the more risk you take when building, packaging and deploying. If things were working before, and your build order changes so that the newest version gets packaged instead of the oldest, no binding redirect can save you: you will deploy something that won't run!

Roslyn Analyzers to the rescue

The idea is simple: tap into Roslyn's compilation hook and, for each reference whose path contains the word "packages", inspect the packages folder and check for multiple references to the same assembly.

The first thing to change is the analyzer template itself, because Microsoft templates all analyzers as PCLs so that they can run against any kind of project. For this specific case, though, the projects that this contract deals with are all deployed to Windows Server topologies, either Azure IaaS or Azure PaaS. So I re-created the analyzer project from a PCL as a classic C# class library, so that I can tap into System.IO.

Another thing to note is that we're not scanning the entire folder. The rationale is that the analyzer should only analyze your current scope: if the packages folder contains multiple versions of a package that your solution doesn't reference, you shouldn't get an error in that solution (your current scope). So the analyzer only looks at references for the assemblies being compiled at the time and never does a full folder scan.

The analyzer

Here's the analyzer, and below it a model object called "Package" that I ended up creating because the analyzer was getting too big. I'm of the opinion that you shouldn't over-design analyzers unless you need to: start in the analyzer itself until it's dealing with too many responsibilities and the code becomes harder to read, then design around it.


namespace DevOpsFlex.Analyzers
{
    using System;
    using System.Collections.Generic;
    using System.Collections.Immutable;
    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.Diagnostics;

    /// <summary>
    /// Represents the Analyzer that enforces package consolidation (unique reference per package) and a unique packages folder
    /// for each assembly being compiled.
    /// </summary>
    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class PackageConsolidationAnalyzer : DiagnosticAnalyzer
    {
        /// <summary>
        /// This exists as a private static for performance reasons. We might get into the space where the HashSet might become too big,
        /// but we'll re-strategize if we get there.
        /// </summary>
        private static readonly HashSet<Package> Packages = new HashSet<Package>();

        private static readonly DiagnosticDescriptor SinglePackagesFolderRule =
            new DiagnosticDescriptor(
                id: "DOF0001",
                title: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderTitle), Resources.ResourceManager, typeof(Resources)),
                messageFormat: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderMessageFormat), Resources.ResourceManager, typeof(Resources)),
                category: "NuGet",
                defaultSeverity: DiagnosticSeverity.Error,
                isEnabledByDefault: true,
                description: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderDescription), Resources.ResourceManager, typeof(Resources)));

        private static readonly DiagnosticDescriptor UniqueVersionRule =
            new DiagnosticDescriptor(
                id: "DOF0002",
                title: new LocalizableResourceString(nameof(Resources.UniqueVersionTitle), Resources.ResourceManager, typeof(Resources)),
                messageFormat: new LocalizableResourceString(nameof(Resources.UniqueVersionMessageFormat), Resources.ResourceManager, typeof(Resources)),
                category: "NuGet",
                defaultSeverity: DiagnosticSeverity.Error,
                isEnabledByDefault: true,
                description: new LocalizableResourceString(nameof(Resources.UniqueVersionDescription), Resources.ResourceManager, typeof(Resources)));

        /// <summary>
        /// Returns a set of descriptors for the diagnostics that this analyzer is capable of producing.
        /// </summary>
        public sealed override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(SinglePackagesFolderRule, UniqueVersionRule);

        /// <summary>
        /// Called once at session start to register actions in the analysis context.
        /// </summary>
        /// <param name="context">The <see cref="AnalysisContext"/> context used to register actions.</param>
        public sealed override void Initialize(AnalysisContext context)
        {
            context.RegisterCompilationAction(AnalyzePackageConsolidation);
        }

        /// <summary>
        /// Analyzes that package consolidation (unique reference per package) and a unique packages folder
        /// are in place for each assembly being compiled. Because this is run per assembly, you might
        /// see a repetition of the same error.
        /// </summary>
        /// <param name="context">The <see cref="CompilationAnalysisContext"/> context that parents all analysis elements.</param>
        private static void AnalyzePackageConsolidation(CompilationAnalysisContext context)
        {
            var packageReferences = context.Compilation
                                           .References
                                           .OfType<PortableExecutableReference>()
                                           .Where(r => r.FilePath != null && r.FilePath.ToLower().Contains(Package.PackagesFolderName))
                                           .ToList();

            if (!packageReferences.Any()) return;

            var firstReferencePath = packageReferences.First().FilePath;
            var packagesFolder = firstReferencePath.Substring(0, firstReferencePath.IndexOf(Package.PackagesFolderName, StringComparison.Ordinal) + Package.PackagesFolderName.Length);

            // 1. Make sure there's only one packages folder
            if (packageReferences.Any(r => !r.FilePath.Contains(packagesFolder)))
            {
                context.ReportDiagnostic(
                    Diagnostic.Create(
                        SinglePackagesFolderRule,
                        context.Compilation.Assembly.Locations[0],
                        context.Compilation.AssemblyName // {0} MessageFormat
                    ));
            }

            // 2. Make sure that for each reference in the packages folder, we're only dealing with a unique version
            var newPackages = Directory.EnumerateDirectories(packagesFolder).Select(d => new Package(d)).Except(Packages);
            foreach (var package in newPackages)
            {
                Packages.Add(package);
            }

            var packagesNotConsolidated = packageReferences.Select(r => new Package(r.FilePath))
                                                           .Where(r => Packages.Count(p => p.Name == r.Name) > 1);

            foreach (var referencePackage in packagesNotConsolidated)
            {
                context.ReportDiagnostic(
                    Diagnostic.Create(
                        UniqueVersionRule,
                        context.Compilation.Assembly.Locations[0],
                        context.Compilation.AssemblyName, // {0} MessageFormat
                        referencePackage.Name // {1} MessageFormat
                    ));
            }
        }
    }
}
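To make the packages-folder derivation and the single-folder check concrete, here is a minimal Python sketch of the same string logic (the paths are made up for illustration, and this is only a sketch of the convention, not the analyzer itself):

```python
PACKAGES_FOLDER_NAME = "\\packages\\"  # same convention constant as the analyzer

def packages_folder(reference_path):
    # everything up to and including "\packages\", like the Substring/IndexOf pair in the analyzer
    idx = reference_path.lower().index(PACKAGES_FOLDER_NAME)
    return reference_path[:idx + len(PACKAGES_FOLDER_NAME)]

refs = [
    "C:\\src\\MySln\\packages\\Newtonsoft.Json.9.0.1\\lib\\net45\\Newtonsoft.Json.dll",
    "C:\\src\\MySln\\packages\\Newtonsoft.Json.8.0.3\\lib\\net45\\Newtonsoft.Json.dll",
]
folder = packages_folder(refs[0])               # "C:\\src\\MySln\\packages\\"
single_folder = all(folder in r for r in refs)  # rule DOF0001: one packages folder only
```

If any reference lived under a different packages folder, `single_folder` would be false, which is exactly the condition that fires DOF0001.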

And the companion Package class


namespace DevOpsFlex.Analyzers
{
    using System.Diagnostics.Contracts;
    using System.IO;
    using System.Text.RegularExpressions;

    /// <summary>
    /// Wraps logic around Name, Version and generic regular expression lazy initializations to support
    /// the package consolidation analyzer.
    /// </summary>
    public class Package
    {
        private static readonly string PackageVersionRegex = PackagesFolderName.Replace("\\", "\\\\") + "[^0-9]*([0-9]+(?:\\.[0-9]+)+)(?:\\\\)?";
        private static readonly string PackageNameRegex = PackagesFolderName.Replace("\\", "\\\\") + "([a-zA-Z]+(?:\\.[a-zA-Z]+)*)[^\\\\]*(?:\\\\)?";
        private static readonly string PackageFolderRegex = "(.*" + PackagesFolderName.Replace("\\", "\\\\") + "[^\\\\]*)\\\\?";

        private string _version;
        private string _name;

        /// <summary>
        /// This is a convention constant that holds a string that all folders we consider a "packages" folder contain.
        /// </summary>
        internal const string PackagesFolderName = "\\packages\\"; // convention

        /// <summary>
        /// Initializes a new instance of <see cref="Package"/>.
        /// Has built-in Contract validations that will all throw before any other code is able to throw.
        /// </summary>
        /// <param name="path">The path to the package folder that this package is based on.</param>
        public Package(string path)
        {
            Contract.Requires(!string.IsNullOrEmpty(path));
            Contract.Requires(Directory.Exists(path));
            Contract.Requires(path.Contains(PackagesFolderName));
            Contract.Requires(Regex.IsMatch(path, PackageFolderRegex, RegexOptions.Singleline), $"When casting string (path) to Package you need to ensure your path is being matched by the Folder Regex [{PackageFolderRegex}]");

            Folder = Regex.Match(path, PackageFolderRegex, RegexOptions.Singleline).Groups[1].Value;
        }

        /// <summary>
        /// Gets the package folder without the trailing "\".
        /// </summary>
        public string Folder { get; }

        /// <summary>
        /// Gets the package name component of the package folder as a string.
        /// </summary>
        public string Name => _name ?? (_name = Regex.Match(Folder, PackageNameRegex, RegexOptions.Singleline).Groups[1].Value);

        /// <summary>
        /// Gets the package version component of the package folder as a string.
        /// </summary>
        public string Version => _version ?? (_version = Regex.Match(Folder, PackageVersionRegex, RegexOptions.Singleline).Groups[1].Value);

        /// <summary>
        /// Determines whether the specified object is equal to this one.
        /// </summary>
        /// <param name="y">The <see cref="Package"/> object to compare with.</param>
        /// <returns>true if the specified objects are equal; otherwise, false.</returns>
        public override bool Equals(object y)
        {
            Contract.Requires(y != null);
            Contract.Requires(y.GetType() == typeof(Package));

            return Folder == (y as Package)?.Folder;
        }

        /// <summary>
        /// Returns a hash code for this object.
        /// </summary>
        /// <returns>A hash code for this object.</returns>
        public override int GetHashCode()
        {
            return Folder.GetHashCode();
        }
    }
}
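The Name and Version regular expressions above are easy to check outside .NET. Here is a small Python sketch of the same two patterns, run against a made-up package folder path:

```python
import re

# literal "\packages\" escaped for regex use, like the Replace("\\", "\\\\") calls above
PACKAGES = re.escape("\\packages\\")
NAME_RE = PACKAGES + r"([a-zA-Z]+(?:\.[a-zA-Z]+)*)[^\\]*(?:\\)?"
VERSION_RE = PACKAGES + r"[^0-9]*([0-9]+(?:\.[0-9]+)+)(?:\\)?"

folder = "C:\\src\\MySln\\packages\\Newtonsoft.Json.9.0.1"
name = re.search(NAME_RE, folder).group(1)        # "Newtonsoft.Json"
version = re.search(VERSION_RE, folder).group(1)  # "9.0.1"
```

The name capture stops at the first dot-separated segment that isn't purely alphabetic, which is why the version number never leaks into the package name.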


DevOps rant: Pussy developers

Series Overview

I moved to a DevOps team about a year ago and although we're not really doing DevOps, it's a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won't let folks make the same mistakes when presented with the same funny stuff.

Overview

When a development team, or a group of teams, collectively act like a bunch of pussies, they get into trouble easily. Often, the trouble they get themselves into will spill over onto other folks and teams, and when those aren't also collectively a bunch of pussies, it will upset them.

The Pussy Developer

A pussy developer is typically a guy that will always agree and do whatever they throw at him. The most intense example would be when you ask the technical lead of your off-shore team in India: “Hey, can you guys build a button on the system that every time I press it, popcorn will come out at my desk?” and you get the obvious answer “Sure, that’s not a problem“.

I'm of the opinion that successful software today is very different from what it was 10 years ago. With the agile mindset, the best thing you can do as a developer is write lean code: the leaner it is, the better you can cope with change and the more agile you'll be. So things like speculative abstractions don't really fit today's lean codebases; you want to deal with the now, ignore the "What if in the future we (…)", and instead align the codebase to deal with those "what ifs" very quickly when they actually happen.

I keep seeing folks who haven't written good code in the last 5 years tell developers how to write code, as if, 5 years later, you'd expect things in the fastest-changing engineering space to be done the same way. Worse, I keep seeing folks being pushed by others into technology stacks they aren't comfortable with. And while all this is going on, not a single fuck off.

The genuine knowledge driven nature of developers

I love listening to genuine sports folks talking about their art. Living in Ireland while being from Portugal has inevitably led to me following two genuine sports figures: Guy Martin and Conor McGregor. I was listening to the radio a few weeks ago and they had on a show: Conor McGregor and a sports commentator of high reputation. They start the show and McGregor starts disagreeing with the commentator on the subject of enduring pain and committing to a certain sport, and after disagreeing with her twice he says "You sit your fat ass in a fucking sofa all day long and you're talking to me about pain? What do you know about pain?", and then he carries on until he completely destroys the show. This is to be expected when you mix the doer (and he's not a pussy) with the thinker and they start to disagree.

If McGregor were a developer he'd probably say something along the lines of: "You haven't written code in 5 years and you're fucking telling me how to write code?"

Developers are technical folks. "Technical" comes from technique, and it means these folks are more interested in technique than in application; in other words, they care about how a system works instead of what a system does, unlike business users for example. So they pride themselves on how the system works, and if it doesn't work properly, despite doing everything it's supposed to do, there is no pride, no joy and no fun. This is one of the reasons turnover is so high among developers, and still today a lot of IT management doesn't get this.

So if you pride yourself on building stuff that works well, why would you ever let someone that no longer knows how to build stuff push you around?

Avoid being a pussy

Just tell people to fuck off: in a blunt fashion if the environment allows you to, or in a more polite and diplomatic fashion if not: "I'll take what you just said into consideration and evaluate it before doing the task."

If you're being pressured into writing code in a specific way that you don't agree with, ask the person where you can see any of the commits he (or she) has done, so that you can evaluate whether that person is a peer or not, because if the person isn't your peer you shouldn't be wasting your time being taught how to code by someone who doesn't code.


If you’re being pressured into the Java stack as a .Net developer, just say “Hey, just get rid of me and find a Java guy, I have no hard feelings guys, it’s business as usual”.

Because if you don't do these things, the outcome won't be anything that will ever give you, or anyone else that actually built it, any pride, joy or even a small spark of fun.

 

Moving XAML CD builds to vNext

I'm currently in the process of porting a series of Continuous Deployment XAML builds to the new vNext builds. I want to share the ins and outs of the process and some of the constraints you can face today while doing it.

Continuous Deployment build passes

I like to break down CD builds into two distinct passes: the build and deploy pass and the test pass.

While building and deploying, I prefer to target specific projects, because you will often see multiple deployment targets in the same solution, and these will sometimes have different MSBuild /target switches. Applying a single MSBuild target to a solution means you either route targets in the project files (a terrible idea) or you build the same solution multiple times, which can lead to confusion.

For the test pass, it's preferable to build solutions; otherwise you need to constantly maintain the build definitions as people add more test projects. If folks are following good agile practices they will have one test assembly per assembly under test, so as folks add assemblies, test assemblies will follow. If it's an over-engineered system, you will notice assemblies getting created like popcorn popping in the pot, so if you go down the path of targeting projects in the test pass, the amount of maintenance required will be high.

Deployment tasks in vNext

One of the main reasons for us to move to vNext was that, if you wire things up properly, you get direct feedback from PowerShell tasks: the new engine is PowerShell driven, so warnings and errors from PowerShell tasks bubble up into the build logs and summary, unlike the previous XAML workflow, which would log issues in the text log but wouldn't bubble anything out.

With this in mind, one of the cool out-of-the-box features of vNext is the number of deployment tasks the VSO folks have written for us, so right away you can decommission some of your custom deployment scripts and start using the tasks. If these aren't quite ideal for you, you can get their source from GitHub, tweak them, and publish them into your collection or project as tweaked versions of the original tasks.

So for example, a given Cloud Service build and deploy pass for us looks like this:

vnext_blog_1.png

Some of the deployment tasks don't suit our needs. For example, we often deploy Azure Web Sites as Web Job shells, so what we really care about is the Web Jobs publish process, especially the scheduling pass; the out-of-the-box deployment tasks for Web Sites won't schedule web jobs.

Build and deployment pass

vnext_blog_2.png

The first 3 are build + deployment tasks for Cloud Services, and the last 3 are pure web deploy build passes for web deployment targets. One of them is actually a pure web deploy into an IaaS Web Server VM.

So for web deployments we are actually calling MSBuild with a build /target, then setting the web deployment properties to trigger the packaging and deployment, and giving it a publish profile.

The Publish Build Artifacts task is a nice addition to the build engine, because we do not allow developers to access any build rigs. It publishes the selected artefacts into TFS, where they stay for as long as the build is kept by the retention policy applied to that build definition. So for this specific case, where we are storing Cloud Service packages, the developer can download them, unpack them and check whether a certain file that should be there actually is.

vnext_blog_3.png

Security-wise, because build artefacts can only be viewed by someone with view permissions on the build, you get to control this. For Production builds, for example, your developers wouldn't be viewing the builds anyway, so they won't see the artefacts or any configuration contained in them.

Test pass

vnext_blog_4.png

Ideally we would be calling a VS Build task for each solution. However, in our current on-premises TFS version (2015 RTM) we have found that the VS Build task keeps appending a bunch of stuff to the PATH environment variable each time it runs, so after a while you reach the maximum size for an environment variable and the task errors right away, breaking the build. For us this means 8 calls to the VS Build task work; on the 9th we get the PATH error straight away.

So we have a single pass that picks up solutions on a matching pattern that we, by convention, set for solutions that contain tests: *.B.sln. Then we run all test passes individually for whatever test passes a given project has. These test passes use Test Traits to pick and filter the tests that should run in them.

We do not usually run unit tests here, because these run in all Continuous Integration builds, so from a quality point of view the CI pass should already have cleared out any failing unit tests by now.

Creating builds that are easily cloned

The first thing you need to do before you fully automate the creation of Release Pipelines is to make sure that creating vNext builds from code doesn’t need to deal with a lot of moving parts.

So ideally, if you can create a CD build for a new environment just by cloning a build and changing its configuration, you’re in a very good space for automation. One of the tricks of achieving this is making sure that everything is aligned, from code to infrastructure, with solution configurations.

So in our case, we align all PaaS components, when we create them in a fully automated way, with the configuration names for a project: if there is a solution configuration called AT, then the Cloud Service for the AT environment will be called sysname-component-AT. Ideally you want a single set of configurations across all branches, so that you have fewer configurations to deal with, and so that you smoothly and automatically prevent branches from deploying to environments they shouldn't deploy to. However, sometimes folks will resist this idea.
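As a trivial illustration of that convention (the names here are hypothetical), deriving a PaaS component name from a solution configuration is just string composition, which is exactly what makes it automatable:

```python
def cloud_service_name(sysname, component, configuration):
    # hypothetical helper: the convention is sysname-component-<solution configuration>
    return "{0}-{1}-{2}".format(sysname, component, configuration)

# a solution configuration called "AT" maps straight to the AT environment's service
name = cloud_service_name("flex", "webapi", "AT")  # "flex-webapi-AT"
```

Because the mapping is purely mechanical, a cloned build definition only needs its $(BuildConfiguration) changed for everything downstream to line up.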

With this in mind, try to use $(BuildConfiguration) as much as possible, for example:

vnext_blog_5.png

vnext_blog_6.png

 

DevOps rant: The maintenance test

Series Overview

I moved to a DevOps team about a year ago and although we're not really doing DevOps, it's a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won't let folks make the same mistakes when presented with the same funny stuff.

Overview

By now you’re probably wondering What the fuck is a maintenance test?

Well, it's definitely not a test. It's an automated runbook that a developer, probably lacking operations/infrastructure knowledge, decided to write as a test and wire into a test run in an automated release pipeline.

This specific one is worth mentioning because the reasons that caused it to be written are the same old mistakes people were making 10 years ago, and sadly keep repeating today in hopes of different outcomes.

Context

There is a set of performance tests that create a lot of documents in SharePoint (in Office 365). After a while, the container of these documents holds more than 5,000 of them, so SharePoint, with the default list view threshold applied, will start showing you nothing but an error page saying you have more than 5,000 documents in that list.

This means the test needs to clean up. Tests that require cleaning up should always do it after and never before, because you never want to leave a given environment dirty until you come back to it; it's a bad principle. However, this set of performance tests decided to "try" to clean up before the test run, leaving the environment unusable between performance test runs.

This is like only cleaning up your house before a party, so that it's always clean for parties, but the rest of the time, while you're living there, you get to enjoy all the dirt and mess of the previous party.

Moral of the Story

About 10 years ago, all stacks had examples of frameworks or tools designed with the goal of "anyone can build apps" in mind. In the generic sense, without taking specific niches into account, they all failed. In the .Net space the biggest crash was WebForms, which was designed around the notion that anyone can drag a few boxes around in the editor, populate a few properties and build any kind of app. The resulting programming model was awful, and developers usually stayed away from it as much as they could!

The only platforms that truly succeeded in this space were the ones built on top of very strong programming frameworks that always allowed developers to go in and customize or tweak things their way. A good example is Unity3D, where the level designer can do a lot in the graphical editor by dragging boxes around, but Mono and C# are at the disposal of developers to build the boxes the other guy drags around.

So, you might think, with all these failures in the history of software, have we all learned that you always need developers around to actually build a piece of code? Obviously not: there are lots of folks out there who jump through hundreds of hoops trying to reach the utopia of software without developers.

So sadly we keep witnessing people using testers to "build" automated UI tests, testers to "build" automated performance tests, and so on. This specific example is one of those, where a tester built a performance suite. Because he's a tester, he has a hard time coming up with a way to properly clean up SharePoint after his test suite runs.

And because the developer doesn't want anything to do with a bunch of generated code from the performance test recorder, he stays away from the tester-built performance suite, where, ideally, the clean-up code should be written.

My previous contract had a tester building an automated UI test suite for about 6 months, only to realize it wasn't maintainable. So instead, they decided to get a full team of testers to build a new one…


DevOps rant: TFS merge discard strategy

Series Overview

I moved to a DevOps team about a year ago and although we're not really doing DevOps, it's a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won't let folks make the same mistakes when presented with the same funny stuff.

Overview

Today, I'm a solid believer that most TFS projects should be on Git, not TFVC. Yes, Git has a learning curve compared to TFVC, which is massively supported by the Visual Studio UI, but once that learning curve is climbed, the rewards are greater.

This is especially true on projects that use PaaS components and are built by folks who love to over-engineer: instead of a few components you end up with tens of them, and instead of a few config files you should avoid merging, you end up with tens or even hundreds of those. In a Git repo you can combine clever use of Git attributes with git filter-branch; on a TFS repo, your options are a lot more limited.

Real Life Example

I'm currently working with two projects: one should definitely be using Git as its repo, as the level of over-engineering is high, and the other fits nicely in TFS.

The super-engineered project never knew how to deal with merges. For a very long time, what they did was a "blind merge" followed by manually undoing the changes they thought shouldn't go in. While this was done by a single person it actually worked; their problems started when other folks started to merge and didn't really know what not to merge.

So their solution was simple: create a project configuration per environment per branch. Let's not argue about the fact that this is a lot harder to maintain, because honestly, if it's over-engineered, going down the path of arguing about maintainability indexes is purely a waste of everyone's time. Focus instead on what this prevents my DevOps team from doing in the scope of this project.

Let's imagine DevOps is now given the time and resources to build a magic button: press it and you get a new branch, a new set of environments and a new release pipeline (after we have built the magic buttons that bring espressos and popcorn!). Currently we aren't very far from this; the only real automation we're missing is the release pipeline, and that's not that hard.

When you add the fact that you now need new configurations and all sorts of related artefacts, like new config transforms, new service configuration files, etc., you immediately drop the idea of automating.

I have been babbling about the notion of controlling the merge process through scripting a set of tf merge /discard commands for a while now, but every time I mention it I get the feeling I'm talking Portuguese to a bunch of Indian folks: they always nod saying "Yes" while actually thinking "I have no idea what this crazy guy is babbling about".

So the other project, the one more on the Lean side of things, had this same problem recently. Due to its simplicity, I decided to step in and, instead of babbling, just write the script for the project and kick off the merge workflow, rather than giving them the chance to wander into the realm of creating 10 more solution configurations.

Later I sent the script to the first set of guys so that they could understand what I had been babbling about all this time, but the feedback I indirectly got was that it was "technically advanced".

The tf merge /discard PowerShell script


function ApplyMergeDiscard
{
    [cmdletbinding(SupportsShouldProcess=$true)]
    param
    (
        [Parameter(Mandatory=$true)]
        [string] $LocalPath,

        [Parameter(Mandatory=$true)]
        [ValidateSet("MainIntoDev", "DevIntoMain")]
        [string] $Direction,

        [Parameter(Mandatory=$false)]
        [string] $BaseDevBranch = "$/YOUR PROJECT/BRANCH1/",

        [Parameter(Mandatory=$false)]
        [string] $BaseMainBranch = "$/YOUR PROJECT/BRANCH2/"
    )

    $env:Path = $env:Path + ";C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE"

    $discards = @( `
        # Some stuff you shouldn't merge
        "Stuff1.publish.proj", `
        "Stuff2.publish.proj", `
        # Some more stuff you shouldn't merge
        "Some.Project/AConfiguration.Debug.config", `
        "Some.Project/AConfiguration.Release.config" `
    )

    Set-Location $LocalPath

    $discards | ForEach-Object {
        if($Direction -eq "MainIntoDev") {
            $sourcePath = $BaseMainBranch + $_
            $targetPath = $BaseDevBranch + $_
        }
        else {
            $sourcePath = $BaseDevBranch + $_
            $targetPath = $BaseMainBranch + $_
        }

        if($WhatIfPreference -eq $false) {
            Write-Verbose "Discarding $sourcePath into $targetPath"
            & tf merge /discard $sourcePath $targetPath
        }
        else {
            Write-Host "WhatIf: Discarding $sourcePath into $targetPath"
        }
    }
}

This script supports both the -Verbose and -WhatIf cmdlet bindings, and it's written so that the only thing you actually need to maintain is the array of sub-paths of the stuff you don't want to merge.

So, unlike the feedback I got, this is definitely not rocket science to maintain, and it's a good starting foundation for dealing with merges.

You run the script before you actually do the merge; if you didn't get it right, you can simply undo pending changes, tweak the script, and check again. When you're happy with the discards, you perform the merge and then check in.

Adding R# and StyleCop to your project build process

Most .Net projects, because the team composition isn't ideal, will tend to enforce a series of what project leads like to call good coding standards. These are then checked by tools like StyleCop and R# (R# checks for a lot more than just coding standards). But for anyone to get feedback on these in a multi-solution system, you need to wire them into your build process; this is where your options open up, and what you pick will impact the way developers look at it. With StyleCop you have more options than with R#: because it's such an enforced tool in some industries, like finance, there are more community contributions to artefacts that plug into either the TFS build workflow or plain MSBuild.

The right option: MSBuild task

I have found that the best way to wire these tools in is through MSBuild, because developers do not run them very often, or willingly, in many cases. Having them integrated as MSBuild tasks produces build warnings in Visual Studio as the developer continuously builds the solution during the development cycle. This also gives each developer a build output from within Visual Studio exactly like that of a Continuous Integration build triggered through TFS, so if you have a clean error log, you're guaranteed a clean build summary on the TFS CI build.

These MSBuild tasks should be applied to a specific configuration, to free Debug from validation passes and not degrade the development experience; I usually create a DebugCI configuration that all CI builds run on. Most developers also tend to have violations throughout the duration of an entire project, so keeping code validation on a specific configuration makes the other builds free of violation warnings, letting people focus on potentially dangerous warnings instead of having to look through hundreds of them.

Adding StyleCop to a project

Simply run Install-Package StyleCop.MSBuild in the Package Manager Console to install the StyleCop.MSBuild NuGet package. This adds StyleCop to your project, but for all configurations. Unload the project in Visual Studio and edit the project file, or just edit it right away in your preferred text editor. Look for the line that imports the StyleCop targets file:

<Import Project="..\..\packages\StyleCop.MSBuild.4.7.49.1\build\StyleCop.MSBuild.Targets" Condition="Exists('..\..\packages\StyleCop.MSBuild.4.7.49.1\build\StyleCop.MSBuild.Targets')" />

And add a condition for the specific configuration you want to target; in my case that’s DebugCI:

<Import Project="..\..\packages\StyleCop.MSBuild.4.7.49.1\build\StyleCop.MSBuild.Targets" Condition="Exists('..\..\packages\StyleCop.MSBuild.4.7.49.1\build\StyleCop.MSBuild.Targets') And '$(Configuration)' == 'DebugCI'" />

Adding R# to a project

JetBrains exposes their common R# tooling assemblies through a NuGet package; just add it to the project in the solution that you want to set up (note that not all of the projects need to include the package). The package, when included in the project, imports a targets file that in turn references another targets file that declares the InspectCode task through UsingTask.

Unlike StyleCop, R# is added to a single project, targets the solution file and takes a project filter, so I usually add it to the output project of a solution, the one with the least chance of being refactored. Like StyleCop, don’t forget to add a condition for a specific configuration.

  <Target Name="AfterBuild">
    <InspectCode SolutionFile="..\[MySolutionFile].sln" IncludedProjects="[MyProject_1];[MyProject_2]" Condition=" '$(Configuration)' == 'DebugCI' " />
  </Target>

Applying configuration transforms outside Visual Studio

Recently I was putting a NuGet package together and one of the things the package needs to do is change the configuration file when added to a project.

NuGet 2.7 and forward supports the use of XDT transformation files in the form of .install.xdt and .uninstall.xdt that run during package install and uninstall respectively.
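As a sketch of what such a transform looks like (the key name here is purely illustrative, not from the original package), a minimal .install.xdt that inserts an appSetting if it is missing could be:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- InsertIfMissing only adds the element when no matching key exists -->
    <add key="MyPackageSetting" value="default"
         xdt:Transform="InsertIfMissing" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```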

However, all the files that are part of a NuGet package composition are usually outside projects in Visual Studio, so I needed a way to test that these transformations actually worked on a live config file.

To achieve this I wrote a PowerShell script that loads the Microsoft.Web.XmlTransform.dll assembly, applies the transformation and writes the result to an out.xml file in the same location as the script. The path to the XmlTransform DLL is hardcoded in the script, but it is the path on a default installation of Visual Studio 2015.


param
(
    [parameter(Mandatory=$true)]
    [string]
    $Xml,

    [parameter(Mandatory=$true)]
    [string]
    $Xdt
)

if (!(Test-Path -Path $Xml -PathType Leaf))
{
    throw "XML File not found. $Xml"
}

if (!(Test-Path -Path $Xdt -PathType Leaf))
{
    throw "XDT File not found. $Xdt"
}

Add-Type -LiteralPath "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\spzpqgng.yjx\Microsoft.Web.XmlTransform.dll"

$xmldoc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument
$xmldoc.PreserveWhitespace = $true
$xmldoc.Load($Xml)

$transf = New-Object Microsoft.Web.XmlTransform.XmlTransformation($Xdt)
if ($transf.Apply($xmldoc) -eq $false)
{
    throw "Transformation failed."
}

$xmldoc.Save("$PSScriptRoot\out.xml")

New open source project – DevOpsFlex

It’s been a while since I last posted. In November last year I decided to take on a project run in pure Waterfall, and from a developer’s point of view the problem with Waterfall is that, because of the nature of the cycles, you never really build something cool or good; you’re always trying to deliver instead of building. So you get stuck in this delivery cycle and you’re not really accomplishing anything worth writing about.

That is behind me now and I’m back to Agile, working in a DevOps team doing automation for a .Net programme. The development work I will be doing will be fully open sourced.
So far I have been working on a single TFS build workflow activity that scales Azure VMs up and down depending on what you want to do with them. For us, we want to scale down development environments during the night and over the weekend, but not completely shut them down, so that we can still do continuous deployments during nightly builds. Reducing the VMs to A1s, or even A0s, will save a lot of money as environments ramp up during the development cycle.

The home for these TFS build activities is:
https://github.com/sfa-gov-uk/devopsflex

And they are already published to NuGet:
https://www.nuget.org/packages/DevOpsFlex.Activities/

I have a couple more things I want to do with this activity:

  • Add a nice WPF designer to the activity.
  • Add the ability to shut down and start VMs instead of just scaling them up and down.
  • Add the ability to wait for the VMs to be back up before exiting the activity execution cycle. This allows developers to track the TFS build for when the environment is back up and fully functional, and if they are tracking TFS builds they will get notifications for it.

Testing that all Fault Exceptions are being handled in a WCF client

One of the things the .Net compiler won’t warn developers about is when another developer adds a new FaultException type and the client code isn’t updated to handle it. The solution I’m demonstrating here is a generic check for this, but it implies that the client goes through a ChannelFactory and not a ClientBase implementation.

ChannelFactory implementations are usually better if there’s full ownership, within the institution, of both service and clients. Sharing the service contracts allows Continuous Integration builds to fail if a breaking change made on the service broke one or more of the consuming clients. You may argue that ChannelFactory implementations have the issue that if you change the service, even with a non-breaking change, you need to re-test and re-deploy all your client code: this isn’t exactly true, since if it is a non-breaking change all the clients will continue to work even after a re-deploy of the service.

Default ChannelFactory Wrapper

The generic implementation depends on our default WcfService wrapper for a ChannelFactory. This could be abstracted through an interface exposing the Channel getter, making the generic method depend on the interface instead of the concrete implementation.

I will provide here a simple implementation of the ChannelFactory wrapper:


public class WcfService<T> : IDisposable where T : class
{
    private readonly object _lockObject = new object();
    private bool _disposed;
    private ChannelFactory<T> _factory;
    private T _channel;

    internal WcfService()
    {
        _disposed = false;
    }

    internal virtual T Channel
    {
        get
        {
            if (_disposed)
            {
                throw new ObjectDisposedException("Resource WcfService<" + typeof(T) + "> has been disposed");
            }

            lock (_lockObject)
            {
                if (_factory == null)
                {
                    _factory = new ChannelFactory<T>("*"); // First qualifying endpoint from the config file
                    _channel = _factory.CreateChannel();
                }
            }

            return _channel;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    internal void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }

        if (!disposing)
        {
            return;
        }

        lock (_lockObject)
        {
            if (_channel != null)
            {
                try
                {
                    ((IClientChannel)_channel).Close();
                }
                catch (Exception)
                {
                    ((IClientChannel)_channel).Abort();
                }
            }

            if (_factory != null)
            {
                try
                {
                    _factory.Close();
                }
                catch (Exception)
                {
                    _factory.Abort();
                }
            }

            _channel = null;
            _factory = null;
            _disposed = true;
        }
    }
}


Example of a client using the Wrapper

Here’s an example of the code we want to test, for a client that’s using the WcfService wrapper. The separation between the public method that creates the WcfService wrapped in a using block and the internal static one is just for testing purposes, so we can inject a WcfService mock and assert against it. The client wraps a FaultException into something meaningful for the consuming application.


public class DocumentClient : IDocumentService
{
    public string InsertDocument(string documentClass, string filePath)
    {
        using (var service = new WcfService<IDocumentService>())
        {
            return InsertDocument(documentClass, filePath, service);
        }
    }

    internal static string InsertDocument(string documentClass, string filePath, WcfService<IDocumentService> service)
    {
        try
        {
            return service.Channel.InsertDocument(documentClass, filePath);
        }
        catch (FaultException<CALFault> ex)
        {
            throw new DocumentCALException(ex);
        }
        catch (Exception ex)
        {
            throw new ServiceUnavailableException(ex.Message, ex);
        }
    }
}


The generic Fault contract checker

This implementation uses Moq as the mocking framework and the code depends on it. It provides signatures for up to 4 expected exceptions, using a top-down approach where the signature with the most type parameters has the full implementation and the others just call the one one level higher in the signature chain. To support this pattern, a special empty DummyException is declared to fill the gaps between type parameters in the different signatures.

Breaking the code down: it creates a dynamic expression tree that we can wire into the Setup method of the client mock, which will intercept calls with any type of parameter (It.IsAny). Then, for each FaultContractAttribute decorating the service operation, it instantiates the fault detail, sets the service method up to throw the corresponding FaultException, invokes the client, and checks whether the fault was caught and wrapped or whether we got the original FaultException back.
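To isolate the expression-tree part of the trick, here’s a simplified, Moq-free sketch (the ICalculator interface below is purely illustrative): it builds an Action&lt;T&gt; lambda for a MethodInfo on the fly, filling the arguments with default values where the real checker would use It.IsAny.

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) => a + b;
}

public static class Sketch
{
    // Build an Action<T> that calls 'method' with default-valued arguments.
    // The real checker substitutes It.IsAny<T>() calls for the arguments;
    // here we use Expression.Default to stay dependency-free.
    public static Action<T> BuildCall<T>(MethodInfo method)
    {
        // The same ParameterExpression instance must be used both as the
        // call target and as the lambda parameter, or the tree is invalid.
        var param = Expression.Parameter(typeof(T), "svc");
        var args = method.GetParameters()
                         .Select(p => (Expression)Expression.Default(p.ParameterType));
        var body = Expression.Call(param, method, args);
        return Expression.Lambda<Action<T>>(body, param).Compile();
    }

    public static void Main()
    {
        var method = typeof(ICalculator).GetMethod("Add");
        var call = BuildCall<ICalculator>(method);
        call(new Calculator()); // invokes Add(0, 0)
        Console.WriteLine("invoked");
    }
}
```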


public static class ContractCheckerExtension
{
    public static string CheckFaultContractMapping<TContract, TEx1>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, TEx4>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
        where TEx4 : Exception
    {
        // we're creating a lambda on the fly that will call the target method
        // with all parameters set to It.IsAny<[the type of the param]>.
        // The same ParameterExpression instance must be used as both the call
        // target and the lambda parameter, otherwise the expression is invalid.
        var contract = Expression.Parameter(typeof(TContract));
        var lambda = Expression.Lambda<Action<TContract>>(
            Expression.Call(contract, method, CreateAnyParameters(method)),
            contract);

        // for all the fault contract attributes that are decorating the method
        foreach (var faultAttr in method.GetCustomAttributes(typeof(FaultContractAttribute), false).Cast<FaultContractAttribute>())
        {
            // create the specific exception that gets thrown by the fault contract
            var faultDetail = Activator.CreateInstance(faultAttr.DetailType);
            var faultExceptionType = typeof(FaultException<>).MakeGenericType(faultAttr.DetailType);
            var exception = (FaultException)Activator.CreateInstance(faultExceptionType, faultDetail);

            // mock the WCF pipeline objects, channel and client
            var mockChannel = new Mock<WcfService<TContract>>();
            var mockClient = new Mock<TContract>();

            // set the mocks
            mockChannel.Setup(x => x.Channel)
                       .Returns(mockClient.Object);

            mockClient.Setup(lambda)
                      .Throws(exception);

            try
            {
                // invoke the client, wrapped in an Action delegate
                action(mockChannel);
            }
            catch (Exception ex)
            {
                // if we get a targeted exception it's because the fault isn't being handled,
                // so we return the full name of the fault contract detail type that was caught
                if (ex is TEx1 || ex is TEx2 || ex is TEx3 || ex is TEx4)
                    return faultAttr.DetailType.FullName;

                // else soak all other exceptions because we are expecting them
            }
        }

        return null;
    }

    private static IEnumerable<Expression> CreateAnyParameters(MethodInfo method)
    {
        return method.GetParameters()
                     .Select(p => typeof(It).GetMethod("IsAny").MakeGenericMethod(p.ParameterType))
                     .Select(a => Expression.Call(null, a));
    }
}

[Serializable]
public class DummyException : Exception
{
}


Here’s a sample of a unit test using the ContractChecker for the example client shown earlier in the post:


[TestMethod]
public void Ensure_InsertDocument_FaultContracts_AreAllMapped()
{
    var targetOperation = typeof(IDocumentService).GetMethod(
        "InsertDocument",
        new[]
        {
            typeof(string),
            typeof(string)
        });

    var result = targetOperation.CheckFaultContractMapping<IDocumentService, ServiceUnavailableException>(
        m => DocumentClient.InsertDocument(string.Empty, string.Empty, m.Object));

    Assert.IsNull(result, "The type {0} used to detail a FaultContract isn't being properly handled on the Service client", result);
}


Unit Testing IBM WebSphere MQ with Fakes

.Net projects that target the IBM WebSphere MQ objects are often hard to unit test. Even with some effort put into isolating all the MQ objects through Dependency Injection, and some tweaks around stubbing the common MQ classes, it’s easy to get into trouble with NullReferenceExceptions being thrown.

When targeting IBM MQ there are two separate options: the native libraries (amqmdnet.dll) or the IBM.XMS library. I have found the JMS .Net implementation very problematic, and it hides important queue options from the consuming classes, so I mostly use the native libraries; those are the focus of this post.

I won’t cover the basic principles of getting started with Fakes; many people have covered that already and MSDN has a very nice series of articles on it. I will just highlight some common utility code and tricks I have learned along the way.

IBM MQ Design considerations

When I’m targeting IBM MQ, there’s a common set of design choices I make, and some of the code samples reflect these choices:

  • I always browse messages first. Only after I have actually done what I need to do with a message do I do a destructive read on it.
  • I use Rx right after the queues; this is why I always browse first. Once a message is browsed I push it through an IObservable, so that later I can filter, sort, throttle, etc.
  • I use System.Threading timers to poll the MQ queues. They make very nice use of threads and they also allow me to change the polling frequency at run-time.
  • Although I use several mocking frameworks, I tend to use only one per test class. In the test examples everything goes through Fakes, but I can easily argue that Fakes isn’t as mature as Moq or Rhino Mocks when it comes to fluent writing and API productivity.
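As a runnable illustration of the timer-based polling bullet, here’s a minimal sketch using nothing but System.Threading; the callback is a stand-in for a real MQ browse, and the names are mine, not from any MQ library:

```csharp
using System;
using System.Threading;

public static class PollerSketch
{
    public static void Main()
    {
        var ticks = 0;
        var done = new ManualResetEventSlim();

        // Poll every 200ms to start with; a real callback would browse the
        // queue and push browsed messages through the IObservable.
        var timer = new Timer(_ =>
        {
            if (Interlocked.Increment(ref ticks) >= 2)
            {
                done.Set();
            }
        }, null, 0, 200);

        done.Wait(TimeSpan.FromSeconds(5));

        // Change the polling frequency at run-time, e.g. to back off when idle.
        timer.Change(1000, 1000);
        timer.Dispose();

        Console.WriteLine(done.IsSet ? "polled" : "timed out");
    }
}
```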

IBM MQ id fields and properties

One common task when testing MQ objects is playing with their Ids, either the object Id or others like the correlation Id. These are always byte arrays, and most of the time they are fixed size, so I wrote a small utility method for creating fixed-size arrays:


private static byte[] CreateByteArray(int size, byte b)
{
    var array = new byte[size];
    for (var i = 0; i < array.Length; i++)
    {
        array[i] = b;
    }

    return array;
}


To assert on the different Id fields, just use CollectionAssert with ToList on both byte arrays.

CollectionAssert.AreEqual(messageId.ToList(), message.MessageId.ToList());
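If you prefer plain asserts over CollectionAssert, Enumerable.SequenceEqual does the same element-by-element comparison, and the fixed-size array itself can be built with Enumerable.Repeat (a sketch of mine, not from the original post):

```csharp
using System;
using System.Linq;

public static class IdComparisonSketch
{
    public static void Main()
    {
        // Equivalent to the CreateByteArray helper above: 24 bytes of 0x2A.
        byte[] messageId = Enumerable.Repeat((byte)0x2A, 24).ToArray();
        byte[] received = Enumerable.Repeat((byte)0x2A, 24).ToArray();

        // SequenceEqual compares element by element, like CollectionAssert.AreEqual.
        Console.WriteLine(messageId.SequenceEqual(received)); // True
    }
}
```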

IBM MQ Shims tips and tricks

One of the problems you will see when you start using IBM.WMQ Shims is null references when instantiating some objects. This is easily fixed by overriding the constructors on the Shims:

ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };
ShimMQMessage.ConstructorMQMessage = (_, __) => { };

Some of the objects in IBM.WMQ have a long inheritance chain. Shims don’t follow it, so, for example, a Get on a queue with MQMessage and MQGetMessageOptions is defined on MQDestination, which MQQueue inherits from; to stub this method you need to write something like this:

ShimMQDestination.AllInstances.GetMQMessageMQGetMessageOptions = (_, message, options) =>
{
    Assert.AreEqual(MQC.MQGMO_NONE, options.Options);
    Assert.AreEqual(MQC.MQMO_MATCH_MSG_ID, options.MatchOptions);
    CollectionAssert.AreEqual(messageId.ToList(), message.MessageId.ToList());
};

Open Queue example with the corresponding tests

Here’s a full example of a method that opens a queue for reading and/or writing:


/// <summary>
/// Opens this queue. Supports listening and writing options; if set up
/// for listening it will browse rather than destructively read from the
/// queue as messages arrive.
/// </summary>
/// <param name="reading">True if we want to read from this queue, false otherwise.</param>
/// <param name="writing">True if we want to write to this queue, false otherwise.</param>
/// <returns>The Observable where all the messages read from this queue will appear.</returns>
public IObservable<IServiceMessage> OpenConnection(bool reading = true, bool writing = false)
{
    // create the properties HashTable
    var mqProperties = new Hashtable
    {
        { MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED },
        { MQC.HOST_NAME_PROPERTY, ConfigurationProvider.MQMessageListenerHostName },
        { MQC.PORT_PROPERTY, ConfigurationProvider.MQMessageListenerPortNumeric },
        { MQC.CHANNEL_PROPERTY, ConfigurationProvider.MQMessageListenerChannelName }
    };

    // create the queue manager
    _queueManager = new MQQueueManager(ConfigurationProvider.MQMessageListenerQueueManagerName, mqProperties);

    // deal with the queue open options
    var openOptions = MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING;
    if (reading)
    {
        openOptions += MQC.MQOO_BROWSE;
    }

    if (writing)
    {
        openOptions += MQC.MQOO_OUTPUT;
    }

    // create and start the queue, check for potential bad queue names
    try
    {
        Queue = _queueManager.AccessQueue(QueueName, openOptions);
    }
    catch (MQException ex)
    {
        if (ex.ReasonCode == 2085)
        {
            throw new ConfigurationErrorsException(string.Format(CultureInfo.InvariantCulture, "Wrong Queue name: {0}", QueueName));
        }

        throw;
    }

    if (reading)
    {
        StartListening();
    }

    return Stream.AsObservable();
}


And the unit tests that exercise it:


/// <summary>
/// Tests that OpenConnection creates the proper MQQueueManager and accesses the
/// queue with the right set of options.
/// </summary>
[TestMethod]
public void Test_OpenConnection_GoesThrough()
{
    const string host = "some random host";
    const string channel = "some random channel";
    const string manager = "some random manager";
    const string queueName = "some random queue";
    const int port = 1234;
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        var configShim = new ShimConfigurationProvider
        {
            MQMessageListenerChannelNameGet = () => channel,
            MQMessageListenerHostNameGet = () => host,
            MQMessageListenerPortNumericGet = () => port,
            MQMessageListenerQueueManagerNameGet = () => manager
        };

        ShimMQQueueManager.ConstructorStringHashtable = (_, s, options) =>
        {
            Assert.AreEqual(manager, s);
            Assert.AreEqual(host, options[MQC.HOST_NAME_PROPERTY]);
            Assert.AreEqual(channel, options[MQC.CHANNEL_PROPERTY]);
            Assert.AreEqual(port, options[MQC.PORT_PROPERTY]);
            Assert.AreEqual(MQC.TRANSPORT_MQSERIES_MANAGED, options[MQC.TRANSPORT_PROPERTY]);
        };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => configShim.Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(false);
        Assert.IsFalse(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to read.
/// It also ensures that StartListening is called if the queue is opened for reading.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForReading()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_BROWSE, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection();
        Assert.IsTrue(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to write.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForWriting()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_OUTPUT, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(false, true);
        Assert.IsFalse(startListeningCall);
    }
}

/// <summary>
/// Tests the OpenConnection options in the queue access when the queue is set up to
/// read and write at the same time.
/// It also ensures that StartListening is called if the queue is opened for reading.
/// </summary>
[TestMethod]
public void Test_OpenConnection_ForReadingAndWriting()
{
    const string queueName = "some random queue";
    var startListeningCall = false;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, s, options) =>
        {
            Assert.AreEqual(MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_BROWSE + MQC.MQOO_OUTPUT, options);
            Assert.AreEqual(queueName, s);
            return null;
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => queueName,
            StreamGet = () => new Subject<IServiceMessage>(),
            StartListening = () => { startListeningCall = true; }
        };

        mqShim.Instance.OpenConnection(true, true);
        Assert.IsTrue(startListeningCall);
    }
}

/// <summary>
/// Ensure that opening a connection with a Bad Queue Name will throw a proper
/// <see cref="ConfigurationErrorsException"/>.
/// </summary>
[TestMethod]
[ExpectedException(typeof(ConfigurationErrorsException))]
public void Ensure_OpenConnection_ThrowsBadQueueName()
{
    const int nameReasonCode = 2085;

    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, __, ___) =>
        {
            throw new MQException(1, nameReasonCode);
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => "something",
            StreamGet = () => new Subject<IServiceMessage>(),
        };

        mqShim.Instance.OpenConnection();
    }
}

/// <summary>
/// Ensure that any exception besides Bad Queue Name will be re-thrown
/// and bubble out of the OpenConnection method.
/// </summary>
[TestMethod]
[ExpectedException(typeof(MQException))]
public void Ensure_OpenConnection_ThrowsOthersExceptBadName()
{
    using (ShimsContext.Create())
    {
        ShimMQQueueManager.ConstructorStringHashtable = (_, __, ___) => { };

        ShimMQQueueManager.AllInstances.AccessQueueStringInt32 = (_, __, ___) =>
        {
            throw new MQException(1, 1);
        };

        var mqShim = new ShimMQMessageQueue
        {
            InstanceBehavior = ShimBehaviors.Fallthrough,
            ConfigurationProviderGet = () => new ShimConfigurationProvider().Instance,
            QueueNameGet = () => "something",
            StreamGet = () => new Subject<IServiceMessage>(),
        };

        mqShim.Instance.OpenConnection();
    }
}
