
Roslyn: Package consolidation analyzer

Package consolidation is a very important factor in healthy code bases. When you have a single solution, Visual Studio’s package manager is the only tool you ever need to make sure you’re consolidated across the solution. However, teams sometimes decide to have multiple solutions, and consolidating packages across multiple solutions can be a difficult task.

The more you move forward without consolidating, the harder it will be to consolidate and the more risk you take when building, packaging and deploying. If things were working before, your build order changes, and the newest version of a package now gets packaged instead of the oldest, no binding redirect can save you: you will deploy something that won’t run!

Roslyn Analyzers to the rescue

The idea is simple: tap into Roslyn’s compilation hook and, for each reference whose path contains the word “packages”, inspect the packages folder and check for multiple references to the same assembly.

The first thing to change is the actual Analyzer template, because Microsoft templates all analyzers as PCLs (Portable Class Libraries) so that they can run on any kind of project. But for this specific case, the projects that this contract deals with are all deployed to Windows Server topologies, either Azure IaaS or Azure PaaS. So I re-created the analyzer project from a PCL to a classic C# class library, so that I can tap into System.IO.

Another thing to note is that we’re not scanning the entire folder. The rationale is that the analyzer should only analyze your current scope: if you have multiple versions of a package that your solution doesn’t reference, you shouldn’t get an error in that solution (your current scope). So the analyzer only looks at references for the assemblies being compiled at the time and never does a full folder scan.

The analyzer

Here’s the analyzer, and below it the model object called “Package” that I ended up creating because the analyzer was getting too big. I’m of the opinion that you shouldn’t over-design analyzers unless you need to: start in the analyzer itself until you reach the point where it is dealing with too many responsibilities and the code is becoming harder to read, then design around it.


namespace DevOpsFlex.Analyzers
{
    using System;
    using System.Collections.Generic;
    using System.Collections.Immutable;
    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.Diagnostics;

    /// <summary>
    /// Represents the Analyzer that enforces package consolidation (unique reference per package) and a unique packages folder
    /// for each assembly being compiled.
    /// </summary>
    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class PackageConsolidationAnalyzer : DiagnosticAnalyzer
    {
        /// <summary>
        /// This exists as a private static for performance reasons. We might get into the space where the HashSet becomes too big,
        /// but we'll re-strategize if we get there.
        /// </summary>
        private static readonly HashSet<Package> Packages = new HashSet<Package>();

        private static readonly DiagnosticDescriptor SinglePackagesFolderRule =
            new DiagnosticDescriptor(
                id: "DOF0001",
                title: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderTitle), Resources.ResourceManager, typeof(Resources)),
                messageFormat: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderMessageFormat), Resources.ResourceManager, typeof(Resources)),
                category: "NuGet",
                defaultSeverity: DiagnosticSeverity.Error,
                isEnabledByDefault: true,
                description: new LocalizableResourceString(nameof(Resources.SinglePackagesFolderDescription), Resources.ResourceManager, typeof(Resources)));

        private static readonly DiagnosticDescriptor UniqueVersionRule =
            new DiagnosticDescriptor(
                id: "DOF0002",
                title: new LocalizableResourceString(nameof(Resources.UniqueVersionTitle), Resources.ResourceManager, typeof(Resources)),
                messageFormat: new LocalizableResourceString(nameof(Resources.UniqueVersionMessageFormat), Resources.ResourceManager, typeof(Resources)),
                category: "NuGet",
                defaultSeverity: DiagnosticSeverity.Error,
                isEnabledByDefault: true,
                description: new LocalizableResourceString(nameof(Resources.UniqueVersionDescription), Resources.ResourceManager, typeof(Resources)));

        /// <summary>
        /// Returns a set of descriptors for the diagnostics that this analyzer is capable of producing.
        /// </summary>
        public sealed override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(SinglePackagesFolderRule, UniqueVersionRule);

        /// <summary>
        /// Called once at session start to register actions in the analysis context.
        /// </summary>
        /// <param name="context">The <see cref="AnalysisContext"/> context used to register actions.</param>
        public sealed override void Initialize(AnalysisContext context)
        {
            context.RegisterCompilationAction(AnalyzePackageConsolidation);
        }

        /// <summary>
        /// Analyzes that package consolidation (unique reference per package) and a unique packages folder
        /// are in place for each assembly being compiled. Because this is being run per assembly you might
        /// see a repetition of the same error.
        /// </summary>
        /// <param name="context">The <see cref="CompilationAnalysisContext"/> context that parents all analysis elements.</param>
        private static void AnalyzePackageConsolidation(CompilationAnalysisContext context)
        {
            var packageReferences = context.Compilation
                                           .References
                                           .Where(r => r is PortableExecutableReference)
                                           .Cast<PortableExecutableReference>()
                                           .Where(r => r.FilePath.ToLower().Contains(Package.PackagesFolderName))
                                           .ToList();

            if (!packageReferences.Any()) return;

            var firstReferencePath = packageReferences.First().FilePath;
            var packagesFolder = firstReferencePath.Substring(0, firstReferencePath.IndexOf(Package.PackagesFolderName, StringComparison.Ordinal) + Package.PackagesFolderName.Length);

            // 1. Make sure there's only one packages folder
            if (packageReferences.Any(r => !r.FilePath.Contains(packagesFolder)))
            {
                context.ReportDiagnostic(
                    Diagnostic.Create(
                        SinglePackagesFolderRule,
                        context.Compilation.Assembly.Locations[0],
                        context.Compilation.AssemblyName // {0} MessageFormat
                    ));
            }

            // 2. Make sure that for each reference in the packages folder, we're only dealing with a unique version
            var newPackages = Directory.EnumerateDirectories(packagesFolder).Select(d => new Package(d)).Except(Packages);
            foreach (var package in newPackages)
            {
                Packages.Add(package);
            }

            var packagesNotConsolidated = packageReferences.Select(r => new Package(r.FilePath))
                                                           .Where(r => Packages.Count(p => p.Name == r.Name) > 1);

            foreach (var referencePackage in packagesNotConsolidated)
            {
                context.ReportDiagnostic(
                    Diagnostic.Create(
                        UniqueVersionRule,
                        context.Compilation.Assembly.Locations[0],
                        context.Compilation.AssemblyName, // {0} MessageFormat
                        referencePackage.Name // {1} MessageFormat
                    ));
            }
        }
    }
}

And the companion Package class


namespace DevOpsFlex.Analyzers
{
    using System.Diagnostics.Contracts;
    using System.IO;
    using System.Text.RegularExpressions;

    /// <summary>
    /// Wraps logic around Name, Version and generic regular expression lazy initializations to support
    /// the package consolidation analyzer.
    /// </summary>
    public class Package
    {
        private static readonly string PackageVersionRegex = PackagesFolderName.Replace("\\", "\\\\") + "[^0-9]*([0-9]+(?:\\.[0-9]+)+)(?:\\\\)?";
        private static readonly string PackageNameRegex = PackagesFolderName.Replace("\\", "\\\\") + "([a-zA-Z]+(?:\\.[a-zA-Z]+)*)[^\\\\]*(?:\\\\)?";
        private static readonly string PackageFolderRegex = "(.*" + PackagesFolderName.Replace("\\", "\\\\") + "[^\\\\]*)\\\\?";

        private string _version;
        private string _name;

        /// <summary>
        /// This is a convention constant that holds a string that all folders that we consider a "packages" folder contain.
        /// </summary>
        internal const string PackagesFolderName = "\\packages\\"; // convention

        /// <summary>
        /// Initializes a new instance of <see cref="Package"/>.
        /// Has built in Contract validations that will all throw before any other code is able to throw.
        /// </summary>
        /// <param name="path">The path to the package folder that this package is based on.</param>
        public Package(string path)
        {
            Contract.Requires(!string.IsNullOrEmpty(path));
            Contract.Requires(Directory.Exists(path));
            Contract.Requires(path.Contains(PackagesFolderName));
            Contract.Requires(Regex.IsMatch(path, PackageFolderRegex, RegexOptions.Singleline), $"When casting string (path) to Package you need to ensure your path is being matched by the Folder Regex [{PackageFolderRegex}]");

            Folder = Regex.Match(path, PackageFolderRegex, RegexOptions.Singleline).Groups[1].Value;
        }

        /// <summary>
        /// Gets the package folder without the last "\".
        /// </summary>
        public string Folder { get; }

        /// <summary>
        /// Gets the package name component of the package folder as a string.
        /// </summary>
        public string Name => _name ?? (_name = Regex.Match(Folder, PackageNameRegex, RegexOptions.Singleline).Groups[1].Value);

        /// <summary>
        /// Gets the package version component of the package folder as a string.
        /// </summary>
        public string Version => _version ?? (_version = Regex.Match(Folder, PackageVersionRegex, RegexOptions.Singleline).Groups[1].Value);

        /// <summary>
        /// Determines whether the specified objects are equal.
        /// </summary>
        /// <param name="y">The second <see cref="Package"/> object to compare.</param>
        /// <returns>true if the specified objects are equal; otherwise, false.</returns>
        public override bool Equals(object y)
        {
            Contract.Requires(y != null);
            Contract.Requires(y.GetType() == typeof(Package));

            return Folder == (y as Package)?.Folder;
        }

        /// <summary>
        /// Returns a hash code for the specified object.
        /// </summary>
        /// <returns>A hash code for the specified object.</returns>
        public override int GetHashCode()
        {
            return Folder.GetHashCode();
        }
    }
}
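
To make the folder, name and version conventions concrete, here is a minimal usage sketch. The path is hypothetical and purely for illustration; the point is what the three regexes extract from a conventional "\packages\" folder.

// Illustrative only: a hypothetical package folder under a conventional "\packages\" path.
// Note the Contract.Requires checks (e.g. Directory.Exists) are only enforced when the
// Code Contracts rewriter is enabled; without it this behaves as a plain string parse.
var package = new Package(@"C:\src\MySolution\packages\Newtonsoft.Json.9.0.1");

// package.Folder  -> C:\src\MySolution\packages\Newtonsoft.Json.9.0.1
// package.Name    -> Newtonsoft.Json
// package.Version -> 9.0.1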


DevOps rant: Pussy developers

Series Overview

I moved to a DevOps team about a year ago and although we’re not really doing DevOps, it’s a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won’t let folks make the same mistakes when presented with the same funny stuff.

Overview

When a development team, or a group of teams, collectively act like a bunch of pussies, they get into trouble easily. Often, the trouble they get themselves into will spill over onto other folks and teams, and when those aren’t also collectively a bunch of pussies, this will upset them.

The Pussy Developer

A pussy developer is typically a guy who will always agree and do whatever they throw at him. The most extreme example is when you ask the technical lead of your off-shore team in India: “Hey, can you guys build a button on the system that every time I press it, popcorn will come out at my desk?” and you get the obvious answer “Sure, that’s not a problem“.

I’m of the opinion that successful software today is very different from what it was 10 years ago. With the agile mind-set, the best thing you can do as a developer is to write lean code; the leaner it is, the better you will cope with change and the more Agile you’ll be. So things like abstractions don’t really fit today’s lean codebases: you want to deal with the now, ignore the “What if in the future we (…)”, and instead align the codebase so it can deal with those “What ifs” very quickly when they actually happen.

I keep seeing folks who haven’t written good code in the last 5 years telling developers how to write code, as if 5 years later you’d expect things to be done the same way in the fastest-changing engineering space. Worse, I keep seeing folks being pushed by others into technology hubs they aren’t comfortable with. And while all this is going on, not a single fuck off.

The genuine, knowledge-driven nature of developers

I love listening to genuine sports folks talking about their art. Living in Ireland while being from Portugal has inevitably led to me following two genuine sports figures: Guy Martin and Conor McGregor. I was listening to the radio a few weeks ago and they had Conor McGregor and a high-reputation sports commentator on a show. They start the show and McGregor starts disagreeing with the commentator on the subject of enduring pain and committing to a certain sport, and after disagreeing with her twice he says “You sit your fat ass in a fucking sofa all day long and you’re talking to me about pain? What do you know about pain?” and then he carries on until he completely destroys the show. This is to be expected when you mix the doer (and he’s not a pussy) with the thinker and they start to disagree.

If McGregor were a developer he’d probably say something along the lines of: “You haven’t written code in 5 years and you’re fucking telling me how to write code?“.

Developers are technical folks. “Technical” comes from technique, and it means these folks are more interested in technique than in application; in other words, they care about how a system works instead of what a system does, unlike business users for example. So they pride themselves on how the system works, and if it doesn’t work properly, despite doing everything it’s supposed to do, there is no pride, no joy and no fun. This is one of the reasons why turnover is so high among developers, and still today a lot of IT management doesn’t get this.

So if you pride yourself on building stuff that works well, why would you ever let someone who no longer knows how to build stuff push you around?

Avoid being a pussy

Just tell people to fuck off: in a blunt fashion if the environment allows you to, or in a politer, more diplomatic fashion if not: “I’ll take what you just said into consideration and evaluate it before doing the task.”

If you’re being pressured into writing code in a specific way that you don’t agree with, ask the person where you can see any of the commits he (or she) has done, so that you can evaluate whether they are a peer or not, because if the person isn’t your peer you shouldn’t be wasting your time being taught how to code by someone who doesn’t code.


If you’re being pressured into the Java stack as a .Net developer, just say “Hey, just get rid of me and find a Java guy, I have no hard feelings guys, it’s business as usual”.

Because if you don’t do these things, the outcome won’t be anything that will ever give you, or anyone else who actually built it, any pride, joy or even a small spark of fun.

 

DevOps rant: TFS merge discard strategy

Series Overview

I moved to a DevOps team about a year ago and although we’re not really doing DevOps, it’s a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won’t let folks make the same mistakes when presented with the same funny stuff.

Overview

Today, I’m a firm believer that most TFS projects should be on Git, not on TFS version control (TFVC). Yes, Git has a learning curve compared to TFVC, which is massively supported by the Visual Studio UI, but once that learning curve is climbed, the rewards are greater.

This is especially true on projects that are using PaaS components and are built by folks who love to over-engineer: instead of a few components you end up with tens of components, and instead of a few config files you should avoid merging, you end up with tens or even hundreds of them. In a Git repo you can combine clever use of Git attributes with git filter-branch; on a TFS repo, your options are a lot more limited.

Real Life Example

I’m currently working with two projects: one should definitely be using Git as its repo, as the level of over-engineering is high, and the other fits nicely in TFS.

The super-engineered project never knew how to deal with merges. Basically, for a very long time what they did was a “blind merge” and then manually undo the changes they thought shouldn’t go in. While this was done by a single person it actually worked; their problems started when other folks started to merge and didn’t really know what not to merge.

So their solution was simple: let’s create a project configuration per environment per branch. Let’s not argue about the fact that this is a lot harder to maintain, because honestly, if it’s over-engineered, going down the path of arguing about maintainability indexes is purely a waste of time for everyone. Let’s focus instead on what this prevents my DevOps team from doing in the scope of this project.

Let’s imagine DevOps is now given the time and resources to build a magic button that, when you press it, gives you a new branch, a new set of environments and a new release pipeline (after we have built the magic buttons that bring espressos and popcorn!). Currently we aren’t very far from this; the only real automation we are missing is the release pipeline, but that’s not that hard.

When you add the fact that you now need new configurations and all sorts of crap related to that, like new config transforms, new service configuration files, etc., you immediately drop the idea of automating.

I have been babbling about the notion of controlling the merge process through scripting a set of tf merge /discard commands for a while now, but every time I mention it I get the feeling I’m talking Portuguese to a bunch of Indian folks: although they always nod saying “Yes”, they are actually thinking “I have no idea what this crazy guy is babbling about“.

So the other project, the one more on the Lean side of things, had this same problem recently. Due to its simplicity I decided to step in and, instead of babbling anything, just write the script for the project and kick off the merge workflow, rather than giving them the chance to wander into the realms of creating 10 more solution configurations.

Later I sent the script to the first set of guys so that they could understand what I have been babbling about all this time, but the feedback I indirectly got was that it was “technically advanced”.

The tf merge /discard PowerShell script


function ApplyMergeDiscard
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param
    (
        [Parameter(Mandatory=$true)]
        [string] $LocalPath,

        [Parameter(Mandatory=$true)]
        [ValidateSet("MainIntoDev", "DevIntoMain")]
        [string] $Direction,

        [Parameter(Mandatory=$false)]
        [string] $BaseDevBranch = "$/YOUR PROJECT/BRANCH1/",

        [Parameter(Mandatory=$false)]
        [string] $BaseMainBranch = "$/YOUR PROJECT/BRANCH2/"
    )

    $env:Path = $env:Path + ";C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE"

    $discards = @(
        # Some stuff you shouldn't merge
        "Stuff1.publish.proj",
        "Stuff2.publish.proj",

        # Some more stuff you shouldn't merge
        "Some.Project/AConfiguration.Debug.config",
        "Some.Project/AConfiguration.Release.config"
    )

    Set-Location $LocalPath

    $discards | ForEach-Object {
        if($Direction -eq "MainIntoDev") {
            $sourcePath = $BaseMainBranch + $_
            $targetPath = $BaseDevBranch + $_
        }
        else {
            $sourcePath = $BaseDevBranch + $_
            $targetPath = $BaseMainBranch + $_
        }

        if($WhatIfPreference -eq $false) {
            Write-Verbose "Discarding $sourcePath into $targetPath"
            & tf merge /discard $sourcePath $targetPath
        }
        else {
            Write-Host "WhatIf: Discarding $sourcePath into $targetPath"
        }
    }
}

This script supports both the -Verbose and -WhatIf cmdlet bindings, and it’s written in a way that the only thing you actually need to maintain is the array of sub-path strings for the stuff you don’t want to merge.

So, unlike the feedback I got, this is definitely not rocket science to maintain, and it’s a good starting foundation for dealing with merges.

You run the script before you actually do the merge; if you didn’t get the discards right you can simply undo pending changes, tweak the script, and check again. When you’re happy with the discards you perform the merge and then check in.

Testing that all Fault Exceptions are being handled in a WCF client

One of the things that the .Net compiler won’t warn developers about is when another developer decides to add a new FaultException type and the client code isn’t updated to handle this new type of exception. The solution I’m demonstrating here is a generic way to check for this, but it implies that the client is going through a ChannelFactory and not a ClientBase implementation.
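
For context, here is a rough sketch of what such a service contract could look like. The real IDocumentService and CALFault types aren’t shown in this post, so treat the exact shape as an assumption; the point is that each expected fault is declared through a FaultContractAttribute on the operation.

// Assumed shape of the service contract used in the examples below (hypothetical).
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    [FaultContract(typeof(CALFault))] // a new FaultContract added here produces no compiler warning in existing clients
    string InsertDocument(string documentClass, string filePath);
}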

ChannelFactory implementations are usually better if there’s full ownership, in the institution, of both service and clients. Sharing the service contracts will allow Continuous Integration builds to fail if a breaking change made on the service broke one or more of the consuming clients. You may argue that ChannelFactory implementations have the issue that if you change the service with a non-breaking change, you need to re-test and re-deploy all your client code: this isn’t exactly true, as if it is a non-breaking change, all the clients will continue to work even with a re-deploy of the service.

Default ChannelFactory Wrapper

The generic implementation depends on our default WcfService wrapper for a ChannelFactory. This could be abstracted through an interface that exposes the Channel getter, making the generic method depend on the interface instead of the actual implementation.
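
As a sketch, such an abstraction (not part of the code below) could be as small as this, with WcfService<T> implementing it and the generic checker depending on Mock<IWcfService<TContract>> instead of the concrete wrapper:

// Hypothetical abstraction: only the Channel getter is needed by the generic checker.
public interface IWcfService<T> : IDisposable where T : class
{
    T Channel { get; }
}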

I will provide here a simple implementation of the ChannelFactory wrapper:


public class WcfService<T> : IDisposable where T : class
{
    private readonly object _lockObject = new object();
    private bool _disposed;
    private ChannelFactory<T> _factory;
    private T _channel;

    internal WcfService()
    {
        _disposed = false;
    }

    internal virtual T Channel
    {
        get
        {
            if (_disposed)
            {
                throw new ObjectDisposedException("Resource WcfService<" + typeof(T) + "> has been disposed");
            }

            lock (_lockObject)
            {
                if (_factory == null)
                {
                    _factory = new ChannelFactory<T>("*"); // First qualifying endpoint from the config file
                    _channel = _factory.CreateChannel();
                }
            }

            return _channel;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    internal void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }

        if (!disposing)
        {
            return;
        }

        lock (_lockObject)
        {
            if (_channel != null)
            {
                try
                {
                    ((IClientChannel)_channel).Close();
                }
                catch (Exception)
                {
                    ((IClientChannel)_channel).Abort();
                }
            }

            if (_factory != null)
            {
                try
                {
                    _factory.Close();
                }
                catch (Exception)
                {
                    _factory.Abort();
                }
            }

            _channel = null;
            _factory = null;
            _disposed = true;
        }
    }
}


Example of a client using the Wrapper

Here’s an example of code that we want to test, for a client that’s using the WcfService wrapper. The separation between the public method that creates the WcfService wrapped in a using statement and the internal static one is just for testing purposes, so we can inject a WcfService mock and assert against it. The client wraps a FaultException into something meaningful for the consuming application.


public class DocumentClient : IDocumentService
{
    public string InsertDocument(string documentClass, string filePath)
    {
        using (var service = new WcfService<IDocumentService>())
        {
            return InsertDocument(documentClass, filePath, service);
        }
    }

    internal static string InsertDocument(string documentClass, string filePath, WcfService<IDocumentService> service)
    {
        try
        {
            return service.Channel.InsertDocument(documentClass, filePath);
        }
        catch (FaultException<CALFault> ex)
        {
            throw new DocumentCALException(ex);
        }
        catch (Exception ex)
        {
            throw new ServiceUnavailableException(ex.Message, ex);
        }
    }
}


The generic Fault contract checker

This implementation uses Moq as the mocking framework and the code depends on it. It provides signatures for up to 4 expected exceptions, done with a top-down approach: the signature with the most type parameters has the full implementation and the others just call the one that’s one level higher in the signature chain. To support this, a special empty DummyException is declared to fill the gaps between type parameters in the different signatures.

Breaking down the code: it creates a dynamic expression tree that we can wire into the Setup method of the client mock, which will intercept calls with any type of parameter (It.IsAny). Then, for each FaultContractAttribute decorating the service operation, it instantiates the fault detail and wires everything so that the service method is set up to throw it. Finally it invokes the client and checks whether the fault was caught and wrapped, or whether we got the original FaultException back.


public static class ContractCheckerExtension
{
    public static string CheckFaultContractMapping<TContract, TEx1>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
    {
        return method.CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, DummyException>(action);
    }

    public static string CheckFaultContractMapping<TContract, TEx1, TEx2, TEx3, TEx4>(this MethodInfo method, Action<Mock<WcfService<TContract>>> action)
        where TContract : class
        where TEx1 : Exception
        where TEx2 : Exception
        where TEx3 : Exception
        where TEx4 : Exception
    {
        // we're creating a lambda on the fly that will call the target method
        // with all parameters set to It.IsAny<[the type of the param]>.
        // The same ParameterExpression instance is used for the call target and the lambda parameter.
        var contractParameter = Expression.Parameter(typeof(TContract));
        var lambda = Expression.Lambda<Action<TContract>>(
            Expression.Call(
                contractParameter,
                method,
                CreateAnyParameters(method)),
            contractParameter);

        // for all the fault contract attributes that are decorating the method
        foreach (var faultAttr in method.GetCustomAttributes(typeof(FaultContractAttribute), false).Cast<FaultContractAttribute>())
        {
            // create the specific exception that gets thrown by the fault contract
            var faultDetail = Activator.CreateInstance(faultAttr.DetailType);
            var faultExceptionType = typeof(FaultException<>).MakeGenericType(new[] { faultAttr.DetailType });
            var exception = (FaultException)Activator.CreateInstance(faultExceptionType, faultDetail);

            // mock the WCF pipeline objects, channel and client
            var mockChannel = new Mock<WcfService<TContract>>();
            var mockClient = new Mock<TContract>();

            // set the mocks
            mockChannel.Setup(x => x.Channel)
                       .Returns(mockClient.Object);

            mockClient.Setup(lambda)
                      .Throws(exception);

            try
            {
                // invoke the client, wrapped in an Action delegate
                action(mockChannel);
            }
            catch (Exception ex)
            {
                // if we get a targeted exception it's because the fault isn't being handled
                // and we return with the type of the fault contract detail type that was caught
                if (ex is TEx1 || ex is TEx2 || ex is TEx3 || ex is TEx4)
                    return faultAttr.DetailType.FullName;

                // else soak all other exceptions because we are expecting them
            }
        }

        return null;
    }

    private static IEnumerable<Expression> CreateAnyParameters(MethodInfo method)
    {
        return method.GetParameters()
                     .Select(p => typeof(It).GetMethod("IsAny").MakeGenericMethod(p.ParameterType))
                     .Select(a => Expression.Call(null, a));
    }
}

[Serializable]
public class DummyException : Exception
{
}


Here’s a sample of a unit test using the ContractChecker for the example client shown previously in the post:


[TestMethod]
public void Ensure_InsertDocument_FaultContracts_AreAllMapped()
{
    var targetOperation = typeof(IDocumentService).GetMethod(
        "InsertDocument",
        new[]
        {
            typeof(string),
            typeof(string)
        });

    var result = targetOperation.CheckFaultContractMapping<IDocumentService, ServiceUnavailableException>(
        m => DocumentClient.InsertDocument(string.Empty, string.Empty, m.Object));

    Assert.IsNull(result, "The type {0} used to detail a FaultContract isn't being properly handled on the Service client", result);
}
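
If a client funnels unmapped faults into more than one fallback exception type, the overloads with extra type parameters (padded internally with DummyException) come into play. A hypothetical variation, where SecondFallbackException is a made-up second catch-all exception:

// Hypothetical: a client whose generic catch blocks wrap unhandled faults into either
// ServiceUnavailableException or SecondFallbackException.
var result = targetOperation.CheckFaultContractMapping<IDocumentService, ServiceUnavailableException, SecondFallbackException>(
    m => DocumentClient.InsertDocument(string.Empty, string.Empty, m.Object));

Assert.IsNull(result, "The type {0} used to detail a FaultContract isn't being properly handled on the Service client", result);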


Getting the default WPF control templates in Visual Studio 2012

When designing WPF control templates, one of the most common starting points is the ControlTemplate for that specific control that ships with the .Net Framework. By having a dump of that template, we can easily start making changes as we go along.

In Visual Studio 2012, the way we get these templates dumped into a code file changed slightly; here’s a tutorial on how to do it.

1. Make a Copy of the control template

In the designer, right-click what you want to template; in my example I’m using a Button control. Select Edit Template, then select Edit a Copy.


Next, another window appears that lets you select the name of the template and where exactly you want to dump it. In my example I want it defined in the document that’s hosting my button (a Window).


2. Go to where you dumped it and start editing

This will assign the “ButtonStyle” style to your Button and will add it, template included, to the Window.Resources tag.

<Style x:Key="ButtonStyle" TargetType="{x:Type Button}">
    <Setter Property="FocusVisualStyle">
        <Setter.Value>
            <Style>
                <Setter Property="Control.Template">
                    <Setter.Value>
                        <ControlTemplate>
                            <Rectangle Margin="2" SnapsToDevicePixels="True" Stroke="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}" StrokeThickness="1" StrokeDashArray="1 2"/>
                        </ControlTemplate>
                    </Setter.Value>
                </Setter>
            </Style>
        </Setter.Value>
    </Setter>
    <Setter Property="Background" Value="#FFDDDDDD"/>
    <Setter Property="BorderBrush" Value="#FF707070"/>
    <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/>
    <Setter Property="BorderThickness" Value="1"/>
    <Setter Property="HorizontalContentAlignment" Value="Center"/>
    <Setter Property="VerticalContentAlignment" Value="Center"/>
    <Setter Property="Padding" Value="1"/>
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type Button}">
                <Border x:Name="border" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" SnapsToDevicePixels="True">
                    <ContentPresenter x:Name="contentPresenter" ContentTemplate="{TemplateBinding ContentTemplate}" Content="{TemplateBinding Content}" ContentStringFormat="{TemplateBinding ContentStringFormat}" Focusable="False" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" RecognizesAccessKey="True" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
                </Border>
                <ControlTemplate.Triggers>
                    <Trigger Property="IsDefaulted" Value="True">
                        <Setter Property="BorderBrush" TargetName="border" Value="{DynamicResource {x:Static SystemColors.HighlightBrushKey}}"/>
                    </Trigger>
                    <Trigger Property="IsMouseOver" Value="True">
                        <Setter Property="Background" TargetName="border" Value="#FFBEE6FD"/>
                        <Setter Property="BorderBrush" TargetName="border" Value="#FF3C7FB1"/>
                    </Trigger>
                    <Trigger Property="IsPressed" Value="True">
                        <Setter Property="Background" TargetName="border" Value="#FFC4E5F6"/>
                        <Setter Property="BorderBrush" TargetName="border" Value="#FF2C628B"/>
                    </Trigger>
                    <Trigger Property="IsEnabled" Value="False">
                        <Setter Property="Background" TargetName="border" Value="#FFF4F4F4"/>
                        <Setter Property="BorderBrush" TargetName="border" Value="#FFADB2B5"/>
                        <Setter Property="TextElement.Foreground" TargetName="contentPresenter" Value="#FF838383"/>
                    </Trigger>
                </ControlTemplate.Triggers>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>