Monthly Archives: December 2015

DevOps rant: The maintenance test

Series Overview

I moved to a DevOps team about a year ago and, although we’re not really doing DevOps, it’s a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this won’t let folks make the same mistakes when presented with the same funny stuff.

Overview

By now you’re probably wondering: what the fuck is a maintenance test?

Well, it’s definitely not a test. It’s an automated runbook that a developer, probably because he lacks operations/infrastructure knowledge, decided to write as a test and wire into a test run in an automated release pipeline.

This specific one is worth mentioning because the reasons it was written are the same old mistakes people were making 10 years ago, and sadly keep repeating today in hopes of a different outcome.

Context

There is a set of performance tests that creates a lot of documents in SharePoint (in Office 365). After a while, the container of these documents holds more than 5,000 of them, so SharePoint, with the default list view threshold applied, will show you nothing but an error page saying you have more than 5,000 documents in that list.

This means the test needs to clean up. Tests that require cleaning up after themselves should always do it “after” and never before, because you never want to leave a given environment dirty until you get back to it; that’s a bad principle. However, this set of performance tests decided to “try” to clean up before the test run, leaving the environment unusable outside of performance test runs.

This is like only cleaning up your house before a party, so that it’s always clean for parties, but the rest of the time, while you’re living there, you get to enjoy all the dirt and mess of the previous party.

Moral of the Story

About 10 years ago, all stacks had examples of frameworks or tools designed with the goal of anyone can build apps in mind. In the generic sense, without taking specific niches into account, they all failed. In the .NET space the biggest crash was Web Forms, which was designed around the notion that anyone can drag a few boxes around in the editor, populate a few properties, and build any kind of app. The resulting programming model was awful, and developers usually stayed away from it as much as they could!

The only platforms that truly succeeded in this space were the ones built on top of very strong programming frameworks that always allowed developers to go in and customize/tweak things their way. A good example is Unity3D, where the level designer can do a lot in the visual editor by dragging boxes around, but Mono and C# are at the disposal of developers to build the boxes the other guy drags around.

So, you might think, with all these failures in the history of software, have we all learned that you always need developers around to actually build a piece of software? Obviously not: there are lots of folks out there who jump through hundreds of hoops trying to reach the utopia of software without developers.

So, sadly, we keep witnessing people using testers to “build” automated UI tests, testers to “build” automated performance tests, etc. This specific example is one of these, where a tester built a performance suite. Because he’s a tester, he has a hard time coming up with a way to properly clean up SharePoint after his test suite runs.

And because the developer doesn’t want anything to do with a bunch of code generated by the performance test recorder, he stays away from the tester-built performance suite, which is where, ideally, the clean-up code should be written.

My previous contract had a tester building an automated UI test suite for about 6 months, only to realize it wasn’t maintainable. So what they decided to do instead was get a full team of testers to build a new one …


DevOps rant: TFS merge discard strategy


Overview

Today, I’m a solid believer that most TFS projects should be on Git, not TFVC. Yes, Git has a learning curve compared to TFVC, which is massively supported by the Visual Studio UI, but once that curve is climbed, the rewards are greater.

This is especially true on projects that use PaaS components and are built by folks who love to over-engineer: instead of a few components you end up with tens of them, and instead of a few config files you should avoid merging, you end up with tens or even hundreds of these. In a Git repo you can combine clever use of Git attributes with git filter-branch; on a TFVC repo, your options are a lot more limited.
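As a minimal sketch of what the Git side of this looks like, the built-in “ours” merge driver, wired up through .gitattributes, keeps environment-specific config files from ever being overwritten by a merge. The file names and branch names below are made-up examples, not taken from any real project:

```shell
# Throwaway repo to demonstrate per-path "ours" merge drivers.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
# Register the built-in "keep our side" merge driver once per clone.
git config merge.ours.driver true
git checkout -qb dev
# Mark environment-specific files so a merge never replaces them.
printf '*.publish.proj merge=ours\n*.Release.config merge=ours\n' > .gitattributes
printf 'dev settings\n' > App.Release.config
git add -A
git commit -qm "dev baseline"
git checkout -qb main-line
printf 'main settings\n' > App.Release.config
git commit -qam "main-specific config"
git checkout -q dev
printf 'dev settings v2\n' > App.Release.config
git commit -qam "dev config change"
# Both sides changed the file, so the "ours" driver resolves the
# conflict in favour of the dev branch's own copy.
git merge -q --no-edit main-line
cat App.Release.config   # -> dev settings v2
```

One caveat worth knowing: the driver only fires when both sides changed the file; a change made on only one branch still merges through normally.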

Real Life Example

I’m currently working with two projects: one should definitely be using Git as its repo, as the level of over-engineering is high, and the other fits nicely in TFVC.

The over-engineered project never knew how to deal with merges. For a very long time, what they did was a “blind merge” and then manually undid the changes they thought shouldn’t go in. While this was done by a single person it actually worked; their problems started when other folks started to merge and didn’t really know what not to merge.

So their solution was simple: let’s create a project configuration per environment, per branch. Let’s not argue about the fact that this is a lot harder to maintain, because honestly, if it’s over-engineered, going down the path of arguing about maintainability indexes is purely a waste of everyone’s time. Instead, let’s focus on what this prevents my DevOps team from doing in the scope of this project.

Let’s imagine DevOps is now given the time and resources to build a magic button that, when you press it, gives you a new branch, a new set of environments, and a new release pipeline (after we have built the magic buttons that bring espressos and popcorn!). Currently we aren’t very far from this; the only real automation we are missing is the release pipeline, and that’s not that hard.

But when you add the fact that you now need new configurations and all sorts of crap related to that, like new config transforms, new service configuration files, etc., you immediately drop the idea of automating it.

I have been babbling about the notion of controlling the merge process through scripting a set of tf merge /discard commands for a while now, but every time I mention it I get the feeling I’m talking Portuguese to a bunch of Indian folks: although they always nod and say “yes”, they are actually thinking “I have no idea what this crazy guy is babbling about“.

The other project, the one more on the Lean side of things, had this same problem recently. Due to its simplicity, I decided to step in and, instead of babbling anything, just write the script for the project and kick off the merge workflow, rather than giving them the chance to wander into the realm of creating 10 more solution configurations.

Later I sent the script to the first set of guys so that they could understand what I had been babbling about all this time, but the feedback I indirectly got was that it was “technically advanced”.

The tf merge /discard PowerShell script


function ApplyMergeDiscard
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    param
    (
        [Parameter(Mandatory = $true)]
        [string] $LocalPath,

        [Parameter(Mandatory = $true)]
        [ValidateSet("MainIntoDev", "DevIntoMain")]
        [string] $Direction,

        [Parameter(Mandatory = $false)]
        [string] $BaseDevBranch = "$/YOUR PROJECT/BRANCH1/",

        [Parameter(Mandatory = $false)]
        [string] $BaseMainBranch = "$/YOUR PROJECT/BRANCH2/"
    )

    # Make tf.exe reachable.
    $env:Path = $env:Path + ";C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE"

    # The only part you need to maintain: sub-paths that must never be merged.
    $discards = @(
        # Some stuff you shouldn't merge
        "Stuff1.publish.proj",
        "Stuff2.publish.proj",
        # Some more stuff you shouldn't merge
        "Some.Project/AConfiguration.Debug.config",
        "Some.Project/AConfiguration.Release.config"
    )

    Set-Location $LocalPath

    $discards | ForEach-Object {
        if ($Direction -eq "MainIntoDev") {
            $sourcePath = $BaseMainBranch + $_
            $targetPath = $BaseDevBranch + $_
        }
        else {
            $sourcePath = $BaseDevBranch + $_
            $targetPath = $BaseMainBranch + $_
        }

        if ($WhatIfPreference -eq $false) {
            Write-Verbose "Discarding $sourcePath into $targetPath"
            & tf merge /discard $sourcePath $targetPath
        }
        else {
            Write-Host "WhatIf: Discarding $sourcePath into $targetPath"
        }
    }
}

This script supports both the -Verbose and -WhatIf common parameters, and it’s written so that the only thing you actually need to maintain is the array of sub-paths of the stuff you don’t want merged.

So, contrary to the feedback I got, this is definitely not rocket science to maintain, and it’s a good foundation for dealing with merges.

You run the script before you actually do the merge. If you didn’t get the discards right, you can simply undo pending changes, tweak the script, and try again. When you’re happy with the discards, you perform the merge and then check in.
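Assuming the function above has been dot-sourced into your session, a merge session might look like the sketch below. The workspace path is a placeholder, the branch arguments mirror the function’s default BRANCH1/BRANCH2 placeholders, and tf.exe must be on the PATH:

```powershell
# Dry run first: list the discards without touching the workspace.
ApplyMergeDiscard -LocalPath "C:\ws\Project" -Direction MainIntoDev -WhatIf

# Apply the discards for real, with logging.
ApplyMergeDiscard -LocalPath "C:\ws\Project" -Direction MainIntoDev -Verbose

# Now do the actual merge; the discarded paths stay out of it.
tf merge '$/YOUR PROJECT/BRANCH2/' '$/YOUR PROJECT/BRANCH1/' /recursive

# Not happy with the result? Undo pending changes, tweak the
# discard list, and start over. When you're happy: tf checkin.
tf undo . /recursive
```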