
DevOps rant: The maintenance test

Series Overview

I moved to a DevOps team about a year ago and, although we’re not really doing DevOps, it’s a good team and we try really hard sometimes! While trying hard I have come across all sorts of funny stuff, and recently I decided to blog about it; maybe someone reading this will stop folks from making the same mistakes when presented with the same funny stuff.

Overview

By now you’re probably wondering: what the fuck is a maintenance test?

Well, it’s definitely not a test. It’s an automated runbook that a developer, probably for lack of operations/infrastructure knowledge, decided to write as a test and wire into a test run in an automated release pipeline.

This specific one is worth mentioning because the reasons that caused it to be written are the same old mistakes people were making 10 years ago and, sadly, keep repeating today in hopes of a different outcome.

Context

There is a set of performance tests that creates a lot of documents in SharePoint (in Office 365). After a while, the container of these documents holds more than 5,000 of them, so SharePoint, with the default list view threshold applied, will start showing you nothing but an error page saying you have more than 5,000 documents in that list.

This means the test needs to clean up. Tests that require cleaning up after them should always do it “after” and never before, because you never want to leave a given environment dirty until you get back to it; that’s just a bad principle. However, this set of performance tests decided to “try” to clean up before the test run, leaving the environment unusable between performance test runs.

This is like only cleaning up your house before a party, so that it’s always clean for parties, but the rest of the time, while you’re living there, you get to enjoy all the dirt and the mess of the previous party.
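For the record, cleaning up after the run isn’t rocket science. Here’s a rough sketch of what it could look like with the SharePoint CSOM client library (the site URL, list title, class and method names are placeholders, and authentication against Office 365 is omitted):

using Microsoft.SharePoint.Client;

public static class PerformanceTestCleanup
{
    // Deletes every document the performance tests generated, in pages
    // small enough to stay under the 5,000 item list view threshold.
    public static void DeleteGeneratedDocuments(string siteUrl, string listTitle)
    {
        using (var ctx = new ClientContext(siteUrl)) // credentials omitted
        {
            var list = ctx.Web.Lists.GetByTitle(listTitle);
            var query = new CamlQuery { ViewXml = "<View><RowLimit>500</RowLimit></View>" };

            ListItemCollection items;
            do
            {
                // Each pass fetches the next batch of remaining items.
                items = list.GetItems(query);
                ctx.Load(items);
                ctx.ExecuteQuery();

                // Delete in reverse so the collection indexes stay valid.
                for (var i = items.Count - 1; i >= 0; i--)
                {
                    items[i].DeleteObject();
                }
                ctx.ExecuteQuery();
            } while (items.Count > 0);
        }
    }
}

Wire something like this into the run’s teardown and the environment stays usable between runs.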

Moral of the Story

About 10 years ago, all stacks had examples of frameworks or tools designed with the goal of “anyone can build apps” in mind. In the generic sense, without taking specific niches into account, they all failed. In the .Net space the biggest crash was WebForms, which was designed around the notion that anyone could drag a few boxes around in the editor, populate a few properties and build any kind of app. The resulting programming model was awful, and developers usually stayed away from it as much as they could!

The only platforms that truly succeeded in this space were the ones built on top of very strong programming frameworks that always allowed developers to go in and customize/tweak things their way. A good example is Unity3D, where the level designer can do a lot in the visual editor by dragging boxes around, but Mono and C# are at the developers’ disposal to build the boxes the other guy drags around.

So, you might think, with all these failures in the history of software, have we all learned that you always need developers around to actually build a piece of code? Obviously not: there are lots of folks out there who jump through hundreds of hoops trying to reach the utopia of software without developers.

So, sadly, we keep witnessing people using testers to “build” automated UI tests, testers to “build” automated performance tests, etc. This specific example is one of those: a tester built a performance suite and, being a tester, he has a hard time coming up with a way to properly clean up SharePoint after his test suite runs.

And because the developer wants nothing to do with a bunch of code generated by the performance test recorder, he stays away from the tester-built performance suite, which is where, ideally, the clean-up code should live.

My previous contract had a tester building an automated UI test suite for about 6 months, only to realize it wasn’t maintainable. So what they decided to do instead was get a full team of testers to build a new one …


Recording a Web Performance test from a CodedUI test

On a well-maintained project, it is very common to have a good suite of automated tests. The two most common frameworks for test automation in the .Net stack are CodedUI and Watin. This article covers utility code that makes it easier to record a Web Performance test from a CodedUI test, automating the initial recording of the performance test. While it is possible to do the same with Watin, it gives you less control over the recording process, so I won’t cover Watin in this post.

There are two common tasks when going from a CodedUI test to a Web Performance test:

  • Find the browser with the recorder.
  • Control the recording process. Often part of the CodedUI test is just getting to the place where we want to perform the action, and that part shouldn’t be included in the recording.

Finding a browser that is ready for recording

Finding a browser that is able to record is just a matter of going through the open browsers and looking for the recording toolbar and its recording buttons. If we find them, we have a recording session and can use it; otherwise we just open a new browser and run the test normally.

Some things to note here:

  • Make sure you wrap all the code that looks for the recording controls in compiler directives. When the CodedUI test looks for these controls and can’t find them, it takes a lot longer to run; doing this as part of a build process would increase the build time considerably.
  • While we are looking for things, keep track of the main buttons, Record and Pause, because we may want to click them later on as part of scoping the recording process.
  • The method that launches the browser takes a Boolean parameter that allows the recording to be paused at the start of the CodedUI test, instead of the default behavior of recording everything.

The code that handles this:


using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;

public static class CodedUIExtensions
{
#if !DO_NOT_FIND_WEBRECORD
    private static bool _recording;
    private static WinButton _recordButton;
    private static WinButton _pauseButton;
#endif

    public static BrowserWindow Launch(bool pauseRecording = false)
    {
        return Launch("main.aspx", pauseRecording);
    }

    public static BrowserWindow Launch(string path, bool pauseRecording = false)
    {
#if !DO_NOT_FIND_WEBRECORD
        // Try to find an open browser that is recording, to do a web
        // performance recording session.
        try
        {
            var recordingBrowser = new BrowserWindow();
            recordingBrowser.SearchProperties[UITestControl.PropertyNames.Name] = "Blank Page";
            recordingBrowser.SearchProperties[UITestControl.PropertyNames.ClassName] = "IEFrame";
            recordingBrowser.Find();

            // The recorder toolbar hosts the Record and Pause buttons.
            var recordWindow = new WinWindow(recordingBrowser);
            recordWindow.SearchProperties[WinControl.PropertyNames.ControlName] = "toolStrip1";
            recordWindow.Find();

            var toolbar = new WinToolBar(recordWindow);
            toolbar.SearchProperties[UITestControl.PropertyNames.Name] = "toolStrip1";
            toolbar.Find();

            // Keep track of the Record and Pause buttons so the recording
            // can be scoped later on.
            _recordButton = new WinButton(toolbar);
            _recordButton.SearchProperties[UITestControl.PropertyNames.Name] = "Record";
            _recordButton.Find();

            _pauseButton = new WinButton(toolbar);
            _pauseButton.SearchProperties[UITestControl.PropertyNames.Name] = "Pause";
            _pauseButton.Find();

            if (pauseRecording)
            {
                Mouse.Click(_pauseButton);
                recordingBrowser.WaitForControlReady();
            }

            recordingBrowser.NavigateToUrl(new Uri(path));
            _recording = true;
            return recordingBrowser;
        }
        catch
        {
            // No recording browser was found; fall through and launch a normal one.
        }
#endif
        // A browser with a session ready to record couldn't be found, so open a new one.
        var browserWindow = BrowserWindow.Launch(path);
        browserWindow.WaitForControlReady();
        return browserWindow;
    }
}

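With this in place, a CodedUI test doesn’t need to know whether a recorder is attached. A minimal sketch of how a test could start (the page and test method names here are made up for illustration):

[TestMethod]
public void SearchCustomers()
{
    // Attaches to an open recording browser when one exists, otherwise
    // falls back to a plain BrowserWindow.Launch.
    var browser = CodedUIExtensions.Launch("customers.aspx", pauseRecording: true);

    // ... drive the UI to the point where the scenario starts ...
}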

Controlling the recording process

Besides finding the browser, there are 3 common things we want as part of controlling the recording process:

  • Be able to pause the recording.
  • Be able to resume the recording.
  • Some applications will spawn multiple windows, so at the end of the test an ALT+F4 is sent to the target app. However, when recording a performance test, we want the browser to stay open so we can make final adjustments or just stop the recording and generate the test.

To accomplish this, just add 3 more methods to the utility class (also wrapped in compiler directives to keep test runs fast during builds):


public static void PauseRecording()
{
#if !DO_NOT_FIND_WEBRECORD
    // Only meaningful when a recorder was found at launch time.
    if (!_recording) return;
    Mouse.Click(_pauseButton);
    _pauseButton.WaitForControlReady();
#endif
}

public static void ResumeRecording()
{
#if !DO_NOT_FIND_WEBRECORD
    if (!_recording) return;
    // Clicking Record again resumes a paused recording session.
    Mouse.Click(_recordButton);
    _recordButton.WaitForControlReady();
#endif
}

public static void CloseWindows()
{
#if !DO_NOT_FIND_WEBRECORD
    // When recording, leave the browser open so the session can be
    // reviewed and the web performance test generated.
    if (!_recording)
    {
        Keyboard.SendKeys("%{F4}");
    }
#else
    Keyboard.SendKeys("%{F4}");
#endif
}

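Putting the three methods together, here’s a rough sketch of a test that records only the interesting part of a scenario (NavigateToCheckout and CompleteCheckout are hypothetical helpers standing in for your own CodedUI steps):

[TestMethod]
public void RecordCheckoutScenario()
{
    // Start with the recorder paused so navigating to the page under
    // test doesn't end up in the web performance test.
    var browser = CodedUIExtensions.Launch("shop.aspx", pauseRecording: true);
    NavigateToCheckout(browser);            // setup steps, not recorded

    CodedUIExtensions.ResumeRecording();    // record from here...
    CompleteCheckout(browser);              // ...the actual scenario
    CodedUIExtensions.PauseRecording();

    // Sends ALT+F4 only when no recorder is attached, leaving the
    // recording browser open for final adjustments.
    CodedUIExtensions.CloseWindows();
}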