New Pluralsight Course: Testing C# Code in Production with Scientist.NET

My latest Pluralsight course is now available for viewing. It demonstrates how to use the Scientist.NET library to execute candidate code in production alongside the existing production code. This verifies that the new candidate code can work in production (for example, with production data that may be of variable quality) and provides a check beyond the automated tests that have already been executed in the development/QA environment.

From the course description: “Errors in production code can be costly to fix, more stressful for the development team, and frustrating for the end-user. In this course, Testing C# Code in Production with Scientist.NET, you will get to see how Scientist.NET allows sections of the application to be changed more safely by running both the existing code alongside the new code in production. You'll begin with an introduction to Scientist.NET before installing it and writing your first experiment. Next, you'll learn how to customize the configuration of experiments. Finally, you'll learn how to publish experiment data to SQL Server and analyze experiment results…”

You can check out the new course here.

You can start watching with a Pluralsight free trial.


Refactoring Production Code With Experiments and Scientist.NET

When refactoring a part of an application, we can use the existing tests to give a level of confidence that the refactored code still produces the same results, i.e. the existing tests still pass with the new implementation.

A system that has been in production use for some time is likely to have amassed a lot of data that is flowing through the “legacy” code. This means that although the existing tests may still pass, when used in production the new refactored code may produce errors or unexpected results.

It would be helpful as an experiment to run the existing legacy code alongside the new code to see if the results differ, but still continue to use the result of the existing legacy code. Scientist.NET allows exactly this kind of experimentation to take place.

Scientist.NET is a port of the Scientist library for Ruby. It is currently in pre-release and has a NuGet package we can use.

An Example Experiment

Suppose there is an interface, as shown in the following code, that describes the ability to sum a list of integer values and return the result.

interface ISequenceSummer
{
    int Sum(int numberOfTerms);
}

This interface is currently implemented in the legacy code as follows:

class SequenceSummer : ISequenceSummer
{
    public int Sum(int numberOfTerms)
    {
        var terms = new int[numberOfTerms];

        // generate sequence of terms
        var currentTerm = 0;
        for (int i = 0; i < terms.Length; i++)
        {
            terms[i] = currentTerm;
            currentTerm++;
        }

        // Sum
        int sum = 0;
        for (int i = 0; i < terms.Length; i++)
        {
            sum += terms[i];
        }

        return sum;
    }
}

As part of the refactoring of the legacy code, this implementation is to be replaced with a version that utilizes LINQ as shown in the following code:

class SequenceSummerLinq : ISequenceSummer
{
    public int Sum(int numberOfTerms)
    {
        // generate sequence of terms
        var terms = Enumerable.Range(0, numberOfTerms).ToArray();
            
        // sum
        return terms.Sum();
    }
}

After installing the Scientist.NET NuGet package, an experiment can be created using the following code:

int result;            

result = Scientist.Science<int>("sequence sum", experiment =>
{
    experiment.Use(() => new SequenceSummer().Sum(5)); // current production method

    experiment.Try("Candidate using LINQ", () => new SequenceSummerLinq().Sum(5)); // new proposed production method

}); // return the control value (result from SequenceSummer)

This code will run the .Use(…) code that contains the existing legacy implementation. It will also run the .Try(…) code that contains the new implementation. Scientist.NET will store both results for reporting and then return the result from the .Use(…) code for use by the rest of the program. This allows any differences to be reported without actually changing the behavior of the production code. At some point in the future, if the results of the legacy code (the control) consistently match those of the new code (the candidate), the refactoring can be completed by removing the old implementation (and the experiment code) and simply calling the new implementation.
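When that point is reached, the experiment wrapper and the legacy SequenceSummer class are deleted and the call site reduces to a direct call to the new implementation, along the lines of:

// Experiment removed: the candidate is now the production implementation
int result = new SequenceSummerLinq().Sum(5);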

To get the results of the experiment, a reporter can be written and configured. The following code shows a custom reporter that simply reports to the Console:

public class ConsoleResultPublisher : IResultPublisher
{
    public Task Publish<T>(Result<T> result)
    {
        Console.ForegroundColor = result.Mismatched ? ConsoleColor.Red : ConsoleColor.Green;

        Console.WriteLine($"Experiment name '{result.ExperimentName}'");
        Console.WriteLine($"Result: {(result.Matched ? "Control value MATCHED candidate value" : "Control value DID NOT MATCH candidate value")}");
        Console.WriteLine($"Control value: {result.Control.Value}");
        Console.WriteLine($"Control execution duration: {result.Control.Duration}");
        foreach (var observation in result.Candidates)
        {
            Console.WriteLine($"Candidate name: {observation.Name}");
            Console.WriteLine($"Candidate value: {observation.Value}");
            Console.WriteLine($"Candidate execution duration: {observation.Duration}");
        }

        if (result.Mismatched)
        {
            // save mismatched experiments to the event log, a file, a database, etc. for alerting/comparison
        }

        Console.ForegroundColor = ConsoleColor.White;

        return Task.FromResult(0);
    }  
}

To plug this in, before the experiment code is executed:

Scientist.ResultPublisher = new ConsoleResultPublisher();

The output of the experiment (and the custom reporter) looks like the following screenshot:

screenshot of console application using Scientist.NET with custom reporter

To learn more about Scientist.NET check out my Pluralsight course: Testing C# Code in Production with Scientist.NET.

You can start watching with a Pluralsight free trial.


New Pluralsight Course: Automated Business Readable Web Tests with Selenium and SpecFlow

SpecFlow is a tool that can translate natural language scenarios (e.g. written in English or other spoken languages) into test code. This can allow business people, users, or other stakeholders to verify that the correct features are being built.

Selenium is a tool that allows test code (multiple programming languages are supported) to automate a web browser. This allows the creation of automated UI tests that operate the web application as if an end user were doing it; for example, clicking buttons and typing text into input boxes.
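As a rough illustration (not taken from the course), the following sketch uses the Selenium WebDriver NuGet package to drive Chrome; the URL and element IDs are hypothetical, and the ChromeDriver executable needs to be available:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class WebAutomationSketch
{
    static void Main()
    {
        // IWebDriver is IDisposable, so the browser is closed when the block exits
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/login"); // hypothetical page

            // Operate the page as an end user would: type text and click a button
            driver.FindElement(By.Id("username")).SendKeys("testuser");
            driver.FindElement(By.Id("login-button")).Click();
        }
    }
}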

My new Pluralsight course shows how to integrate these two tools.

The course is organized into four modules:

  1. Introduction to Business Readable Web Testing
  2. Getting Started with Selenium
  3. Adding Business Readability with SpecFlow
  4. Creating More Maintainable Web Automation

If you’re new to SpecFlow I suggest watching this course first before moving on to Automated Business Readable Web Tests with Selenium and SpecFlow.

You can start watching with a Pluralsight free trial.


Hook Execution Order in SpecFlow 2

SpecFlow hooks allow additional code to be executed before and after various stages of the test execution lifecycle, for example running additional setup code before each scenario executes.

If multiple hooks of the same type are specified, by default the execution order of the hook methods is unspecified. For example, the following code has three [BeforeStep] hook methods that could be executed in any order before every step of the scenario executes:

[BeforeStep]
public void BeforeHook1()
{
}

[BeforeStep]
public void BeforeHook2()
{
}

[BeforeStep]
public void BeforeHook3()
{
}

To ensure these hook methods are executed in a specified order, the hook attributes allow an optional order to be specified. When multiple hook methods of the same type are defined, those with lower Order values execute before those with higher values:

[BeforeStep(Order = 100)]
public void BeforeHook1()
{
}

[BeforeStep(Order = 200)]
public void BeforeHook2()
{
}

[BeforeStep(Order = 300)]
public void BeforeHook3()
{
}

The values of the Order property are arbitrary; you may use whatever values you wish, though it is sensible to allow some “wriggle room” for future additional hooks by working in increments of 10 or 100, for example.

The following code illustrates another example where the execution order of hooks is important; the database should be reset first before test users are added:

[Binding]
public class Hooks
{
    [BeforeScenario(Order = 100)]
    public void ResetDatabase()
    {
    }

    [BeforeScenario(Order = 200)]
    public void AddTestUsersToDatabase()
    {
    }        
}

To see hook ordering in action, check out my Pluralsight course: Business Readable Automated Tests with SpecFlow 2.0.

You can start watching with a Pluralsight free trial.


New Pluralsight Course: Business Readable Automated Tests with SpecFlow 2.0

My newest Pluralsight course was just published. Business Readable Automated Tests with SpecFlow 2.0 teaches how to create tests that the business can read, understand, and contribute to. These “English-like” tests (other spoken languages are supported) can be executed by writing test code that is associated with the “English-like” steps. Because the tests sit alongside the source code, they can become living (executable) documentation for the system, as opposed to an out-of-date Word document somewhere on the network for example. Check out the course here.

You can start watching with a Pluralsight free trial.


Improving Test Code Readability and Assert Failure Messages with Shouldly

Shouldly is an open source library that aims to improve the assert phase of tests; it does this in two ways. The first is offering a more fluent syntax that, for the most part, leverages extension methods and removes the need to keep remembering which parameter is the expected value and which is the actual, as with regular Assert.Xxxx(1, 2) methods. The second benefit manifests itself when tests fail: Shouldly outputs more readable, easily digestible test failure messages.

Failure Message Examples

The following are three failure messages from tests that don’t use Shouldly and instead use the assert methods bundled with the testing framework (NUnit, xUnit.net, etc):

  • “Expected: 9  But was:  5”
  • “Assert.NotNull() Failure”
  • “Not found: Monday In value:  List<String> ["Tuesday", "Wednesday", "Thursday"]”

In each of the preceding failure messages, there is not much helpful context.

Compare the above to the following equivalent Shouldly failure messages:

  • “schedule.TotalHours should be 9 but was 5”
  • “schedule.Title should not be null or empty”
  • “schedule.Days should contain "Monday" but does not”

Notice the additional context in these failure messages. In each case, Shouldly is telling us the name of the variable in the test code (“schedule”) and the name of the property/field being asserted (e.g. “TotalHours”).

Test Code Readability


For the preceding failure messages, the following test assert code is used (notice the use of the Shouldly extension methods):

  • schedule.TotalHours.ShouldBe(9);
  • schedule.Title.ShouldNotBeNullOrEmpty();
  • schedule.Days.ShouldContain("Monday");

In these examples there is no mistaking an actual value parameter for an expected value parameter and the test code reads more “fluently” as well.
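To put these asserts in context, here is a minimal sketch of a complete test; the Schedule class and the use of xUnit.net are assumptions made purely for illustration:

using System.Collections.Generic;
using Shouldly;
using Xunit;

// Hypothetical class under test, included only to make the sketch self-contained
public class Schedule
{
    public string Title { get; set; } = "Standard schedule";
    public List<string> Days { get; } = new List<string>();
    public int TotalHours { get; private set; }

    public void AddDay(string day, int hours)
    {
        Days.Add(day);
        TotalHours += hours;
    }
}

public class ScheduleTests
{
    [Fact]
    public void ShouldTrackDaysAndHours()
    {
        var schedule = new Schedule();
        schedule.AddDay("Monday", hours: 9);

        // Shouldly extension methods: the actual value comes first,
        // the expected value is the argument
        schedule.TotalHours.ShouldBe(9);
        schedule.Title.ShouldNotBeNullOrEmpty();
        schedule.Days.ShouldContain("Monday");
    }
}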

To find out more about Shouldly check out the project on GitHub, install via NuGet, or checkout my Better Unit Test Assertions with Shouldly Pluralsight course.

You can start watching with a Pluralsight free trial.


xUnit.net 2 Cheat Sheet

To complement the release of my Testing .NET Code with xUnit.net 2 Pluralsight course, I’ve updated my original xUnit.net cheat sheet.

xunit2 cheat sheet image

(To download, just right-click and choose Save As.)

To learn xUnit.net check out my xUnit.net Pluralsight training course.

You can start watching with a Pluralsight free trial.


Using Cyclomatic Complexity as an Indicator of Clean Code

Cyclomatic complexity is one way to measure how complicated code is: it measures how complicated the structure of the code is and, by extension, how likely the code is to attract bugs or additional maintenance/readability costs.

The calculated value for the cyclomatic complexity indicates how many different paths through the code there are. This means that lower numbers are better than higher numbers.

Clean code is likely to have lower cyclomatic complexity than dirty code. High cyclomatic complexity increases the risk of the presence of defects in the code due to increased difficulty in its testability, readability, and maintainability.
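As a rough illustration, each decision point (an if, loop, case, etc.) adds another possible path through a method. The following sketch has a cyclomatic complexity of around 3: the straight-through path, plus one for the loop and one for the if:

public static int SumOfPositives(int[] values)
{
    int sum = 0; // base path = 1

    for (int i = 0; i < values.Length; i++) // +1 (loop condition)
    {
        if (values[i] > 0) // +1 (branch)
        {
            sum += values[i];
        }
    }

    return sum; // cyclomatic complexity = 3
}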

Calculating Cyclomatic Complexity in Visual Studio

To calculate the cyclomatic complexity, go to the Analyze menu and choose Calculate Code Metrics for Solution (or for a specific project within the solution).

This will open the Code Metrics Results window as seen in the following screenshot.

screenshot of the Code Metrics Results window

Take the following code (in a project called ClassLibrary1):

namespace ClassLibrary1
{
    public class Class1
    {
    }
}

If we expand the results in the Code Metrics Results window, we can drill down into classes and right down to individual methods, as in the following screenshot.

screenshot of the Code Metrics Results window expanded to show class and method level detail



3 Ways to Pass State Between SpecFlow Step Definitions

Sometimes we need one step definition to know about parameter data that was passed to previous steps.

There are three main ways to do this, each with various considerations.

Private Field in Binding Class

This is the simplest approach. Simply add a private field to your [Binding] class containing your step definitions.

In a step, just grab the parameter data and store it in this private field.

In later steps, retrieve the data from this private field.
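A minimal sketch of this approach (the step texts and field name here are just for illustration) might look like the following:

using TechTalk.SpecFlow;

[Binding]
public class AgeSteps
{
    // Private field holding state captured by an earlier step
    private int _enteredAge;

    [Given(@"the user has entered an age of (.*)")]
    public void GivenTheUserHasEnteredAnAgeOf(int age)
    {
        // Grab the parameter data and store it in the private field
        _enteredAge = age;
    }

    [When(@"the age is processed")]
    public void WhenTheAgeIsProcessed()
    {
        // A later step retrieves the data from the same private field
        var age = _enteredAge;
        // ... use age here
    }
}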

This approach is simple but breaks down when we start to refactor our step definitions into multiple step definition classes; the private field data is not accessible between steps in different binding classes.

ScenarioContext

We can store data for the current executing scenario using the ScenarioContext class.

There are a number of ways to use it, but essentially it’s a dictionary in which we can store arbitrary data and retrieve it by key:

ScenarioContext.Current["age"] = 42;

This approach can be used even if we refactor step definitions out across multiple binding classes. The scenario context is shared between all steps for the lifetime that a scenario is executing.

This approach can start to get a little ugly; we also have to make sure our dictionary keys are correct in all the steps where we set or retrieve data.

By default, it’s also weakly-typed – the dictionary just stores objects so we can end up with casts every time we retrieve data.
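For example, storing a value in one step and retrieving it in a later step (possibly in a different binding class) involves agreeing on the key and casting back from object:

// In one step definition: store the value
ScenarioContext.Current["age"] = 42;

// In a later step definition: retrieve by the same key and cast back from object
var age = (int)ScenarioContext.Current["age"];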

Context Injection

The third method (which I wrote about previously) lets us take a dependency injection style approach: we define POCOs to represent the data (in a strongly typed way), and these automatically get injected into each of our binding classes.

Like ScenarioContext, this approach works across multiple binding classes. Unlike with ScenarioContext, we don’t have to worry about dictionary keys or casting.
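A minimal sketch of context injection (the POCO and step classes here are illustrative) might look like the following:

using TechTalk.SpecFlow;

// POCO representing the shared state in a strongly typed way
public class UserDetails
{
    public int Age { get; set; }
}

[Binding]
public class FirstSteps
{
    private readonly UserDetails _userDetails;

    // SpecFlow's context injection supplies the same UserDetails instance
    // to every binding class for the lifetime of the executing scenario
    public FirstSteps(UserDetails userDetails)
    {
        _userDetails = userDetails;
    }

    [Given(@"the user is (.*) years old")]
    public void GivenTheUserIsYearsOld(int age)
    {
        _userDetails.Age = age;
    }
}

[Binding]
public class SecondSteps
{
    private readonly UserDetails _userDetails;

    public SecondSteps(UserDetails userDetails)
    {
        _userDetails = userDetails;
    }

    [Then(@"the stored age is used")]
    public void ThenTheStoredAgeIsUsed()
    {
        // No dictionary keys or casting needed; just read the property
        var age = _userDetails.Age;
        // ... assert or use age here
    }
}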


If you’re new to SpecFlow and Gherkin, check out my Pluralsight course Automated Acceptance Testing with SpecFlow and Gherkin; if you understand the basics or have been using it for a while, check out the SpecFlow Tips and Tricks course to learn how to create more maintainable SpecFlow test automation solutions.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.
