FeatureToggle v4 RC2 with .NET Core Configuration Changes


The pre-release RC2 version of FeatureToggle with .NET Core support is now available on NuGet.

See release notes and GitHub issues for additional background/breaking changes/limitations.

RC2 builds on RC1 and modifies the format of the JSON settings to make use of nesting, as discussed in this issue. For example:

{
  "FeatureToggle": {
    "Printing": "true",
    "Saving": "false"
  }
}
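
Toggle classes themselves are unchanged from RC1; for example, the following minimal sketch (assuming the default appSettings.json configuration described in the RC1 post below) would pick up the nested values above:

using System;
using FeatureToggle;

public class Printing : SimpleFeatureToggle { }
public class Saving : SimpleFeatureToggle { }

class Demo
{
    static void Main()
    {
        // Reads the nested "FeatureToggle" section of appSettings.json shown above
        Console.WriteLine(new Printing().FeatureEnabled ? "Printing on" : "Printing off"); // on
        Console.WriteLine(new Saving().FeatureEnabled ? "Saving on" : "Saving off");       // off
    }
}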

Thanks to @steventmayer for the suggestions.


Architecting Azure Functions: Function Timeouts and Work Fan-Out with Queues

When moving to Azure Functions or other FaaS offerings it’s possible to fall into the trap of “desktop development” thinking, whereby a function is implemented as if it were a piece of desktop code. This may negate the benefits of Azure Functions and may even cause function failures because of timeouts. When running under a Consumption Plan, an Azure Function can execute for 5 minutes before being shut down by the runtime. This limit can be configured to be longer in host.json (currently to a max of 10 minutes). For work that exceeds these limits you could also investigate something like Azure Batch.
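
For reference, under the version 1.x Functions runtime the timeout can be raised in host.json like this (a minimal sketch; on the Consumption Plan the value currently caps out at 10 minutes):

{
  "functionTimeout": "00:10:00"
}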

Non Fan-Out Example

[Diagram: Azure Functions flow]

In this initial attempt, a blob-triggered function receives a blob containing a data file. Some processing is performed on each line (simulated in the following code) and multiple output blobs are written, one for each processed line.

using System.Threading;
using System.Diagnostics;

public static void Run(TextReader myBlob, string name, Binder outputBinder, TraceWriter log)
{
    var executionTimer = Stopwatch.StartNew();

    log.Info($"C# Blob trigger function Processed blob\n Name:{name}");

    string dataLine;
    while ((dataLine = myBlob.ReadLine()) != null)
    {
        log.Info($"Processing line: {dataLine}");
        string processedDataLine = ProcessDataLine(dataLine);
        
        string path = $"batch-data-out/{Guid.NewGuid()}";
        using (var writer = outputBinder.Bind<TextWriter>(new BlobAttribute(path)))
        {
            log.Info($"Writing output line: {dataLine}");
            writer.Write(processedDataLine);
        }
    }

    executionTimer.Stop();

    log.Info($"Procesing time: {executionTimer.Elapsed}");
     
}

private static string ProcessDataLine(string dataLine)
{
    // Simulate expensive processing
    Thread.Sleep(1000);

    return dataLine;
}

Uploading a normal-sized input data file may not result in any errors, but if a larger file is attempted you may get a function timeout:

Microsoft.Azure.WebJobs.Host: Timeout value of 00:05:00 was exceeded by function: Functions.ProcessBatchDataFile.

Fan-Out Example

Embracing Azure Functions more fully, the following pattern can be used, whereby there is no processing in the initial function. Instead, the function just divides up the file and writes each line to a storage queue as an individual message. Another function is triggered from these queue messages and does the actual processing. This means that as the number of messages in the queue grows, multiple instances of the queue-triggered function will be created to handle the load.

[Diagram: Azure Functions fan-out flow]

public async static Task Run(TextReader myBlob, string name, IAsyncCollector<string> outputQueue, TraceWriter log)
{
    log.Info($"C# Blob trigger function Processed blob\n Name:{name}");

    string dataLine;
    while ((dataLine = myBlob.ReadLine()) != null)
    {
        log.Info($"Processing line: {dataLine}");
               
        await outputQueue.AddAsync(dataLine);
    }
}
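
For completeness, the bindings for this dispatcher function would look something like the following function.json (a sketch: the container name, queue name, and connection setting here are illustrative, not taken from the original functions):

{
  "bindings": [
    {
      "type": "blobTrigger",
      "direction": "in",
      "name": "myBlob",
      "path": "batch-data-in/{name}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "outputQueue",
      "queueName": "data-lines",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

The queue-triggered function below would then bind its trigger to the same "data-lines" queue.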

And the queue-triggered function that does the actual work:

using System;
using System.Threading; 

public static void Run(string dataLine, out string outputBlob, TraceWriter log)
{
    log.Info($"Processing data line: {dataLine}");

    string processedDataLine = ProcessDataLine(dataLine);

    log.Info($"Writing processed line to blob: {processedDataLine}");
    outputBlob = processedDataLine;
}


private static string ProcessDataLine(string dataLine)
{
    // Simulate expensive processing
    Thread.Sleep(1000);

    return dataLine;
}

When architecting processing this way there are other limits that may also cause problems, such as (but not limited to) queue scalability limits.

To learn more about Azure Functions, check out my Pluralsight courses: Azure Function Triggers Quick Start and Reducing C# Code Duplication in Azure Functions.


Multiple Platform Targeting in Visual Studio 2017

Suppose you are creating a library that has a central set of features and also additional features that are only available on some platforms. This means that when the project is built there are multiple assemblies created, one for each platform.

One way to achieve multi-platform targeting is to create a number of separate projects, for example one for .NET Core, one for UWP, another for the .NET Framework, etc. A shared source code project can then be added and referenced by each of these individual projects; when each project is built, separate binaries are produced. I’ve used this approach in the past, for example when working on FeatureToggle, but it is a little clunky and results in many projects in the solution.

Another approach is to have a single project that is not limited to a single platform output, but rather compiles to multiple platform assemblies.

For example, in Visual Studio 2017, create a new .NET Core class library project called TargetingExample and add a class called WhoAmI as follows:

using System;

namespace TargetingExample
{
    public static class WhoAmI
    {
        public static string TellMe()
        {
            return ".NET Core";
        }
    }
}

After building the following will be created: "…\MultiTargeting\TargetingExample\TargetingExample\bin\Debug\netcoreapp1.1\TargetingExample.dll". Notice the platform directory “netcoreapp1.1”.

If we add a new .NET Core console app project and reference the TargetingExample project:

using System;
using TargetingExample;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(WhoAmI.TellMe());
            Console.ReadLine();
        }
    }
}

This produces the output: .NET Core

If we edit the TargetingExample.csproj file it looks like the following (notice the TargetFramework element has a single value, netcoreapp1.1):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
</Project>

The file can be modified as follows (notice the plural <TargetFrameworks>):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netcoreapp1.1;net461;</TargetFrameworks>
  </PropertyGroup>
</Project>

Building now produces: "…\MultiTargeting\TargetingExample\TargetingExample\bin\Debug\netcoreapp1.1\TargetingExample.dll" and "…\MultiTargeting\TargetingExample\TargetingExample\bin\Debug\net461\TargetingExample.dll".

A new Windows Classic Desktop Console App project can now be added (with the .NET Framework version changed to 4.6.1) and a reference to TargetingExample added.

using System;
using TargetingExample;

namespace ConsoleApp2
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(WhoAmI.TellMe());
            Console.ReadLine();
        }
    }
}

The new console app contains the preceding code and when run produces the output: .NET Core.

Now we have a single project compiling for multiple target platforms. We can take things one step further by having different functionality depending on the target platform. One simple way to do this is to use conditional compiler directives as the following code shows:

using System;

namespace TargetingExample
{
    public static class WhoAmI
    {
        public static string TellMe()
        {
#if NETCOREAPP1_1
            return ".NET Core";
#elif NETFULL
            return ".NET Framework";
#else
            throw new NotImplementedException();  // Safety net in case of typos in symbols
#endif
        }
    }
}

The preceding code relies on the conditional compilation symbols being defined; this can be done by editing the project file once again as follows:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netcoreapp1.1;net461;</TargetFrameworks>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(TargetFramework)' == 'netcoreapp1.1' ">
    <DefineConstants>NETCOREAPP1_1</DefineConstants>
  </PropertyGroup>
  
  <PropertyGroup Condition=" '$(TargetFramework)' == 'net461' ">
    <DefineConstants>NETFULL</DefineConstants>
  </PropertyGroup>
</Project>

Now when the project is built, the netcoreapp1.1\TargetingExample.dll will return “.NET Core” and net461\TargetingExample.dll will return “.NET Framework”. Each dll has been compiled with different functionality depending on the platform.

Update: The explicit <DefineConstants> for the different platforms are not required if you want to use the defaults, e.g. "NETCOREAPP1_1", "NET461", etc., as per this Twitter thread and GitHub.
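
Relying on those defaults, the conditional block shown earlier could be written as follows (a sketch using the default NET461 symbol in place of the custom NETFULL one):

#if NETCOREAPP1_1
            return ".NET Core";
#elif NET461
            return ".NET Framework";
#else
            throw new NotImplementedException();  // Safety net in case of typos in symbols
#endif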


FeatureToggle v4 RC1 with .NET Core Support

The pre-release RC1 version of FeatureToggle with .NET Core support is now available on NuGet.

See release notes and GitHub issues for additional background/breaking changes/limitations.

The main drive for v4 is to add initial .NET Core support.

Using Feature Toggle in a .NET Core Console App

In Visual Studio 2017, create a new .NET Core Console App and install the NuGet package. This will also install the dependent FeatureToggle.NetStandard package (.NET Standard 1.4).

Add the following code to Program.cs:

using System;
using FeatureToggle;

namespace ConsoleApp1
{

    public class Printing : SimpleFeatureToggle { }
    public class Saving : EnabledOnOrAfterDateFeatureToggle { }
    public class Tweeting : EnabledOnOrAfterAssemblyVersionWhereToggleIsDefinedToggle { }

    class Program
    {
        static void Main(string[] args)
        {
            var p = new Printing();
            var s = new Saving();
            var t = new Tweeting();

            Console.WriteLine($"Printing is {(p.FeatureEnabled ? "on" : "off")}");
            Console.WriteLine($"Saving is {(s.FeatureEnabled ? "on" : "off")}");
            Console.WriteLine($"Tweeting is {(t.FeatureEnabled ? "on" : "off")}");


            Console.ReadLine();
        }
    }
}

Running the application will result in an exception due to a missing appSettings.json file: “System.IO.FileNotFoundException: 'The configuration file 'appSettings.json' was not found and is not optional.'” By default, FeatureToggle expects toggles to be configured in this file. Add an appSettings.json file, set its Copy To Output Directory property to “Copy if newer”, and add the following content:

{
  "FeatureToggle.Printing": "true",
  "FeatureToggle.Saving": "01-Jan-2014 18:00:00",
  "FeatureToggle.Tweeting": "2.5.0.1" // Assembly version is set to 2.5.0.0
}

Running the app now results in:

Printing is on
Saving is on
Tweeting is off

Using Feature Toggle in an ASP.NET Core App

Usage in an ASP.NET Core app currently requires the configuration to be provided when instantiating a toggle; this may be cleaned up in future versions. For RC1, the following code shows the Startup class creating a FeatureToggle AppSettingsProvider and passing it the IConfigurationRoot from the Startup class.

public void ConfigureServices(IServiceCollection services)
{
    // Set provider config so file is read from content root path
    var provider = new AppSettingsProvider { Configuration = Configuration };

    services.AddSingleton(new Printing { ToggleValueProvider = provider });
    services.AddSingleton(new Saving { ToggleValueProvider = provider });

    // Add framework services.
    services.AddMvc();
}

The appSettings would look something like the following:

{
  "FeatureToggle.Printing": "true",
  "FeatureToggle.Saving": "false",
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}
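
With the toggles registered as singletons, they can be constructor-injected wherever they are needed; the following controller is a minimal sketch of this (the HomeController and view data key are illustrative, not from the example project):

public class HomeController : Controller
{
    private readonly Printing _printing;

    // Printing is resolved from the singleton registered in ConfigureServices
    public HomeController(Printing printing)
    {
        _printing = printing;
    }

    public IActionResult Index()
    {
        ViewData["PrintingEnabled"] = _printing.FeatureEnabled;

        return View();
    }
}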

As an example of using this configuration check out the example project on GitHub, in particular the following:

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/Models/Printing.cs

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/Models/Saving.cs

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/ViewModels/HomeIndexViewModel.cs

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/Controllers/HomeController.cs

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/Views/Home/Index.cshtml

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/Startup.cs

https://github.com/jason-roberts/FeatureToggle/blob/master/src/Examples/AspDotNetCoreExample/appsettings.json


Free eBook C# 7.0: What’s New Quick Start Complete

My new free eBook “C# 7.0: What’s New Quick Start” is now complete and available for download.


The book covers the following:

  • Literal Digit Separators and Binary Literals
  • Throwing Exceptions in Expressions
  • Local Functions
  • Expression Bodied Accessors, Constructors and Finalizers
  • Out Variables
  • By-Reference Local Variables and Return Values
  • Pattern Matching
  • Switch Statements
  • Tuples

You can download it for free or pay what you think it is worth.

Happy reading!

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Creating Versioned APIs with Azure Functions and Proxies

One of the interesting possibilities with the (currently in preview) Azure Function Proxies is the ability to create HTTP APIs that can be versioned and also deployed/managed independently.

For example, suppose there is an API that lives at the root “https://dctdemoapi.azurewebsites.net/api”. We could have multiple resources under this root such as customers, products, etc.

So to get the product with an id of 42 we’d construct: “https://dctdemoapi.azurewebsites.net/api/products?id=42”.

If we wanted the ability to version the API we could construct “https://dctdemoapi.azurewebsites.net/api/v1/products?id=42” for version 1 and “https://dctdemoapi.azurewebsites.net/api/v2/products?id=42” for version 2, etc.

Using proxies we can use the format “https://dctdemoapi.azurewebsites.net/api/[VERSION]/[RESOURCE]?[PARAMS]”.

Now we can create two proxies (for example in a Function App called “dctdemoapi”) that forward the HTTP requests to other Function Apps (in this example dctdemoapiv1 and dctdemoapiv2).

Screenshots of the proxies are as follows:

[Screenshot: Azure Function proxy settings for API version 1]

[Screenshot: Azure Function proxy settings for API version 2]

And the respective proxies.json config file:

{
    "proxies": {
        "v1": {
            "matchCondition": {
                "route": "api/v1/{*restOfPath}"
            },
            "backendUri": "https://dctdemoapiv1.azurewebsites.net/api/{restOfPath}"
        },
        "v2": {
            "matchCondition": {
                "route": "api/v2/{*restOfPath}"
            },
            "backendUri": "https://dctdemoapiv2.azurewebsites.net/api/{restOfPath}"
        }
    }
}

Notice in the proxy config the use of the wildcard term “{*restOfPath}” – this will pass the remainder of the path segments to the backend URL, for example “products”, meaning a request to “https://dctdemoapi.azurewebsites.net/api/v1/products?id=42” will be sent to “https://dctdemoapiv1.azurewebsites.net/api/products?id=42”; and “https://dctdemoapi.azurewebsites.net/api/v2/products?id=42” will be sent to “https://dctdemoapiv2.azurewebsites.net/api/products?id=42”.

Now versions of the API can be updated/monitored/managed/etc independently because they are separate Function App instances, but code duplication is a potential problem; common business logic could however be compiled into an assembly and referenced in both Function Apps.

To jump-start your Azure Functions knowledge check out my Azure Function Triggers Quick Start Pluralsight course.


Custom Session Logging in Marten

Marten is a .NET document database library that uses an underlying PostgreSQL database to store objects as JSON. The library has a variety of features that allow the logging of SQL statements issued to the underlying PostgreSQL database, including previewing LINQ query SQL statements. One of the other logging features available allows custom logging to be created for individual session operations such as successfully issued database SQL commands, failed commands, and changes that were saved. There are also numerous other logging/extension points that can be utilized, such as logging schema change SQL and automatically using a logger for all sessions, as shown at the end of this post.

The following code shows a console application that writes two customers and then retrieves them:

using System;
using static System.Console;
using Marten;

namespace MartenFKDemo
{
    class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public override string ToString() => $"{Id} {Name}";
    }

    class Program
    {
        static void Main(string[] args)
        {
            const string connectionString = "host = localhost; database = OrderDb; password = g7qo84nck22i; username = postgres";

            var store = DocumentStore.For(connectionString);

            WriteLine("Creating new customer");
            using (var session = store.OpenSession())
            {
                session.Store(new Customer {Name = "Amrit"}, new Customer {Name = "Sarah"});

                session.SaveChanges();
            }


            WriteLine("All customers:");
            using (var session = store.QuerySession())
            {
                foreach (Customer customer in session.Query<Customer>())
                {
                    WriteLine(customer);
                }
            }
            
            ReadLine();
        }
    }
}

Running the application results in the following output:

Creating new customer
All customers:
9001 Amrit
9002 Sarah

To create a custom session logger, the IMartenSessionLogger interface can be implemented, a simple version that logs to the console is shown as follows:

class ColorConsoleLogger : IMartenSessionLogger
{
    public void LogSuccess(Npgsql.NpgsqlCommand command)
    {
        ForegroundColor = ConsoleColor.Green;

        WriteLine(command.CommandText); // additional properties (e.g. SQL parameters) are available
    }

    public void LogFailure(Npgsql.NpgsqlCommand command, Exception ex)
    {
        ForegroundColor = ConsoleColor.Red;

        WriteLine(command.CommandText); // additional properties (e.g. SQL parameters) are available
        WriteLine(ex);
    }

    public void RecordSavedChanges(IDocumentSession session, IChangeSet commit)
    {
        ForegroundColor = ConsoleColor.Gray;

        foreach (object insertedItem in commit.Inserted) //  updated/deleted are also available
        {
            WriteLine(insertedItem);
        }
    }
}

To configure individual sessions to use this logger, the session's Logger property can be set, as the following modified code demonstrates:

ResetColor();
WriteLine("Creating new customer");
using (var session = store.OpenSession())
{
    // Set logger for this session
    session.Logger = new ColorConsoleLogger();

    session.Store(new Customer {Name = "Amrit"}, new Customer {Name = "Sarah"});

    session.SaveChanges();
}

ResetColor();
WriteLine("All customers:");
using (var session = store.QuerySession())
{
    // Set logger for this session
    session.Logger = new ColorConsoleLogger();

    foreach (Customer customer in session.Query<Customer>())
    {
        ResetColor();
        WriteLine(customer);
    }
}

This produces the following output:

Creating new customer
select public.mt_upsert_customer(doc := :p0, docDotNetType := :p1, docId := :p2, docVersion := :p3);select public.mt_upsert_customer(doc := :p4, docDotNetType := :p5, docId := :p6, docVersion := :p7);
13001 Amrit
13002 Sarah
All customers:
select d.data, d.id, d.mt_version from public.mt_doc_customer as d
13001 Amrit
13002 Sarah

[Screenshot: Marten custom logger color console output]
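
Alternatively, rather than setting the logger session by session, a logger can be registered for all sessions when the DocumentStore is configured; the following is a sketch using Marten’s built-in ConsoleMartenLogger (a custom IMartenLogger implementation could be supplied instead):

var store = DocumentStore.For(options =>
{
    options.Connection(connectionString);

    // Every session opened from this store now uses this logger;
    // schema change SQL is also routed through IMartenLogger
    options.Logger(new ConsoleMartenLogger());
});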

To learn more about the document database features of Marten check out my Pluralsight courses: Getting Started with .NET Document Databases Using Marten and Working with Data and Schemas in Marten or the documentation.

You can start watching with a Pluralsight free trial.


Previewing the Generated PostgreSQL SQL for a Query in Marten

Marten is a .NET document database library that uses an underlying PostgreSQL database to store objects as JSON. The library has a variety of features that allow the logging of SQL statements issued to the underlying PostgreSQL database in addition to being able to do things such as get the PostgreSQL query plan for a given LINQ query.
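
As an aside, the query plan mentioned above can be retrieved with Marten’s Explain() extension method; the following is a sketch reusing the Order query from the example below (the QueryPlan property names here are assumptions based on Marten’s mapping of the PostgreSQL EXPLAIN output):

// Ask PostgreSQL for the query plan behind the LINQ query
var plan = session.Query<Order>()
                  .Where(x => x.CustomerId == 4001)
                  .Explain();

Console.WriteLine($"Node type: {plan.NodeType}, total cost: {plan.TotalCost}");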

One simple way to get the generated SQL for a Marten LINQ query is to use the ToCommand() extension method.

As an example, suppose we are developing some query code as follows (this code uses the Include method to include the related documents in a single database round-trip):

Customer customer = null;

List<Order> orders = session.Query<Order>()
                            .Include<Customer>(joinOnOrder => joinOnOrder.CustomerId, includedCustomer => customer = includedCustomer)
                            .Where(x => x.CustomerId == 4001).ToList();

If we want to get an idea of what SQL Marten will generate for this LINQ query, we can change the code as shown in the following:

Customer customer = null;

IQueryable<Order> orders = session.Query<Order>()
                                  .Include<Customer>(joinOnOrder => joinOnOrder.CustomerId, includedCustomer => customer = includedCustomer)
                                  .Where(x => x.CustomerId == 4001);

// Get the SQL command that will be issued when the query executes
NpgsqlCommand cmd = orders.ToCommand();

// Output some selected command info
Console.WriteLine(cmd.CommandText);

foreach (NpgsqlParameter parameter in cmd.Parameters)
{
    Console.WriteLine($"Parameter {parameter.ParameterName} = {parameter.Value}");
}

// Ensure included customer variable is populated
List<Order> orderResults = orders.ToList();

Console.WriteLine(customer.Name);
foreach (Order order in orderResults)
{
    Console.WriteLine($" Order {order.Id} for {order.Quantity} items");
}

Running the preceding code results in the following console output:

select d.data, d.id, d.mt_version, customer_id.data, customer_id.id, customer_id.mt_version from public.mt_doc_order as d INNER JOIN public.mt_doc_customer as customer_id ON d.customer_id = customer_id.id where d.customer_id = :arg0
Parameter arg0 = 4001
Sarah
 Order 3001 for 42 items
 Order 4001 for 477 items
 Order 5001 for 9 items

To learn more about the document database features of Marten check out my Pluralsight courses: Getting Started with .NET Document Databases Using Marten and Working with Data and Schemas in Marten.

You can start watching with a Pluralsight free trial.


Retrieving Raw JSON Data in Web API with Marten

Marten is a .NET document database library that uses an underlying PostgreSQL database to store objects as JSON.

Ordinarily, Marten takes care of retrieving the JSON from the database and deserializing it into an object. We can however instruct Marten to perform a query that retrieves document(s) without performing the deserialization, instead giving us the JSON as it appears in the underlying PostgreSQL record. If we are exposing documents via a Web API, we can take advantage of this feature to reduce some processing overhead on the server.

As an example, we could have the following Customer document defined:

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    // etc.
}

And in a CustomersController we could start with a method as follows to add Customers to the database:

public void Post(Customer customer)
{
    // no validation

    // DocumentStore would normally be created only once in app, e.g. via IOC singleton 
    using (var store = DocumentStore.For(ConnectionString))
    {
        using (var session = store.LightweightSession())
        {
            session.Store(customer);
            session.SaveChanges();
        }
    }
}

To get all Customers, the following method could be written:

// GET api/customers
public IEnumerable<Customer> Get()
{
    // DocumentStore would normally be created only once in app, e.g. via IOC singleton 
    using (var store = DocumentStore.For(ConnectionString))
    {
        using (var session = store.QuerySession())
        {
            // Materialize the results before the session is disposed
            return session.Query<Customer>().ToList();
        }
    }
}

The preceding method however incurs the additional overhead of Marten deserializing the database JSON into Customer objects, only for them to be serialized back into JSON on the way out of the API.

When creating the LINQ query, the Marten ToJsonArray() method can be added to instruct Marten to simply return the JSON directly from the database.

We can then modify the Get method as follows:

public HttpResponseMessage Get()
{
    // DocumentStore would normally be created only once in app, e.g. via IOC singleton 
    using (var store = DocumentStore.For(ConnectionString))
    {
        using (var session = store.QuerySession())
        {
            string rawJsonFromDb = session.Query<Customer>().ToJsonArray();

            var response = Request.CreateResponse(HttpStatusCode.OK);
            response.Content = new StringContent(rawJsonFromDb, Encoding.UTF8, "application/json");
            return response;
        }
    }
}

We could also write the parameterized Get method and use Marten’s AsJson() method to get the JSON string for the individual Customer document, as in the following code:

public HttpResponseMessage Get(int id)
{
    // DocumentStore would normally be created only once in app, e.g. via IOC singleton 
    using (var store = DocumentStore.For(ConnectionString))
    {
        using (var session = store.LightweightSession())
        {
            var rawJsonFromDb = session.Query<Customer>().Where(x => x.Id == id).AsJson().FirstOrDefault();

            if (string.IsNullOrEmpty(rawJsonFromDb))
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }

            var response = Request.CreateResponse(HttpStatusCode.OK);
            response.Content = new StringContent(rawJsonFromDb, Encoding.UTF8, "application/json");
            return response;
        }
    }
}

To learn more about the document database features of Marten check out my Pluralsight courses: Getting Started with .NET Document Databases Using Marten and Working with Data and Schemas in Marten.

You can start watching with a Pluralsight free trial.


New Pluralsight Course: Working with Data and Schemas in Marten

Marten is a .NET document database library that allows objects to be stored, retrieved, and queried as documents stored as JSON in an underlying PostgreSQL database. This new course is a follow-on from the previous Getting Started with .NET Document Databases Using Marten course; if you’re new to Marten I’d recommend checking out the previous course first before continuing with this new one.

Among other topics, this new course covers how to log/diagnose the SQL that is being issued to PostgreSQL; how to enable offline optimistic concurrency; bulk document inserts; a number of ways to improve query performance; and the customization of database schema objects.

You can check out the new course on the Pluralsight site.

You can start watching with a Pluralsight free trial.
