Recently I wrote about Setting Up DbUp in Azure Pipelines at one of my clients. We had all our scripts running under Transaction Per Script mode, and everything worked fine until we had to deploy some SQL scripts that cannot be run inside a transaction. So now I have a bunch of SQL script files that can run under a transaction and some (like the Full-Text Search scripts below) that cannot. By default, if you run such a script with DbUp under a transaction, you get the error message 'CREATE FULLTEXT CATALOG statement cannot be used inside a user transaction', and this is a known issue.

Full Text Search Script
CREATE FULLTEXT CATALOG MyCatalog
GO

CREATE FULLTEXT INDEX
ON  [dbo].[Products] ([Description])
KEY INDEX [PK_Products] ON MyCatalog
WITH CHANGE_TRACKING AUTO
GO

One option is to turn off transactions altogether using builder.WithoutTransaction() (the default transaction setting) and everything would work as usual. But if you want each of your scripts to run under a transaction where possible, you can choose either of the options below.
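
For reference, the transaction behaviour is controlled on the upgrade builder. Below is a minimal sketch of the three options (the connection string and embedded scripts are assumptions, not from the original post).

Transaction modes on the DbUp builder (sketch)
var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    .LogToConsole()
    .WithoutTransaction()           // default: scripts run outside a transaction
    // .WithTransaction()           // alternative: one transaction for the whole upgrade
    // .WithTransactionPerScript()  // alternative: a transaction per script
    .Build();

var result = upgrader.PerformUpgrade();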

Using Pre-Processors to Modify Script Before Execution

Script Pre-Processors are an extensibility hook into DbUp that allow you to modify a script before it gets executed, so we can wrap each SQL script in a transaction ourselves. In this case, you configure the builder to run WithoutTransaction and modify each script file before execution, explicitly wrapping it in a transaction if required. Writing a custom pre-processor is done by implementing the IScriptPreprocessor interface, which hands you the contents of the script file to modify. Here, all I do is check whether the text contains 'CREATE FULLTEXT' and wrap it in a transaction if it does not. You could use file-name conventions or any other rule of your choice to perform the check and conditionally wrap the script in a transaction.

Conditionally Apply Transaction
public class ConditionallyApplyTransactionPreprocessor : IScriptPreprocessor
{
    public string Process(string contents)
    {
        if (!contents.Contains("CREATE FULLTEXT", StringComparison.InvariantCultureIgnoreCase))
        {
            var modified =
                $@"
BEGIN TRANSACTION   
BEGIN TRY
           {contents}
    COMMIT;
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH";

            return modified;
        }
        else
            return contents;
    }
}
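
The pre-processor then needs to be registered on the builder, which itself runs without a transaction. A minimal sketch (the connection string and script source are assumptions):

Register the Pre-Processor (sketch)
var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    .WithPreprocessor(new ConditionallyApplyTransactionPreprocessor()) // wraps each script as needed
    .WithoutTransaction() // transactions are now handled inside the scripts themselves
    .LogToConsole()
    .Build();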

Using Multiple UpgradeEngine to Deploy Scripts

If you would rather not tweak the pre-processing step and want to use the default implementations of DbUp while still keeping transactions for your scripts where possible, you can use multiple upgraders to do the job. Iterate over all your script files and partition them into batches of files that can be run under a transaction and files that can't. As shown in the image below, you end up with multiple batches alternating between transactional and non-transactional sets of scripts. When performing the upgrade over a batch, set WithTransactionPerScript on the builder conditionally. If any batch fails, you can terminate the database upgrade.

Script file batches

Execute all batches (Might not be production ready)
// The snippet below starts mid-method; it assumes allScriptFiles (the ordered list of
// script file paths) and connectionString are already in scope.
{
    Func<string,bool> canRunUnderTransaction = (fileName) => !fileName.Contains("FullText");
    Func<List<string>, string, bool> belongsToCurrentBatch = (batch, file) =>
      batch != null &&
        canRunUnderTransaction(batch.First()) == canRunUnderTransaction(file);

    var batches = allScriptFiles.Aggregate
        (new List<List<string>>(), (current, next) =>
            {
                if (belongsToCurrentBatch(current.LastOrDefault(),next))
                    current.Last().Add(next);
                else
                    current.Add(new List<string>() { next });

                return current;
            });

    foreach (var batch in batches)
    {
        // All files in a batch share the same transaction capability, so check the batch as a whole.
        var includeTransaction = batch.All(canRunUnderTransaction);

        var result = PerformUpgrade(batch.ToSqlScriptArray(), includeTransaction);

        if (!result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.ResetColor();
            return -1;
        }
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

private static DatabaseUpgradeResult PerformUpgrade(
    SqlScript[] scripts,
    bool includeTransaction)
{
    var builder = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScripts(scripts)
        .LogToConsole();

    if (includeTransaction)
        builder = builder.WithTransactionPerScript();

    var upgrader = builder.Build();

    var result = upgrader.PerformUpgrade();

    return result;
}
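
The ToSqlScriptArray helper used above is not part of DbUp and is not shown in the original snippet; a minimal sketch, assuming each batch holds script file paths on disk, could look like this:

ToSqlScriptArray (sketch)
public static class ScriptBatchExtensions
{
    // Converts a batch of SQL file paths into DbUp SqlScript instances.
    public static SqlScript[] ToSqlScriptArray(this IEnumerable<string> files) =>
        files.Select(SqlScript.FromFile).ToArray();
}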

Keeping all your scripts in a single place and automating their deployment through the build-release pipeline is something to strive for. Hope this helps you continue using DbUp even when your scripts are a mix of transactional and non-transactional.

Providing application capabilities based on the role of the user using the system is a common requirement. When using Azure Active Directory (AD), the Groups feature allows organizing users of your system into different roles. In the applications that we build, the group information can be used to enable/disable functionality. For example, if your application has functionality to add new users, you might want to restrict it to users belonging to the administrator role.

Adding new groups can be done from the Azure portal. Select the Group Type as Security, since it is intended to provide permissions based on roles.

Azure AD Add Group

For the groups to be returned as part of the claims, the groupMembershipClaims property in the application manifest needs to be updated. Setting it to SecurityGroup will return all security groups of the user.

Azure AD Manifest - Group Membership Claims
{
    "groupMembershipClaims": "SecurityGroup"
}

Each group created is assigned an ObjectId, which is what gets returned as part of the claims. You can either add it to your application's config file or use the Microsoft Graph API to query the list of groups at runtime. Here I have chosen to keep it in the config file.

appsettings.json
"AdGroups": [
  {
    "GroupName": "Admin",
    "GroupId": "119f6fb5-a325-47f9-9889-ae6979e9e120"
  },
  {
    "GroupName": "Employee",
    "GroupId": "02618532-b2c0-4e58-a32e-e715ddf07f63"
  }
]
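
The configuration binds to a simple POCO. The original post does not show it, but it would be something along these lines (property names assumed to match the registration code further below):

AdGroupConfig (sketch)
public class AdGroupConfig
{
    public string GroupName { get; set; }
    public string GroupId { get; set; }
}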

Now that we have all the groups and the associated configuration set up, we can wire up the .Net Core web application to start using the groups from the claims to enable/disable features. Using the policy-based authorization capabilities of .Net Core, we can wire up policies for all the groups we have.

Role-based authorization and claims-based authorization use a requirement, a requirement handler, and a pre-configured policy. These building blocks support the expression of authorization evaluations in code. The result is a richer, reusable, testable authorization structure.

We have an IsMemberOfGroupRequirement class to represent the requirement for all the groups, and an IsMemberOfGroupHandler that implements how to validate a group requirement. The handler reads the current user's claims and checks whether they contain the ObjectId associated with the group. If a match is found, the requirement check is marked as a success. Since we want the request to continue matching against any other group requirements, the requirement is not failed explicitly.

IsMemberOfGroup Requirement
public class IsMemberOfGroupRequirement : IAuthorizationRequirement
{
    public readonly string GroupId;
    public readonly string GroupName;

    public IsMemberOfGroupRequirement(string groupName, string groupId)
    {
        GroupName = groupName;
        GroupId = groupId;
    }
}

public class IsMemberOfGroupHandler : AuthorizationHandler<IsMemberOfGroupRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, IsMemberOfGroupRequirement requirement)
    {
        var groupClaim = context.User.Claims
             .FirstOrDefault(claim => claim.Type == "groups" &&
                 claim.Value.Equals(requirement.GroupId, StringComparison.InvariantCultureIgnoreCase));

        if (groupClaim != null)
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

Registering the handler and the policies for all the groups in the application's configuration file can be done as shown below. Looping through all the groups in the config, we create a policy for each, named after the associated GroupName. This allows us to use the GroupName as the policy name wherever we want to restrict features to users belonging to that group.

Registering Policy and Handler
services.AddAuthorization(options =>
{
    var adGroupConfig = new List<AdGroupConfig>();
    _configuration.Bind("AdGroups", adGroupConfig);

    foreach (var adGroup in adGroupConfig)
        options.AddPolicy(
            adGroup.GroupName,
            policy =>
                policy.AddRequirements(new IsMemberOfGroupRequirement(adGroup.GroupName, adGroup.GroupId)));
});

services.AddSingleton<IAuthorizationHandler, IsMemberOfGroupHandler>();

Using a policy is now as simple as decorating your controllers with the Authorize attribute and providing the required policy name, as shown below.

[Authorize(Policy = "Admin")]
[ApiController]
public partial class AddUsersController : ControllerBase
{
    ....
}

Hope this helps you set up role-based functionality for your ASP.Net Core applications using Azure AD as the authentication/authorization provider.

When using Azure Active Directory to manage your users, it is a common requirement to add additional attributes to your users, like Skype ID, employee code, employee ID and similar. Even though this is a common need, getting it done is not that straightforward. This post describes how you can add additional properties to User objects in Azure AD.

Recently, when I had to do this at a client, we had users in Azure AD, and the additional property, employeeCode, was available in an internal application that had the user's Azure email address mapped to it. We needed these codes synced across to Azure AD and made available as part of the claims for a website that uses Azure AD authentication.

Adding Custom Attribute using Directory Schema Extensions

An Azure AD user has a set of default properties, manageable through the Azure portal. Any additional property on a user gets added as an extension to the current user schema. To add a new property, we first need to register an extension, which can be done using the Graph Explorer website. You need to specify the appropriate directory name (e.g., contoso.onmicrosoft.com) and the applicationObjectId. The application object id is the Object Id of the AD application that the web application uses to authenticate with Azure AD.

Azure AD supports a similar type of extension, known as directory schema extensions, on a few directory object resources. Although you have to use the Azure AD Graph API to create and manage the definitions of directory schema extensions, you can use the Microsoft Graph API to add, get, update and delete data in the properties of these extensions.

POST https://graph.windows.net/contoso.onmicrosoft.com/applications/
    <applicationObjectId>/extensionProperties?api-version=1.5 HTTP/1.1
{
    "name": "employeeCode<optionalEnvironmentName>",
    "dataType": "String",
    "targetObjects": [
        "User"
    ]
}

The response gives back the fully-qualified extension property name, which is used to write values to the property. Usually the name is of the format extension_<adApplicationIdWithoutDashes>_extensionPropertyName

If you have multiple environments (like Dev, Test, UAT, Prod) all pointing to the same Active Directory, it is a good idea to append the environment name to the extension property. It avoids any bad data issues between environments as all these properties get written to the same User object. You can automate the above step using any scripting language of your choice if required.
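
Since the fully-qualified name is just a convention over the AD application's id, a small helper can build it. This is a sketch of my own, not from the original post:

Build the extension property name (sketch)
// Builds the fully-qualified extension property name, e.g.
// extension_ab603c56068041afb2f6832e2a17e237_employeeCodeDev
public static string ToExtensionPropertyName(
    string adApplicationId, string propertyName, string environmentName = "")
{
    var appIdWithoutDashes = adApplicationId.Replace("-", string.Empty);
    return $"extension_{appIdWithoutDashes}_{propertyName}{environmentName}";
}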

Setting Values for Custom Attributes

Now that we have the extension property created on the AD application, we can set the property on the User object. If you want to do this manually, you can use the Graph Explorer website again.

PATCH https://graph.windows.net/contoso.onmicrosoft.com/users
        /[email protected]?api-version=1.5
{
    "extension_ab603c56068041afb2f6832e2a17e237_employeeCode<optionalEnvironmentName>": "EMP124"
}

In our case it was not a one-off update of the User object, so we wanted this automated. Employee codes were available from a database along with the associated Azure AD email address, so we created a Windows service job to sync these codes to Azure AD. You can write to Azure AD schema extension properties using the Microsoft Graph API. Add a reference to the Microsoft Graph NuGet package, and you are all set to go. For the Graph API to authenticate, use a different Azure AD app (separate from the one that you registered the extension property on and that the web app uses to authenticate), since it needs additional permissions and it is a good idea to isolate those. Under Settings -> Required Permissions, add Microsoft Graph and grant the relevant permissions for it to write the user's profile/directory data.

Azure AD Graph API Permissions

Get Graph Api Client
private static async Task<GraphServiceClient> GetGraphApiClient()
{
    var clientId = ConfigurationManager.AppSettings["AppId"];
    var secret = ConfigurationManager.AppSettings["Secret"];
    var domain = ConfigurationManager.AppSettings["Domain"];

    var credentials = new ClientCredential(clientId, secret);
    var authContext =
        new AuthenticationContext($"https://login.microsoftonline.com/{domain}/");
    var token = await authContext
        .AcquireTokenAsync("https://graph.microsoft.com/", credentials);

    var graphServiceClient = new GraphServiceClient(new DelegateAuthenticationProvider((requestMessage) =>
    {
        requestMessage
            .Headers
            .Authorization = new AuthenticationHeaderValue("bearer", token.AccessToken);

        return Task.CompletedTask;
    }));

    return graphServiceClient;
}
Update Extension Value
private async Task UpdateEmployeeCode(
    string employeeCodePropertyName, GraphServiceClient graphApiClient, Employee employee)
{
    var dictionary = new Dictionary<string, object>();
    dictionary.Add(employeeCodePropertyName, employee.Code);

    await graphApiClient.Users[employee.EmailAddress]
        .Request()
        .UpdateAsync(new User()
        {
            AdditionalData = dictionary
        });
}

Looping through all the employee codes, you can update them in Azure AD at regular intervals. To verify that the attributes are updated correctly, you can either use the Graph API client to read the extension property or use the Graph Explorer website.
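
Putting the pieces together, the sync job ends up being a simple loop over the employees. This is a sketch; the employee data source and the extension property name shown are assumptions:

Sync loop (sketch)
var graphApiClient = await GetGraphApiClient();
// Assumed: the fully-qualified extension property name registered earlier.
var employeeCodePropertyName = "extension_ab603c56068041afb2f6832e2a17e237_employeeCode";

// Assumed: employees (email address + code) loaded from the internal database.
foreach (var employee in await GetEmployeesWithCodes())
    await UpdateEmployeeCode(employeeCodePropertyName, graphApiClient, employee);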

Accessing Custom Attributes through Claims

With Azure AD updated with the employee code for each user, we can now set up the AD application to return the additional property as part of the claims when the web application authenticates with it. The application manifest of the Azure AD application needs to be modified to return the extension property as part of the claims. By default, the optionalClaims property is set to null, and you can update it with the values below.

Azure AD Application Manifest - Optional Claims

Optional Claims in Azure AD Application Manifest
"optionalClaims": {
    "idToken": [
      {
        "name": "extension_<id>_employeeCodeLocal",
        "source": "user",
        "essential": true,
        "additionalProperties": []
      }
    ],
    "accessToken": [],
    "saml2Token": []
  },

I updated the idToken property as the .Net Core web application was using the JWT ID token. If you are unsure which token is used, you can use Fiddler to find out (as shown below).

Id token returned

With the optionalClaims set, the web application is all set to go. For an authenticated user (with the extension property set), the extension property is available as part of the claims. The claim type will be 'extn.employeeCode'. The code below can be used to extract the employee code from the claim.

Get Employee Code From Claim
public static string GetEmployeeCode(this ClaimsPrincipal claimsPrincipal)
{
    if (claimsPrincipal == null || claimsPrincipal.Claims == null)
        return null;

    var empCodeClaim = claimsPrincipal.Claims
        .FirstOrDefault(claim => claim.Type.StartsWith("extn.employeeCode"));

    return empCodeClaim?.Value;
}
Usually, the claims start flowing through immediately. However, it did once happen to me that the claims did not come through for a long period. I am not sure what I did wrong, but once I deleted and recreated the AD application, it started working fine.

Although setting additional properties on Azure AD users is a common requirement, setting it up is not that straightforward. Hopefully the portal improves someday so that it is as easy as setting a list of key-value extension properties that seamlessly flow through as part of the claims. Until that day, hope this helps you set up extra information on your Azure AD users.

Azure Pipelines is part of the Azure DevOps offerings, which enable you to continuously build, test and deploy to any platform and cloud environment. It's been a while since this has been out, and it's only recently that I got a chance to play around with it at one of my clients. We use DbUp, a .Net library, to deploy schema changes to our SQL Server database. It tracks which SQL scripts have been run already and runs the change scripts that are needed to get your database up to date.

Setting up DbUp is very easy, and you can use the script straight from the docs to get started. If you are using the .Net Core console application VS template to set up DbUp, make sure to modify the return type of the Main function to int and to return the appropriate application exit codes (as in the script from the docs). I made the mistake of removing the return statements, only to realize later that the build scripts were passing successfully even though the DbUp scripts were failing.

If you are using the .Net Core console application VS template (like I did) make sure you modify the return type of the main function in Program.cs to int.
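
For reference, the entry point ends up looking very close to the script in the DbUp docs; the important parts are the int return type and the non-zero exit code on failure. A minimal sketch:

Program.cs Main (sketch)
static int Main(string[] args)
{
    var connectionString = args.FirstOrDefault(); // passed in from the release pipeline

    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    var result = upgrader.PerformUpgrade();

    if (!result.Successful)
    {
        Console.WriteLine(result.Error);
        return -1; // non-zero exit code fails the pipeline step
    }

    return 0;
}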

In Azure Pipelines, I have the build step publish the build output as a zip artifact. Using this in the release pipeline is a two-step process.

1 - Extract Zip Package

Using the Extract Files Task extract the zip package from the build artifacts. You can specify a destination folder for the files to be extracted to (as shown below).

Extract package

2 - Execute DbUp Package

With the package extracted into a folder, we can now execute the console application (using the dotnet command line), passing in the connection string as a command-line argument.

Execute package

You now have your database deployments automated through the Azure Pipelines.

With Azure Pipelines you can continuously build, test and deploy to any cloud platform. Azure Pipelines has multiple options to start with based on your project. Even if you are developing a private application, Pipelines offers you one free parallel job with up to 1800 minutes per month, and also one free self-hosted parallel job with unlimited minutes (as it's running on your infrastructure anyway).

On the Microsoft-hosted CI/CD plan with 1800 minutes, you might need to find the used/remaining time at any point during the month. You can find the remaining minutes from the Azure DevOps portal by selecting the relevant organization.

Organization settings -> Retention and parallel jobs -> Parallel Jobs

Azure Devops Pipelines - Remaining Build Minutes

Hope that helps you find the remaining free build minutes for your organization!

At times you might be working in environments where there are a lot of restrictions on the tools that you can use, the process that you need to follow, etc. Under these circumstances, it is essential that we stick to the core, fundamental principles and practices that we as an industry have adopted, and make sure they are in place no matter what restrictions are imposed. Below are a few of the restrictions that my team and I faced at one of my clients, and what we did to stay on top of them and still deliver at a higher speed.

Working under Constraints.

The issues discussed might or might not immediately relate to you; the important thing is your attitude towards such issues and finding ways around your constraints, keeping yourself productive in the long run.

No Build Deploy Pipeline

When I joined the project, it amazed me that we were still building/packaging the application from a local developer machine and manually deploying it to the various environments (Dev, Test, UAT, and PROD).

Whenever a release was to be made, one of the developers had to pause their current work, switch to the appropriate branch for the release, make sure they had the latest code base, and build with the correct configuration to generate a package.

This might sound like an outdated practice (as it did to me), but here I am at a client, in the year 2018, and it's still happening. What surprised me even more was that the team did have access to an Octopus server (backed by a Jenkins build server), but since the deployment server did not have access to the UAT/PROD servers they chose not to use it. You bet this was the first thing I was keen on fixing, as generating a release package from my local system would be the last thing I want to do.

After a quick chat with the team, we decided on the below.

  • Set up a build/deploy pipeline up to the Test environment. This allows seamless integration while we are developing features and getting them out for testing. Since we had access up to the Test environment, this was hardly an hour's work to get working.

  • Since we did not have access to UAT/PROD and the process required us to hand over a deployment package to the concerned team, we set up a 'Packaging Project' in Octopus. This project unzips the selected build package onto our Dev environment server, applies the configuration transforms and zips the folder back up into a deployment package. With this, we are now able to create a deployment package for any given build and for any environment. We are also having discussions to give the deployment servers access to UAT/PROD so that we can deploy automatically, all the way to production.

The process was no longer dependent on a developer or a developer machine and was completely automated. For those reading this who are in a similar situation but do not have access to a build/deploy system like Jenkins/Octopus, I would set up a simple script to pull down the source given a commit hash/branch/TFS label and perform a build and package independent of the working directory of the developer. This script could run on a shared server (if you have access to one) or, at worst, on a developer's machine/VM. The fundamental thing we are trying to achieve is to decouple packaging from the current working folder on a developer machine and the manual steps involved in generating a package. As long as you have an automated way to create a package, irrespective of what tools/systems you use, you should be safe and sound.

Out of Sync SQL and Code Artifacts

The application is heavily dependent on stored procedures for pulling/pushing data out of the SQL Server database. Yes, you heard it right, stored procedures, and ones with business logic in them, which is what makes it actually worse. Looking at how the stored procedures were maintained, I could see that the team had started off with good intentions using DbUp but soon moved away from it. When I joined, the process was to share SQL artifacts as attachments on the Jira story/bug. The database administrator (DBA) would then pull them out and manage them separately in a source control repository that was not the same as the application code base.

There was not much information on why this was the case, but the primary reason they moved away from DbUp was that there was no visibility into the SQL scripts when running updates, as the output of the DbUp project was an executable file. Also, there were poor development/deployment practices that led to ad-hoc execution of scripts in environments without actually updating source control. This soon left the DBA without control, and the only way to gain it back was to maintain the scripts separately.

Again we decided to have a quick chat with the team, along with the DBA, on how to improve the current process, as it was getting harder to track application package versions and the associated scripts that go with each package.

  • DbUp by default embeds the SQL artifacts into the executable, which removes all visibility into the actual scripts. However, this behaviour is configurable using ScriptProviders. By using the FileSystemScriptProvider, we can specify the folder from which to load the SQL scripts (see the sketch after this list). Configuring msbuild to copy the folder's files to the output and including them in the final package was an easy change. This provided the DBA with the actual SQL artifacts, which he could review quickly. We also started a code review process and began including the DBA on any changes related to SQL artifacts. This gave even more visibility to the DBA and helped catch issues right at development time.

  • With automated build/deploy up to the Test environment in place, we no longer had to make ad-hoc changes to the databases, and everything was pushed through source control as it was faster and more comfortable.
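
A sketch of the FileSystemScriptProvider configuration mentioned in the first point (the folder name is an assumption):

FileSystemScriptProvider (sketch)
var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    // Load scripts from a folder copied to the build output and into the package,
    // instead of embedding them in the executable.
    .WithScripts(new FileSystemScriptProvider("Scripts"))
    .LogToConsole()
    .Build();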

With these few tweaks we were in a much better state, and there was one-to-one traceability between source code and SQL artifacts. It all lived as part of one package, traceable all the way back to the source code commit tag auto-generated by the build system.

Not Invented Here Syndrome

With the kind of restrictions you have seen so far, you can guess the approach towards third-party services and off-the-shelf products. Most things were still done in-house (including trying to replicate a service bus). The problem with this approach is that there is a limit to how far you can go, after which you either lose your team or the code takes over and you just cannot maintain it. When starting out on a new project and while the code base is still small, building your own mechanisms might seem to work well. But once past that point, you no longer want to continue down that path but rather invest in industry-proven tools. These include logging servers, service buses/queues (if you need one), and email services (especially if you want to track and run statistics on the emails sent out).

The biggest challenge in introducing this is usually not cost (there are a lot of really affordable services for every business); it's mostly the fear of the unknown and a lack of interest in venturing into unfamiliar territory. The reasons might vary for you, but try to understand the core reason that is hindering the change.

One technique that worked to get over the fear of the unknown was to introduce changes slowly into the system, one at a time, giving people enough time to get used to them.

Seq was one of the first things we had proposed and it had long been sitting in the wish list. The team was using Serilog to log, and all the logs were stored in a SQL table, making it really hard to query and monitor them. The infrastructure team did not want to install Seq as it was all new to them, and they were not sure about the additional work of managing a Seq instance. So we suggested they have it just on the development server first and get familiar with the application. After a couple of days, the business was seeing the benefit of the increased visibility into the logs, and the infrastructure team was happy with it as well. Within a week, they were happy to install one for the Test environment too. At the time of writing, we are looking at getting a Seq instance on the UAT server and soon a production instance as well. Getting the interested stakeholders to have a feel for the application and slowly introducing the change is a great way to get buy-in.

Now we are trying to push for a service bus!

Build Server without NuGet access

The build server we were using was hosted in-house, and the box it ran on did not have internet access. This meant we could not have any external package dependencies pulled at build time. We chose to include the package references along with the source code, which is what I tend to prefer anyway. All our third-party libraries were pushed along with the source code repository, so the build machine had all the required dependencies and did not need internet connectivity to make a build.

Those are just a subset of the issues we ran into, and you can bet there were many smaller ones. At times the problems are not technical in nature, but more about communication and how effectively you are able to get all the people involved to get along with each other.

Any journey to advancement is about valuing the people around you, understanding them and taking them along with the change. It’s a journey that the team needs to make together and not a solo one.

Different people have different experiences, pain points, concerns and targets to check off. So as a team you need to understand what works for everyone and come to a collective agreement. Just getting all of the concerned parties into a room and having a healthy discussion (mainly by not being prescriptive but descriptive of the issues that you are facing) solves most of the problems.

Do you work in a similar environment? What challenges do you face at work? Sound off in the comments!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use Subresource Integrity and the issues it solves.

Subresource Integrity (SRI) is a security feature that enables browsers to verify that files they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched file must match.

Subresource integrity

Using the integrity attribute on script and link elements enables browsers to verify externally linked files before loading them. The integrity attribute takes a base64-encoded hash prefixed with the corresponding hash algorithm (at present sha256, sha384 or sha512), as shown in the example below.

Integrity attribute as part of the script tag
<script
  src="https://cdnjs.cloudflare.com/ajax/libs/redux/4.0.0/redux.js"
  integrity="sha256-KLkq+W1kKUA6iR5s5Xa/tdzU0yAmXNu7qIGKR/PBoUE="
  crossorigin="anonymous"></script>

Generating SRI Hash

To generate the SRI hash for files that are accessible over a URL, you can use srihash.org or the SRI generator, depending on which hash algorithm you want. If you want to generate it for local files, you can use the OpenSSL command-line tool (which should be part of your Git Bash shell, if you are looking around for it like I did).

openssl dgst -sha256 -binary FILENAME.js | openssl base64 -A

Third-Party Libraries

For third-party libraries (JS and CSS) referenced via CDN, you can grab the script/link element along with the integrity attribute from the CDN sites. Here is an example from cdnjs.

Generate script tag along with SRI Hash

When referencing third-party libraries via a CDN, it's good to fall back to a local copy, so that when the CDN is unreachable or the integrity check fails the page can still load the library. I chose to include the integrity attribute on the fallback copy as well.

<script>
    window.jQuery ||
    document.write('<script src="/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
</script>

Application Specific Files

For application-specific JavaScript files, you need to regenerate the hash every time you modify them. You could look at integrating this with your build pipeline to make it seamless, using the OpenSSL command-line tool shown above to generate the hash during the build.
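
If your build already runs .NET, the same hash can be computed in a few lines of C# instead of shelling out to OpenSSL. A sketch, not from the original post:

Compute an SRI hash in C# (sketch)
using System;
using System.IO;
using System.Security.Cryptography;

public static class SriHash
{
    // Returns a value suitable for the integrity attribute, e.g. "sha256-KLkq+...".
    public static string ForFile(string path)
    {
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return "sha256-" + Convert.ToBase64String(sha256.ComputeHash(stream));
        }
    }
}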

Inline JavaScript

The integrity attribute must not be specified when embedding a module script or when the src attribute is not specified, which means that SRI cannot be used for inline JavaScript. Even though inline JavaScript should be avoided, there are still scenarios where you might use it or have dynamically generated JavaScript. In these cases, we can use the nonce attribute on the script tag and whitelist that nonce in the CSP headers.

nonce-<base64-value>
A whitelist for specific inline scripts using a cryptographic nonce (number used once). The server must generate a unique nonce value each time it transmits a policy. It is critical to provide an unguessable nonce, as bypassing a resource’s policy is otherwise trivial. See unsafe inline script for example. Specifying nonce makes a modern browser ignore ‘unsafe-inline’ which could still be set for older browsers without nonce support.

For the jQuery fallback above, we need a nonce attribute since it is loaded inline.

Nonce attribute
<script nonce="anF1ZXJ5ZmFsbGJhY2s=">
    window.jQuery ||
    document.write('<script src="/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
</script>

We can then specify this nonce on the CSP headers for the script-src. The nonce value can be anything that is base64 encoded.

Web.config CSP header
<add
  name="Content-Security-Policy"
  value="default-src 'self';script-src c.disquscdn.com 'self' 'nonce-anF1ZXJ5ZmFsbGJhY2s=' 'nonce-ZGlzcXVzc2NyaXB0'; />

Using a nonce allows us to get away with having an inline script. However, this should be avoided if possible. As you may have noticed, having a nonce on the attribute does not validate the script contents of the associated tag; the browser executes anything within that tag. So if you have dynamic content within the script block, this can be used against you by attackers. Use it only if it's absolutely necessary. That said, having the nonce attribute for those cases is better, as it limits inline JavaScript to those specific script tags.

Browser Support

Check whether your browser supports Subresource Integrity. Compared to a while back, most browsers now support SRI.

SRI Browser Support

Using SRI, we can make sure that the dependencies we load are as expected and have not been modified in flight or at the source by a malicious attacker. There is always a risk you need to be willing to take when including external dependencies, as they could already have a threat embedded at the time of hash generation. For popular libraries this is less likely. For the less popular ones, it's always a good idea to take a quick look at the code to ensure it's not malicious. Using tools to assist you with this is also a good idea, which we will look into in a separate article.

I was setting up an API at one of my clients recently and found that they currently allow any origin to hit their API by setting the CorsOptions.AllowAll option. In this post, we will look at how to set the CORS options and restrict access to only the domains that you want your API to be accessed from.

What is Cross-Origin Resource Sharing (CORS)

Cross-Origin Resource Sharing is a way to relax the browser's Same-Origin Policy: it tells a browser to let a web application running at one origin (domain) access selected resources from a server at a different origin. By specifying the CORS headers, you instruct the browser to allow the listed domains to access your resource. Most of the time, for API endpoints, you want to be explicit about the hosts that can access your API. By setting CORS, you are only restricting/allowing cross-domain access originating from a browser. Setting CORS should not be mistaken for a security feature that restricts access from any other source. Requests formed outside of the browser, using Postman, Fiddler, etc., can still reach your API, and you need appropriate authentication/authorization to make sure you are not exposing data to unintended people.
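
At the wire level, the browser sends an Origin header on the cross-origin request and the server answers with an Access-Control-Allow-Origin header. A simplified exchange (hostnames are examples only):

GET https://api.example.com/orders HTTP/1.1
Origin: https://app.example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com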

Cross-Origin Request

Enabling in Web API

In Web API there are multiple ways that you can set CORS.

In the snippet below, I am using the Microsoft.Owin.Cors pipeline to set up CORS for the API. The code first reads the application configuration file to get a list of semicolon (;) separated hostnames, which are added to the list of allowed origins in the CorsPolicy. By passing the corsOptions to the UseCors extension method, the policy gets applied to all requests coming through the website.

var allowedOriginsConfig = ConfigurationManager.AppSettings["origins"];
var allowedOrigins = allowedOriginsConfig
    .Split(new[] { ";" }, StringSplitOptions.RemoveEmptyEntries);

var corsPolicy = new CorsPolicy()
{
    AllowAnyHeader = true,
    AllowAnyMethod = true,
    SupportsCredentials = true
};
foreach (var origin in allowedOrigins)
    corsPolicy.Origins.Add(origin);

var policyProvider = new CorsPolicyProvider()
{
    PolicyResolver = (context) => Task.FromResult(corsPolicy)
};
var corsOptions = new CorsOptions()
{
    PolicyProvider = policyProvider
};

app.UseCors(corsOptions);

Setting Multiple CORS Policy

If you want different CORS policies for different controllers/route paths, you can use the Map function to set up CorsOptions for specific route paths. In the example below, we apply a different CorsOptions to all routes that match '/api/SpecificController' and default to another for all other requests.

app.Map(
    "/api/SpecificController",
    (appbuilder) => appbuilder.UseCors(corsOptions2));
...
app.UseCors(corsOptions1);

CORS ≠ Security

CORS is a way to relax the Same-Origin Policy and should in no way be seen as a security feature. By setting CORS headers, what we are saying is that the additional domains listed in the headers are also able to access the resource from a browser environment. However, setting this does not restrict access to your APIs from other sources like Postman, Fiddler or any non-browser environment. Even within browser environments, older versions of Flash allowed modifying and spoofing request headers. Ensure that you are using CORS for the correct reasons and do not assume that it provides security against unauthorized access.

Hope this helps you set up CORS on your APIs!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use the Content Security Policy (CSP) header and the issues it solves.

Content Security Policy (CSP) is a security response header (or a meta element) that tells the browser which sources of content it should trust for our website. A browser that supports CSP treats this list as a whitelist and only allows resources to be loaded from those sources. CSP allows you to specify source locations for a variety of resource types, referred to as fetch directives (e.g. script-src, img-src, style-src, etc.).

Content Security Policy

CSP is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware.

Example
Content-Security-Policy: default-src 'self' *.rahulpnath.com

Setting CSP Headers

Web Server Configuration

CSP’s can be set via the configuration file of your web server host if you want to specify it as part of the header. In my case I use Azure Web App, so all I need to do is add in a web.config file to my root with the header values. Below is an example which specified CSP headers (including Report Only) and STS headers.

Web.config Sample
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="upgrade-insecure-requests;"/>
        <add name="Content-Security-Policy-Report-Only" value="default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly" />
        <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains; preload"/>
      </customHeaders>
    </httpProtocol>
    ...

Using Fiddler

However, if all you want is to play around with the CSP header and you don't have access to your web server or the configuration file, you can still test these headers by injecting them into the response using a web proxy like Fiddler.

To modify the request/response in flight, you can use one of the most powerful features in Fiddler - FiddlerScript.

Fiddler Script allows you to enhance Fiddler’s UI, add new features, and modify requests and responses “on the fly” to introduce any behavior you’d like.

Using the script below, we can inject the 'Content-Security-Policy' header whenever the request matches specific criteria.

Fiddler Script to update CSP

Fiddler Script - Inject CSP Header
if (oSession.HostnameIs("rahulpnath.com")) {
  oSession.oResponse.headers["Content-Security-Policy"] =
    "default-src 'none'; img-src 'self';script-src 'self';style-src 'self'";
}

By injecting these headers, we can play around with the CSP headers for the website without affecting other users. Once you have the CSP rules that suit your site, you can commit them to the actual website. Even with all the CSP headers set, you can additionally set the report-to (or the deprecated report-uri) directive on the policy to capture any policies that you may have missed.

Content-Security-Policy-Report-Only

The Content-Security-Policy-Report-Only header allows you to test the header settings without any impact and to capture any CSP violations that you might have missed on your website. The browser uses this for reporting purposes only and does not enforce the policies. We can specify a report endpoint to which the browser sends any CSP violations as a JSON object.

Below is an example of a CSP violation POST request sent from the browser to the report URL that I had specified for this blog. I am using an endpoint from the Report URI service (more on this later).

Example
POST https://rahulpnath.report-uri.com/r/d/csp/reportOnly HTTP/1.1
{
    "csp-report": {
        "document-uri": "https://www.rahulpnath.com/",
        "referrer": "",
        "violated-directive": "img-src",
        "effective-directive": "img-src",
        "original-policy": "default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly",
        "disposition": "report",
        "blocked-uri": "https://www.rahulpnath.com/apple-touch-icon-120x120.png",
        "line-number": 29,
        "source-file": "https://www.rahulpnath.com/",
        "status-code": 0,
        "script-sample": ""
    }
}

Generating CSP Policies

Coming up with the CSP policies for your site can be a bit tricky, as there are a lot of options and directives involved, and your site might be pulling in dependencies from a variety of sources. Setting CSP policies is also an excellent time to review your application dependencies and manage them correctly, for example a JavaScript file pulled from an untrusted source. There are a few ways to go about generating CSP policies; below are two that I found useful and easy to get started with.

Using Fiddler

The CSP Fiddler Extension is a Fiddler extension that helps you produce a strong CSP for a web page (or website). Install the extension, and with Fiddler running, navigate to your web pages using a browser that supports CSP.

The extension adds mock Content-Security-Policy-Report-Only headers to the server's responses and uses the report-uri https://fiddlercsp.deletethis.net/unsafe-inline. The extension then listens on the specified report-uri and generates a CSP based on the gathered information.

Fiddler CSP Rule Collector

Using Report URI

ReportURI is a real-time security reporting tool that can be used to collect various metrics about your website. One of the features it provides is a nice little wizard interface for creating your CSP headers. Pricing is usage based, and the first 10,000 reports per month are free (which is what I am using for this blog).

ReportURI gives a dashboard summarizing the various stats of your site and also provides features to explore these in detail.

Report Uri Dashboard

One of the cool features is the CSP Wizard which, as the name suggests, provides a wizard-like UI to build out the CSP for your site. The website needs to be configured to report CSP errors to a specific endpoint on your ReportURI account (as shown below). The header value can be set either on the CSP header or the Report Only header.

You can find your report URL from the Setup tab on Report URI. Make sure you use the URL under the options Report Type: CSP and Report Disposition: Wizard

Content-Security-Policy-Report-Only: default-src 'none';report-uri https://<subdomain>.report-uri.com/r/d/csp/wizard

Once it's all configured and reports start coming in, you can use the wizard to pick and choose which sources you need to whitelist for your website. You might see a lot of unwanted sources and entries in the wizard, as it just reflects what is reported to it; you need to filter it manually and build the list.

Once you have the CSP’s set you can check out if your site does the Harlem Shake by pressing F12 and running the below script. Though this is not any sort of test, it is a fun exercise to do.

Copy-pasting scripts from an unknown source is not at all recommended and is one of the most powerful ways for an attacker to get access to your account. Having a well-defined CSP prevents such script attacks on your sites. Don't be surprised if your banking site also shakes to the tune of the script below.

That said, do give the script below a try! I did go through the code pasted below, and it is not malicious; all it does is modify your DOM elements and play some music. The original source is linked below, but I do not control it, and it could have changed since the time of writing.

Harlem Shake - F12 on Browser tab and run below script (Check your Volume)
//Source: http://pastebin.com/aJna4paJ
javascript:(function(){function c(){var e=document.createElement("link");e.setAttribute("type","text/css");
e.setAttribute("rel","stylesheet");e.setAttribute("href",f);e.setAttribute("class",l);
document.body.appendChild(e)}function h(){var e=document.getElementsByClassName(l);
for(var t=0;t<e.length;t++){document.body.removeChild(e[t])}}function p(){var e=document.createElement("div");
e.setAttribute("class",a);document.body.appendChild(e);setTimeout(function(){document.body.removeChild(e)},100)}
function d(e){return{height:e.offsetHeight,width:e.offsetWidth}}function v(i){var s=d(i);
return s.height>e&&s.height<n&&s.width>t&&s.width<r}function m(e){var t=e;var n=0;
while(!!t){n+=t.offsetTop;t=t.offsetParent}return n}function g(){var e=document.documentElement;
if(!!window.innerWidth){return window.innerHeight}else if(e&&!isNaN(e.clientHeight)){return e.clientHeight}return 0}
function y(){if(window.pageYOffset){return window.pageYOffset}return Math.max(document.documentElement.scrollTop,document.body.scrollTop)}
function E(e){var t=m(e);return t>=w&&t<=b+w}function S(){var e=document.createElement("audio");e.setAttribute("class",l);
e.src=i;e.loop=false;e.addEventListener("canplay",function(){setTimeout(function(){x(k)},500);
setTimeout(function(){N();p();for(var e=0;e<O.length;e++){T(O[e])}},15500)},true);
e.addEventListener("ended",function(){N();h()},true);
e.innerHTML=" <p>If you are reading this, it is because your browser does not support the audio element. We recommend that you get a new browser.</p> <p>";
document.body.appendChild(e);e.play()}function x(e){e.className+=" "+s+" "+o}
function T(e){e.className+=" "+s+" "+u[Math.floor(Math.random()*u.length)]}function N(){var e=document.getElementsByClassName(s);
var t=new RegExp("\\b"+s+"\\b");for(var n=0;n<e.length;){e[n].className=e[n].className.replace(t,"")}}var e=30;var t=30;
var n=350;var r=350;var i="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake.mp3";var s="mw-harlem_shake_me";
var o="im_first";var u=["im_drunk","im_baked","im_trippin","im_blown"];var a="mw-strobe_light";
var f="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake-style.css";var l="mw_added_css";var b=g();var w=y();
var C=document.getElementsByTagName("*");var k=null;for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){if(E(A)){k=A;break}}}
if(A===null){console.warn("Could not find a node of the right size. Please try a different page.");return}c();S();
var O=[];for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){O.push(A)}}})()

I am still playing around with the CSP headers for this blog and am currently testing them using the Report Only header along with ReportURI. Hope this helps you start putting the correct CSP headers on your site as well!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use HSTS security header and the issues it solves.

When you enter a domain name in the browser without specifying the protocol (HTTP or HTTPS), the browser by default sends the first request over HTTP. A server that supports only HTTPS redirects such a request to HTTPS, responding to the client with a redirect (301/302), after which the browser starts requesting over HTTPS. As you can see, the very first request that the client makes is over an insecure channel (HTTP) and so is vulnerable to attacks. You could be prone to a man-in-the-middle (MITM) attack, and someone could spoof that request and point you to a different site, inject malicious scripts, etc. This first insecure HTTP request is made every time you enter the domain name in the browser or make an explicit call over HTTP.

Trust on First Use

The HTTP Strict-Transport-Security response header (often abbreviated as HSTS) lets a website tell browsers that it should only be accessed using HTTPS, instead of using HTTP

By using the HTTP Strict Transport Security (HSTS) header on your response headers, you are instructing the browser to make calls over HTTPS instead of HTTP for your site.

Syntax
Strict-Transport-Security: max-age=<expire-time>
Strict-Transport-Security: max-age=<expire-time>; includeSubDomains
Strict-Transport-Security: max-age=<expire-time>; preload

There are a few directives that you can set on the header which determine how the browser uses it. By setting the header with the (required) max-age directive, you tell the browser the time in seconds for which it should remember that the site is only to be accessed using HTTPS. By default, the setting affects only the current (sub)domain. Additionally, you can set the includeSubDomains directive to apply the rule to all subdomains of the site. Before including all subdomains, make sure they are served over HTTPS as well, so that you do not end up blocking your other sites on the same domain (if any).

With the HSTS header specified, the browser now makes only one insecure request: the one it makes every time the cache expires, or the very first request. Once it has established a successful connection with the server, all further requests are over HTTPS for the max-age (cache expiry) set. With the HSTS header, the attack surface gets reduced to just one request, compared to all initial requests going over HTTP when we did not have the HSTS header.

To verify that the HSTS header has been applied for your website, open your browser in Incognito/InPrivate browsing mode. This makes sure the browser acts as if it is seeing the site for the very first time (as the HSTS cache does not get shared between regular and incognito sessions).

The HSTS header settings do not get shared between regular and incognito browsing sessions (at least in Chrome, and I think this is the same for other browsers as well).

Open the Developer Tools window and monitor the network requests made by the browser. Request your website over HTTP (either explicitly or by just entering the domain name), in this case my blog, http://rahulpnath.com. As you can see, the very first request goes over HTTP and the server returns a 301 Moved Permanently status pointing to the HTTPS version of the site. For any subsequent requests over HTTP, the browser returns a 307 Internal Redirect. This redirect happens within the boundary of your browser and redirects to the HTTPS site. You can use Fiddler to verify that this request does not cross the browser boundary (the request does not get to Fiddler).
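
Roughly, the exchange looks like this (illustrative only; the 307 response is synthesized by the browser and never leaves it):

GET http://rahulpnath.com/ HTTP/1.1

HTTP/1.1 301 Moved Permanently
Location: https://rahulpnath.com/
Strict-Transport-Security: max-age=31536000

GET http://rahulpnath.com/ HTTP/1.1

HTTP/1.1 307 Internal Redirect
Location: https://rahulpnath.com/
Non-Authoritative-Reason: HSTS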

HSTS without preload

We could still argue that there is a potential threat with the very first request sent over HTTP, which is still vulnerable to a MITM attack. To solve that, we can use the preload directive and submit our domain to the HSTS preload list, which, when successfully added, propagates into the source code of browsers.

Most major browsers (Chrome, Firefox, Opera, Safari, IE 11 and Edge) also have HSTS preload lists based on the Chrome list.

The browsers hardcode the domains from the approved preload list into their source code (e.g., here is Chrome's list), and it gets shipped with their releases. You can check for a preloaded site in the browser as well. For Chrome, navigate to chrome://net-internals/#hsts and query for the domain.

HSTS preloaded site hardcoded

If STS is not set at all, or you have not made the very first request to the server (when preload is false), querying for the domain returns 'Not Found'. Below are two variations that you can see depending on whether you have preload set or not. The dynamic_* entries indicate that STS was set after the first load of the site, and the static_* entries indicate that it is set from the preload list.

If you are wondering why this blog does not have the static_* entries set, it is because the preload list that it is part of has not yet made it into a stable version of Chrome. However, the preload site does show that it is currently preloaded (probably in a beta version at the time of writing).

Verifying HSTS preload

With preload set and your domain hardcoded into the preload list shipped with the browser version you are on, any request made over HTTP is redirected internally (307) to HTTPS without even going to the server. This means we have entirely got rid of the first untrusted HTTP request.

HSTS preload request flow

Have you already got HSTS set on your site?