I was recently setting up an API at a client's place and found that they allow any origin to hit their API by using the CorsOptions.AllowAll option. In this post, we will look at how to configure CORS and restrict it to only the domains from which you want your API to be accessed.

What is Cross-Origin Resource Sharing (CORS)?

Cross-Origin Resource Sharing is a mechanism that relaxes the browser's Same-Origin Policy: it tells the browser to let a web application running at one origin (domain) access selected resources from a server at a different origin. By specifying the CORS headers, you instruct the browser which domains are allowed to access your resource. Most of the time, for API endpoints, you want to be explicit about the hosts that can access your API. Keep in mind that CORS only restricts/allows cross-domain access originating from a browser. Setting CORS should not be mistaken for a security feature that restricts access from any other source. Requests formed outside of the browser, using Postman, Fiddler, etc., can still make it to your API, and you need appropriate authentication/authorization to make sure you are not exposing data to unintended people.
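To illustrate (the hostnames here are hypothetical), a cross-origin request from a browser carries an Origin header, and the server opts in by returning an allowed origin in the response:

Example

GET https://api.example.com/api/values HTTP/1.1
Origin: https://app.example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com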

Cross-Origin Request

Enabling in Web API

In Web API there are multiple ways that you can set CORS.

In the below snippet, I am using the Microsoft.Owin.Cors pipeline to set up CORS for the API. The code first reads the application configuration file to get a list of semicolon (;) separated hostnames, which are added to the list of allowed origins in the CorsPolicy. By passing the corsOptions to the UseCors extension method, the policy gets applied to all requests coming through the website.

var allowedOriginsConfig = ConfigurationManager.AppSettings["origins"];
var allowedOrigins = allowedOriginsConfig
    .Split(new[] { ";" }, StringSplitOptions.RemoveEmptyEntries);

var corsPolicy = new CorsPolicy()
{
    AllowAnyHeader = true,
    AllowAnyMethod = true,
    SupportsCredentials = true
};
foreach (var origin in allowedOrigins)
    corsPolicy.Origins.Add(origin);

var policyProvider = new CorsPolicyProvider()
{
    PolicyResolver = (context) => Task.FromResult(corsPolicy)
};
var corsOptions = new CorsOptions()
{
    PolicyProvider = policyProvider
};

app.UseCors(corsOptions);
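For completeness, below is a minimal sketch of the appSettings entry that the code above reads; the hostnames are hypothetical:

Web.config

<appSettings>
  <add key="origins" value="https://app1.example.com;https://app2.example.com" />
</appSettings>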

Setting Multiple CORS Policies

If you want to have different CORS policies based on different controllers/route paths, you can use the Map function to set up the CorsOptions for specific route paths. In the below example, we apply a different CorsOptions to all routes that match '/api/SpecificController' and default to another for all other requests.

app.Map(
    "/api/SpecificController",
    (appbuilder) => appbuilder.UseCors(corsOptions2));
...
app.UseCors(corsOptions1);

CORS ≠ Security

CORS is a way to relax the Same-Origin Policy and in no way should be seen as a security feature. By setting CORS headers, we are only telling the browser to allow the listed domains to access the resource from a browser environment. Setting this does not restrict access to your APIs from other sources like Postman, Fiddler, or any non-browser environment. Even within browser environments, older versions of Flash allowed modifying and spoofing request headers. Ensure that you are using CORS for the correct reasons and do not assume that it provides security against unauthorized access.

Hope this helps you set up CORS on your APIs!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use the Content Security Policy (CSP) header and the issues it solves.

Content Security Policy (CSP) is a security response header or a <meta> element that tells the browser which sources of content it should trust for our website. A browser that supports CSP treats the specified list as a whitelist and loads resources only from those sources. CSP allows you to specify source locations for a variety of resource types through what are referred to as fetch directives (e.g., script-src, img-src, style-src, etc.).
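For example, the same policy shown as a header further below can also be delivered in the page markup via the <meta> element:

Example

<meta http-equiv="Content-Security-Policy" content="default-src 'self' *.rahulpnath.com">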

Content Security Policy

CSP is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware.

Example
Content-Security-Policy: default-src 'self' *.rahulpnath.com

Setting CSP Headers

Web Server Configuration

CSP can be set via the configuration file of your web server host if you want to specify it as part of the response headers. In my case, I use Azure Web App, so all I need to do is add a web.config file to my root with the header values. Below is an example which specifies CSP headers (including Report Only) and the STS header.

Web.config Sample
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="upgrade-insecure-requests;"/>
        <add name="Content-Security-Policy-Report-Only" value="default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly" />
        <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains; preload"/>
      </customHeaders>
    </httpProtocol>
    ...

Using Fiddler

However, if all you want is to play around with the CSP header and you don't have access to your web server or the configuration file, you can still test these headers. You can inject the headers into the response using a web proxy like Fiddler.

To modify the request/response in-flight, you can use one of the most powerful features in Fiddler - Fiddler Script.

Fiddler Script allows you to enhance Fiddler’s UI, add new features, and modify requests and responses “on the fly” to introduce any behavior you’d like.

Using the below script, we can inject the 'Content-Security-Policy' header whenever the request matches specific criteria.

Fiddler Script to update CSP

Fiddler Script - Inject CSP Header
if (oSession.HostnameIs("rahulpnath.com")) {
  oSession.oResponse.headers["Content-Security-Policy"] =
    "default-src 'none'; img-src 'self';script-src 'self';style-src 'self'";
}

By injecting these headers, we can play around with the CSP headers for the website without affecting other users. Once you have the CSP rules that cater to your site, you can commit them to the actual website. Even with all the CSP headers set, you can additionally set the report-to (or the deprecated report-uri) directive on the policy to capture any violations for sources that you may have missed.

Content-Security-Policy-Report-Only

The Content-Security-Policy-Report-Only header allows you to test the policy without any impact and to capture any violations for sources that you might have missed on your website. The browser uses this for reporting purposes only and does not enforce the policies. We can specify a report endpoint to which the browser will send any CSP violations as a JSON object.

Below is an example of a CSP violation POST request sent from the browser to the report URL that I had specified for this blog. I am using an endpoint from the Report URI service (more on this later).

Example
POST https://rahulpnath.report-uri.com/r/d/csp/reportOnly HTTP/1.1
{
    "csp-report": {
        "document-uri": "https://www.rahulpnath.com/",
        "referrer": "",
        "violated-directive": "img-src",
        "effective-directive": "img-src",
        "original-policy": "default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly",
        "disposition": "report",
        "blocked-uri": "https://www.rahulpnath.com/apple-touch-icon-120x120.png",
        "line-number": 29,
        "source-file": "https://www.rahulpnath.com/",
        "status-code": 0,
        "script-sample": ""
    }
}

Generating CSP Policies

Coming up with the CSP policies for your site can be a bit tricky, as there are a lot of options and directives involved. Your site might also be pulling in dependencies from a variety of sources. Setting CSP policies is also an excellent time to review your application dependencies and manage them correctly - for example, removing a JavaScript file pulled in from an untrusted source. There are a few ways by which you can go about generating CSP policies. Below are two ways I found useful and easy to get started with.

Using Fiddler

The CSP Fiddler Extension is a Fiddler extension that helps you produce a strong CSP for a web page (or website). Install the extension and, with Fiddler running, navigate to your web pages using a browser that supports CSP.

The extension adds mock Content-Security-Policy-Report-Only headers to servers' responses and uses the report-uri https://fiddlercsp.deletethis.net/unsafe-inline. The extension then listens to the specified report-uri and generates a CSP based on the gathered information.

Fiddler CSP Rule Collector

Using Report URI

Report URI is a real-time security reporting tool which can be used to collect various metrics about your website. One of the features it provides is a nice little wizard interface for creating your CSP headers. Pricing is usage-based, and the first 10,000 reports of the month are free (which is what I am using for this blog).

ReportURI gives a dashboard summarizing the various stats of your site and also provides features to explore these in detail.

Report Uri Dashboard

One of the cool features is the CSP Wizard which, as the name suggests, provides a wizard-like UI to build out CSPs for the site. Your website needs to be configured to report CSP violations to a specific endpoint on your Report URI account (as shown below). The header value can be set either on the CSP header or the Report Only header.

You can find your report URL from the Setup tab on Report URI. Make sure you use the URL under the options Report Type: CSP and Report Disposition: Wizard.

Content-Security-Policy-Report-Only: default-src 'none';report-uri https://<subdomain>.report-uri.com/r/d/csp/wizard

Once it is all configured and reports start coming in, you can use the Wizard to pick and choose which sources you need to whitelist for your website. You might see a lot of unwanted sources and entries in the wizard, as it just reflects whatever is reported to it. You need to filter them out manually and build the list.

Once you have the CSPs set, you can check whether your site does the Harlem Shake by pressing F12 and running the below script. Though this is not any sort of test, it is a fun exercise to do.

Copy-pasting scripts from unknown sources is not at all recommended and is one of the most powerful ways that an attacker can get access to your account. Having a well-defined CSP prevents such script attacks on your sites as well. Don't be surprised if your banking site also shakes to the tune of the script below.

That said, do give the below script a try! I did go through the code pasted below, and it is not malicious. All it does is modify your DOM elements and play some music. The original source is linked below, but I do not control it, and it could have changed since the time of writing.

Harlem Shake - F12 on Browser tab and run below script (Check your Volume)
//Source: http://pastebin.com/aJna4paJ
javascript:(function(){function c(){var e=document.createElement("link");e.setAttribute("type","text/css");
e.setAttribute("rel","stylesheet");e.setAttribute("href",f);e.setAttribute("class",l);
document.body.appendChild(e)}function h(){var e=document.getElementsByClassName(l);
for(var t=0;t<e.length;t++){document.body.removeChild(e[t])}}function p(){var e=document.createElement("div");
e.setAttribute("class",a);document.body.appendChild(e);setTimeout(function(){document.body.removeChild(e)},100)}
function d(e){return{height:e.offsetHeight,width:e.offsetWidth}}function v(i){var s=d(i);
return s.height>e&&s.height<n&&s.width>t&&s.width<r}function m(e){var t=e;var n=0;
while(!!t){n+=t.offsetTop;t=t.offsetParent}return n}function g(){var e=document.documentElement;
if(!!window.innerWidth){return window.innerHeight}else if(e&&!isNaN(e.clientHeight)){return e.clientHeight}return 0}
function y(){if(window.pageYOffset){return window.pageYOffset}return Math.max(document.documentElement.scrollTop,document.body.scrollTop)}
function E(e){var t=m(e);return t>=w&&t<=b+w}function S(){var e=document.createElement("audio");e.setAttribute("class",l);
e.src=i;e.loop=false;e.addEventListener("canplay",function(){setTimeout(function(){x(k)},500);
setTimeout(function(){N();p();for(var e=0;e<O.length;e++){T(O[e])}},15500)},true);
e.addEventListener("ended",function(){N();h()},true);
e.innerHTML=" <p>If you are reading this, it is because your browser does not support the audio element. We recommend that you get a new browser.</p> <p>";
document.body.appendChild(e);e.play()}function x(e){e.className+=" "+s+" "+o}
function T(e){e.className+=" "+s+" "+u[Math.floor(Math.random()*u.length)]}function N(){var e=document.getElementsByClassName(s);
var t=new RegExp("\\b"+s+"\\b");for(var n=0;n<e.length;){e[n].className=e[n].className.replace(t,"")}}var e=30;var t=30;
var n=350;var r=350;var i="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake.mp3";var s="mw-harlem_shake_me";
var o="im_first";var u=["im_drunk","im_baked","im_trippin","im_blown"];var a="mw-strobe_light";
var f="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake-style.css";var l="mw_added_css";var b=g();var w=y();
var C=document.getElementsByTagName("*");var k=null;for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){if(E(A)){k=A;break}}}
if(A===null){console.warn("Could not find a node of the right size. Please try a different page.");return}c();S();
var O=[];for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){O.push(A)}}})()

I am still playing around with the CSP headers for this blog and am currently testing them out using the Report Only header along with Report URI. Hope this helps you start putting the correct CSP headers on your site as well!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use HSTS security header and the issues it solves.

When you enter a domain name in the browser without specifying the protocol (HTTP or HTTPS), the browser by default sends the first request over HTTP. When a server that supports only HTTPS sees such a request, it responds with a redirect (301/302) to HTTPS, from which point on the browser requests over HTTPS. As you can see, the very first request that the client makes is over an insecure channel (HTTP) and so is vulnerable to attacks. You could be prone to a man-in-the-middle (MITM) attack, where someone spoofs that request and points you to a different site, injects malicious scripts, etc. This insecure first HTTP request is made every time you enter the domain name in the browser or make an explicit call over HTTP.

Trust on First Use

The HTTP Strict-Transport-Security response header (often abbreviated as HSTS) lets a website tell browsers that it should only be accessed using HTTPS, instead of using HTTP.

By using the HTTP Strict Transport Security (HSTS) header on your response headers, you are instructing the browser to make calls over HTTPS instead of HTTP for your site.

Syntax
Strict-Transport-Security: max-age=<expire-time>
Strict-Transport-Security: max-age=<expire-time>; includeSubDomains
Strict-Transport-Security: max-age=<expire-time>; preload

There are a few directives that you can set on the header, which determine how the browser uses it. By setting the header with the (required) max-age directive, you tell the browser the time in seconds for which it should remember that the site is only to be accessed using HTTPS. By default, the setting affects only the current domain. Additionally, you can set the includeSubDomains directive to apply the rule to all subdomains of the site. Before including all subdomains, make sure they are served over HTTPS as well, so that you do not end up blocking your other sites on the same domain (if any).

As you can see, with the HSTS header specified, the browser now makes only one insecure request (the one it makes every time the cache expires, or the very first request). Once it has established a successful connection with the server, all further requests go over HTTPS for the max-age (cache expiry) set. With the HSTS header, the attack surface gets reduced to just one request, compared to all initial requests going over HTTP (when we did not have the HSTS header).

To verify that the HSTS header has been applied for your website, open your browser in Incognito/In-Private browsing mode. This makes sure that the browser acts as if it is seeing the site for the very first time (HSTS header caches do not get shared across regular/incognito sessions).

The HSTS header settings do not get shared between the regular and incognito browsing sessions (at least in Chrome, and I think this is the same for other browsers as well).

Open the Developer Tools window and monitor the network requests made by the browser. Request your website over HTTP (either explicitly or by just entering the domain name) - in this case my blog, http://rahulpnath.com. As you can see, the very first request goes over HTTP, and the server returns a 301 Moved Permanently status pointing to the HTTPS version of the site. For any subsequent requests over HTTP, the browser returns a 307 Internal Redirect. This redirect happens within the boundary of your browser and redirects to the HTTPS site. You can use Fiddler to verify that this request does not cross the browser boundary. (The request does not get to Fiddler.)
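Below is a rough sketch of the two cases as they appear in the network trace (response bodies omitted; exact headers vary by browser):

Example

GET http://rahulpnath.com/ HTTP/1.1      <- very first request, goes out over the network
HTTP/1.1 301 Moved Permanently
Location: https://rahulpnath.com/

GET http://rahulpnath.com/ HTTP/1.1      <- subsequent HTTP requests, handled inside the browser
HTTP/1.1 307 Internal Redirect
Location: https://rahulpnath.com/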

HSTS without preload

We could still argue that there is a potential threat with the very first request sent over HTTP, which remains vulnerable to a MITM attack. To solve that, we can use the preload directive and submit our domain to the HSTS Preload list, which, when successfully added, propagates into the source code of browsers.

Most major browsers (Chrome, Firefox, Opera, Safari, IE 11 and Edge) also have HSTS preload lists based on the Chrome list.

The browsers hardcode the domains from the approved preload list into their source code (e.g., here is Chrome's list), and it gets shipped with their releases. You can check for a preloaded site in the browser as well. Again, for Chrome, navigate to chrome://net-internals/#hsts and query for the HSTS domain.

HSTS preloaded site hardcoded

If STS is not set at all, or you have not made the very first request to the server (when preload is false), querying for the domain returns 'Not Found.' Below are the two variations that you can see depending on whether you have preload set or not. The dynamic_* entries indicate that STS was set after the first load of the site, and the static_* entries indicate that it is set from the preload list.

If you are wondering why this blog does not have the static_* entries set, it is because the preload list that it is part of has not yet made it into a stable version of Chrome. However, the preload site does show that it is currently preloaded (probably in a beta version at the time of writing).

Verifying HSTS preload

With preload set, your domain hardcoded into the preload list, and that list available as part of the browser version you are on, any request made over HTTP is redirected internally (307) to HTTPS without even going to the server. It means we have entirely got rid of the first untrusted HTTP request.

HSTS preload request flow

Have you already got HSTS set on your site?

Security is more and more of a concern these days, and we have seen why it is important to move to HTTPS. Hope you have already moved to HTTPS; if not, right now is a perfect time - this is one of the things that you should not be putting off for later. Also, do check out the HTTPS is Easy series by Troy Hunt on how simple it is to get on board with HTTPS. Most of the things mentioned here I started exploring after getting introduced to them by Scott Helme and Troy Hunt at the NDC Security conference.

Security Report Summary from SecurityHeaders.com

Once you have moved to HTTPS, you might be thinking: is that it? Or is there still more that I need to be doing? An excellent place to start is SecurityHeaders.com, which analyses the security headers on your website and provides a rating score for the site. The site also gives a short description of the various headers, with appropriate links to explore more about them. Some of the headers are easy to add and immediately provide added security to your website.

Just because you have an A+ Rating (or a good rating) does not mean that your site is not vulnerable to any attacks. These are just some guidelines to help you along the way to tighten up your website security.

I have been trying to walk the talk here - implementing these headers one by one on this blog as I write this.

In this post, I will walk through some of the headers that I added to this blog. I am planning to write this as a multi-part series, with each article specific to a header/other feature, why I added it, and how I went about adding and verifying it.

I will be updating the links to the relevant posts here as I publish them and will add to the list as and when I come across new topics. One of the tricky things with security is that very often something new comes up. The best we can do is proactively take the steps we can to protect ourselves.

It's always hard to keep young minds engaged and fresh. As a parent, it is good to have a few different options to keep your child occupied while at the same time helping them learn. Activity books are a great way of keeping kids involved and helping them develop the skills they need.

Kumon Books offer a wide variety of books for different age groups and subjects. The books follow a progression and are meant to be used in sequence, helping kids progress by building on previous skills. The workbook chart helps if you are new to Kumon Books. You can also choose by your kid's age if you don't find the chart useful.


The books cover a variety of activities like coloring, cutting paper, folding, pasting stickers, reading, writing, maths, etc. Each book gently introduces a concept and makes your child repeat it over and over again until it becomes clear to them, before moving to advanced skills. Activity books like cutting, folding, pasting, etc., are intended to be used once, but for the others like writing, reading, math, etc., you can get your child to use a pencil and erase it off if you want them to refresh the skills sometimes.

We have got books across most of the activities for Gautham and found them valuable. The books have helped him learn to read, write, cut, paint, etc., and I highly recommend them to others. These books also give you an excellent way to interact with your child and help them learn and grow.

Your local bookstore or online stores should have these books; Google should be of help otherwise. Hope you find them useful!

In the previous post, we explored how to use Postman for testing API endpoints. Postman is an excellent tool for managing API specs as well, so that you can try API requests individually to see how things are working. It also acts as documentation for all your API endpoints and serves as a good starting point for someone new to the team. When it comes to managing the API specs for your application, there are a few options available; let's explore what they are.

Organizing API Specs

Postman supports the concept of Collections, which are nothing but folders to group saved API requests/specs. Collections support nesting, which means you can add folders within a collection to group requests further. As you can see below, MyApplication and Postman Echo are collections, and there are subfolders inside them which in turn contain API requests. The multi-level hierarchy helps you organize your requests the way you want.

Postman Collections

Sharing API Specs

Any Collection that you create in Postman is automatically synced to the Postman Cloud if you are logged in with an account. This allows you to share collections through a link. With the paid version of Postman, you get to create team workspaces, which means a team can collaborate on the shared versions. It allows easy sharing of specs across your team and manages them in a centralized place.

However, if you are not logged in or don't have a paid version of Postman, you can maintain the specs along with your source code. Postman allows you to export Collections and share specs as a JSON file. You can then check this file into your source code repository, and other team members can import the exported file to get the latest specs. The only disadvantage is that you need to make sure to export/import every time you or other team members make a change. However, I have seen this approach work well in teams; one way we made sure that the JSON file was up to date was to make updating the API spec part of the Work Item, which required peer review (through Pull Requests).
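For reference, below is a trimmed sketch of what an exported collection JSON looks like; the request names here are hypothetical, and the exact structure depends on the export format version you choose:

Exported Collection (trimmed)

{
  "info": {
    "name": "MyApplication",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Get Values",
      "request": { "method": "GET", "url": "{{url}}/api/values" }
    }
  ]
}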

Managing Environments

Typically, any application/API would be deployed to multiple environments (like localhost, Development, Testing, Production, etc.), and you would want to switch between these environments seamlessly when testing your API endpoints. Postman makes this easy with the Environments feature.

Postman Environment

Again, as with Collections, Environments are also synced to the Postman Cloud when you are logged in. This makes all your environments seamlessly available to the whole team. However, if you are not logged in, you can again export the environments as a JSON file and share that out of band (in a secure manner, as it might have sensitive information like tokens, keys, etc.) with your team.

Publishing API Specs

Postman allows you to publish API specs (even to a custom URL), which can act as your API documentation. You can publish per environment and also easily execute the requests. Publishing is available only if you are logged in to an account, as it requires the API specs and environment details in the first place.

Postman Published

Security Considerations

When using the sync feature of Postman (logged in to the application with Postman account), it is recommended that you do not have any sensitive information (like passwords/tokens) as part of the API request spec/Collection. These should be extracted out as Environment variables and stored as part of the appropriate environment.

If you are logged in, all the data that you add is automatically synced, which means it will be living on Postman's cloud server. This might not be a desirable option for every company, but it looks like there is no option to turn sync off at the Collection level. The only way to not sync collections is to not log into an account in Postman.

If you are logged into Postman, then any collection that you create is automatically synced to the Postman server. The only way to prevent the sync is not to log in.

We have seen the options by which you can share API collections and environments amongst your team even if you are not logged in. However, one thing to be aware of is that if any of your team members are logged into Postman and import a collection shared via the repository or other out-of-band methods, it will be synced to the Postman server. So at the organization/team level, you would need ways to prevent this from happening if that is essential for you. Best is to design your APIs in such a way that you do not have to expose such sensitive information, which anyway is a better practice.

Hope this helps you manage your API specs better!

A while back, we looked at how we can use Postman to chain multiple requests to speed up our manual API testing. For those who are not familiar with Postman, it is an application that assists in API testing and development, which I see as sitting a level on top of a tool like Fiddler.

In this post, we will see how we can use Postman to test some basic CRUD operations over an API using the Collection Runner feature. Using this still involves some manual intervention; however, we can automate it using a combination of different tools.

Setting Up the API

To start with, I created a simple API endpoint using the out-of-the-box Web API project from Visual Studio 2017. It is a Values controller that stores key-value pairs, to which you can send GET, POST, and DELETE requests. Below is the API implementation. It is a simple in-memory implementation and does not use any persistent store; however, the tests would not change much even if the store were persistent. The focus here is not the implementation of the API, but how you can use Postman to add some quick tests.

ValuesController
public class ValuesController : ApiController
{
    static Dictionary<int, string> values = new Dictionary<int, string>();

    public IEnumerable<string> Get()
    {
        return values.Values;
    }

    public IHttpActionResult Get(int id)
    {
        if (values.ContainsKey(id))
            return Ok(values[id]);

        return NotFound();
    }

    public IHttpActionResult Post(int id, [FromBody]string value)
    {
        values[id] = value;
        return Ok();
    }

    public IHttpActionResult Delete(int id)
    {
        if (!values.ContainsKey(id))
            return NotFound();

        values.Remove(id);
        return Ok();
    }
}

Setting Up Postman

To start with, we will create a new Collection in Postman to hold our tests for the Values controller - I have named it 'Values CRUD - Test'. The collection is a container for all the API requests that we are going to write. First, we will add all the request definitions into Postman, which we can later reorder for the tests.

Postman Request

The variables in the URL are parameters defined as part of the selected Environment. Environments in Postman allow you to switch between different application environments like Development, Test, and Production. You can configure different values for each environment, and Postman will send the requests as per the configuration.

Below are the environment variables for my local environment. You can define as many environments as you want and switch between them.

Postman Environment

Now that I have all the request definitions for the API added, let's add some tests to verify our API functionality.

Writing The First Test

Postman allows executing scripts before and after running API requests. We saw this in the API chaining post, where we grabbed the messageId from the POST request and added it to an environment variable for use in subsequent requests. Similarly, we can add scripts to verify that the API request returns the expected results, status code, etc.

Let's first write a simple test on our GET API request to check that it returns a 200 OK response when called. The below test uses Postman's pm API to assert that the status code of the response is 200. Check the Response Assertion API in test scripts to see the other assertion options available, like pm.response.to.have.status. The tests go under the Tests section, similar to where we wrote the scripts to chain API requests. When executing the API request, the Tests tab shows the successful test run for the particular request.

200 Status Code
pm.test("Status code is 200", function() {
  pm.response.to.have.status(200);
});

Postman Tests
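Similarly, you can assert the negative case. Below is a sketch of a test for the Get by id request, assuming it is sent with an id that has not been added yet:

404 Status Code

pm.test("Status code is 404 when value does not exist", function() {
  pm.response.to.have.status(404);
});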

You can also write a Pre-request Script to set variables or perform any other operation. Below I am setting the Value environment variable to "Test". You could generate a random value here, set a random id, or set an identifier that does not already exist. It's test/application specific, so I leave it to you to decide what works best for you.

Pre-request Script.
pm.environment.set("Value", "Test");

Collection Runner

The Collection Runner allows you to manage multiple API requests and run them as a set. Once completed, it shows a summary of all the tests included within each request and details of the tests that passed/failed in the run. You can target the Runner to run against your environment of choice.

Postman Collection Runner

Running these tests still involves some manual effort of selecting environments and running them. However, using Newman, you can run Postman Collections from the command line, which means they can run even in your build pipeline.
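As a minimal sketch (the file names are hypothetical, assuming you have exported the collection and environment from Postman):

Newman

newman run "Values CRUD - Test.postman_collection.json" -e local.postman_environment.json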

Using Postman, we can quickly test our APIs across multiple environments. The Collection Runner also shows an excellent visual summary of the tests and helps us in API development.

However, I found these tests to violate the DRY principle. You need to repeat the same API request structure if you have to use it in a different context. In the example above, I had to create two Get Value By Id requests: one to test for the value existing and one for when it does not exist. You could use some conditional looping inside the scripts, but that makes your tests complicated and gets you into the loop of how to test your tests.

Postman does allow you to export an API request to the language of your choice. So once you have the basic schema, you can export the requests and write tests that compose them. I find Postman tests and the Runner a quick way to start testing your API endpoints; for more complicated cases, use a stronger programming language. Having the tests in Postman also gives us an API spec in place, which can be useful to play around with the API.

A couple of months back, Gautham (my son) started playing Reading Eggs. We started off with a free trial after it was recommended by our friend Asha. We got a 21-day extended trial in addition to the initial 21-day free trial (i.e., a total of 42 days), which helped a lot in confirming that Gautham would actually use the app. We noticed that he was reading small words quite comfortably, and he grew an interest in reading various things around him. We took an annual subscription, and it feels totally worth it.

Reading Eggs Levels

Using the five essential keys to reading success, the program unlocks all aspects of learning to read for your child.

  • The lessons use colourful animation, fun characters, songs, and rewards to keep children motivated.
  • The program is completely interactive to keep children on task.
  • When children start the program, they can complete a placement quiz to ensure they are starting at the correct reading level.
  • Parents can access detailed progress reports as well as hundreds of full-colour downloadable activity sheets that correspond with the lessons in the program.
  • The program includes over 2000 online books for kids – each ending with a comprehension quiz that assesses your child’s understanding.

Each level explores different letter/word combinations and has a quiz at the end that must be passed to move to the next level. The program unlocks all aspects of learning to read for your child, focusing on a core curriculum of phonics and phonemic awareness, sight words, vocabulary, comprehension, and reading for meaning.

Do give the app a try if you have kids at home!

Over the last weekend, I was playing around with the Visual Studio Connected Services support for Azure Key Vault. The new feature allows seamless integration of ASP.NET web applications with Azure Key Vault, making it as simple as using the ConfigurationManager to retrieve Secrets from the Key Vault - just like you would retrieve them from the config file.

In this post, we will look in detail at the AzureKeyVaultConfigBuilder class that enables the seamless integration provided by Connected Services. As we saw in the previous post, when you add Key Vault as a Connected Service, it modifies the application's configuration file to add in the AzureKeyVaultConfigBuilder references.

Make sure to update the Microsoft.Configuration.ConfigurationBuilders.Azure and Microsoft.Configuration.ConfigurationBuilders.Base NuGet packages to the latest version.

Loading Connection String and App Settings

The AzureKeyVaultConfigBuilder can be specified on both the appSettings and connectionStrings elements using the configBuilders attribute.

Configuration File
 <appSettings configBuilders="AzureKeyVault">
 ...
 </appSettings>
 <connectionStrings configBuilders="AzureKeyVault">
 ...
 </connectionStrings>
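With the builder applied to both sections, the application code stays unchanged. A minimal sketch, assuming a Secret named MySecret and a connection string Secret named MyConnection exist in the Vault:

Reading the values

// Both values are resolved from Key Vault at runtime by the config builder
var mySecret = ConfigurationManager.AppSettings["MySecret"];
var myConnection = ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;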

Accessing Multiple Key Vaults

The configBuilders attribute supports a comma-separated list of builders. Using this feature, we can specify multiple Vaults as a source for our Secrets. Note how we pass 'keyVault1,keyVault2' to the configBuilders attribute below.

Configuration File
<configBuilders>
    <builders>
      <add
        name="keyVault1"
        vaultName="keyVault1"
        type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />

      <add
        name="keyVault2"
        vaultName="keyVault2"
        type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
    </builders>
  </configBuilders>
  <appSettings configBuilders="keyVault1,keyVault2">
  ...
  </appSettings>

If the same key has a value in multiple sources, then the value from the last builder in the list takes precedence. (But I assume you would not need that feature!)

Modes

All config builders support setting a mode, which allows one of three options.

  • Strict - This is the default. In this mode, the config builder will only operate on well-known key/value-centric configuration sections. It will enumerate each key in the section, and if a matching key is found in the external source, it will replace the value in the resulting config section with the value from the external source.

  • Greedy - This mode is closely related to Strict mode, but instead of being limited to keys that already exist in the original configuration, the config builders will dump all key/value pairs from the external source into the resulting config section.

  • Expand - This last mode operates on the raw XML before it gets parsed into a config section object. It can be thought of as a simple expansion of tokens in a string. Any part of the raw XML string that matches the pattern ${token} is a candidate for token expansion. If no corresponding value is found in the external source, then the token is left alone.

In short, when set to Strict, it matches the names in the configuration file to Secrets in the configured Vaults; if it does not find a corresponding Secret, it ignores that key. When set to Greedy, irrespective of what keys are in the configuration file, it makes all the Secrets in the specified Vaults available via configuration. This, to me, sounds like magic and is not something I would prefer in an application that I build.
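As a minimal sketch of Expand mode (assuming a Secret named MySecret exists in the Vault), the ${token} pattern in the raw XML is replaced before the section gets parsed:

Configuration File

<appSettings configBuilders="AzureKeyVault">
  <add key="MySetting" value="${MySecret}" />
</appSettings>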

Greedy Mode Filtering and Formatting Secrets

When using Greedy mode, we can filter the list of keys that are made available by using the prefix option. Only Secret names starting with the prefix are made available in the configuration; the other Secrets are ignored. This feature can be used in conjunction with the stripPrefix option. When stripPrefix is set to true (it defaults to false), the Secret is made available in the configuration after stripping off the prefix.

For example, if we have a Secret with the name connectionString-MyConnection, having the below configuration will add the connection string with the name MyConnection.

Configuration File
<add
  name="keyVault1"
  vaultName="keyVault1"
  prefix="connectionString-"
  stripPrefix="true"
  type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
var connectionString = ConfigurationManager.ConnectionStrings["MyConnection"];

Use prefix and stripPrefix in conjunction with the Greedy mode. For keys mentioned in the config, it will try to match them with the prefix appended to the key name.

Preloading Secrets

By default, the Key Vault config builder is set to preload the available Secrets in the Key Vault. By doing this, the config builder knows the list of configuration values that the Key Vault can resolve. For preloading the Secrets, the config builder uses the List call on Secrets. If you don't have List access on Secrets, you can turn this feature off using the preloadSecretNames configuration option. At the time of writing, the config builder version (1.0.1) throws an exception when preloading Secrets is turned on and the List policy is not available on the Vault. I have raised a PR to fix this issue, which, if accepted, would no longer throw the exception and would invalidate this configuration option.

Configuration File
<builders>
    <add
      name="keyVault1"
      preloadSecretNames="false"
      vaultName="keyVault1"
      type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral" />
</builders>

Authentication Modes

The connectionString attribute allows you to specify the authentication mechanism to use with Key Vault. By default, when using the Connected Service to create the Key Vault, it adds the Visual Studio user to the access policies of the Key Vault and uses the same identity when connecting. But this does not help in a large team scenario. Most likely, the Vault will be created under your organization's subscription, and you might want to share the same Vault between all developers in the team. You could add the users individually and give them the appropriate access policies, but this might soon become cumbersome for a large team. Instead, using Client Id/Secret or Certificate authentication along with the Managed Service Identity configuration for localhost works best. The configuration provider will then use the AzureServicesAuthConnectionString value from the environment variable to connect to the Key Vault.

Local System
Set AzureServicesAuthConnectionString Environment variable
RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret.
Or
RunAs=App;AppId=AppId;TenantId=TenantId;CertificateThumbprint=Thumbprint;CertificateStoreLocation=CurrentUser

As you can see, the AzureKeyVaultConfigBuilder provides good integration with Key Vault and makes using it seamless. It does have a few issues, especially around handling different Secret versions, which might be fixed in future releases.

PS: At the time of writing, there were a few issues that I found while playing around. You can follow up on the individual issues on GitHub. Fingers crossed - hope at least one of my PRs makes its way to master!

Visual Studio (VS) now supports adding Azure Key Vault as a Connected Service for web projects (ASP.NET Core or any ASP.NET project). Enabling this from Connected Services makes it easier for you to get started with Azure Key Vault. Below are the prerequisites to use the Connected Service feature.

Prerequisites

  • An Azure subscription. If you do not have one, you can sign up for a free account.
  • Visual Studio 2017 version 15.7 with the Web Development workload installed. Download it now.
  • An ASP.NET 4.7.1 or ASP.NET Core 2.0 web project open.

Visual Studio, Azure Key Vault Connected Services

When you select the 'Secure Secrets with Azure Key Vault' option from the list of Connected Services provided, it takes you to a new page within Visual Studio showing the Azure subscription associated with your Visual Studio account and gives you the ability to add a Key Vault to it. VS generates some defaults for the Vault Name, Resource Group, Location, and Pricing Tier, which you can edit as per your requirements. Once you confirm adding the Key Vault, VS provisions the Key Vault with the selected configuration and modifies some things in your project.

Visual Studio, Azure Key Vault Connected Services

In short, VS adds

  • a bunch of NuGet packages to access Azure Key Vault
  • the Key Vault URL details
  • in an ASP.NET Web project, changes to the configuration file to add in the AzureKeyVaultConfigBuilder, as shown below
Web.config
<configuration>
  <configSections>
    <section
      name="configBuilders"
      type="System.Configuration.ConfigurationBuildersSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
      restartOnExternalChanges="false"
      requirePermission="false" />
  </configSections>
  <configBuilders>
    <builders>
      <add
        name="AzureKeyVault"
        vaultName="webapplication-47-dev-kv"
        type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure, Version=1.0.0.0, Culture=neutral"
        vaultUri="https://WebApplication-47-dev-kv.vault.azure.net" />
    </builders>
  </configBuilders>

To start using Azure Key Vault from your application, we first need to add some Secrets to the Key Vault created by Visual Studio. You can add a Secret in multiple ways, the most straightforward being the Azure Portal. Once you add the Secret to the Key Vault, update the configuration file with the Secret names. Below is how you would do it for an ASP.NET web project (the MySecret and VersionedSecret keys).
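If you prefer the command line, below is a sketch using the Azure CLI (assuming it is installed and you are logged in; the secret value here is a placeholder):

Azure CLI

az keyvault secret set --vault-name WebApplication-47-dev-kv --name MySecret --value "my secret value"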

Make sure to add configBuilders="AzureKeyVault" to the appSettings tag. This tells the Configuration Manager to use the configured AzureKeyVaultConfigBuilder.
<appSettings configBuilders="AzureKeyVault">
      <add key="webpages:Version" value="3.0.0.0" />
      <add key="webpages:Enabled" value="false" />
      <add key="ClientValidationEnabled" value="true" />
      <add key="UnobtrusiveJavaScriptEnabled" value="true" />
      <add key="MySecret" value="dummy1"/>
      <add key="VersionedSecret" value="dummy2"/>
</appSettings>

The dummy* values are just placeholders and will be overridden at runtime with the Secret values from the Key Vault. If a Secret with the corresponding name does not exist in the Key Vault, the dummy value will be used.
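Accessing the values in code does not change; for example:

Reading a Secret

// Returns the Secret value from Key Vault at runtime,
// or the placeholder "dummy1" if no matching Secret exists
var mySecret = ConfigurationManager.AppSettings["MySecret"];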

Authentication

When VS creates the Vault, it adds the user logged into VS to the Access Policies list. When running the application, the AzureKeyVaultConfigBuilder uses the same details to authenticate with the Key Vault.

If you are not logged in as the same user, or not logged in at all, the provider will not be able to authenticate with the Key Vault and will fall back to using the dummy values in the configuration file. Alternatively, you could specify one of the connection options available for the AzureServiceTokenProvider.

Visual Studio, Azure Key Vault Connected Services

Secrets and Versioning

The AzureKeyVaultConfigBuilder requests all the Secrets in the Key Vault at application startup using the Secrets endpoint. This call returns all the Secrets in the Key Vault. For each key in the appSettings that matches a Secret in the Vault, a request is made to get the Secret details, which returns the actual Secret value for that key. Below are the traces of the calls going out, captured using Fiddler.

AzureKeyVaultConfigBuilder Fiddler Traces

It looks like, at the moment, the AzureKeyVaultConfigBuilder gets only the latest version of the Secrets. As you can tell from one of my Secret names (VersionedSecret), I created two versions of that Secret, and the config builder picks the latest one. I don't see a way right now to specify a specific Secret version.

The Visual Studio Connected Services feature makes it easy to get started with Azure Key Vault and move your secrets to a more secure store, rather than keeping them around in your configuration files.