At times you might find yourself working in environments with a lot of restrictions on the tools you can use, the processes you need to follow, and so on. Under these circumstances, it is essential to stick to the core principles and practices that we as an industry have adopted, and to keep them in place no matter what restrictions are imposed. Below are a few of the constraints that my team and I faced at one of my clients, and what we did to stay on top of them and still deliver at speed.

Working under Constraints

The issues discussed might or might not immediately relate to you; the important thing is your attitude towards such issues and finding ways around your constraints to keep yourself productive in the long run.

No Build Deploy Pipeline

When I joined the project, it amazed me that we were still building and packaging the application from a local developer machine and manually deploying it to the various environments (Dev, Test, UAT, and PROD).

Whenever a release had to be made, one of the developers would pause their current work, switch to the appropriate branch for the release, make sure they had the latest code base, and build with the correct configuration to generate a package.

This might sound like an outdated practice (as it did to me), but here I am at a client in 2018 and it's still happening. What surprised me even more was that the team did have access to an Octopus server (backed by a Jenkins build server), but since the deployment server did not have access to the UAT/PROD servers, they chose not to use it. You bet this was the first thing I was keen on fixing, as generating a release package from my local system is the last thing I want to do.

After a quick chat with the team, we decided on the following.

  • Set up a build/deploy pipeline up to the Test environment. This would give us seamless integration while developing features and get them out for testing. Since we had access up to the Test environment, it was hardly an hour's work to get it all running.

  • Since we did not have access to UAT/PROD and the process required us to hand over a deployment package to the concerned team, we set up a 'Packaging Project' in Octopus. This project basically unzips the selected build package onto our Dev environment server, applies the configuration transforms, and zips the folder back up into a deployment package. With this, we are now able to create a deployment package for any given build and for any environment. We are also in discussions to give the deployment server access to the UAT/PROD servers so that we can deploy automatically, all the way to production.

The process was no longer dependent on a developer or a developer machine and was completely automated. For those reading this who are in a similar situation but do not have access to a build/deploy system like Jenkins/Octopus, I would set up a simple script that pulls down the source for a given commit hash/branch/TFS label and performs the build and packaging independently of any developer's working directory. The script could run on a shared server (if you have access to one) or, at worst, on a developer's machine/VM. The fundamental thing we are trying to achieve is to decouple packaging from the current working folder on a developer machine and from the manual steps involved in generating a package. As long as you have an automated way to create a package, irrespective of the tools/systems you use, you should be safe and sound.

Out of Sync SQL and Code Artifacts

The application is heavily dependent on stored procedures for pulling data out of and pushing data into the SQL Server database. Yes, you heard it right: stored procedures, and ones with business logic in them, which is what makes it actually worse. Looking at how the stored procedures were maintained, I could see that the team had started off with good intentions using DbUp but soon moved away from it. When I joined, the process was to share SQL artifacts as attachments on the Jira story/bug. The database administrator (DBA) would then pull them out and manage them separately in a source control repository that was not the same as the application code base.

There was not much information on why this was the case, but the primary reason they moved away from DbUp was the lack of visibility into the SQL scripts when running updates, since the output of the DbUp project was an executable with the scripts embedded. Poor development/deployment practices had also led to ad-hoc execution of scripts in environments without updating source control. This soon left the DBA without any control, and the only way to get it back was to maintain the scripts separately.

Again, we had a quick chat with the team and the DBA on how to improve the current process, as it was getting harder to track application package versions and the scripts associated with each package.

  • DbUp by default embeds the SQL artifacts into the executable, which removes all visibility into the actual scripts. However, this behaviour is configurable using script providers: by using the FileSystemScriptProvider, we can specify the folder from which to load the SQL scripts (see the sketch after this list). Configuring MSBuild to copy the folder contents to the output and include them in the final package was an easy change. This provided the DBA with the actual SQL artifacts, and he could review them quickly. We also started a code review process and began including the DBA in any changes related to SQL artifacts. This gave the DBA even more visibility and helped catch issues right at the time of development.

  • With automated build/deploy up to the Test environment in place, we no longer had to make ad-hoc changes to the databases; everything was pushed through source control, as it was faster and easier.
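
To make this concrete, below is a minimal sketch of a DbUp bootstrapper that loads plain SQL files from a folder on disk instead of embedding them. The connection string and folder path are placeholder assumptions for illustration; DbUp's WithScriptsFromFileSystem helper wires up the FileSystemScriptProvider mentioned above.

DbUp upgrader loading scripts from the file system (sketch)
using System;
using DbUp;

class DatabaseMigrator
{
    static int Main()
    {
        // Placeholder connection string and scripts folder; adjust to your own project layout.
        var connectionString = "Server=.;Database=MyAppDb;Trusted_Connection=True;";
        var scriptsPath = @".\Scripts";

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            // Load .sql files from disk rather than from embedded resources,
            // so the DBA can review the exact scripts that ship in the package.
            .WithScriptsFromFileSystem(scriptsPath)
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        return result.Successful ? 0 : -1;
    }
}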

With these few tweaks we were in a much better state, with one-to-one traceability between source code and SQL artifacts. It all lived as part of one package, traceable all the way back to the source code commit tag auto-generated by the build system.

Not Invented Here Syndrome

Given the kind of restrictions you have seen so far, you can guess the approach towards third-party services and off-the-shelf products. Most things are still done in-house (including trying to replicate a service bus). The problem with this approach is that there is a limit to how far you can go before you either lose your team or the code becomes something you can no longer maintain. When starting out on a new project, while the code base is still small, building your own mechanisms might seem to work well. But once past that point, you no longer want to continue down that path; invest in industry-proven tools instead. These include logging servers, service buses/queues (if you need one), and email services (especially if you want to track and gather statistics on the emails sent out).

The biggest challenge in introducing this is mostly not cost (there are plenty of really affordable services for every business); it's mostly the fear of the unknown and a lack of interest in venturing into unfamiliar territory. The reasons might vary for you, but try to understand the core reason hindering the change.

One technique that worked for getting over the fear of the unknown was to introduce changes into the system slowly, one at a time, giving people enough time to get used to each change.

Seq was one of the first things we proposed, and it had been sitting in the wish list for a long time. The team was using Serilog for logging, and all the logs were stored in a SQL table, making them really hard to query and monitor. The infrastructure team did not want to install Seq as it was all new to them, and they were unsure about the additional overhead of managing a Seq instance. So we suggested they have it just on the development server first and get familiar with the application. After a couple of days, the business was seeing the benefit of increased visibility into the logs, and the infrastructure team was happy too. Within a week, they were happy to install one for the Test environment as well. At the time of writing we are looking at getting a Seq instance on the UAT server, with a production instance to follow soon. Getting the interested stakeholders to get a feel for the application and introducing the change slowly is a great way to get buy-in.
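
For reference, pointing an existing Serilog pipeline at Seq is a small configuration change. The sketch below assumes the Serilog.Sinks.Seq package is installed; the Seq URL and the Application property are placeholders you would normally read from configuration per environment.

Serilog configuration writing to Seq (sketch)
using System;
using Serilog;

public static class LoggingSetup
{
    public static void Configure()
    {
        // Placeholder Seq URL; read it from configuration in a real setup.
        Log.Logger = new LoggerConfiguration()
            .Enrich.WithProperty("Application", "MyApplication") // hypothetical property name
            .WriteTo.Seq("http://localhost:5341")
            .CreateLogger();

        // Structured properties become first-class, queryable fields in Seq.
        Log.Information("Application started at {StartedAtUtc}", DateTime.UtcNow);
    }
}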

Now we are trying to push for a service bus!

Build Server without NuGet access

The build server we were using was hosted in-house, and the box it ran on did not have internet access. This meant we could not have any external package dependencies pulled at build time. We chose to commit the packages along with the source code, which is what I tend to prefer anyway. All our third-party libraries were pushed to the source code repository, so the build machine had all the required dependencies and did not need internet connectivity to produce a build.

Those are just a subset of the issues we ran into, and you can bet there were many smaller ones. At times problems are not technical in nature, but are more about communication and how effectively you can get all the people involved to work with each other.

Any journey to advancement is about valuing the people around you, understanding them and taking them along with the change. It’s a journey that the team needs to make together and not a solo one.

Different people have different experiences, pain points, concerns and targets to check off. So as a team you need to understand what works for everyone and come to a collective agreement. Just getting all of the concerned parties into a room and having a healthy discussion (mainly by not being prescriptive but descriptive of the issues that you are facing) solves most of the problems.

Do you work in a similar environment? What challenges do you face at work? Sound off in the comments!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use Subresource Integrity and the issues it solves.

Subresource Integrity (SRI) is a security feature that enables browsers to verify that files they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched file must match.

Subresource integrity

Using the integrity attribute on script and link elements enables browsers to verify externally linked files before loading them. The integrity attribute takes a base64-encoded hash prefixed with the corresponding hash algorithm (at present sha256, sha384, or sha512), as shown in the example below.

Integrity attribute as part of the script tag
<script
  src="https://cdnjs.cloudflare.com/ajax/libs/redux/4.0.0/redux.js"
  integrity="sha256-KLkq+W1kKUA6iR5s5Xa/tdzU0yAmXNu7qIGKR/PBoUE="
  crossorigin="anonymous" />

Generating SRI Hash

To generate the SRI hash for files that are accessible over a URL, you can use srihash.org or srigenerator, depending on which hash algorithm you want. If you need to generate it for local files, you can use the OpenSSL command-line tool (which is part of your Git Bash shell, if you are looking around for it like I did).

openssl dgst -sha256 -binary FILENAME.js | openssl base64 -A

Third-Party Libraries

For third-party libraries (JS and CSS) referenced via a CDN, you can grab the script/link element along with the integrity attribute from the CDN site. Here is an example from cdnjs.

Generate script tag along with SRI Hash

When referring to third-party libraries via CDN, it's good to have a local fallback copy for cases where the CDN is unreachable or the integrity check fails. I chose to include the integrity attribute on the fallback copy as well.

<script>
    window.jQuery ||
    document.write('<script src="/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
</script>

Application Specific Files

For application-specific JavaScript files, you need to regenerate the hash every time you modify them. You could look at integrating this into your build pipeline to make it seamless, using the OpenSSL command-line tool shown above (or a small script, as sketched below) to generate the hash during your application build process.
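
If you would rather generate the hash from code as part of a build step, the small console sketch below does the same as the openssl command shown earlier: it computes the SHA-256 digest of a file and base64-encodes it. The file path argument and default file name are placeholders for whatever bundled file your build produces.

Generating an SRI hash in a build step (sketch)
using System;
using System.IO;
using System.Security.Cryptography;

class SriHashGenerator
{
    static void Main(string[] args)
    {
        // Placeholder default; pass the path of your bundled/minified file as an argument.
        var path = args.Length > 0 ? args[0] : "site.js";

        using (var sha256 = SHA256.Create())
        {
            // Base64 of the raw digest, matching the output of the openssl command above.
            var hash = Convert.ToBase64String(sha256.ComputeHash(File.ReadAllBytes(path)));
            Console.WriteLine($"sha256-{hash}");
        }
    }
}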

Inline JavaScript

The integrity attribute must not be specified when embedding a module script or when the src attribute is not specified. This means that SRI cannot be used for inline JavaScript. Even though inline JavaScript should be avoided, there are still scenarios where you might use it or have dynamically generated JavaScript. In these cases, we can use the nonce attribute on the script tag and whitelist that nonce in the CSP headers.

nonce-<base64-value>
A whitelist for specific inline scripts using a cryptographic nonce (number used once). The server must generate a unique nonce value each time it transmits a policy. It is critical to provide an unguessable nonce, as bypassing a resource’s policy is otherwise trivial. See unsafe inline script for example. Specifying nonce makes a modern browser ignore ‘unsafe-inline’ which could still be set for older browsers without nonce support.

For the jQuery fallback above, we need a nonce attribute since it is loaded inline.

Nonce attribute
<script nonce="anF1ZXJ5ZmFsbGJhY2s=">
    window.jQuery ||
    document.write('<script src="/javascripts/libs/jquery/jquery-2.0.3.min.js" crossorigin="anonymous" integrity="sha256-ruuHogwePywKZ7bI1vHGGs7ScbBLhkNUcSSeRjhSUko=">\x3C/script>')
</script>

We can then specify this nonce in the CSP header for script-src. The nonce value can be anything that is base64 encoded.

Web.config CSP header
<add
  name="Content-Security-Policy"
  value="default-src 'self';script-src c.disquscdn.com 'self' 'nonce-anF1ZXJ5ZmFsbGJhY2s=' 'nonce-ZGlzcXVzc2NyaXB0'; />

Using a nonce allows us to get away with having an inline script. However, this should be avoided if possible. As you may have noticed, having a nonce on the tag does not validate the script contents of that tag; the browser executes whatever is within it. So if you have dynamic content within the script block, attackers can use this to their advantage. Use it only if it's absolutely necessary; in those cases, having the nonce attribute is still better, as it limits inline JavaScript to those specific script tags.

Browser Support

Check if your browser supports Subresource Integrity. Compared to a while back, most browsers now support SRI.

SRI Browser Support

Using SRI, we can make sure that our dependencies are loaded as expected and not modified in flight or at the source by a malicious attacker. There is always a risk you need to be willing to take when including external dependencies, as they could already have a threat embedded at the time of hash generation. For popular libraries this is less likely; for less popular ones, it's always a good idea to take a quick look at the code to ensure it's not malicious. Using some tools to assist you with this is also a good idea, which we will look into in a separate article.

I was setting up an API at one of my clients recently and found that they currently allow any origin to hit their API by setting the CorsOptions.AllowAll option. In this post, we will look at how to set the CORS options and restrict access to only the domains you want your API to be accessed from.

What is Cross-Origin Resource Sharing (CORS)

Cross-Origin Resource Sharing is a way to relax the browser's Same-Origin Policy: it tells a browser to let a web application running at one origin (domain) access selected resources from a server at a different origin. By specifying the CORS headers you instruct the browser which domains are allowed to access your resource. Most of the time, for API endpoints, you want to be explicit about the hosts that can access your API. By setting CORS, you are only restricting/allowing cross-domain access originating from a browser. Setting CORS should not be mistaken for a security feature that restricts access from other sources. Requests formed outside of a browser, using Postman, Fiddler, etc., can still reach your API, and you need appropriate authentication/authorization to make sure you are not exposing data to unintended people.

Cross-Origin Request

Enabling in Web API

In Web API there are multiple ways that you can set CORS.

In the snippet below I am using the Microsoft.Owin.Cors pipeline to set up CORS for the API. The code first reads the application configuration file to get a semicolon-separated (;) list of hostnames, which are added to the list of allowed origins in the CorsPolicy. By passing the corsOptions to the UseCors extension method, the policy gets applied to all requests coming through the site.

var allowedOriginsConfig = ConfigurationManager.AppSettings["origins"];
var allowedOrigins = allowedOriginsConfig
    .Split(new[] { ";" }, StringSplitOptions.RemoveEmptyEntries);

var corsPolicy = new CorsPolicy()
{
    AllowAnyHeader = true,
    AllowAnyMethod = true,
    SupportsCredentials = true
};
foreach (var origin in allowedOrigins)
    corsPolicy.Origins.Add(origin);

var policyProvider = new CorsPolicyProvider()
{
    PolicyResolver = (context) => Task.FromResult(corsPolicy)
};
var corsOptions = new CorsOptions()
{
    PolicyProvider = policyProvider
};

app.UseCors(corsOptions);

Setting Multiple CORS Policies

If you want to have different CORS policies for different controllers/route paths, you can use the Map function to set up CorsOptions for specific route paths. In the example below, we apply one CorsOptions to all routes that match '/api/SpecificController' and default to another for all other requests.

app.Map(
    "/api/SpecificController",
    (appbuilder) => appbuilder.UseCors(corsOptions2));
...
app.UseCors(corsOptions1);

CORS ≠ Security

CORS is a way to relax the Same-Origin Policy and should in no way be seen as a security feature. By setting CORS headers, we are saying that the additional domains in the headers are also allowed to access the resource from a browser environment. However, setting this does not restrict access to your APIs from other sources like Postman, Fiddler, or any non-browser environment. Even within browser environments, older versions of Flash allowed modifying and spoofing request headers. Ensure that you are using CORS for the correct reasons and do not assume that it provides security against unauthorized access.

Hope this helps you set up CORS on your APIs!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use the Content Security Policy (CSP) header and the issues it solves.

Content Security Policy (CSP) is a security response header (or a meta element) that tells the browser which sources of content it should trust for our website. A browser that supports CSP then treats the specified list as a whitelist and only allows resources to be loaded from those sources. CSP lets you specify source locations for a variety of resource types through what are referred to as fetch directives (e.g. script-src, img-src, style-src, etc.).

Content Security Policy

CSP is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware.

Example
Content-Security-Policy: default-src 'self' *.rahulpnath.com

Setting CSP Headers

Web Server Configuration

CSP can be set via the configuration file of your web server host if you want to specify it as part of the response headers. In my case I use an Azure Web App, so all I need to do is add a web.config file to my site root with the header values. Below is an example that specifies CSP headers (including Report-Only) and the STS header.

Web.config Sample
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Security-Policy" value="upgrade-insecure-requests;"/>
        <add name="Content-Security-Policy-Report-Only" value="default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly" />
        <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains; preload"/>
      </customHeaders>
    </httpProtocol>
    ...

Using Fiddler

However, if all you want is to play around with the CSP header and you don't have access to your web server or its configuration file, you can still test these headers. You can inject the headers into the response using a web proxy like Fiddler.

To modify the request/response in flight, you can use one of the most powerful features in Fiddler - Fiddler Script.

Fiddler Script allows you to enhance Fiddler’s UI, add new features, and modify requests and responses “on the fly” to introduce any behavior you’d like.

Using the below script, we can inject the 'Content-Security-Policy' header whenever the request matches specific criteria.

Fiddler Script to update CSP

Fiddler Script - Inject CSP Header
if (oSession.HostnameIs("rahulpnath.com")) {
  oSession.oResponse.headers["Content-Security-Policy"] =
    "default-src 'none'; img-src 'self';script-src 'self';style-src 'self'";
}

By injecting these headers, we can play around with the CSP headers for the website without affecting other users. Once you have CSP rules that cater to your site, you can commit them to the actual website. Even with all the CSP headers set, you can additionally set the report-to (or the deprecated report-uri) directive on the policy to capture any violations from sources you may have missed.

Content-Security-Policy-Report-Only

The Content-Security-Policy-Report-Only header allows you to test the header settings without any impact and to capture any sources you might have missed on your website. The browser uses this for reporting purposes only and does not enforce the policies. We can specify a report endpoint to which the browser will send any CSP violations as a JSON object.

Below is an example of a CSP violation POST request sent from the browser to the report URL that I have specified for this blog. I am using an endpoint from the Report URI service (more on this later).

Example
POST https://rahulpnath.report-uri.com/r/d/csp/reportOnly HTTP/1.1
{
    "csp-report": {
        "document-uri": "https://www.rahulpnath.com/",
        "referrer": "",
        "violated-directive": "img-src",
        "effective-directive": "img-src",
        "original-policy": "default-src 'none';report-uri https://rahulpnath.report-uri.com/r/d/csp/reportOnly",
        "disposition": "report",
        "blocked-uri": "https://www.rahulpnath.com/apple-touch-icon-120x120.png",
        "line-number": 29,
        "source-file": "https://www.rahulpnath.com/",
        "status-code": 0,
        "script-sample": ""
    }
}

Generating CSP Policies

Coming up with CSP policies for your site can be a bit tricky, as there are a lot of options and directives involved. Your site might also be pulling in dependencies from a variety of sources. Setting CSP policies is also an excellent time to review your application dependencies and manage them correctly, for example if you have a JavaScript file from an untrusted source. There are a few ways you can go about generating CSP policies; below are two I found useful and easy to get started with.

Using Fiddler

The CSP Fiddler Extension is a Fiddler extension that helps you produce a strong CSP for a web page (or website). Install the extension and, with Fiddler running, navigate to your web pages using a browser that supports CSP.

The extension adds mock Content-Security-Policy-Report-Only headers to servers' responses and uses the report-uri https://fiddlercsp.deletethis.net/unsafe-inline. The extension then listens on the specified report-uri and generates a CSP based on the gathered information.

Fiddler CSP Rule Collector

Using Report URI

Report URI is a real-time security reporting tool which can be used to collect various metrics about your website. One of the features it provides is a nice little wizard interface for creating your CSP headers. Pricing is usage-based, with the first 10,000 reports of the month free (which is what I am using for this blog).

ReportURI gives a dashboard summarizing the various stats of your site and also provides features to explore these in detail.

Report Uri Dashboard

One of the cool features is the CSP Wizard which, as the name suggests, provides a wizard-like UI to build out the CSP for your site. The website needs to be configured to report CSP violations to a specific endpoint on your Report URI subdomain (as shown below). The value can be set either on the CSP header or on the Report-Only header.

You can find your report URL from the Setup tab on Report URI. Make sure you use the URL under the options Report Type: CSP and Report Disposition: Wizard.

Content-Security-Policy-Report-Only: default-src 'none';report-uri https://<subdomain>.report-uri.com/r/d/csp/wizard

Once everything is configured and reports start coming in, you can use the wizard to pick and choose which sources you need to whitelist for your website. You might see a lot of unwanted sources and entries in the wizard, as it just reflects what is reported to it; you need to filter them out manually and build the list.

Once you have the CSP set, you can check whether your site does the Harlem Shake by pressing F12 and running the below script. Though this is not any sort of test, it is a fun exercise.

Copy-pasting scripts from unknown sources is not at all recommended and is one of the most powerful ways for an attacker to get access to your account. Having a well-defined CSP helps prevent such script attacks on your sites too. Don't be surprised if your banking site also shakes to the tune of the script below.

That said, do give the below script a try! I did go through the code pasted below and it is not malicious; all it does is modify your DOM elements and play some music. The original source is linked below, but I do not control it and it could have changed since the time of writing.

Harlem Shake - F12 on Browser tab and run below script (Check your Volume)
//Source: http://pastebin.com/aJna4paJ
javascript:(function(){function c(){var e=document.createElement("link");e.setAttribute("type","text/css");
e.setAttribute("rel","stylesheet");e.setAttribute("href",f);e.setAttribute("class",l);
document.body.appendChild(e)}function h(){var e=document.getElementsByClassName(l);
for(var t=0;t<e.length;t++){document.body.removeChild(e[t])}}function p(){var e=document.createElement("div");
e.setAttribute("class",a);document.body.appendChild(e);setTimeout(function(){document.body.removeChild(e)},100)}
function d(e){return{height:e.offsetHeight,width:e.offsetWidth}}function v(i){var s=d(i);
return s.height>e&&s.height<n&&s.width>t&&s.width<r}function m(e){var t=e;var n=0;
while(!!t){n+=t.offsetTop;t=t.offsetParent}return n}function g(){var e=document.documentElement;
if(!!window.innerWidth){return window.innerHeight}else if(e&&!isNaN(e.clientHeight)){return e.clientHeight}return 0}
function y(){if(window.pageYOffset){return window.pageYOffset}return Math.max(document.documentElement.scrollTop,document.body.scrollTop)}
function E(e){var t=m(e);return t>=w&&t<=b+w}function S(){var e=document.createElement("audio");e.setAttribute("class",l);
e.src=i;e.loop=false;e.addEventListener("canplay",function(){setTimeout(function(){x(k)},500);
setTimeout(function(){N();p();for(var e=0;e<O.length;e++){T(O[e])}},15500)},true);
e.addEventListener("ended",function(){N();h()},true);
e.innerHTML=" <p>If you are reading this, it is because your browser does not support the audio element. We recommend that you get a new browser.</p> <p>";
document.body.appendChild(e);e.play()}function x(e){e.className+=" "+s+" "+o}
function T(e){e.className+=" "+s+" "+u[Math.floor(Math.random()*u.length)]}function N(){var e=document.getElementsByClassName(s);
var t=new RegExp("\\b"+s+"\\b");for(var n=0;n<e.length;){e[n].className=e[n].className.replace(t,"")}}var e=30;var t=30;
var n=350;var r=350;var i="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake.mp3";var s="mw-harlem_shake_me";
var o="im_first";var u=["im_drunk","im_baked","im_trippin","im_blown"];var a="mw-strobe_light";
var f="//s3.amazonaws.com/moovweb-marketing/playground/harlem-shake-style.css";var l="mw_added_css";var b=g();var w=y();
var C=document.getElementsByTagName("*");var k=null;for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){if(E(A)){k=A;break}}}
if(A===null){console.warn("Could not find a node of the right size. Please try a different page.");return}c();S();
var O=[];for(var L=0;L<C.length;L++){var A=C[L];if(v(A)){O.push(A)}}})()

I am still playing around with the CSP headers for this blog and am currently testing them using the Report-Only header along with Report URI. Hope this helps you start putting the correct CSP headers on your site as well!

This article is part of a series of articles - Ok I have got HTTPS! What Next?. In this post, we explore how to use HSTS security header and the issues it solves.

When you enter a domain name in the browser without specifying the protocol (HTTP or HTTPS), the browser by default sends the first request over HTTP. A server that supports only HTTPS redirects such a request to HTTPS: it responds to the client with a redirect to the HTTPS URL, and from then on the browser makes its requests over HTTPS. As you can see, the very first request the client makes is over an insecure channel (HTTP), and so is vulnerable to attack. You could be prone to a man-in-the-middle (MITM) attack, where someone spoofs that request and points you to a different site, injects malicious scripts, etc. This first insecure HTTP request is made every time you enter the domain name in the browser or make an explicit call over HTTP.

Trust on First Use

The HTTP Strict-Transport-Security response header (often abbreviated as HSTS) lets a website tell browsers that it should only be accessed using HTTPS, instead of using HTTP

By adding the HTTP Strict Transport Security (HSTS) header to your response headers, you instruct the browser to make calls to your site over HTTPS instead of HTTP.

Syntax
Strict-Transport-Security: max-age=<expire-time>
Strict-Transport-Security: max-age=<expire-time>; includeSubDomains
Strict-Transport-Security: max-age=<expire-time>; preload

There are a few directives you can set on the header that determine how the browser uses it. The required max-age directive tells the browser the time in seconds for which it should remember that the site is only to be accessed using HTTPS. By default, the setting affects only the current domain. Additionally, you can set the includeSubDomains directive to apply the rule to all subdomains of the site. Before including all subdomains, make sure they are served over HTTPS as well, so that you do not end up blocking your other sites on the same domain (if any).

As you can see, with the HSTS header specified, the browser now makes only one insecure request (the very first one, or whenever the cache expires). Once it has established a successful connection with the server, all further requests go over HTTPS for the max-age (cache expiry) set. With the HSTS header, the attack surface is reduced to just that one request, compared to every initial request going over HTTP when we did not have the HSTS header.

To verify that the HSTS header has been applied for your website, open your browser in Incognito/InPrivate browsing mode. This makes sure the browser acts as if it is seeing the site for the very first time (as the HSTS cache does not get shared across regular/incognito sessions).

The HSTS settings do not get shared between regular and incognito browsing sessions (at least in Chrome, and I think this is the same for other browsers as well).

Open the Developer Tools window and monitor the network requests made by the browser. Request your website over HTTP (either explicitly or by just entering the domain name), in this case my blog http://rahulpnath.com. As you can see, the very first request goes over HTTP and the server returns a 301 Moved Permanently status pointing to the HTTPS version of the site. For any subsequent requests over HTTP, the browser returns a 307 Internal Redirect. This redirect happens within the boundary of your browser and sends you to the HTTPS site. You can use Fiddler to verify that the request does not cross the browser boundary (the request never reaches Fiddler).

HSTS without preload

We could still argue that there is a potential threat with the very first request being sent over HTTP, which remains vulnerable to a MITM attack. To solve that, we can use the preload directive and submit our domain to the HSTS preload list, which, once the domain is successfully added, propagates to the source code of browsers.

Most major browsers (Chrome, Firefox, Opera, Safari, IE 11 and Edge) also have HSTS preload lists based on the Chrome list.

Browsers hardcode the domains from the approved preload list into their source code (e.g., here is Chrome's list), and the list ships with their releases. You can check for a preloaded site in the browser as well: in Chrome, navigate to chrome://net-internals/#hsts and query for the domain.

HSTS preloaded site hardcoded

If STS is not set at all, or you have not yet made the very first request to the server (and preload is false), querying for the domain returns 'Not Found.' Below are the two variations you can see depending on whether you have preload set or not. The dynamic_* entries indicate that STS was set after the first load of the site, and the static_* entries indicate that it comes from the preload list.

If you are wondering why this blog does not have the static_* entries set, it is because the preload list that it is part of has not yet made it into a stable version of Chrome. However, the preload site does show that it is currently preloaded (probably in a beta version at the time of writing).

Verifying HSTS preload

With preload set and your domain hardcoded into the preload list shipped with the browser version you are on, any request made over HTTP is redirected internally (307) to HTTPS without ever going to the server. This means we have entirely got rid of the first untrusted HTTP request.

HSTS preload request flow

Have you already got HSTS set on your site?

Security is more and more of a concern these days, and we have seen why it's important to move to HTTPS. Hope you have already moved to HTTPS; if not, right now is the perfect time - this is one of the things you should not be putting off for later. Also, do check out the HTTPS is Easy series by Troy Hunt on how simple it is to get on board with HTTPS. Most of the things mentioned here I started exploring after being introduced to them by Scott Helme and Troy Hunt at the NDC Security conference.

Security Report Summary from SecurityHeaders.com

Once you have moved to HTTPS you might be thinking: is that it? Or is there still more I need to be doing? An excellent place to start is SecurityHeaders.com, which analyses the security headers on your website and provides a rating for the site. The site also gives a short description of the various headers, with appropriate links to explore more about them. Some of the headers are easy to add and immediately provide added security for your website.

Just because you have an A+ Rating (or a good rating) does not mean that your site is not vulnerable to any attacks. These are just some guidelines to help you along the way to tighten up your website security.

I have been trying to walk the talk here - implementing these headers one by one on this blog as and when I write about them.

In this post I will walk through some of the headers that I added to this blog. I am planning to write this as a multi-part series, with each article covering a specific header or other feature: why I added it and how I went about adding and verifying it.

I will be updating the links to the relevant posts here as I publish them, and will add to the list as and when I come across new topics. One of the tricky things with security is that something new comes up very often; the best we can do is proactively do what we can to protect ourselves.

It's always hard to keep young minds engaged and fresh. As a parent, it is good to have a few different options to keep your child engaged and at the same time help them learn. Activity books are a great way of keeping kids involved and helping them develop the skills they need in their lives.

Kumon Books offer a wide variety of books for different age groups and subjects. The books follow a progression and are meant to be used in sequence, helping kids progress by building on previous skills. The workbook chart helps if you are new to Kumon Books. You can also choose by your kid's age if you don't find the chart useful.


The books cover a variety of activities like coloring, cutting paper, folding, pasting stickers, reading, writing, maths, etc. Each book gently introduces a concept and has your child repeat it over and over until it becomes clear, before moving on to more advanced skills. Activity books like cutting, folding, and pasting are intended to be used once, but for the others, like writing, reading, and maths, you can get your child to use a pencil and erase it later if you want them to refresh the skills sometime.

We have got books across most of these activities for Gautham and found them valuable. The books have helped him learn to read, write, cut, paint, etc., and I highly recommend them to others. These books are also an excellent way to interact with your child and help them learn and grow.

Your local bookstore or online stores should have these books; Google should help otherwise. Hope you find them useful!

In the previous post, we explored how to use Postman for testing API endpoints. Postman is an excellent tool for managing API specs as well, so that you can try API requests individually to see how things are working. It also acts as documentation for all your API endpoints and serves as a good starting point for someone new to the team. When it comes to managing the API specs for your application, there are a few options available; let's explore what they are.

Organizing API Specs

Postman supports the concept of Collections, which are nothing but folders to group saved API requests/specs. Collections support nesting, which means you can add folders within a collection to group requests further. As you can see below, MyApplication and Postman Echo are collections, and there are subfolders inside them which in turn contain API requests. The multi-level hierarchy helps you organize your requests the way you want.

Postman Collections

Sharing API Specs

Any collection that you create in Postman is automatically synced to the Postman cloud if you are logged in with an account. This allows you to share collections through a link. With the paid version of Postman you can create team workspaces, which means a team can collaborate on the shared collections. It makes sharing specs across your team easy and manages them in a centralized place.

However, if you are not logged in or don't have a paid version of Postman, you can maintain the specs along with your source code. Postman allows you to export collections and share the specs as a JSON file, which you can then check into your source code repository. Other team members can import the exported file to get the latest specs. The only disadvantage is that you need to make sure to export/import every time you or other team members change the JSON file. However, I have seen this approach work well in teams; one way we made sure the JSON file was up to date was to make updating the API spec part of the work item and require it to be peer reviewed (through pull requests).

Managing Environments

Typically any application/API is deployed to multiple environments (like localhost, Development, Testing, Production, etc.), and you would want to switch between these environments to test your API endpoints seamlessly. Postman makes this easy with its Environments feature.

Postman Environment

As with collections, environments are also synced to the Postman cloud when you are logged in, which makes all your environments seamlessly available to your whole team. However, if you are not logged in, you can again export the environments as a JSON file and share it with your team out of band (in a secure manner, as it might contain sensitive information like tokens, keys, etc.).

Publishing API Specs

Postman allows you to publish API specs (even to a custom URL), which can act as your API documentation. You can publish per environment and also easily execute the requests. Publishing is available only if you are logged in to an account, as it requires the API specs and environment details in the first place.

Postman Published

Security Considerations

When using the sync feature of Postman (logged in to the application with a Postman account), it is recommended that you do not keep any sensitive information (like passwords/tokens) in the API request spec/collection. These should be extracted into environment variables and stored as part of the appropriate environment.

If you are logged in, all the data that you add is automatically synced, which means it lives on Postman's cloud servers. This might not be desirable for every company, but it looks like there is no option to turn sync off at the collection level; the only way to not sync collections is to not log in to an account in Postman.

If you are logged in to Postman, any collection that you create is automatically synced to the Postman server. The only way to prevent syncing is not to log in.

We have seen the options for sharing API collections and environments amongst your team even when you are not logged in. However, one thing to be aware of is that if any of your team members is logged in to Postman and imports a collection shared via the repository/out-of-band methods, it will be synced to the Postman server. So at the organization/team level you would need ways to prevent this from happening, if that is essential for you. Best is to design your APIs in such a way that you do not have to expose such sensitive information in the first place, which is anyway a better practice.

Hope this helps you manage your API specs better!

A while back we looked at how we can use Postman to chain multiple requests to speed up our manual API testing. For those who are not familiar with Postman, it is an application that assists in API testing and development, which I see as sitting a level above a tool like Fiddler.

In this post, we will see how we can use Postman to test some basic CRUD operations over an API using a feature called the Postman Runner. Using it still involves some manual intervention; however, we can automate the runs using a combination of different tools.

Setting Up the API

To start with, I created a simple API endpoint using the out-of-the-box Web API project from Visual Studio 2017. It is a Values controller which stores key-value pairs and to which you can send GET, POST, and DELETE requests. Below is the API implementation. It is a simple in-memory implementation and does not use any persistent store; however, the tests would not change much even if the store were persistent. What matters here is not the implementation of the API, but how you can use Postman to add some quick tests.

ValuesController
public class ValuesController : ApiController
{
    static Dictionary<int, string> values = new Dictionary<int, string>();

    public IEnumerable<string> Get()
    {
        return values.Values;
    }

    public IHttpActionResult Get(int id)
    {
        if (values.ContainsKey(id))
            return Ok(values[id]);

        return NotFound();
    }

    public IHttpActionResult Post(int id, [FromBody]string value)
    {
        values[id] = value;
        return Ok();
    }

    public IHttpActionResult Delete(int id)
    {
        if (!values.ContainsKey(id))
            return NotFound();

        values.Remove(id);
        return Ok();
    }
}

Setting Up Postman

To start with, we will create a new collection in Postman to hold our tests for the Values controller - I have named it 'Values CRUD - Test'. The collection is a container for all the API requests that we are going to write. First, we add all the request definitions into Postman, which we can later reorder for the tests.

Postman Request

The placeholder parameters in the URL are defined as part of the selected environment. Environments in Postman allow you to switch between different application environments like Development, Test, and Production. You can configure different values for each environment, and Postman will send the requests as per that configuration.

Below are the environment variables for my local environment. You can define as many environments as you want and switch between them.

Postman Environment

Now that I have all the request definitions for the API added, let's add some tests to verify our API functionality.

Writing The First Test

Postman allows executing scripts before and after running API requests. We did see this in the API Chaining post where we grabbed the messageId from the POST request and added it to the environment variable for use in the subsequent requests. Similarly, we can also add scripts to verify that the API request returns expected results, status code, etc.

Let's first write a simple test on our GET API request to check that it returns a 200 OK response when called. The test below uses Postman's pm API to assert that the status code of the response is 200. Check the response assertion API in test scripts to see the other assertion options available, like pm.response.to.have.status. The tests go under the Tests section, similar to where we wrote the scripts to chain API requests. When executing the API request, the Tests tab shows the successful test run for that particular request.

200 Status Code
pm.test("Status code is 200", function() {
  pm.response.to.have.status(200);
});

Postman Tests

Similarly, you can also write a Pre-request Script to set variables or perform any other operation. Below I am setting the Value environment variable to "Test". You could generate a random value here, set a random id, or set an identifier that does not already exist. It's test/application specific, so I leave it to you to decide what works best.

Pre-request Script.
pm.environment.set("Value", "Test");

Collection Runner

The Collection Runner allows you to manage multiple API requests and run them as a set. Once completed, it shows a summary of all the tests included within each request and details of the tests that passed/failed in the run. You can target the Runner at your environment of choice.

Postman Collection Runner

Running these tests still involves some manual effort in selecting environments and running them. However, using Newman, you can run Postman collections from the command line, which means you can even run them in your build pipeline.

Using Postman, we can quickly test our APIs across multiple environments. The Collection Runner also shows an excellent visual summary of the tests and helps in API development. However, I found these tests can violate the DRY principle: you need to repeat the same API request structure if you have to use it in a different context. For example, above I had to create two Get Value By Id requests, one to test for when the value exists and one for when it does not. You could use some conditional looping inside the scripts, but that makes your tests complicated and leads you into the question of how to test your tests. Postman does allow you to export an API request to the language of your choice, so once you have the basic schema, you can export the requests and write tests that compose them (a rough sketch of this is shown below). I find Postman tests and the Runner a quick way to start testing your API endpoints, and then for more complicated cases you can use a full programming language. Having the tests in Postman also gives us an API spec and is useful for playing around with the API.
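
As an illustration of composing exported requests in code, here is a rough sketch of what such tests could look like in C# using xUnit and HttpClient against the Values API above. The base address and the choice of xUnit are assumptions; the exported Postman code gives you the raw requests, and you layer the combinations and assertions on top.

Composing API tests in C# (sketch)
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class ValuesApiTests
{
    // Assumed base address for a locally hosted API; in practice read it from environment-specific config.
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("http://localhost:5000/") };

    [Fact]
    public async Task Post_Then_Get_Returns_Saved_Value()
    {
        // [FromBody]string expects a JSON string literal, hence the quoted payload.
        var content = new StringContent("\"Test\"", Encoding.UTF8, "application/json");

        var post = await Client.PostAsync("api/values/1", content);
        Assert.Equal(HttpStatusCode.OK, post.StatusCode);

        var get = await Client.GetAsync("api/values/1");
        Assert.Equal(HttpStatusCode.OK, get.StatusCode);
        Assert.Contains("Test", await get.Content.ReadAsStringAsync());
    }

    [Fact]
    public async Task Get_Unknown_Id_Returns_NotFound()
    {
        var get = await Client.GetAsync("api/values/999");
        Assert.Equal(HttpStatusCode.NotFound, get.StatusCode);
    }
}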

A couple of months back, Gautham (my son) started playing Reading Eggs. We started off with a free trial after it was recommended by our friend Asha. We got a 21-day extended trial in addition to the initial 21-day free trial (i.e. a total of 42 days), which helped a lot in confirming that Gautham would actually use the app. We noticed that he was reading small words quite comfortably and grew an interest in reading various things around him. We took out an annual subscription, and it feels totally worth it.

Reading Eggs Levels

Using the five essential keys to reading success, the program unlocks all aspects of learning to read for your child.

  • The lessons use colourful animation, fun characters, songs, and rewards to keep children motivated.
  • The program is completely interactive to keep children on task.
  • When children start the program, they can complete a placement quiz to ensure they are starting at the correct reading level.
  • Parents can access detailed progress reports as well as hundreds of full-colour downloadable activity sheets that correspond with the lessons in the program.
  • The program includes over 2000 online books for kids – each ending with a comprehension quiz that assesses your child’s understanding.

Each level explores different letter/word combinations and has a knowledge quiz at the end that must be passed to move to the next level. The program unlocks all aspects of learning to read for your child, focusing on a core curriculum of phonics and phonemic awareness, sight words, vocabulary, comprehension, and reading for meaning.

Do give the app a try if you have kids at home!