The Criminal, the Extortionist and the Dirty Cow - Part 2

In the last blog, The Criminal, the Extortionist and the Dirty Cow (Part 1), I discussed why it is difficult to manage the security and compliance of infrastructure and applications in the fast-paced world of IT.  In this post I will discuss how you can use DevOps practices and automation to embed controls into the IT systems we build, and how that automation can secure infrastructure, applications and data.

When we start to automate IT at scale we need a solid foundation of infrastructure on which to host our applications.  By that, I mean we need reproducible, consistent and hardened infrastructure, moving away from 'snowflake' servers towards a more standardised approach.  The most practical and efficient way to achieve this is to manage your infrastructure with configuration management tooling.

When you start to use configuration management, you define the state of the server rather than the individual commands you need to run, and that state is defined in code.  Configuration management tools such as Chef, Puppet and Ansible allow you to express the desired state in code and then manage the state of the server without manual intervention.  The code is packaged into services or patterns that are applied to the servers, giving you a high level of consistency across every server.  This code is responsible for applying the latest patches to packages and ensuring your organisation's security settings are applied to all servers, also known as OS hardening.
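As a minimal sketch of what that looks like (assuming Ansible and a Debian/Ubuntu host; the two tasks shown are illustrative, not a complete hardening baseline), a playbook expressing desired state might be:

```yaml
# harden.yml - an illustrative play, not a complete hardening baseline
- hosts: all
  become: true
  tasks:
    - name: Apply the latest package updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Disable SSH root login (one example hardening setting)
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: ssh
        state: restarted
```

Run repeatedly, for example with "ansible-playbook -i inventory harden.yml", the same playbook converges every server to the same state, which is what gives you that consistency.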

The Center for Internet Security provides very comprehensive guides on how to secure base operating systems and publishes benchmarks that you can assess your infrastructure against.  The configuration management code is treated like any other application codebase and is checked into source control, and the release of that code can be managed in a controlled pipeline into development, test and production environments.  It is imperative that all the runtime components in your infrastructure have their security settings managed and enforced as secure.  One of the biggest MongoDB data breaches occurred because simple authentication had not been switched on and configured, which exposed terabytes of data to the internet.  These are simple steps to implement, but they are often ignored.
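To make that concrete, here is the relevant fragment of a mongod.conf (the file is itself YAML) that hardening code would template onto every MongoDB node.  The values shown are a minimal sketch, not a full MongoDB hardening guide:

```yaml
# Fragment of /etc/mongod.conf: the settings whose absence exposed those
# databases - access control switched on, and the service bound locally.
security:
  authorization: enabled
net:
  port: 27017
  bindIp: 127.0.0.1
```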

One of the patterns that has emerged from DevOps is immutable infrastructure.  Immutable infrastructure is an approach to managing services and software deployments on IT resources wherein components are replaced rather than changed: an application or service is effectively redeployed each time any change occurs.  It is often easier to replace a broken server with a new one than to try to fix it in place.  The infrastructure and its networking components can be defined in code using tooling such as Packer, Terraform, CloudFormation for AWS or ARM templates for Azure.  These infrastructure provisioning tools allow you to define, in detail, the networking that the infrastructure runs in as well as the RBAC controls granted to those resources.  AWS and Azure both publish very detailed guides on networking best practices which you can apply in this codebase.
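As a hedged illustration of networking defined in code (a CloudFormation fragment; the parameter, port and CIDR range are placeholders rather than a recommendation), a security group that only admits HTTPS from a trusted range might be declared like this:

```yaml
# Illustrative CloudFormation fragment: network access declared as code.
AWSTemplateFormatVersion: '2010-09-09'
Description: Example security group allowing only HTTPS from a trusted range
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS from the trusted range only
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 10.0.0.0/8   # placeholder trusted range
```

Because the rule lives in source control, any change to what the network allows goes through the same review and release process as application code.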

These best practices and tools, used in conjunction with Vault (to manage secrets, keys, passwords and certificates), provide a very flexible and secure provisioning process.  This code can be executed in the release pipeline so the infrastructure can be brought online on demand or at scheduled times.  Within this pipeline you can continually validate the compliance of each node using tools such as InSpec, which automates the continuous testing and compliance auditing of your infrastructure.  When a verification check fails, the infrastructure does not progress any further through the pipeline; a sketch of how such a verification stage might look is shown below.
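For illustration only: the pipeline format below is GitLab CI, and the stage names and the dev-sec/linux-baseline profile are assumptions rather than anything prescribed in this post.  The point is simply that the compliance check runs as an ordinary, blocking pipeline job.

```yaml
# .gitlab-ci.yml fragment (illustrative): the compliance gate sits
# between provisioning and promotion.
stages:
  - provision
  - verify
  - promote

verify_compliance:
  stage: verify
  script:
    # Target host and key are placeholders populated by the provision stage.
    - >
      inspec exec https://github.com/dev-sec/linux-baseline
      -t ssh://deploy@$TARGET_HOST -i $SSH_KEY_PATH
  # InSpec exits non-zero when controls fail, which fails this job and
  # stops the node from ever reaching the promote stage.
```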

If you are managing Docker containers then I recommend reading another one of our blogs, “Docker Security: What You Need to Know”.  The Dirty COW vulnerability also affects containers; you can read about how it can be fixed in “Dirty COW - (CVE-2016-5195) - Docker Container Escape”.

With immutable infrastructure we divide our application infrastructure into two categories: things that are data, and everything else.  For everything else, we never make updates in place; we only ever replace.  This operating model can also involve re-architecting systems into 12-factor apps, stateless services and micro-service architectures.  Take a look at my previous blog “Building a Twelve Factor Application” to find out more about 12-factor apps.  With this type of architecture, we create a new version definition that describes the state we want, launch it in parallel, and once the new version is in place we destroy the old infrastructure.  There are several deployment strategies that work well with immutable infrastructure: Blue/Green, Canary and Phoenix.  When you go down this path you end up with a tool chain that is flexible and resilient, and it forces you to respond to failure and build resilience into the applications, infrastructure and architectures you implement.
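As a rough sketch of that “launch in parallel, then destroy the old” flow (the stack names, template file and helper scripts here are all hypothetical), a Blue/Green stage in the release pipeline might run something like:

```yaml
# Illustrative pipeline job: immutable replacement rather than in-place change.
deploy_green:
  stage: deploy
  script:
    # Bring up a complete new ("green") stack alongside the running one.
    - >
      aws cloudformation deploy
      --template-file infrastructure.yml
      --stack-name myapp-green
    # Verify the new stack before it takes traffic (hypothetical script).
    - ./smoke-test.sh myapp-green
    # Cut traffic over, then remove the old ("blue") stack entirely.
    - ./switch-traffic.sh myapp-green
    - aws cloudformation delete-stack --stack-name myapp-blue
```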

When you start working in this manner, you can begin building a culture in which all changes exist in source control.  If you make a change outside the bounds of the infrastructure codebase you have defined, your configuration management framework is going to undo that change.  And if the configuration management framework doesn't undo it, that server is probably not going to live for long anyway, so any changes you have made by hand will disappear.

So we get to a place where out-of-band changes are lost, and a new type of culture develops.  Whether you are in Dev or Ops, you are suddenly operating in a model where a big compliance engine ensures that there is an auditable trail of actions and that everything is captured.  We get to a point where we are working the way InfoSec has wanted us to work for years.  Being in compliance is no longer an afterthought; it is the default way of working with your infrastructure.

This is great: we can now guarantee the state of our infrastructure and networks, but what about our applications and data?  DevOps gives companies the opportunity to improve application security, yet they often fail to realise that potential.  Application security is frequently not enforced due to a lack of knowledge of application security practices and a lack of basic hygiene in the software delivery lifecycle, and basic application security testing is not baked into the release pipeline.  Moving these checks earlier in the delivery process is sometimes referred to as “Shifting Security Left”; Shannon Lietz from Intuit and DevSecOps.org has a great post explaining the concept.

There are many tools that expose security vulnerabilities through static analysis, such as Snyk, FindBugs and SonarQube.  The Open Web Application Security Project (OWASP) is a great resource for finding out about these types of tools and how to use them on your applications.  There are also tools on the market that let developers find security and operational issues in application code at development time rather than in a later deployment phase; tools from vendors such as New Relic allow developers to spot issues early and remediate them.  For developers to be invested, they need the right tools at their disposal and the relevant training on how to write secure applications.
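To make that concrete (the job layout and project key below are assumptions; substitute whichever scanners your organisation uses), a static analysis gate in the release pipeline can be as small as:

```yaml
# Illustrative pipeline job: dependency and static analysis checks run
# on every build, long before anything reaches production.
security_scan:
  stage: test
  script:
    # Check third-party dependencies against known-vulnerability databases.
    - snyk test
    # Static analysis of the application code (project key is a placeholder).
    - sonar-scanner -Dsonar.projectKey=myapp
```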

With the rise of ransomware, we have to get smarter and more efficient about encrypting and backing up our data, so that if you are compromised you can tear the infrastructure down and rebuild a new stack from your configuration management code plus the most recent data backup.  There are some really interesting practices developing in the serverless space, where functions are being created to back up and restore data in near real time using Lambda and DynamoDB.  Keep an eye out for developments in this space.
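The Lambda-based approaches are still emerging, but one building block you can already declare in code is shown below: a hedged CloudFormation sketch (the table definition is illustrative) that creates a DynamoDB table with encryption at rest and continuous point-in-time backups switched on from day one:

```yaml
# Illustrative CloudFormation fragment: the data store is created with
# encryption at rest and continuous backups already enabled.
Resources:
  OrdersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: orders            # placeholder name
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: orderId
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH
      SSESpecification:
        SSEEnabled: true
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
```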

As I have discussed, there are many vectors to address when it comes to security and DevOps practices.  We have the opportunity to add more consistency and security to our IT estate using DevOps and automation, but it is up to organisations to invest in the time and education their IT staff need to implement these practices.  Further resources for security and DevOps can be found at awesome-devsecops.
