Duncan's Security Blog: An enthusiast's musings

27 Nov 2016

Gaps in DevOpsSec Part 2

In my previous article, I briefly went over some of the default security testing options available in existing DevOps deployment tools. In this article, I will cover what can reasonably be included in the DevOps pipeline, and a few caveats faced by security engineers who are trying to add security testing into a fully automated deployment pipeline.

The fact that security scanning can take time is just the tip of the iceberg. It must be remembered that no amount of automated scanning will catch every security vulnerability; ultimately, a professional penetration tester will still need to perform manual tests of their own. The trick with automated testing, therefore, is to perform enough testing to account for the "low-hanging fruit" while still completing the scan in "reasonable" time. While 4-5 minutes per scan may already sound unreasonable to the automation engineers, 10 minutes or more per scan will certainly be a test of their character.

The goal here is to use the APIs of security scanning tools to provide a degree of confidence in the code being deployed to production, until such time as a penetration tester can subject the application to more thorough scrutiny. It is up to the security engineers to review scan results, identify false positives, and assist the development teams with urgently fixing any identified vulnerabilities.

How?

There are a number of ways to achieve this, given the features that automation tooling provides. The first approach automation engineers may suggest is to build a recipe/playbook that acts as an orchestration point, triggering scans against whichever server the recipe/playbook is run on. This would work, but some careful design decisions are needed to make the script generic enough to work everywhere, and waiting for scans and post-processing their results can become tricky.

For option two, we could try webhooks from the configuration manager itself. This too would work, but may require a few "hacks" to get it working well, and if you are running multiple deployments at once, this method can quickly descend into confusion and ultimate failure.

Option three would be to use something like the OWASP Bag of Holding to provide an abstraction layer that manages security activities. This way, the pipeline delegates security to the orchestration service. Depending on the design, you could pre-install all the tools on a server or virtual machine, or simply use containers (Docker). The Bag of Holding can then cater for all scanning, polling for completion, and making sense of the results.
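
To make the delegation concrete, here is a minimal sketch of how a pipeline step might hand off to such an orchestration layer. The endpoint names and payload are hypothetical, purely to illustrate the pattern; they are not the actual Bag of Holding API:

```python
import time
import requests

# Hypothetical orchestration-service endpoint; the real Bag of Holding
# API will differ -- this only illustrates the delegation pattern.
ORCHESTRATOR = "https://security-orchestrator.internal/api"

def request_security_checks(app_name, target_url):
    """Ask the orchestration layer to run all security activities."""
    resp = requests.post(f"{ORCHESTRATOR}/engagements",
                         json={"application": app_name, "target": target_url})
    resp.raise_for_status()
    engagement_id = resp.json()["id"]

    # Poll until the orchestrator has run all scans and reached a verdict.
    while True:
        status = requests.get(f"{ORCHESTRATOR}/engagements/{engagement_id}").json()
        if status["state"] == "complete":
            return status["verdict"]  # e.g. "pass" or "fail"
        time.sleep(30)
```

The appeal of this design is that the pipeline only ever talks to one service, regardless of how many scanners sit behind it.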

Now that we have some options, let's look at what we might want to do with our security tools.

Network scan

To start with, we can run a simple network scan against the application server itself, which will have just been built from scratch by the configuration manager. Any change to a recipe/playbook could result in a configuration change that leaves the application server exposed. A basic network scan by a tool like Tenable's Nessus should do the trick. Nessus has a very nice API, and with a little pre-work, a generic scan template can be set up and re-used by all deployments.
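
To illustrate, a deployment step could drive Nessus along these lines. This is a minimal sketch: the endpoints follow the Nessus REST API, but the host, API keys, and template UUID are placeholders you would supply yourself:

```python
import time
import requests

NESSUS = "https://nessus.internal:8834"  # placeholder host
# API keys are generated per user in the Nessus UI.
HEADERS = {"X-ApiKeys": "accessKey=YOUR_ACCESS_KEY; secretKey=YOUR_SECRET_KEY"}

def run_network_scan(name, targets, template_uuid):
    """Create a scan from a pre-built template, launch it, and wait."""
    # verify=False only because internal Nessus boxes often run with
    # self-signed certificates; fix the trust chain properly in production.
    scan = requests.post(f"{NESSUS}/scans", headers=HEADERS, verify=False, json={
        "uuid": template_uuid,  # UUID of the re-usable scan template
        "settings": {"name": name, "text_targets": targets},
    }).json()["scan"]

    scan_id = scan["id"]
    requests.post(f"{NESSUS}/scans/{scan_id}/launch",
                  headers=HEADERS, verify=False)

    # Poll until the scan finishes, then hand back the result summary.
    while True:
        info = requests.get(f"{NESSUS}/scans/{scan_id}",
                            headers=HEADERS, verify=False).json()["info"]
        if info["status"] in ("completed", "canceled", "aborted"):
            return info
        time.sleep(60)
```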

Application scan

Next up we have an application scan. This one is tricky, because it is usually done manually by clicking through the application. OWASP ZAP or PortSwigger's Burp would be options here; the idea is to probe the running application for any obvious vulnerabilities. Both ZAP and Burp have APIs through which a decent amount of coverage can be achieved. Be warned, though, that ZAP and Burp were designed to be used manually: their APIs are crude, but they work. Don't expect any vast improvements to these APIs, either.

This scan can really run away with time if you have not tuned it correctly. At a minimum, you would want to run a spider, an active scan, and a passive scan. Tuning guides are available for both ZAP and Burp, and if you are using ZAP, stopping by the OWASP Slack channel will do you some good (they can be quite helpful). In addition, the ZAP team have put together a "baseline" scan, which should give you reasonable coverage in a short time (see the ZAP Baseline scan).
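
To give a flavour of the ZAP API, here is a minimal sketch using the official Python client (python-owasp-zap-v2.4). It assumes a ZAP daemon is already running; the API key and proxy address are placeholders for your environment:

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

# Assumes a ZAP daemon is already listening on this address with this key.
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8090",
                     "https": "http://127.0.0.1:8090"})

def scan_application(target):
    """Spider the target, actively scan it, and return the alerts."""
    # Passive scanning happens automatically on all traffic ZAP sees.
    spider_id = zap.spider.scan(target)
    while int(zap.spider.status(spider_id)) < 100:
        time.sleep(5)

    ascan_id = zap.ascan.scan(target)
    while int(zap.ascan.status(ascan_id)) < 100:
        time.sleep(15)

    return zap.core.alerts(baseurl=target)
```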

For bonus points, you might like to run your development teams' functional tests (done with automated testing tools such as Selenium) directly through the proxy.
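
Routing those tests through the proxy can be as simple as pointing the browser at ZAP or Burp. A minimal Selenium sketch, where the proxy address is an assumption for your environment:

```python
from selenium import webdriver

# Point the browser at the intercepting proxy (ZAP or Burp) so every
# request made by the functional tests is recorded and passively scanned.
options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://127.0.0.1:8090")
options.add_argument("--ignore-certificate-errors")  # the proxy re-signs TLS

driver = webdriver.Chrome(options=options)
driver.get("https://app-under-test.internal/login")  # placeholder URL
# ... run the usual functional test steps here ...
driver.quit()
```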

Compliance scan

Lastly, we can run some scans to satisfy our governance friends. Having an application server that is consistently and provably compliant has a lot of perks. To do this, you would need to decide what your application server needs to comply with (e.g. CIS benchmarks, NIST, COBIT), and then run a battery of tests to check it. Ideally, during the rebuild of the application server, the configuration manager will already have configured everything for compliance; the compliance tests are just for verification. Luckily, there are plenty of existing recipes/playbooks to help with this, for example the Hardening Framework and the CIS Ansible playbooks for CentOS/RHEL.
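
In practice you would lean on those existing recipes/playbooks, but for illustration, here is a tiny hand-rolled verification of a single CIS-style SSH control. The directive and expected value are illustrative, not a substitute for the actual benchmark:

```python
# Illustrative check of one CIS-style control: after the configuration
# manager has rebuilt the server, assert that the setting actually stuck.
def check_sshd_option(path="/etc/ssh/sshd_config",
                      option="PermitRootLogin", expected="no"):
    """Verify a single sshd_config directive has the expected value."""
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if len(tokens) >= 2 and tokens[0] == option:
                return tokens[1].lower() == expected
    return False  # directive absent: fail closed

if __name__ == "__main__":
    assert check_sshd_option(), "Compliance check failed: PermitRootLogin must be 'no'"
```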

...and then?

Up to this point, we have launched various scans to cover different areas of the application server. These scans will invariably run to completion, and then, as it stands, nothing further will happen. We have now hit a major problem: we have run our scans, but our continuous deployment tools have carried on without the results. The deployment can be halted to wait for scans that are not run directly from the deployment tool, but something else needs to happen in order for it to proceed.

We have three sets of scan results (still in their respective scanner databases), each with a potentially different severity report format. These severities have also been determined by a third party, and may not correspond to your organisation's interpretation of risk. So the results need to be retrieved and parsed, and a decision made about whether to allow the deployment to proceed. For this, something like Etsy's 411 security alerting framework would be useful: the framework would generate alerts, and based on the alert, the deployment tool could be notified to proceed with or abort the deployment (this may sound easy, but I am not currently aware of any CD tool that natively offers a hook with this kind of functionality).
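
To sketch what that normalisation and gating step might look like, here is a minimal example. The severity mappings and threshold are assumptions; your organisation's own interpretation of risk belongs here:

```python
# Map each scanner's severity vocabulary onto one internal scale.
# These mappings and the gate threshold are illustrative assumptions.
SEVERITY_MAP = {
    "nessus": {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0},
    "zap":    {"high": 3, "medium": 2, "low": 1, "informational": 0},
}
BLOCK_THRESHOLD = 3  # abort the deployment on anything High or worse

def gate_deployment(findings):
    """findings: iterable of (scanner, severity_label) tuples."""
    worst = max((SEVERITY_MAP[scanner][sev.lower()]
                 for scanner, sev in findings), default=0)
    return "abort" if worst >= BLOCK_THRESHOLD else "proceed"

# Example: one ZAP 'Medium' and one Nessus 'High' -> abort.
print(gate_deployment([("zap", "Medium"), ("nessus", "High")]))
```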

Whether or not the deployment proceeds, any potential bug should be automatically added to API-enabled issue tracking software such as Atlassian's Jira. The security engineers can then review the issues, and either declare them false positives or re-assign them to the relevant development manager for fixing.
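
Filing the finding is straightforward with Jira's REST API. A minimal sketch, where the Jira host, credentials, and project key are placeholders:

```python
import requests

JIRA = "https://jira.internal"              # placeholder host
AUTH = ("svc-security", "app-password")     # placeholder credentials

def raise_security_issue(summary, description, project_key="SEC"):
    """File a finding as a Jira bug via the REST API."""
    resp = requests.post(f"{JIRA}/rest/api/2/issue", auth=AUTH, json={
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    })
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"
```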

On that note, any false positives need to be recorded in a central repository. Ideally, you would try to filter known false positives out before they reach the issue tracker, to avoid the re-work of declaring the same issue a false positive all over again.
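
One simple approach is to give each finding a stable fingerprint and check it against the central store before raising an issue. A sketch, where the fields used in the fingerprint are an assumption about what your scanners report:

```python
import hashlib
import json

def fingerprint(finding):
    """Stable ID for a finding so known false positives can be skipped.
    The chosen fields are illustrative; pick ones stable across scans."""
    key = json.dumps([finding["scanner"], finding["plugin_id"],
                      finding["host"], finding["name"]], sort_keys=True)
    return hashlib.sha256(key.encode()).hexdigest()

def new_findings(findings, known_false_positives):
    """Drop findings already recorded centrally as false positives."""
    return [f for f in findings
            if fingerprint(f) not in known_false_positives]
```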

In closing

As you can see, there are ways to include automated security scans in a continuous delivery pipeline, but any experienced security engineer or developer will know that it won't be trivial. The important thing is to work out what is best for you and your organisation. Deciding that it either can't be done or is too difficult, however, will potentially expose your organisation to attack should you proceed with production deployments that have not been tested.

I hope this has been an interesting read. Feel free to share this post with your friends 🙂

18 Sep 2016

Gaps in DevOpsSec Part 1

[Part 2 of this article can be found here.]

I recently did some work with a bunch of great automation engineers. My task was to assist them with automating security testing. It was an awesome experience, but it left me feeling a bit worried about the continuous deployment world as far as security is concerned.

I know what you are thinking: "But all the fancy DevOps tools provide 'security' out of the box". This is partly true: all of the DevOps pipeline tooling I have scrutinized (all of the widely adopted ones) provides static code analysis steps in the very early stages of the deployment pipeline. Static code analysis is a fantastic start, but for the most part, that is where the line is drawn.

Continuous deployment or continuous delivery?

Continuous deployment and continuous delivery are often confused. Continuous deployment is when a development team deploys every change straight through to production (the change will still go through a battery of automated tests, if it's done right). Continuous delivery differs in that the change will be ready for deployment just as quickly, but the developers may choose not to deploy immediately (though the change would usually not be left "hanging" for too long).

It is up to the automation engineers to build a deployment pipeline such that everything, from code through compilation, testing, and deployment, is done at the click of a button and is in production within minutes. Engineers from some of the bigger software companies have been known to claim up to 50 deployments every day. Smaller adopters of DevOps admire the big guys, and so they tend to start chasing metrics: "How many deployments can WE do per day?".

Taking this into account, would you be satisfied that only static code analysis has been done?

Static code analysis

Diving into the static code analysis realm should quickly raise a lot of questions for an experienced programmer. SCA tools do work, and should certainly be used. However, among the results there is usually a plethora of false positives. The truth is, SCA is extremely hard to do well. Even the best tools will have their problems, and will typically handle only one, maybe two, languages effectively. With the assistance of a security engineer, the developers will learn to spot what is and isn't a false positive, but this will take time.

Time is in short supply in DevOps. Granted, the same false positives will pop up for unchanged code (these should be catalogued and marked as "OK" for future scans), but any false positives picked up in new code should be verified. A decision therefore needs to be made: "Do we deploy as-is to production and flag the code for security to verify, or do we halt this run until verification has taken place?"... But... metrics...

What else should we do?

Up to now, we have scanned the code for vulnerabilities. We trust that our SCA tool is good, and that our teams didn't make a mistake when picking out false positives. We have not, however, tested the finished application for vulnerabilities, nor have we tested the server on which it is hosted. How about compliance? Have we tested that the application and infrastructure we are about to deploy comply with our security policy?

While SCA is generally well understood and handled by deployment tools, these other testing areas are completely alien to them. While it is certainly possible to launch these additional tests, they bring with them some major caveats for the deployment pipeline. One of these caveats is, of course, time.

As for the rest, I will cover those in Part 2.
