Duncan's Security Blog: An enthusiast's musings

18 Sep 2016

Gaps in DevOpsSec Part 1

[Part 2 of this article can be found here.]

I recently did some work with a bunch of great automation engineers. My task was to assist them with automating their security testing. It was an awesome experience, but it left me a bit worried about the continuous deployment world, as far as security is concerned.

I know what you are thinking: "But all the fancy DevOps tools provide 'security' out of the box". This is only partly true: all of the DevOps pipeline tooling I have scrutinized (all of the widely adopted ones) provides static code analysis steps in the very early stages of the deployment pipeline. Static code analysis is a fantastic start, but for the most part, that is where the line is drawn.
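
To make that concrete, here is a minimal sketch of what such an early-pipeline SCA step might look like, assuming a Python codebase and the open-source Bandit scanner (the tool choice, the source directory and the fail-on-any-finding threshold are my own illustration, not something the pipeline tools mandate):

```python
# sca_gate.py - minimal sketch of an early-pipeline SCA step.
# Assumes a Python codebase scanned with Bandit; swap in the SCA
# tool appropriate to your language and pipeline.
import json
import subprocess
import sys

def run_sca(source_dir: str = "src") -> int:
    """Run Bandit over the source tree and return the number of findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    findings = report.get("results", [])
    for f in findings:
        print(f"{f['filename']}:{f['line_number']} "
              f"[{f['issue_severity']}] {f['issue_text']}")
    return len(findings)

if __name__ == "__main__":
    # Fail the pipeline stage (non-zero exit) if anything is flagged.
    sys.exit(1 if run_sca() else 0)
```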

Continuous deployment or continuous delivery?

Continuous deployment and continuous delivery are often confused. Continuous deployment is when a development team deploys every change straight through to production (the change still goes through a battery of automated tests, if it's done right). Continuous delivery differs in that the change will be ready for deployment just as quickly, but the developers may choose not to deploy it immediately (though the change would usually not be left "hanging" for long).
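
A toy sketch of the difference, with hypothetical stage names (nothing here is any particular CI/CD tool's API): the two pipelines are identical except for one gate.

```python
# pipeline_sketch.py - toy contrast between the two models.
# Stage names and the release flag are illustrative assumptions.

def build() -> None:
    print("compile, package, run unit tests")

def automated_tests() -> None:
    print("integration, acceptance and security tests")

def deploy() -> None:
    print("release to production")

def continuous_deployment() -> None:
    """Every change that survives the battery of tests ships immediately."""
    build()
    automated_tests()
    deploy()

def continuous_delivery(release_now: bool) -> None:
    """The change is always *ready* to ship; a human or release
    policy decides whether this particular run actually deploys."""
    build()
    automated_tests()
    if release_now:
        deploy()
```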

It is up to the automation engineers to build a deployment pipeline such that everything, from code through compiling, testing and deployment, happens at the click of a button, and a change can be in production within minutes. Engineers from some of the bigger software companies have been known to claim up to 50 deployments every day. Smaller adopters of DevOps admire the big guys, and so they tend to start chasing metrics: "How many deployments can WE do per day?".

Taking this into account, would you be satisfied that only static code analysis has been done?

Static code analysis

Diving into the static code analysis realm should quickly raise a lot of questions for an experienced programmer. SCA tools do work, and should certainly be used. However, among the results there is usually a plethora of false positives. The truth is, SCA is extremely hard to do well. Even the best tools have their problems, and typically only handle one, maybe two, languages effectively. With the assistance of a security engineer, the developers will learn to spot what is and what isn't a false positive, but this takes time.

Time is in short supply in DevOps. Granted, the same false positives for unchanged code will pop up on every run (these should be catalogued and marked as "OK" for future scans), but any false positives picked up on new code still need to be verified by a human. A decision therefore needs to be made: "Do we deploy as-is to production and flag the code for security to verify, or do we halt this run until verification has taken place?"... But... Metrics...
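
One workable compromise is to keep that catalogue of verified false positives machine-readable, and only halt the pipeline on findings that are genuinely new. A minimal sketch, reusing Bandit's result fields from the earlier example (the fingerprinting scheme and the baseline file format are my own assumptions):

```python
# sca_baseline.py - sketch of a false-positive catalogue for SCA runs.
# Findings already reviewed and marked "OK" are fingerprinted and
# suppressed; only new findings halt the pipeline.
import hashlib
import json
from pathlib import Path

BASELINE = Path("sca_baseline.json")  # illustrative file name

def fingerprint(finding: dict) -> str:
    """Stable ID for a finding: file, rule ID and flagged code snippet.
    Deliberately excludes the line number, so shuffling unrelated
    code around does not invalidate the catalogue."""
    key = f"{finding['filename']}|{finding['test_id']}|{finding['code']}"
    return hashlib.sha256(key.encode()).hexdigest()

def load_known() -> set[str]:
    return set(json.loads(BASELINE.read_text())) if BASELINE.exists() else set()

def new_findings(findings: list[dict]) -> list[dict]:
    """Return only findings not already catalogued as false positives."""
    known = load_known()
    return [f for f in findings if fingerprint(f) not in known]

def mark_ok(findings: list[dict]) -> None:
    """Called once a security engineer has verified findings as false positives."""
    known = load_known()
    known.update(fingerprint(f) for f in findings)
    BASELINE.write_text(json.dumps(sorted(known), indent=2))
```

A security engineer still has to review each new finding once, but after a call to mark_ok that finding never blocks a run again.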

What else should we do?

Up until now, we have scanned the code for vulnerabilities. We trust that our SCA tool is good and that our teams didn't make a mistake when picking out false positives. We have not, however, tested the finished application for vulnerabilities, nor have we tested the server on which it is hosted. How about compliance? Have we tested that the application and the infrastructure we are about to deploy comply with our security policy?
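
To give a flavour of what an infrastructure compliance test could look like, here is a small sketch that checks a freshly deployed host exposes only the ports a security policy allows. The host name and the port policy are hypothetical examples:

```python
# port_compliance.py - sketch of a post-deployment compliance check:
# verify a host exposes only the ports the security policy allows.
# The policy, port range and default host are illustrative assumptions.
import socket
import sys

ALLOWED_PORTS = {22, 443}      # from our (hypothetical) security policy
CHECK_RANGE = range(1, 1025)   # well-known ports

def open_ports(host: str, timeout: float = 0.3) -> set[int]:
    """Return the set of ports in CHECK_RANGE accepting TCP connections."""
    found = set()
    for port in CHECK_RANGE:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "staging.example.com"
    violations = open_ports(host) - ALLOWED_PORTS
    if violations:
        print(f"Non-compliant ports open on {host}: {sorted(violations)}")
        sys.exit(1)  # fail the pipeline stage
    print(f"{host} complies with the port policy.")
```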

While SCA is generally well understood and handled by deployment tools, these other testing areas are completely alien to them. While it is certainly possible to launch these additional tests, they bring with them some major caveats for the deployment pipeline. One of those caveats is, of course, time.
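
Part of the time problem can at least be softened by running the slower checks concurrently, so the pipeline pays for the longest test rather than the sum of all of them. A sketch, with placeholder functions standing in for the real scans:

```python
# parallel_security_tests.py - sketch: run the slower security checks
# concurrently. The individual checks are placeholders for real scans.
import concurrent.futures
import time

def dynamic_scan() -> str:
    time.sleep(2)  # stand-in for a DAST run against the application
    return "dynamic scan: ok"

def server_scan() -> str:
    time.sleep(2)  # stand-in for a host vulnerability scan
    return "server scan: ok"

def compliance_check() -> str:
    time.sleep(1)  # stand-in for policy compliance tests
    return "compliance: ok"

if __name__ == "__main__":
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(f)
                   for f in (dynamic_scan, server_scan, compliance_check)]
        for fut in concurrent.futures.as_completed(futures):
            print(fut.result())
    print(f"all security tests finished in {time.time() - start:.1f}s")
```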

As for the rest, I will cover those in Part 2.
