Duncan's Security Blog: An enthusiast's musings

27 Nov 2016

Gaps in DevOpsSec Part 2

In my previous article, I briefly went over some of the default security testing options available in existing DevOps deployment tools. In this article, I will cover what can reasonably be included in the DevOps pipeline, and a few caveats faced by security engineers who are trying to add security testing into a fully automated deployment pipeline.

The fact that security scanning can take time is just the tip of the iceberg. It must be remembered that no amount of automated scanning will catch every security vulnerability; ultimately, a professional penetration tester will still need to perform manual tests of their own. The trick with automated testing, therefore, is to perform enough testing to account for the "low-hanging fruit" while still completing the scan in "reasonable" time. While 4-5 minutes per scan may already sound unreasonable to the automation engineers, 10 minutes or more per scan will certainly be a test of their character.

The goal here is to use the APIs of security scanning tools to provide a degree of confidence in the code being deployed to production, until such time as a penetration tester can subject the application to more thorough scrutiny. It is up to the security engineers to review scan results, identify false positives, and assist the development teams with urgently fixing any identified vulnerabilities.

How?

There are a number of ways we can achieve our needs, given the features that automation tooling provides. The first approach automation engineers may suggest is to build a recipe/playbook that acts as an orchestration point, triggering scans from whichever server the recipe/playbook is run on. This would work, but some crafty design decisions need to be made to keep the script generic enough to work everywhere, and the waiting for, and post-processing of, results can become tricky.
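To make option one concrete, here is a minimal Python sketch of what such an orchestration point boils down to: trigger a scan against the freshly built server, then poll until it completes. The scanner URL, endpoints and payload here are hypothetical placeholders, not any particular tool's API.

```python
import time

import requests

SCANNER_API = "https://scanner.internal.example/api"  # hypothetical endpoint

def trigger_scan(target_host):
    """Kick off a scan against the server the recipe/playbook just built."""
    resp = requests.post(f"{SCANNER_API}/scans", json={"target": target_host})
    resp.raise_for_status()
    return resp.json()["scan_id"]

def wait_for_scan(scan_id, poll_interval=30, timeout=600):
    """Poll until the scan completes, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(f"{SCANNER_API}/scans/{scan_id}").json()["status"]
        if status == "completed":
            return True
        time.sleep(poll_interval)
    return False  # the scan overran our "reasonable" time budget

if __name__ == "__main__":
    scan_id = trigger_scan("app-server-01.internal.example")
    print("Scan finished in time:", wait_for_scan(scan_id))
```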

For option two, we could try webhooks from the configuration manager itself. This too would work, but it may require a few "hacks" to get working well, and if you are running multiple deployments at once, this method may be prone to a lot of confusion and, ultimately, failure.

Option three would be to use something like OWASP's Bag-of-Holding to provide an abstraction layer that manages security activities. This way, the pipeline delegates security to the orchestration service. Depending on the design, you could pre-install all tools on a server or virtual machine, or simply use containers (Docker). The Bag-of-Holding can cater for all scanning, polling for completion, and making sense of the results.
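If you go the container route, the orchestration layer can simply spin up a throwaway scanner container per deployment. A rough Python sketch of that delegation; the image name and its arguments are placeholders for whichever scanner images your orchestration service actually manages.

```python
import subprocess

def run_containerised_scan(image, target):
    """Run a scanning tool as a throwaway container against a target.

    Placeholder image and CLI arguments; substitute the scanner images
    your orchestration service manages.
    """
    result = subprocess.run(
        ["docker", "run", "--rm", image, "--target", target])
    return result.returncode == 0  # assumes the scanner signals pass/fail via exit code

if __name__ == "__main__":
    ok = run_containerised_scan("internal/scanner:latest",
                                "app-server-01.internal.example")
    print("Scan passed:", ok)
```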

Now that we have some options, let's look at what we might want to do with our security tools.

Network scan

To start with, we can run a simple network scan against the application server itself, which will have just been built from scratch by the configuration manager. Any change to a recipe/playbook could result in a configuration change that leaves the application server exposed. A basic network scan by a tool like Tenable's Nessus should do the trick. Nessus has a very nice API, and with a little pre-work, a generic scan template can be set up and re-used by all deployments.
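As a sketch of what that pre-work buys you, here is roughly what creating and launching a scan from a template looks like against the Nessus REST API. The server URL, API keys and template UUID are placeholders; check the API documentation for your Nessus version.

```python
import requests

NESSUS = "https://nessus.internal.example:8834"  # placeholder server
HEADERS = {"X-ApiKeys": "accessKey=<ACCESS_KEY>; secretKey=<SECRET_KEY>"}

def launch_network_scan(target_ip, template_uuid, name):
    """Create a scan from a pre-built template and launch it."""
    scan = requests.post(
        f"{NESSUS}/scans",
        headers=HEADERS,
        json={
            "uuid": template_uuid,  # UUID of the re-usable scan template
            "settings": {"name": name, "text_targets": target_ip},
        },
        verify=False,  # internal Nessus boxes commonly use self-signed certs
    ).json()["scan"]
    requests.post(f"{NESSUS}/scans/{scan['id']}/launch",
                  headers=HEADERS, verify=False)
    return scan["id"]
```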

Application scan

Next up, we have an application scan. This one is tricky, because it is usually done manually by clicking through the application. OWASP ZAP or PortSwigger's Burp would be options here; the idea is to probe the running application for any obvious vulnerabilities. Both ZAP and Burp have APIs through which a decent amount of coverage can be achieved. Be warned, though, that ZAP and Burp were designed to be used manually. Their APIs are crude, but they work. Don't expect any vast improvements to these APIs, either.
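For ZAP, there is an official Python client (python-owasp-zap-v2.4) that wraps the API. A minimal sketch, assuming a ZAP daemon is already listening on port 8090; the target URL is a placeholder.

```python
# pip install python-owasp-zap-v2.4
import time

from zapv2 import ZAPv2

target = "https://staging.example.com"  # placeholder target
zap = ZAPv2(apikey="<API_KEY>",
            proxies={"http": "http://127.0.0.1:8090",
                     "https": "http://127.0.0.1:8090"})

# Spider first, so the active scanner has URLs to attack
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(5)

# Active scan; passive scanning happens automatically on proxied traffic
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(10)

for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])
```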

This scan can really run away with time if you have not tuned it correctly. At a minimum, you would want to run a spider, an active scan, and a passive scan. Tuning guides are available for both ZAP and Burp. If you are using ZAP, stopping by the OWASP Slack channel will do you some good (they can be quite helpful). In addition, the ZAP team have put together a "baseline" scan, which should give you reasonable coverage in a short time (ZAP Baseline scan).
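The baseline scan ships with the ZAP Docker images and signals its result through the exit code, which makes it easy to gate a pipeline stage on. A small sketch of driving it from Python; the image tag follows the ZAP team's original announcement, so verify it against their current documentation.

```python
import subprocess

def run_zap_baseline(target):
    """Run the ZAP baseline scan (spider plus passive scan) in a container.

    zap-baseline.py exits 0 on a clean scan and non-zero when it raises
    warnings or failures.
    """
    result = subprocess.run(
        ["docker", "run", "-t", "owasp/zap2docker-weekly",
         "zap-baseline.py", "-t", target])
    return result.returncode == 0
```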

For bonus points, you might like to run your development teams' functional tests (done with automated testing tools such as Selenium) directly through the proxy.
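A sketch of pointing Selenium-driven tests at the proxy via a Firefox profile; the listener address assumes ZAP or Burp is listening on port 8090, and the target URL is a placeholder.

```python
from selenium import webdriver

PROXY_HOST, PROXY_PORT = "127.0.0.1", 8090  # ZAP/Burp listener

# Route all browser traffic through the intercepting proxy
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)  # manual proxy configuration
profile.set_preference("network.proxy.http", PROXY_HOST)
profile.set_preference("network.proxy.http_port", PROXY_PORT)
profile.set_preference("network.proxy.ssl", PROXY_HOST)
profile.set_preference("network.proxy.ssl_port", PROXY_PORT)

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("https://staging.example.com")  # functional tests run as normal
driver.quit()
```

Every page the functional tests touch is then passively scanned for free, which buys application coverage without writing a single extra test.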

Compliance scan

Lastly, we can run some scans to satisfy our governance friends. Having an application server that is consistently, and provably, compliant has a lot of perks. To do this, you would need to decide what your application server needs to comply with (e.g. CIS benchmarks, NIST, COBIT, etc.), and then run a battery of tests to check. Ideally, during the rebuild of the application server, the configuration manager will have configured everything for compliance; the compliance tests are just for verification. Luckily, there are plenty of existing recipes/playbooks to help with this, for example the Hardening Framework, or the CIS Ansible playbooks for CentOS/RHEL.
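By way of illustration, the verification step can be as simple as a script of assertions run on the newly built server. These two checks are simplified CIS-style examples, not a complete benchmark, and would need to run with root privileges.

```python
import os
import stat

def check_shadow_permissions():
    """CIS-style check: /etc/shadow must not be group/world accessible."""
    mode = os.stat("/etc/shadow").st_mode
    return not (mode & (stat.S_IRWXG | stat.S_IRWXO))

def check_root_ssh_disabled():
    """CIS-style check: sshd should refuse direct root logins."""
    with open("/etc/ssh/sshd_config") as f:
        return any(line.split() == ["PermitRootLogin", "no"] for line in f)

checks = {
    "/etc/shadow permissions": check_shadow_permissions,
    "root SSH login disabled": check_root_ssh_disabled,
}
failures = [name for name, check in checks.items() if not check()]
if failures:
    raise SystemExit(f"Compliance verification failed: {failures}")
print("All compliance checks passed")
```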

...and then?

Up to this point, we have launched various scans to cover different areas of the application server. These scans will invariably run to completion, and then, as it stands, nothing further will happen. We have now come across a major problem: we have run our scans, but our continuous deployment tools have carried on without the results. While the deployment can be halted to wait for scans that are not being run directly from the deployment tool, something else needs to happen in order to proceed.

We have three sets of scan results (still in their respective scanner databases), each with a potentially different severity report format. These severities have also been determined by a third party, and may not correspond to your organisation's interpretation of risk. So the results now need to be retrieved and parsed, and a decision made about whether or not to allow the deployment to proceed. For this, something like Etsy's 411 security alerting framework would be useful. The framework would generate alerts, and based on the alert, the deployment tool could be notified to proceed with or abort the deployment (this may sound easy, but I am not currently aware of any CD tool that natively offers a hook with this kind of functionality).
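A sketch of that normalisation and gating step; the severity mappings and blocking threshold here are examples, and should be adjusted to reflect your organisation's own risk ratings.

```python
# Map each scanner's severity labels onto one internal numeric scale
SEVERITY_MAP = {
    "nessus": {"Critical": 4, "High": 3, "Medium": 2, "Low": 1, "Info": 0},
    "zap":    {"High": 3, "Medium": 2, "Low": 1, "Informational": 0},
}
BLOCK_THRESHOLD = 3  # block the deployment on High or worse

def normalise(scanner, findings):
    """Translate raw findings into internal severity scores."""
    return [SEVERITY_MAP[scanner].get(f["severity"], 0) for f in findings]

def deployment_allowed(results_by_scanner):
    """results_by_scanner: e.g. {"nessus": [...], "zap": [...]} of raw findings."""
    for scanner, findings in results_by_scanner.items():
        if any(score >= BLOCK_THRESHOLD
               for score in normalise(scanner, findings)):
            return False
    return True
```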

Whether or not the deployment proceeds, any potential bug should be automatically added to API-enabled issue tracking software such as Atlassian's Jira. The security engineers can then review the issues, and either declare them false positives or re-assign them to the relevant development manager for fixing.
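Filing the finding is straightforward via Jira's REST API, for example with the jira Python package; the project key, credentials and fields below are placeholders.

```python
# pip install jira
from jira import JIRA

jira = JIRA(server="https://jira.internal.example",
            basic_auth=("pipeline-bot", "<PASSWORD>"))

def file_finding(finding):
    """Raise a Jira issue for a single scan finding."""
    return jira.create_issue(
        project="SEC",  # placeholder project key
        summary=f"[{finding['severity']}] {finding['title']} on {finding['host']}",
        description=finding["detail"],
        issuetype={"name": "Bug"},
    )
```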

On that note, any false positives need to be recorded in a central repository. Ideally, you would filter known false positives out before they reach the issue tracker, to prevent the re-work of declaring the same issue a false positive all over again.
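One way to do this is to fingerprint each finding and check it against the central false-positive store before filing anything. A sketch, with illustrative field names and a flat JSON file standing in for the central repository.

```python
import hashlib
import json

def fingerprint(finding):
    """Stable identifier for a finding: same issue, same fingerprint."""
    key = "|".join([finding["scanner"], str(finding["check_id"]),
                    finding["host"], finding["title"]])
    return hashlib.sha256(key.encode()).hexdigest()

def load_known_false_positives(path="false_positives.json"):
    """The store is just a JSON list of fingerprints in this sketch."""
    with open(path) as f:
        return set(json.load(f))

def filter_false_positives(findings, known_fps):
    """Drop findings already declared false positives by the security team."""
    return [f for f in findings if fingerprint(f) not in known_fps]
```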

In closing

As you can see, there are ways to include automated security scans in a continuous delivery pipeline, but any experienced security engineer or developer will know that it won't be trivially easy. The important thing is that you work out what is best for you and your organisation. Deciding that it either can't be done or is too difficult, however, will potentially expose your organisation to attack should you proceed with production deployments that have not been tested.

I hope this has been an interesting read. Feel free to share this post with your friends 🙂

15 Jul 2015

Regarding WordPress security

Over the last few years, WordPress has been the subject of much abuse: first from people of questionable intent, who may or may not disclose any security holes they find, and secondly from bystanders who comment on any disclosed vulnerabilities (of which there have been many). However, there are a few important things to note when considering WordPress security. I will be discussing these points in this blog post.

Responsibility

A question needs to be asked as to where the responsibility lies with respect to securing a WordPress website. In short, both WordPress and the website owner are responsible. WordPress is a framework for creating websites (it is not just a CMS). As such, WordPress is responsible for delivering secure code to its customers. However, it cannot do much more than that. The site owner (who may or may not be the developer) is responsible for decisions made both during and after implementation. Decisions like hosting platform, plugins, SSL configuration and database configuration are all out of WordPress' control.

WordPress have consistently proven that if a vulnerability is their responsibility, it is taken care of. It then falls on the website owner to run updates. Minor updates now happen automatically, but major updates still require administrative action. Updates aside, WordPress can't be held responsible for the installation of compromised plugins (which is where most breaches come from these days: third-party code). WordPress should also not be held responsible for failure to use security certificates (without which, dashboard credentials can be sniffed in plaintext). Hosting is also the website owner's decision. The choice of a shared hosting package, for example, introduces a great deal of risk (a server compromise via someone else's website could expose all websites on the cluster).

WordPress is the most secure and most vulnerable framework, at the same time

That's a bold statement. But did you know that 25% of all websites on the internet are built with WordPress? Furthermore, WordPress' market share varies between 60-65% of all CMS frameworks in use. By numbers alone, that paints a massive target on WordPress implementations. Since WordPress can also be installed by a user with no technical "know-how", a good portion of implementations will be left in their default configurations, making them vulnerable (and easy targets).

Bug fixing in software is an endless process. A study done by WordPress over a period of five years on theme, core and plugin code showed that 80% of all updates were security related. History has shown that the WordPress developers are quick to patch security vulnerabilities. The same cannot be said for other popular CMS systems, which have taken weeks or even months to release a patch that WordPress had implemented within 24 hours of disclosure. The quick turnaround time for security patches makes WordPress a secure platform (provided the site owner runs the update).

Bottom line

No website or web application will ever be 100% secure. The only way to never get hacked is to unplug from the internet. The most a website owner can do is keep their software up to date and periodically review security measures to make sure they are sufficient to protect the required assets. With just a few key security tweaks, your WordPress installation can be made orders of magnitude more secure. WordPress has an army of developers collaborating and working towards a better framework for their customers.

So it's all doom and gloom then... what can you do?

This post will be the first in a series I will be writing on WordPress security and best practice. At a bare minimum, you can install a trusted security plugin (I can vouch for iThemes Security, Wordfence, and Sucuri). These plugins help a great deal; I will compare them in a future post.

Feel free to leave your comments, even if it's a dispute 😉
