Duncan's Security Blog: An enthusiast's musings

27 Nov 2016

Gaps in DevOpsSec Part 2

In my previous article, I briefly went over some of the default security testing options available in existing DevOps deployment tools. In this article, I will cover what can reasonably be included in the DevOps pipeline, and a few caveats faced by security engineers who are trying to add security testing into a fully automated deployment pipeline.

The fact that security scanning can take time is just the tip of the iceberg. It must be remembered that no amount of automated scanning will catch every security vulnerability; ultimately, a professional penetration tester will still need to perform manual tests of their own. The trick with automated testing, therefore, is to do enough testing to account for the "low hanging fruit" while still completing the scan in "reasonable" time. While 4-5 minutes per scan may already sound unreasonable to the automation engineers, 10 minutes or more per scan will certainly be a test of their character.

The goal here is to use the APIs of security scanning tools to provide a degree of confidence in the code being deployed to production, until such time as a penetration tester can subject the application to more thorough scrutiny. It is up to the security engineers to review scan results, identify false positives, and assist the development teams with urgently fixing any identified vulnerabilities.

How?

There are a number of ways we can achieve this, given the features that automation tooling provides. The first approach automation engineers may suggest is to build a recipe/playbook that acts as an orchestration point, triggering scans against whichever server the recipe/playbook is run on. This would work, but some crafty design decisions are needed to make the script generic enough to work everywhere, and the waiting for, and post-processing of, results can become tricky.

For option two, we could try webhooks from the configuration manager itself. This too would work, but it may require a few "hacks" to get it working well, and if you are doing multiple deployments at once, this method may be prone to a lot of confusion and, ultimately, failure.

Option three would be to use something like OWASP's Bag-of-Holding to provide an abstraction layer that manages security activities. This way, the pipeline delegates security to the orchestration service. Depending on the design, you could pre-install all the tools on a server or virtual machine, or simply use containers (Docker). The Bag-of-Holding can then cater for all scanning, polling for completion, and making sense of the results.

Now that we have some options, let's look at what we might want to do with our security tools.

Network scan

To start with, we can run a simple network scan against the application server itself, which has just been built from scratch by the configuration manager. Any change to a recipe/playbook could result in a configuration change that leaves the application server exposed. A basic network scan by a tool like Tenable's Nessus should do the trick. Nessus has a very nice API, and with a little pre-work, a generic scan template can be set up and re-used by all deployments.
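To make this concrete, here is a rough sketch of what that re-use might look like against the Nessus REST API. The server URL, API keys, scan id and the alt_targets launch parameter are placeholders and assumptions based on my reading of the Nessus 6 API documentation, not a drop-in implementation:

```python
# Rough sketch only: re-uses a pre-configured Nessus scan via the REST API and
# points it at the freshly built server. URL, keys and scan id are placeholders.
import time
import requests

NESSUS_URL = "https://nessus.example.local:8834"
HEADERS = {"X-ApiKeys": "accessKey=YOUR_ACCESS_KEY; secretKey=YOUR_SECRET_KEY"}
SCAN_ID = 42  # id of the generic, re-usable scan

def run_network_scan(target_ip):
    # Launch the existing scan against the new application server only
    # (assumes the launch endpoint accepts an "alt_targets" override).
    requests.post("{}/scans/{}/launch".format(NESSUS_URL, SCAN_ID),
                  headers=HEADERS, verify=False,
                  json={"alt_targets": [target_ip]}).raise_for_status()
    # Poll until the scan reaches a terminal state.
    while True:
        info = requests.get("{}/scans/{}".format(NESSUS_URL, SCAN_ID),
                            headers=HEADERS, verify=False).json()
        if info["info"]["status"] in ("completed", "canceled", "aborted"):
            return info
        time.sleep(30)
```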

Application scan

Next up we have an application scan. This one is tricky, because it is usually done manually by clicking through the application. OWASP ZAP or PortSwigger's Burp would be options here. The idea is to probe the running application for any obvious vulnerabilities. Both ZAP and Burp have APIs through which a decent amount of coverage can be achieved. Be warned, though, that ZAP and Burp were designed to be used manually: their APIs are crude, but they work. Don't expect any vast improvements to these APIs, either.

This scan can really run away with time if you have not tuned it correctly. At a minimum, you would want to run a spider, an active scan, and a passive scan. There are tuning guides available for both ZAP and Burp. If you are using ZAP, stopping by the OWASP Slack channel will do you some good (they can be quite helpful). In addition, the ZAP team have put together a "baseline" scan, which should give you reasonable coverage in a short time (ZAP Baseline scan).
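As a rough illustration, a spider followed by an active scan can be driven from the official ZAP Python client (the passive scanner runs automatically as traffic passes through the proxy). This assumes a ZAP daemon already listening locally with the given API key; the target URL is a placeholder:

```python
# Minimal sketch using the official ZAP Python client (python-owasp-zap-v2.4).
# Assumes a ZAP daemon on 127.0.0.1:8080 with the API key "changeme".
import time
from zapv2 import ZAPv2

zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})
target = "https://app.example.local"

# Spider first so ZAP learns the site structure (passive scanning happens as pages load).
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(5)

# Then actively scan everything the spider found.
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(10)

# Pull the alerts back for post-processing.
alerts = zap.core.alerts(baseurl=target)
print(len(alerts), "alerts found")
```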

For bonus points, you might like to run your development teams' functional tests (done with automated testing tools such as Selenium) directly through the proxy.
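A minimal sketch of that idea, assuming ZAP is proxying on its default port; the login page and the test steps themselves are hypothetical:

```python
# Sketch: pointing a Selenium-driven functional test through the ZAP proxy so that
# every request it makes is recorded (and passively scanned) by ZAP.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://127.0.0.1:8080")  # ZAP listens here
options.add_argument("--ignore-certificate-errors")           # ZAP re-signs TLS traffic

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://app.example.local/login")
    # ... the team's existing functional test steps run here ...
finally:
    driver.quit()
```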

Compliance scan

Lastly, we can run some scans to satisfy our governance friends. Having an application server that is consistently, and provably, compliant has a lot of perks. To do this, you need to decide what your application server must comply with (e.g. CIS benchmarks, NIST, COBIT), and then run a battery of tests to check. Ideally, during the rebuild of the application server, the configuration manager will have configured everything for compliance; the compliance tests are just for verification. Luckily, there are plenty of existing recipes/playbooks to help with this, for example the Hardening Framework, or the CIS Ansible playbooks for CentOS/RHEL.
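One way this verification step could be wired in is to simply re-run an existing hardening playbook in check mode from the orchestration layer and block the deployment if it reports problems. This is only a sketch: it assumes the playbook is written to fail on non-compliant hosts (assert/fail tasks), and the inventory, playbook path and host name are placeholders.

```python
# Illustrative only: verify compliance after the rebuild by re-running an existing
# hardening/CIS playbook in check mode and gating on its exit code.
import subprocess
import sys

result = subprocess.run(
    ["ansible-playbook", "-i", "inventory/production",
     "--check", "compliance/cis-rhel7.yml",
     "--limit", "app-server-01"],
    capture_output=True, text=True)

print(result.stdout)
if result.returncode != 0:
    sys.exit("Compliance verification failed; blocking the deployment")
```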

...and then?

Up to this point, we have launched various scans covering different areas of the application server. These scans will run to completion, and then, as it stands, nothing further will happen. We have now come across a major problem: we have run our scans, but our continuous deployment tools have carried on without the results. Unless the scans are run directly from the deployment tool, the deployment has to be halted to wait for them, and something else needs to happen before it can proceed.

We have three sets of scan results (still in their respective scanner databases), each with a potentially different severity report format. These severities have also been determined by a third party, and may not correspond to your organisation's interpretation of risk. So the results need to be retrieved and parsed, and a decision made about whether the deployment may proceed. For this, something like Etsy's 411 security alerting framework would be useful. The framework would generate alerts, and based on the alert, the deployment tool could be notified to proceed or abort the deployment (this may sound easy, but I am not currently aware of any CD tool that offers this kind of hook natively).
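Whatever framework does the alerting, somewhere a gate has to translate each scanner's severities into the organisation's own scale and make the go/no-go call. A minimal sketch of that gate, with example mappings and an example threshold (not a recommendation):

```python
# Sketch of the "gate" step: map each scanner's severity scale onto a common
# internal scale and decide whether the deployment may proceed.
SEVERITY_MAP = {
    "nessus": {"Critical": 4, "High": 3, "Medium": 2, "Low": 1, "Info": 0},
    "zap":    {"High": 3, "Medium": 2, "Low": 1, "Informational": 0},
}
BLOCK_AT = 3  # block the deployment on anything High or worse

def deployment_may_proceed(findings):
    """findings: iterable of (scanner, severity_label) tuples pulled from each tool."""
    worst = max((SEVERITY_MAP[s].get(sev, 0) for s, sev in findings), default=0)
    return worst < BLOCK_AT

# Example: one High from ZAP is enough to abort.
print(deployment_may_proceed([("zap", "High"), ("nessus", "Low")]))  # False
```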

Whether the deployment proceeds or not, any potential bug should be automatically added to API-enabled issue tracking software such as Atlassian's Jira. The security engineers can then review the issues and either declare them false positives, or re-assign them to the relevant development manager for fixing.

On that note, any false positives need to be recorded in a central repository. Ideally, you would filter known false positives out before they reach the issue tracker, to avoid the re-work of declaring the same issue a false positive again.
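A small sketch of how issue creation and false-positive filtering could fit together, using Jira's REST API. The project key, credentials and the false-positive store are placeholders:

```python
# Sketch: raise findings in Jira while filtering known false positives.
import requests

JIRA_URL = "https://jira.example.local"
AUTH = ("svc-security", "app-password")
KNOWN_FALSE_POSITIVES = {("zap", "10021", "/healthcheck")}  # (scanner, rule id, path)

def raise_issue(scanner, rule_id, path, title, detail):
    if (scanner, rule_id, path) in KNOWN_FALSE_POSITIVES:
        return None  # already triaged; don't create duplicate work
    payload = {"fields": {
        "project": {"key": "SEC"},
        "summary": "[{}] {}".format(scanner, title),
        "description": "Affected path: {}\n\n{}".format(path, detail),
        "issuetype": {"name": "Bug"},
    }}
    resp = requests.post(JIRA_URL + "/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. SEC-123
```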

In closing

As you can see, there are ways to include automated security scans in a continuous delivery pipeline, but any experienced security engineer or developer will know that it won't be trivial. The important thing is to work out what is best for you and your organisation. Deciding that it either can't be done, or is too difficult, will potentially expose your organisation to attack should you proceed with production deployments that have not been tested.

I hope this has been an interesting read. Feel free to share this post with your friends 🙂

2 Feb 2014

Website technology enumeration

I have been deliberating since my last post as to what this post should consist of. I knew I wanted to do some domain and technology footprinting, but there is so much extra material that could be included. I have decided to limit the scope and not include the extras (those posts will come later).

Information gathering

There is an abundance of web services that can be used to enumerate all sorts of information about anything on the web. For this post, I have chosen to focus on what can be enumerated from websites. Knowing about the domain and the technologies used by a website can help you determine where it might be vulnerable.

To begin with, we have a website called BuiltWith. BuiltWith will scan the website headers, looking for clues as to which technologies are used. The standard results include the web server type, and the CSS and HTML versions. The more variable results contain information about any CMS used, frameworks, databases, and so on.

To use BuiltWith, you enter a web address in the search field and click on "Lookup".

Builtwith search

Builtwith will scan the website, and return its findings as follows:

Some of the Builtwith results

As you can see, we have already started building a picture of how the website is put together. A little further digging and you will know which versions are in use. We can then search for vulnerabilities associated with those technologies. Try it out on a few of your favourite websites.

Netcraft

A second website that returns similar information is Netcraft. Netcraft tends to miss some of what BuiltWith returns, but it also includes some things that BuiltWith does not, for example information about the network and hosting, as can be seen here:

Netcraft results

Domain registration details

Lastly, we may want some information about domain registration. There are a few websites that can give you this information, most of which are domain specific, e.g. some will only provide information for the .com domain, while others are specific to a country. The website performs a whois lookup on the domain and returns the results. The kind of information you can expect to find includes billing history, the names of the people who registered the domain (sometimes helpful, as it could be someone working at the company), their contact details, and the physical address of the person/company.

A commonly used tool is the whois function on InterNIC. InterNIC can provide whois information for the following domains: .aero, .arpa, .asia, .biz, .cat, .com, .coop, .edu, .info, .int, .jobs, .mobi, .museum, .name, .net, .org, .pro, or .travel.

For co.za, you can use the whois function on co.za.
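Under the hood, these websites are simply speaking the whois protocol, which is nothing more than a plain-text query over TCP port 43. A minimal sketch, using InterNIC's server for the gTLDs listed above (country-code TLDs have their own servers):

```python
# Sketch of a raw whois query: connect to port 43, send the domain, read the reply.
import socket

def whois(domain, server="whois.internic.net", port=43):
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    return response.decode(errors="replace")

print(whois("example.com"))
```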

On the face of it, not much can be done with all of this information. However, in the reconnaissance phase of an attack, any information is good information. The more you have, the more you can use to infer useful plans of attack.

11 Jan 2014

The Internet Archive

The internet has seen many resources come and go since its inception, but where do these resources go when they are long forgotten? I stumbled across the answer (to a certain extent) a while back, and decided to share it. Archive.org is an internet library of web resources, and is a pretty cool website to play around on.

The Internet Archive

At first, I was looking for something that archived web pages. Sure, you get Google's page cache, but this only reveals the last snapshot, which may or may not be of any use. What I ended up finding was the Wayback Machine; more on that later. For now, I would like to go over some of the other cool features of The Internet Archive.

The Wayback Machine

The Internet Archive turned out to be an archive of not only websites, but also video, audio, text, and software. Things I have not seen for years were right there, available for perusal. Founded in 1996, The Internet Archive has been receiving data donations for almost twenty years. Just like other libraries, The Internet Archive also provides facilities for disabled users, maximising its audience.

Video

The video collection is a catalogue of present and past clips, videos, and even full-length feature films (most of which are from the golden days). The content ranges from animation to community videos, as well as educational and music videos. If you recall a video you once watched and would like to find it again, this could be the place to find it.

Computer security video, 1984

Audio

Likewise with audio, the range of available resources is vast: audio books, poetry, music, podcasts and more.

Software

In the software category, the main attraction was the games archive. Not only is there an abundance of the ol' computer games available, but there are also console games. Some of these older games even have a built-in emulator so you can play them in your browser:
Historical games
Games aside, there is also a shareware CD archive, and a whole host of console emulators.

Text

The text section has over 5 million books and articles from over 1500 curated collections. The variety available is immense, and I think every possible category is covered.
Example book

The Wayback Machine

Last, but not least, is the Wayback Machine. You may wonder what relevance this has to security, but it does indeed have its place. During the reconnaissance phase of an attack, adversaries will attempt to learn as many details about their target as possible. One of these methods is called footprinting. No amount of information is ever too much, as each piece may contribute to the inference of new ideas for tackling the problem. In this case, we have a mechanism that has archived website snapshots over a long period of time. Information a company may have had on its website in the past could help build a profile of the company, its technologies, or its employees, even if that information has since been removed because it was recognised as harmful. I won't provide an example of such information, but I have no problem showing the capability of the Wayback Machine with this blast from the past:

Netscape website, on 18 February 1999

The Wayback Machine is very useful for getting some historical context about a website, along with the historical information that comes with it. Any snapshot ever taken by the Wayback Machine is available for viewing by selecting it on the time slider:

Netscape snapshot histogram: 4,821 snapshots from 1996 to 2014
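For scripted footprinting, the Wayback Machine also exposes a simple availability API that returns the snapshot closest to a given date. A small sketch, based on my understanding of the endpoint and its JSON response; treat the field names as assumptions:

```python
# Sketch: query the Wayback Machine availability API for the snapshot closest
# to a given timestamp (YYYYMMDD). Endpoint and fields assumed, not guaranteed.
import json
import urllib.request

def closest_snapshot(url, timestamp="19990218"):
    api = ("https://archive.org/wayback/available?url={}&timestamp={}"
           .format(url, timestamp))
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest", {}).get("url")

print(closest_snapshot("netscape.com"))
```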

 

To conclude, I hope you check out archive.org and trigger a bit of nostalgia. I am sure there is something there you will be interested to find.

22 May 2013

ITWeb Security Summit 2013

Hello everyone,

I was lucky enough to be able to go to day 2 of ITWeb's Security Summit. I have always wanted to check it out, and this year was my lucky year.

I started off day two by browsing the exhibition, checking out various things on show, before heading off to the presentations.

I think the presentation I enjoyed most was the first one I attended, given by Richard Bejtlich of Mandiant in the USA. He presented a very interesting topic, detailing how a typical breach situation goes down. Of particular note were his comments on Mandiant's APT1 report. Do yourself a favour and browse through it (available on Mandiant's website), if you have not already.

 

Overall I really enjoyed the event, and I will most certainly attend the next one.

 

1 Nov 2012

ZaCon IV

ZaCon 2012 was, as always, well worth the attendance. The organisers put together a schedule with presenters from all walks of the hacker domain, ranging from Android vulnerabilities to physical security and hardware hacking.

Of particular interest to me were the presentations on game hacking, physical security, Android penetration testing, and HTML5 exploits.

Video recordings of the presentations may be viewed here.

 

19 Jun 2012

Proofreading

So I have been proofreading for Hakin9 magazine for a while now, and today a great opportunity arose in that area. William Stallings has put together a new edition of his book Cryptography and Network Security, and I have been approached to proofread a chapter of it.

As a result, I will be sent a copy of the book when it goes to print, and possibly have my name printed in the preface 🙂

12 Mar 2012

HTTPS security

 

HTTPS is a secure layering of the HTTP protocol used for communication over a computer network, most notably the internet. It achieves this security by using the SSL/TLS protocol, which is the standard as far as securing web applications goes. In particular, HTTPS is used by banks, social networks, live streaming services, email, instant messaging and more. SSL/TLS extends HTTP by providing a secure tunnel through which a web browser and web server communicate. By encrypting transmissions, SSL/TLS provides confidentiality and prevents unauthorised and undetected modifications, which preserves integrity.

Authenticity is ensured with a digital certificate, which establishes a binding between a public key and an entity (e.g. an address, company name, person's name, or hostname). In the case of HTTPS, the public key is used by SSL/TLS to negotiate a key between the browser and server. The certificate contains the entity with its key, and is digitally signed by a certification authority (CA).

The CA is responsible for checking that the public key and entity really belong together. The signed entity can itself be a CA certificate, so the original entity can be wrapped up in a chain of certificates signed by multiple CAs. In general, a certificate can be considered valid by a web browser if there is a chain of certificates from a CA the web browser trusts to the certificate being checked.

Public Key Infrastructure, in a nutshell
By default, web browsers ship with a list of CAs they consider trustworthy. These CAs have been added to the list by the browser's creators after reviewing them. The CA is audited to see whether the standards it imposes are sufficient (as far as verifying key-entity relationships is concerned). It is also important that the CA's systems are secure and well tested. The audit should ideally be repeated annually, to ensure that the standards are kept and that new security developments are accommodated. Of course, the browser vendor has an interest in keeping its list of trusted CAs up to date, or else it will expose its users to potential security risks. Ultimately, a certificate is accepted by the browser if it can establish at least one chain of trust to a CA it trusts.
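To see this chain-of-trust check outside the browser, Python's ssl module performs the same kind of validation against a bundled list of trusted CAs. A small sketch; the hostname is just an example:

```python
# Sketch: connect over TLS, let the library validate the certificate chain and
# hostname against the trusted CA list, then print a few certificate fields.
import socket
import ssl

def inspect_certificate(hostname, port=443):
    context = ssl.create_default_context()  # loads the system's trusted CA list
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # only available once validation succeeded
    print("Subject:", dict(x[0] for x in cert["subject"]))
    print("Issuer: ", dict(x[0] for x in cert["issuer"]))
    print("Expires:", cert["notAfter"])

inspect_certificate("www.example.com")
```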

Public key infrastructure failures

VeriSign-Microsoft certificate
In 2001, an anonymous attacker managed to persuade VeriSign (a major CA) that he was an employee of Microsoft. As a result, he was granted several certificates (containing his public key) to use. VeriSign's checks in 2001 were clearly not good enough to ensure a secure infrastructure.

MD5 collisions
Cryptographic hashes of the certificate are signed instead of signing the entire certificate (for various technical reasons). In the early days, the most commonly used hash function was MD5. Many weaknesses have since been found in MD5, and its use is therefore discouraged these days (in favour of one of the SHA hash functions).

An MD5 collision occurs when two different certificates have the same hash. In that case, a signature on either certificate is also valid for the other. This was proved in 2008, when security researchers managed to generate their own sub-CA certificate using an MD5 collision: they requested a signature from a CA and then copied the received signature into their own sub-CA certificate. This meant they were able to operate their own CA and issue arbitrary certificates for any hostname or email address they wanted. Soon after this proof of concept, CAs stopped using MD5.

DigiNotar
The Dutch CA DigiNotar was compromised when an attacker gained access to its systems, granted himself a certificate for *.google.com, and subsequently launched a man-in-the-middle attack on Gmail. DigiNotar was removed from all browser vendors' trusted lists, and soon afterwards went bankrupt. The attacker who claimed responsibility remarked that other CAs were susceptible to attack, so it is therefore only a matter of time before a similar attack happens again.

Preventative measures available
Three measures can be implemented by the website administrator without involving a CA or modifying certificates.

HSTS
The default protocol assumed by web browsers is HTTP. If a user types an address into their browser, the browser will send an HTTP request, which is then redirected to HTTPS if the website uses HTTPS. The problem is that this redirect is not protected: a man-in-the-middle attack could suppress the HTTPS redirect and return a bogus page to the user, in the hope that the user enters valuable details which can be recorded.

HSTS was designed to avoid this by specifying that every subsequent request sent to the web server shall be made over HTTPS, and that the web browser should never use plain HTTP on that particular website. Although still in its infancy, HSTS is used by some high-profile websites, such as PayPal.
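Enabling HSTS amounts to sending one response header. A toy sketch of a handler that sets it; the max-age value is illustrative, in practice this is configured in the web server or framework, and the header is only honoured when served over HTTPS:

```python
# Sketch: a minimal handler that sends the Strict-Transport-Security header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell the browser to use HTTPS only, for the next year, on all subdomains.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello over HTTPS\n")

# In reality this would sit behind TLS termination; shown plain for brevity.
HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()
```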

Public key pinning
Public key pinning works by the web server specifying which public keys may be used in the certificate chain for a particular website. When a browser connects to the website again, the public keys offered in the server certificate chain are compared to the list of allowed public keys for that website. If no certificate in the chain matches at least one allowed public key, the connection is terminated.
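The pin itself is typically the base64-encoded SHA-256 hash of a certificate's SubjectPublicKeyInfo. A sketch of how such a pin could be computed with the third-party cryptography package; the certificate path, and the draft Public-Key-Pins header shown in the comment, are illustrative:

```python
# Sketch: compute a pin value as base64(sha256(SubjectPublicKeyInfo)).
import base64
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_pin(pem_path):
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

# The resulting value would be advertised in the (draft) pinning header, e.g.
#   Public-Key-Pins: pin-sha256="<pin>"; max-age=2592000
print(spki_pin("server.pem"))
```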

DANE (DNS-Based Authentication of Named Entities)
If the CA is compromised, public key pinning won't help a first-time visitor, who can still be impersonated. DANE can protect even the first visit to a website. Instead of pinning a public key on the first visit, the pins are published in the domain's DNS records. Instead of a chain of trust from an arbitrary CA to a certificate for your hostname, a chain of trust is established from the operator of the DNS root zone to the DNS records for your hostname. The benefit of such a system is that there are over 1000 CAs which can issue valid certificates for any hostname on the internet, but there is only one authority responsible for a top-level domain. DANE is also not limited to HTTPS, and may be used for any SSL/TLS connection. As with public key pinning and HSTS, DANE is still in draft stages.
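The DNS side of DANE is a TLSA record published under _&lt;port&gt;._&lt;protocol&gt;.&lt;hostname&gt;. A quick illustrative look-up with the third-party dnspython package; the hostname is a placeholder, and most domains will not have such a record, so expect the query to fail for arbitrary hosts:

```python
# Sketch: look up the TLSA (DANE) record for HTTPS on a host.
# In real use the answer should also be validated with DNSSEC.
import dns.resolver  # pip install dnspython

answers = dns.resolver.resolve("_443._tcp.www.example.org", "TLSA")
for rdata in answers:
    print(rdata)
```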

Presently, a combination of HSTS and public key pinning is the best solution for securing an HTTPS web server against compromised CAs or other attacks on CAs. DANE's complexity makes it preferable when a higher level of security is required, but it is presently poorly supported by browsers.

2 Nov 2011

Session riding

I decided to make this post about web application session riding, known more formally as cross-site request forgery (CSRF), following a presentation about JavaScript malware given at one of the ISG Durban meetings. There are many ways in which a web application can be designed insecurely, and many more ways in which to exploit them.

Web application security has a lot of followers and their respective communities. One such community is OWASP, the Open Web Application Security Project, whose website is perhaps the most beneficial resource for anyone looking for information about web security.

Vulnerabilities like XSS or SQL injection are widely used by attackers and security professionals alike, and are the most popular methods of attack. Session riding, although not as well known, is exploited just as frequently.

Why is it such a threat?

Vulnerable web applications do not determine whether a logged-in user's action was actually intended; instead they trust the user's browser and consider its requests legitimate. A valid session is all that is required by the web application to confirm the validity of an action.

So how does it work?

The web uses the HTTP protocol to deliver information to the user. Every HTTP request made to the web server from the same source is treated as unrelated to any previous or future requests; neglecting keep-alive connections, this is the way a browser interacts with a web server. In some circumstances the web server needs to remember previous requests, for example when shopping online. This activity needs to be kept track of, and kept separate from the activities of other users. As a result, cookies were introduced by developers to do just that.

Cookies are bits of information sent by the application on the server to the user via HTTP response headers. Browsers are designed to remember these cookies and send them back with every subsequent request to the same application.

Sessions of different users are maintained by random and unique cookie strings, known as session tokens (or session IDs). The randomness and unpredictability of the session token is what lets the server-side application know who it is communicating with.

The trouble comes in when a web application evaluates the origin and validity of actions solely on the basis of a valid session token. A clever attacker only needs to craft a URL that performs an action on the target web application, present it to the user in a recognisable form, and hope that the user is logged into the legitimate web application at the time. This URL could be delivered to the user in a number of ways, including via email. The victim's browser will then automatically attach the valid session cookie to the forged request, and the action is carried out with the victim's privileges.

Such an attack could even disable a network. Take, for instance, a home ADSL router: these devices use the HTTP protocol for access to the device control panel. Riding an authenticated admin session would give the attacker access to the network infrastructure, and could lead to network failure if the attacker changes settings (or perhaps even changes all the passwords).

As more and more devices use such interfaces, more and more session riding becomes possible. The sky is the limit, as they say. What the attacker really needs is a clever way to deliver the prepared HTML or JavaScript to their target's browser (email is but one method). Even standard HTML tags can be used: for example, the src attribute of an <img> tag can point at the forged URL so that the request fires as soon as the page renders, the URL could be opened in an <iframe> tag, and even <script> tags may be used. Links such as these can easily be placed in web applications such as Facebook, where every visitor could potentially trigger them.

But it doesn't stop there

To make things worse, session riding attacks are always performed from the victim's IP address. Thus, according to the web server logs, everything seems to have been done by the victim.

Enough with the doom and gloom

There are, however, some limitations with session riding. If the web application checks the Referer header, the attack will usually fail, because the origin of the request is normally not the same as the target. However, very few web applications perform this check, and it can still be unreliable if a proxy, web filtering software or some other "man in the middle" strips or rewrites the header.

So what measures can be taken to prevent such an attack?

The most common advice to web application developers is to add a random challenge string to each request. This random string is tied to the user's session and is different for every new login, which prevents the attacker from forging a request containing a valid token. It is also good practice to put a lease on the lifetime of a session, so that it expires after a defined period of time.
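A minimal sketch of that challenge-string idea: a per-session token generated at login, embedded in every form, and checked on every state-changing request. Framework integration is omitted and the session object here is just a dict:

```python
# Sketch of per-session CSRF tokens: issue once per login, verify on each request.
import hmac
import secrets

def issue_csrf_token(session):
    # Generated once per login and stored server-side with the session.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def request_is_legitimate(session, submitted_token):
    expected = session.get("csrf_token", "")
    # Constant-time comparison; a missing or wrong token means the request
    # did not originate from a page we served to this session.
    return bool(submitted_token) and hmac.compare_digest(expected, submitted_token)
```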

These measures, however, require the developers to do additional work on the application. If the application is already complete, the redesign costs are not favourable (and would use up too many human resources), and so the applications remain vulnerable.

On the client side, users should be very careful about which links they open, especially if the link arrived in an email. Browser plugins such as NoScript for Firefox will also go a long way toward detecting such attacks, especially where JavaScript malware is concerned.

18 Oct 2011

First post, ZaCon III report back

I have finally decided to get my act together and begin posting on my blog. It is still in need of a great deal of design work, but I will make this post now and make good on the design promises soon.

So to kick off the blog, I will report back on the ZaCon III InfoSec conference (www.zacon.org.za), which I attended about a week ago. Almost straight after I began attending ISG Durban (Information Security Group), I heard about ZaCon, and as fate would have it, I would be in Johannesburg on the scheduled weekend.

With great excitement, off I went to attend presentations on both days. The first evening started a little slowly, but after some drinks and socialising with the attendees, the first presentation began. Footprinting has always been a source of fun for me, and over the years I have developed a few methods of manually gathering information about certain targets. So it came as a great surprise that the first presentation was about Maltego, footprinting software I was unfamiliar with. I was enthralled by the presentation, which demonstrated exactly those skills I have developed. The second (and last) presentation of the evening was about pickle exploitation in the Python programming language. Not being familiar with Python, the presentation was mostly lost on me.

The next day was the main attraction, and more "conners" were in attendance. After some coffee, the conners filtered into the lecture theatre for the first presentation of the day: Real world SoC. Being a student, it was good to hear how the industry handles security-related issues. Next up was a presentation on hash cracking, a subject I know a great deal about. While the theory was mostly already known to me, I was interested in the statistics and specific applications the presenter included. The presenter himself was a pleasure to listen to, given his accomplishments in the field. Following this was a presentation on NNTP cache enumeration and poisoning. At first I just glossed over the name of the presentation, but once it began, I realised it was something very close to home. Needless to say, the presenter had my undivided attention as soon as I realised the implications for my use of Usenet.

After a short tea break, presentations resumed with one by a PhD student from Rhodes on functional programming. I have heard a bit about functional programming through my years of studying computer science, but have never actively practised it, due in large part, I suppose, to its unproven status as a programming paradigm. The presenter proceeded to demonstrate just how effective functional programming is, from a security point of view as well as a programming point of view in general. Following this were presentations on OS X sandboxing and systems application proxies.

After lunch, the first presentation was interesting to all those in attendance, entitled "Can I go to jail if...". The presenters selected computing practices (particularly to do with hacking) and reasoned about the laws surrounding them. The next presentation was given by a masters student (I think it was masters, anyway) from UJ, who presented his research into rootkits. His methods were interesting, if somewhat disturbing, especially since my netbook had begun showing the signs he described as side effects of being infected with his rootkit (mainly BSODs). Later, however, I discovered that my netbook troubles had nothing to do with a rootkit, but rather a hard drive that was about to fail.

Next up was a relaxing presentation on a different side of hacking: lockpicking. This is another topic I have dabbled in over the years, so the theory was known to me, but that didn't stop me from remaining captivated (it was nice to know I am not the only one interested in such things). Following another tea break were the last three presentations, and the keynote via Skype.

"The protocol trench" was the next presentation, which was mainly about access restrictions to clients on the network (for their own safety of course), after which was a presentation (or more a plea) for security enthusiasts such as the audience, to be "builders", and not "breakers" of security measures. The last of the in house presentations was about enterprise security.

Lastly came the keynote speech, delivered via Skype by Richard Thieme. His speech was very interesting, and posed many questions about the current state of affairs, but more importantly about what may arise in the future. I was particularly pleased that he mentioned threats I have attended lectures on, given by researchers from MIT who are actively involved in the work, namely the use of cellular devices as a means of tracking and recording data (and the associated AI systems).

So here ends my first post. I hope that I shall not be as lazy with the posts that are to follow.
