Duncan's Security Blog: An enthusiast's musings

13 Sep 2017

On security architecture and certification, part 1

Professional certification is a necessity in the IT industry. Tertiary education barely scratches the surface of the skills required across the broad range of career paths open to an IT graduate. Some career paths have many certification options available, and security is no exception; the problem is that security certifications are misunderstood and often very expensive.

In this post, I will share my thoughts on certification in the security industry and how certifications may be useful to a security architect. *Note: I had planned a single post on this topic, but it turns out I have a lot to say, so I have split it into a series of posts (this being the first).

Security architecture is a cross-cutting field requiring skills in a wide range of areas. In particular, technical knowledge of enterprise-wide disciplines is essential. In this article, I shall be focusing specifically on the needs of an application security architect.

What certifiable skills does a security architect need?

It is not enough to have a strong affinity towards security. The security architect needs to provide input into the other specialized architectures from a security perspective. Therefore a security architect needs good knowledge of network, data, infrastructure, application and enterprise architecture, and of how to apply security to each. This doesn't mean the security architect should pursue certification in all of these areas, but reading about the topics certainly helps where hands-on experience has not yet materialized. Network security certifications in particular are usually vendor-driven, which won't help an architect who ideally needs to remain technology agnostic in this area.

Technology-specific certifications have their place depending on the architect's role, but generalized certifications will help the architect adapt to broader business needs. Security-specific certifications are, for the most part, technology agnostic and cover a broad range of topics, which is exactly where deciding which certification to pursue becomes tricky. An application security architect needs strong technical skills, knowledge of architectural frameworks, and knowledge of compliance frameworks. In addition, they need to understand risk, and most importantly how to communicate that risk in a language the audience understands.

Traditionally, architects from all fields have strayed a bit too far from where they got their hands dirty coding in the early days of their careers. This tends to result in the typical "ivory tower" kind of architecture, which is undesirable. Personally, I feel an architect who is willing to throw together some code to elucidate an idea or proof of concept is extremely valuable. Their code doesn't have to be amazing, it just has to get the idea across. The engineers can then work with an actual technical artifact as opposed to written documentation. I would stop short of certifying in this area though, as many architects tend to have some level of development background (usually a computer science degree). Provided that the coding skills are exercised once in a while, a computer science degree would typically suffice. An architect's code will rarely (if ever) make it to production, so the architect need not be a coding rockstar (it does help, though).

Security architecture skills Venn diagram

*By no means a complete skillset

There are, of course, many soft skills required of a security architect, but these can't be gained through certification. Like a consultant, an architect will never be an expert in every field; what matters is the ability to use the knowledge they do have to understand the environment in which they are working.

This is the first post of a series, which I will release as I complete them. The next posts will be:
Part 2, The state of play: Security certification
Part 3, CISSP: So special it needs its own post
Part 4, What other certifications are out there, and their value

 

6 Sep 2017

Update, 2017

Check out my new resources page, where I plan to maintain a list of useful resources for professional or aspiring security architects. It is a work in progress, so don't shoot it down just yet; I will be updating it with new material on a regular basis.

Other than that, my upcoming posts will be the following:
My views on security certification
Protection of personal information acts: a comparison of PoPI (RSA) and PIPEDA (Canada)

Take care!

27 Nov 2016

Gaps in DevOpsSec Part 2

In my previous article, I briefly went over some of the default security testing options available in existing DevOps deployment tools. In this article, I will cover what can reasonably be included in the DevOps pipeline, and a few caveats faced by security engineers who are trying to add security testing into a fully automated deployment pipeline.

The fact that security scanning can take time is just the tip of the iceberg. What must be remembered is that no amount of automated scanning will catch every security vulnerability; ultimately, a professional penetration tester will still need to perform manual tests of their own. The trick with automated testing, therefore, is to do enough to catch the "low hanging fruit" while still completing the scan in reasonable time. While 4-5 minutes per scan may already sound unreasonable to the automation engineers, 10 minutes or more per scan will certainly be a test of their character.

The goal here is to use the APIs of security scanning tools to provide a degree of confidence in the code being deployed to production, until such time as a penetration tester can subject the application to more thorough scrutiny. It is up to the security engineers to review scan results, identify false positives, and assist the development teams with urgently fixing any identified vulnerabilities.

How?

There are a number of ways to achieve this, given the features that automation tooling provides. The first approach automation engineers may suggest is to build a recipe/playbook that acts as an orchestration point, triggering scans from whichever server it is run on. This would work, but some careful design decisions are needed to make the script generic enough to work everywhere, and the waiting for, and post-processing of, results can become tricky.

For option two, we could use webhooks from the configuration manager itself. This too would work, but may require a few "hacks" to get it working well, and if you are running multiple deployments at once, this method can become confusing and ultimately fail.

Option three would be to use something like a Bag-of-Holding (OWASP) to provide an abstraction layer which manages security activities. This way, the pipeline delegates security to the orchestration service. Depending on the design of this option, you could pre-install all tools on a server or virtual machine, or simply use containers (Docker). The Bag-of-Holding can cater for all scanning, polling for completion, and making sense of the results.

Now that we have some options, let's look at what we might want to do with our security tools.

Network scan

To start with, we can run a simple network scan against the application server itself, which will have just been built from scratch by the configuration manager. Any change to a recipe/playbook could result in a configuration change that leaves the application server exposed. A basic network scan with a tool like Tenable's Nessus should do the trick. Nessus has a very nice API, and with a little pre-work, a generic scan template can be set up and re-used by all deployments.
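As a rough illustration, a minimal sketch of driving Nessus over its REST API from the pipeline might look like the following. The host, scan ID, API keys and target are placeholders, and the endpoints are as I recall them from the Nessus 6.x API, so check your scanner's documentation before relying on this.

```python
import time
import requests

NESSUS_URL = "https://nessus.example.local:8834"   # hypothetical scanner host
HEADERS = {
    # API keys are generated per user in the Nessus web UI
    "X-ApiKeys": "accessKey=<ACCESS_KEY>; secretKey=<SECRET_KEY>",
}
SCAN_ID = 42            # the pre-built, generic scan to re-use for every deployment
TARGET = "10.0.0.15"    # the application server that was just rebuilt

def launch_and_wait():
    # Launch the existing scan against the freshly built server
    resp = requests.post(
        f"{NESSUS_URL}/scans/{SCAN_ID}/launch",
        headers=HEADERS,
        json={"alt_targets": [TARGET]},
        verify=False,   # internal scanners often run with self-signed certificates
    )
    resp.raise_for_status()

    # Poll until the scan finishes, then hand back the final status
    while True:
        info = requests.get(
            f"{NESSUS_URL}/scans/{SCAN_ID}", headers=HEADERS, verify=False
        ).json()["info"]
        if info["status"] in ("completed", "canceled", "aborted"):
            return info["status"]
        time.sleep(30)

if __name__ == "__main__":
    print("Network scan finished with status:", launch_and_wait())
```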

Application scan

Next up is an application scan. This one is tricky, because it is usually done manually by clicking through the application. OWASP ZAP or PortSwigger's Burp would be the options here; the idea is to probe the running application for any obvious vulnerabilities. Both ZAP and Burp have APIs through which a decent amount of coverage can be achieved. Be warned, though, that both tools were designed to be used manually. Their APIs are crude, but they work, and don't expect any vast improvements to them either.

This scan can really run away with time if you have not tuned it correctly. At a minimum, you would want to run a spider, an active scan, and a passive scan. There are tuning guides available for both ZAP and Burp. If you are using ZAP, stopping by the OWASP Slack channel will do you some good (they can be quite helpful). In addition, the ZAP team have put together a "baseline" scan, which should give you reasonable coverage in a short time (ZAP Baseline scan).
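For ZAP, the official Python client (python-owasp-zap-v2.4) wraps the spider and active scan calls. A minimal sketch, assuming a ZAP daemon is already listening on 127.0.0.1:8080 and the target URL is a placeholder:

```python
import time
from zapv2 import ZAPv2   # pip install python-owasp-zap-v2.4

TARGET = "https://app.example.local"   # hypothetical application under test
zap = ZAPv2(
    apikey="<ZAP_API_KEY>",
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

# Seed the site tree, then spider the application
zap.urlopen(TARGET)
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(5)

# Active scan: actively probes the discovered URLs for vulnerabilities
ascan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(10)

# Passive scan findings accumulate automatically while traffic flows through ZAP
alerts = zap.core.alerts(baseurl=TARGET)
print(f"{len(alerts)} alerts raised against {TARGET}")
```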

For bonus points, you might like to run your development teams' functional tests (done with automated testing tools such as Selenium) directly through the proxy.
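A hedged sketch of what that might look like with Selenium and Chrome, assuming ZAP is the proxy on 127.0.0.1:8080 and the login URL is a placeholder:

```python
from selenium import webdriver

ZAP_PROXY = "127.0.0.1:8080"   # wherever the ZAP daemon is listening

options = webdriver.ChromeOptions()
# Route all browser traffic through ZAP so the functional tests populate
# ZAP's site tree and passive scanner as a side effect of running.
options.add_argument(f"--proxy-server=http://{ZAP_PROXY}")
options.add_argument("--ignore-certificate-errors")   # ZAP re-signs TLS traffic

driver = webdriver.Chrome(options=options)
driver.get("https://app.example.local/login")   # hypothetical first test step
# ... the existing Selenium functional test suite runs here ...
driver.quit()
```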

Compliance scan

Lastly, we can run some scans to satisfy our governance friends. Having an application server consistently, and provably, compliant has a lot of perks. To do this, you would need to decide what your application server must comply with (e.g. CIS benchmarks, NIST, COBIT, etc.), and then run a battery of tests to check. Ideally, during the rebuild of the application server, the configuration manager would have configured everything for compliance; the compliance tests are just for verification. Luckily, there are plenty of existing recipes/playbooks to help with this, for example the Hardening Framework, or the CIS Ansible playbooks for CentOS/RHEL.
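The real work should be done by those frameworks, but as a toy illustration of the verification step, a couple of CIS-style checks scripted in Python might look like this (the two checks shown are illustrative, not a benchmark):

```python
import os
import re
import stat
import sys

def sshd_root_login_disabled(path="/etc/ssh/sshd_config"):
    # CIS-style check: remote root login over SSH should be disabled
    with open(path) as fh:
        return bool(re.search(r"^\s*PermitRootLogin\s+no\b", fh.read(), re.MULTILINE))

def shadow_permissions_strict(path="/etc/shadow"):
    # /etc/shadow should not be readable by group or others
    return stat.S_IMODE(os.stat(path).st_mode) & 0o077 == 0

CHECKS = {
    "sshd: PermitRootLogin no": sshd_root_login_disabled,
    "/etc/shadow permissions": shadow_permissions_strict,
}

failed = [name for name, check in CHECKS.items() if not check()]
for name in failed:
    print("FAIL:", name)

# A non-zero exit code lets the deployment pipeline abort on non-compliance
sys.exit(1 if failed else 0)
```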

...and then?

Up to this point, we have launched various scans covering different areas of the application server. These scans will run to completion, and then, as it stands, nothing further will happen. We have now hit a major problem: we have run our scans, but our continuous deployment tools have carried on without the results. While the deployment can be halted to wait for scans that are not run directly from the deployment tool, something else needs to happen before it can proceed.

We have three sets of scan results (still in their respective scanner databases), each with potentially different severity report formats. These severities have also been determined by a third party, and may not correspond to your organisation's interpretation of risk. So the results need to be retrieved and parsed, and a decision made about whether the deployment may proceed. For this, something like Etsy's 411 security alerting framework would be useful: the framework would generate alerts, and based on the alert, the deployment tool could be notified to proceed or abort the deployment (this may sound easy, but I am not currently aware of any CD tool that has this kind of hook available natively).
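Whatever alerting framework sits in the middle, the gate itself boils down to normalising the findings and applying your organisation's own thresholds. A minimal sketch, assuming the merged scan output has already been written to a findings.json file (a format invented here for illustration):

```python
import json
import sys

# Organisation-specific gate: any critical finding, or more than three high
# findings, blocks the deployment. Everything else proceeds but is logged.
MAX_HIGH = 3

def should_block(findings):
    counts = {"critical": 0, "high": 0}
    for finding in findings:
        severity = finding.get("severity", "").lower()
        if severity in counts:
            counts[severity] += 1
    return counts["critical"] > 0 or counts["high"] > MAX_HIGH

if __name__ == "__main__":
    # findings.json: the merged, normalised output of the network, application
    # and compliance scans (one dict per finding) - an invented format.
    with open("findings.json") as fh:
        findings = json.load(fh)

    if should_block(findings):
        print("Security gate FAILED - aborting deployment")
        sys.exit(1)
    print("Security gate passed - deployment may proceed")
```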

Whether the deployment proceeds or not, any potential bug should be automatically added to API-enabled issue tracking software such as Atlassian's Jira. The security engineers can then review the issues and either declare them false positives, or re-assign them to the relevant development manager for fixing.

On that note, any false positives need to be recorded in a central repository. Ideally, you would filter known false positives out before they reach the issue tracker, to avoid the rework of declaring the same issue a false positive again.
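A hedged sketch of that triage step against Jira's REST API, assuming the same hypothetical findings.json plus a central false_positives.json of previously triaged fingerprints (the instance URL, project key and finding fields are placeholders):

```python
import hashlib
import json
import requests

JIRA_URL = "https://jira.example.local"   # hypothetical Jira instance
AUTH = ("svc-security", "<API_TOKEN>")
PROJECT_KEY = "APPSEC"

def fingerprint(finding):
    # A stable hash of the finding, used to recognise known false positives
    key = f"{finding['scanner']}|{finding['rule']}|{finding['location']}"
    return hashlib.sha256(key.encode()).hexdigest()

def raise_issues(findings, known_false_positives):
    for finding in findings:
        if fingerprint(finding) in known_false_positives:
            continue   # already triaged as a false positive - skip the rework
        payload = {
            "fields": {
                "project": {"key": PROJECT_KEY},
                "issuetype": {"name": "Bug"},
                "summary": f"[{finding['scanner']}] {finding['rule']} at {finding['location']}",
                "description": finding.get("detail", ""),
            }
        }
        requests.post(
            f"{JIRA_URL}/rest/api/2/issue", auth=AUTH, json=payload
        ).raise_for_status()

if __name__ == "__main__":
    with open("findings.json") as fh:
        findings = json.load(fh)
    with open("false_positives.json") as fh:   # the central false-positive repository
        false_positives = set(json.load(fh))
    raise_issues(findings, false_positives)
```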

In closing

As you can see, there are ways to include automated security scans in a continuous delivery pipeline, but any experienced security engineer or developer will know that it won't be trivial. The important thing is that you work out what is best for you and your organisation. However, deciding that it either can't be done or is too difficult will potentially expose your organisation to attack, should you proceed with production deployments that have not been tested.

I hope this has been an interesting read. Feel free to share this post with your friends 🙂

18 Sep 2016

Gaps in DevOpsSec Part 1

[Part 2 of this article can be found here.]

I recently did some work with a bunch of great automation engineers. My task was to help them add some automation around security testing. It was an awesome experience, but it left me feeling a bit worried about the continuous deployment world, as far as security is concerned.

I know what you are thinking: "But all the fancy DevOps tools provide 'security' out of the box". This is partly true: all of the DevOps pipeline tooling I have scrutinized (all of the widely adopted ones) provides static code analysis steps in the very early stages of the deployment pipeline. Static code analysis is a fantastic start, but for the most part, that is where the line is drawn.

Continuous deployment or continuous delivery?

Continuous deployment and continuous delivery are often confused. Continuous deployment is when a development team deploys every change straight through to production (the change still goes through a battery of automated tests, if it's done right). Continuous delivery differs in that the change is ready for deployment just as quickly, but the developers may choose not to deploy immediately (the change would usually not be left "hanging" for too long, though).

It is up to the automation engineers to build a deployment pipeline in which everything, from code to compiling, testing and deployment, is done at the click of a button and is in production within minutes. Engineers from some of the bigger software companies have been known to claim up to 50 deployments every day. Smaller adopters of DevOps admire the big guys, so they tend to start chasing metrics: "How many deployments can WE do per day?"

Taking this into account, would you be satisfied that only static code analysis has been done?

Static code analysis

Diving into the static code analysis realm should quickly raise a lot of questions from an experienced programmer. SCA tools do work, and should certainly be used. However, among the results there is usually a plethora of false positives. The truth is, SCA is extremely hard to do well. Even the best tools have their problems, and will typically only handle one, maybe two, languages effectively. With the assistance of a security engineer, the developers will learn to spot what is and isn't a false positive, but this takes time.

Time is in short supply in DevOps. Granted, the same false positives for unchanged code will keep popping up (these should be catalogued and marked as "OK" for future scans), but any findings on new code still need to be verified. A decision therefore needs to be made: do we deploy as-is to production and flag the code for security to verify, or do we halt this run until verification has taken place? ...But... metrics...

What else should we do?

Up to now, we have scanned the code for vulnerabilities. We trust that our SCA tool is good, and that our teams didn't make a mistake when picking out false positives. We have not, however, tested the finished application for vulnerabilities, nor have we tested the server on which it is hosted. How about compliance? Have we tested that the application and infrastructure we are about to deploy comply with our security policy?

While SCA is generally well understood and handled by deployment tools, these other testing areas are completely alien. While it is certainly possible to launch these additional tests, they bring with them some major caveats for the deployment pipeline. One of these caveats is, of course, time.

As for the rest, I will cover those in Part 2.

29 Feb 2016

WordPress plugin security

The WordPress framework makes it very convenient for website owners (both novice and experienced) to extend the core functionality by adding plugins. The trouble is, installing plugins can increase the website's attack surface. In this post, I will discuss why, and how to limit the exposure.

WordPress does a good job of securing its framework (the core code), most often releasing security fixes within 24 hours of discovery. Securing the web server and following good development practice, though, are ultimately left to the website owner/maintainer. While WordPress does produce (and maintain) some plugins of its own, it is not responsible for the code of 3rd party plugins.

3rd party plugins
3rd party plugins can be made by anyone: an individual or a company. While a company will typically continue to support its plugin (whether paid or free), an individual may not. It is worth checking when a plugin was last updated before installing it. If possible, it is even more useful to see whether the developer intends to maintain the code (with bug fixes or additional functionality). Security fixes are of particular concern here: if the code will not be maintained, using the plugin will leave you indefinitely vulnerable.

Furthermore, be aware that you are essentially installing code from strangers (including companies). It is worth trying to establish some kind of assurance that the developers are credible. Any assurance will of course be discretionary, but a little digging might be unsettling enough to make you look at an alternative option.

Guidelines for using plugins on WordPress
With that in mind, here are some guidelines to follow when considering installing a new plugin.

Install the minimum number of 3rd party plugins needed. Ask yourself, "Do I really need what this plugin will give me?". The more plugins installed, the greater the chance that one of them could be used to compromise your website. I personally don't believe that most 3rd party code is intentionally vulnerable, but I certainly believe that some of it could be. There are plenty of developers with good intentions, contributing their useful plugins to the community. However, good intentions do not mean the developer has taken security into consideration (or knows how to).

Check the plugin for known vulnerabilities. Thankfully, there are online databases which keep a record of known WordPress vulnerabilities. My personal favourite is wpvulndb, which has a nice interface for checking not only plugins, but also themes and WordPress itself. Any plugin you are thinking of installing should be checked against this database. In addition, all your existing plugins should be checked, and rechecked, on at least a monthly basis. There are some plugins that offer to check automatically against wpvulndb for you (via its API), but I have not used them (e.g. Vulnerabilities Check, and Plugin Security Scanner); in theory, they would be helpful. A small scripted check against the API is sketched after these guidelines.

If it's not activated, delete it. This goes for themes, too. If the plugin is not in use, then remove it from your webserver. Plugins exist as files on your webserver. If not deleted, they will remain there and could be forgotten. If a vulnerability is discovered in this unused plugin, an attacker could potentially access it directly from its location on your webserver.

Trust, but verify. As already mentioned, you should attempt to verify the credibility of the plugin developer. By this, I don't mean harassing the developer, hoping for a confession. I mean look at their previous work (does it look dodgy?), and look at their activity (Recent updates? Users complaining of no support?). A personal or company blog is often a good place to start, if one exists. One thing to note, however, is that just because a plugin has not been updated in a while doesn't mean it is a problem. It may not have had any updates because none were needed (the plugin ratings should help with this). You will never be able to fully verify a developer; as long as you are content with your analysis, that's as close as you will get.

Keep your plugins updated. As with all security guidance, I will suggest keeping your plugins updated, which ensures that any available fixes are installed. Since around 80% of WordPress updates are security related, it is in your best interest to update whenever possible. Remember that a security vulnerability is not necessarily something that gives an attacker access to your data; it could also be something that affects the performance and/or availability of your website.
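As promised above, here is a rough sketch of scripting the plugin check against the wpvulndb API. The v3 endpoint and token header are as I recall them, and the plugin slugs are examples, so treat this as an illustration rather than a reference:

```python
import requests

API_TOKEN = "<WPVULNDB_API_TOKEN>"            # free token from the wpvulndb site
PLUGINS = ["contact-form-7", "wordfence"]     # slugs of the plugins you run

for slug in PLUGINS:
    resp = requests.get(
        f"https://wpvulndb.com/api/v3/plugins/{slug}",
        headers={"Authorization": f"Token token={API_TOKEN}"},
        timeout=10,
    )
    if resp.status_code != 200:
        print(f"{slug}: no data returned (HTTP {resp.status_code})")
        continue
    vulns = resp.json().get(slug, {}).get("vulnerabilities", [])
    print(f"{slug}: {len(vulns)} known vulnerabilities")
    for v in vulns:
        print("  -", v.get("title"), "| fixed in:", v.get("fixed_in"))
```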

 

The above guidelines will not guarantee the safety of your website, but they will go a long way toward helping you understand and limit the risks associated with using WordPress plugins. Feel free to provide your comments below.

 

Please note: This article is written in the context of a WordPress implementation (specifically for private users or small companies), and is not meant as general guidance for using 3rd party code. In the case of a larger company (and for other use cases outside of WordPress), vetting the code and provider would involve a more comprehensive analysis, most commonly site visits, code reviews, penetration tests, etc.

 

 

15 Jul 2015

Regarding WordPress security

Over the last few years, WordPress has been the subject of much abuse: first from people of questionable intent, who may or may not disclose any security holes they find, and secondly from bystanders who comment on any disclosed vulnerabilities (of which there have been many). However, there are a few important things to keep in mind when considering WordPress security, and I will discuss them in this post.

Responsibility

A question needs to be asked as to where the responsibility lies with respect to securing a WordPress website. In short, both WordPress and the website owner are responsible. WordPress is a framework for creating websites (it is not just a CMS), and as such, WordPress is responsible for delivering secure code to its users. However, it cannot do much more than that. The site owner (who may or may not be the developer) is responsible for decisions made both during and after implementation. Decisions like hosting platform, plugins, SSL configuration and database configuration are all out of WordPress' control.

WordPress have consistently proven that if a vulnerability is their responsibility, it is taken care of. It then falls on the website owner to run updates. Minor updates now happen automatically, but major updates still require administrative action. Updates aside, WordPress can't be held responsible for the installation of compromised plugins (which is where most breaches are coming from these days: 3rd party code). WordPress should also not be held responsible for failure to use security certificates (without which, dashboard credentials can be sniffed in plaintext). Hosting is also a website owner's decision. The choice of a shared hosting package, for example, introduces a great deal of risk (a server compromise via someone else's website could expose all websites on the cluster).

WordPress is the most secure and most vulnerable framework, at the same time

That's a bold statement. But did you know that around 25% of all websites on the internet are built with WordPress? Furthermore, WordPress' share of all CMS frameworks in use sits at roughly 60-65%. By numbers alone, that paints a massive target on WordPress implementations. Since WordPress can also be installed by a user with no technical "know-how", a good portion of implementations will be left in their default configurations, making them vulnerable (and easy targets).

Bug fixing in software is an endless process. A study done by WordPress over a period of five years on theme, core and plugin code showed that 80% of all updates were security related. History has shown that the WordPress developers are quick to patch security vulnerabilities. The same cannot be said for other popular CMS systems, which can take weeks or even months to release a patch that WordPress had implemented within 24 hours of disclosure. The quick turnaround time for security patches makes WordPress a secure platform (provided the site owner runs the update).

Bottom line

No website or web application will ever be 100% secure. The only way never to get hacked is to unplug from the internet. The most a website owner can do is keep their software up to date and periodically review security measures to make sure they are sufficient to protect the assets in question. But with just a few key security tweaks, your WordPress installation can be made orders of magnitude more secure. WordPress has an army of developers collaborating and working towards a better framework for its users.

So it's all doom and gloom then... what can you do?

This post is the first of a series I will be writing on WordPress security and best practice. At a bare minimum, you can install a trusted security plugin (I can vouch for iThemes Security, Wordfence, and Sucuri). These plugins help a great deal; I will compare them in a future post.

Feel free to leave your comments, even if it's a dispute 😉

30 Apr 2015

Ultima Online nostalgia

After updating my about page, a few readers have asked me what I got up to, and in which game. I spent many years playing on the local emulated Ultima Online "shards" (most of them). It has now been many years since I last roamed Britannia, but there are still many who do so. Thinking back to my escapades brings strong feelings of nostalgia, so I decided to write a quick post about some of my favorite anecdotes.

#1 Pot plant black market

On Chyrellos, I discovered that certain static objects (specifically pot plants, of all types) were not locked down. This meant I was able to pick them up and put them into my backpack/bank. This was before crafters were able to make pot plants, and I got the idea of selling these pot plants to homeowners. The business was a booming success; no one knew where I was getting them from. The sale of these plants had to be done quietly though, because the staff members would have put a stop to it if they knew. I also discovered that the plants would respawn every day, so I had a map and schedule of every single city and its unlocked plants, ready for me to collect on schedule.

#2 Intelligence gathering

In times of guild war, each side would always look for an unsuspecting enemy to pounce on. The best piece of information at a guild leader's disposal was the location of all the houses of the enemy guild's members (or better, an unattended macro spot). Knowing this, I would run around the entire map, every morning at 2am. While doing so, I would record the location and owner of every house I found. Needless to say, I saw some very interesting things while doing the rounds. Trading my information was also rather lucrative.

#3 Macro scripts

Having a bit of programming experience, it was always very rewarding when a complicated script worked perfectly. Scripts complete with world save event detection, lag detection, backpack management, recall logic and enemy detection were works of art. My personal favorite was a mining script which mined every block of a cave, threw out trash, kept myself fed, dropped a portable smelting forge (illegal, of course), smelted ingots, moved ingots to a pack llama, kept the pack llama fed, and eventually recalled home to start my smithy script.

#5 Arrow stealing

Some people would macro archery by finding a bowyer in a secluded city, and shoot at the butte. Every once in a while, they would retrieve their arrows from the butte, and repeat. Finding these people was quite rewarding, because there were plenty of free arrows to be had.

#6 Explosion trading

In the early days, there was a bug in the player trading window. If you were quick, once the other user had accepted the trade, you could drop an activated explosion potion into your side of the window and accept the trade. The potion would appear in the other player's backpack, locked down, and counting down until it exploded. Boom!

#7 Gate riding

Stealth was a marvelous skill. Using it to quietly slip through someone else's gate without them knowing could often take you into their house. This could be very beneficial, because people keep loads of loot in their houses. I once snuck into an enemy guild's guild house and poisoned all of their sparring weapons. Fun times.

#8 Social engineering

The things I could make people believe... I was given houses, ritted weapons, good quality armour... Pretty much everything. Just by convincing users I was someone I was not. Authentication, anyone?

#9 Tampering with a macroer's backpack

With the snooping skill, I would rifle through a macroer's backpack. Users would often place items in specific places, and configure their script to click on a certain screen location. Moving their items around would render their script useless, and they would stand doing nothing all night. I know that some staff abused their powers to do this too. Luckily, my scripts selected items based on their ID, not their location.

#10 Looting high value targets

Looting other players was usually forbidden, but stealthing up to a high value critter while a group of people are having a go at it, would put me in position to loot the corpse as it dropped.

#11 Strange coloured horses

I discovered that if I was riding a horse, and I came across a doppelganger, after killing the doppelganger a wild horse would be left behind... In the same colour as my armour!

That's all I can think of right now, but I will add more as I remember them. I got up to a lot of mischief in my days, but it was all part of the fun, and that's what gaming is about, right? I never did something as bad as interrupting a ritual... That would be just evil.

19 Apr 2014

Heartbleed: my comments

It has been 11 days since the public disclosure of a major bug in OpenSSL, known as Heartbleed. I have been asked about my thoughts by a few people (both technical and non-technical), and so I find myself writing this blog post. I must mention that security disclosures occur on a weekly (if not daily) basis, but heartbleed is one that has stood out, even to laymen. To clear the air: there is a lot of discussion about stolen passwords. This isn't necessarily true. The passwords could potentially have been stolen, but that doesn't mean they have been (and unfortunately, there is no way to know if they have been compromised).

So what is it about? Is it important? What is being overlooked?

What is it, in a nutshell?
Heartbleed is the name given to a bug found in OpenSSL. OpenSSL is an open source implementation of the SSL and TLS protocols, which are used for secure communications across a network. OpenSSL is also by far the most widely used implementation of SSL/TLS in the world. If you have ever logged in to any website on the internet, there is a very good chance that OpenSSL was involved. Some of the major websites that use OpenSSL are listed in the Heartbleed hit-list.

The bug revolves around a piece of code committed to OpenSSL's codebase in 2012, known as heartbeat. OpenSSL uses an exchange of secrets to authenticate the client and server and establish a secure connection. The heartbeat feature is used to keep existing sessions "alive" when no data is being transmitted: the client sends an arbitrary message to the server, and the server replies with the same arbitrary message. However, security researchers found that no bounds check was performed on this message, and discovered that they could request a reply larger than the message they had sent, up to 64KB in size.

This meant that the server would "bleed" (hence the name, "heartbleed") extra bits of information that were not in the original heartbeat message, and this additional data would come directly from the server's memory. This has been nicely summarised in a comic by XKCD. Due to the nature of heartbleed, there is unfortunately no way that its use could have been detected, and so it is not known if or where the exploit has been used.
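Conceptually, the flaw boils down to trusting a length field supplied by the peer. A toy illustration in Python (OpenSSL is written in C, so this is a model of the logic, not the actual code):

```python
def handle_heartbeat(request, server_memory):
    """Toy model of the Heartbleed flaw - not OpenSSL's actual C code."""
    claimed_len = request["payload_length"]   # attacker-controlled length field
    payload = request["payload"]              # may be far shorter than claimed

    # The fix in OpenSSL 1.0.1g is effectively a bounds check:
    # if claimed_len > len(payload), silently discard the message.

    # Vulnerable behaviour: echo back 'claimed_len' bytes starting at the
    # payload's location, leaking up to ~64KB of adjacent memory per request.
    start = server_memory.index(payload)
    return server_memory[start:start + claimed_len]


# Example: a 3-byte payload claiming to be 100 bytes long bleeds 97 extra bytes.
memory = b"cat" + b"user=admin;password=hunter2;" + b"\x00" * 128
print(handle_heartbeat({"payload_length": 100, "payload": b"cat"}, memory))
```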

What can be bled from the servers memory?
The spoils are random dumps of the blocks of memory adjacent to the heartbeat message, so they may or may not be of use to an attacker. However, it is entirely possible that the data could consist (either in full or in part) of sensitive information, such as the server's private key (the holy grail), usernames and passwords, credit card information, sensitive documents, or confidential communications. The fact that the entire key or password won't necessarily be returned is no comfort, because the attacker can simply keep exploiting heartbleed to gather more data, and eventually all the pieces needed to put the puzzle together.

What I would be most worried about is the capture of the random numbers used to create the secret keys. With these, an attacker can generate their own copy of the private key (and perhaps infer some other sensitive details to add to the spoils).

Who is vulnerable?
Heartbeat was introduced with OpenSSL version 1.0.1 and remained unchanged up to version 1.0.1f, so anyone using these versions of OpenSSL is vulnerable. The emergency release of OpenSSL version 1.0.1g on 7th April 2014 fixed the bug by applying bounds to the heartbeat message. There are a few web services you can use to test whether a website is vulnerable (the websites you use, for example); one such service can be found at filippo.io. Java haters will be fascinated to know that the Java Standard Edition (SE) is unaffected. Microsoft's servers are also mostly unaffected, because they tend to use their own TLS implementation (SChannel). However, other products, such as OpenVPN (which can be used on a Microsoft box), do use OpenSSL.

What should you do about it?
The most obvious step for you to take would be to change your passwords at any website that has been affected. Indeed, just about every article on heartbleed recommends this. While this is good advice, don't rush out and do it immediately. The reason behind password changes is the possibility that your password has already been compromised, but if you change it before the server's OpenSSL has been updated, your astuteness will not have paid off. If you aren't sure whether your websites were compromised before the disclosure of heartbleed, you can be sure there is a much higher chance that they are now. Rather, verify that the server has applied the bugfix first (either by testing it with a web service, or by taking note of any public announcements made by the website).

If you run your own server, you should update OpenSSL or recompile it with the -DOPENSSL_NO_HEARTBEATS option, revoke and reissue all certificates that use SSL/TLS (with new keys, of course; have fun...), and force client password resets.

One thing which has been bugging me about all the media coverage to date is that it is all server-centric. I haven't seen much mention of the possibility that the client is also at risk. Indeed, if a server has been compromised, there is nothing to stop it from using heartbleed against the client. I have found an article about the possibility of a reverse heartbleed attack, which shows that although it is harder to do, clients (i.e. your own devices) are also at risk.

Should we be pointing fingers at anyone about this?
In short, no. Remember that OpenSSL is an open source project. It is the most widely used implementation of SSL/TLS in the world. Not bad for a project run by a core group of four developers, only one of whom is considered to be working on it full time, and the pay is next to non-existent: on average, donations to the OpenSSL Software Foundation (OSF) are a meagre $2000 per annum. They do not have the manpower to do extensive code reviews; however, the larger companies that use the software do have that capability (and should have reviewed all outsourced code fully anyway, whether the code came from a small group of developers or not).

Some parting thoughts
One question on many people's minds is whether or not the NSA has been exploiting heartbleed all along. They claim to have had no prior knowledge of heartbleed, but then they wouldn't admit it if they had known. With their resources and tenacity, I would be surprised if heartbleed was news to them. Other, more paranoid people are saying that since it is known that the US government has worked to weaken encryption standards, they further undermined internet security by not donating any significant amount to the OSF. I think this last point is a bit of a stretch. But the real question is, was the bug introduced to the code base deliberately?

Re-issuing certificates is going to be an extremely daunting task for the Certificate Authorities. I doubt they have infrastructure capable of doing all the work in the time frame they have (i.e. immediately).

The use of two-factor authentication (depending on the kind of 2FA), as well as perfect forward secrecy, would have greatly reduced your exposure. Affected servers should be sending out emails explaining the plan of action (change of password, etc.), but do not trust any link you may find in any of these emails. It could be a phishing attempt (we have all received such emails many times in the past, and this is the perfect opportunity to go phishin').

Lastly, I think it falls on large companies that use OpenSSL to make an effort and donate to the OpenSSL project. The OpenSSL team have done one hell of a job, and have achieved an astounding market share. Perhaps with adequate funding, heartbleed could have been prevented.

Have you got any further questions? Feel free to leave your questions and comments on this page.


2 Feb 2014

Website technology enumeration

I have been deliberating since my last post as to what this post should consist of. I knew I wanted to do some domain and technology footprinting, but there is so much extra material that could be included. I have decided to limit the scope and leave out the extras (those posts will come later).

Information gathering

There is an abundance of web services that can be used to enumerate all sorts of information about anything on the web. For this post, I have specifically chosen to focus on what information can be enumerated from websites. Knowing the domain details and the technologies used for a website can help you determine where it is vulnerable.

To begin with, we have a website called Builtwith. Builtwith will scan the website headers, looking for clues as to what technologies are used. The standard results include the web server type and the CSS and HTML versions. The more variable results include information about any CMS systems used, frameworks, databases, etc.

To use Builtwith, you enter a web address in the search field, and click on "Lookup".

Builtwith search

Builtwith will scan the website, and return its findings as follows:

Some of the Builtwith results

As you can see, we have already started building a picture in our minds as to how the website is put together. A little further digging and you will know what versions are being used. We can then search for vulnerabilities associated with those technologies. Try it out with a few of your favourite websites.
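You can also do a crude first pass of the same header inspection yourself. A minimal sketch using Python's requests library, with the target URL as a placeholder (only probe sites you are permitted to test):

```python
import requests

TARGET = "https://www.example.com"   # substitute a site you are allowed to probe

resp = requests.get(TARGET, timeout=10)

# Headers that commonly leak the web server, language runtime or framework
for header in ("Server", "X-Powered-By", "X-AspNet-Version", "X-Generator"):
    if header in resp.headers:
        print(f"{header}: {resp.headers[header]}")

# Many CMSs also announce themselves in the page body
if "wp-content" in resp.text:
    print("Body references wp-content, which suggests WordPress")
```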

Netcraft

A second website that returns similar information is Netcraft. Netcraft tends to miss some of what Builtwith returns, but it includes some things in its results that Builtwith does not. For example, information about the network and hosting, as can be seen here:

Netcraft results

Domain registration details

Lastly, we may want some information about domain registration. There are a few websites that can give you this information, most of which are domain specific, e.g. some can only provide information for the .com domain, while others are specific to a country. The website performs a whois lookup on the domain and returns the results. The kind of information you can expect to find includes billing history, the names of the people who registered the domain (sometimes helpful, since it could be someone working at the company), their contact details, the physical address of the person/company, etc.

A commonly used tool is the whois function on InterNIC. InterNIC can provide whois information for the following domains: .aero, .arpa, .asia, .biz, .cat, .com, .coop, .edu, .info, .int, .jobs, .mobi, .museum, .name, .net, .org, .pro, or .travel.

For .co.za domains, you can use the whois function on the co.za registry's website.
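Since whois is just a plain-text protocol on TCP port 43 (RFC 3912), the same lookup can also be scripted directly. A minimal sketch, using InterNIC's server for .com domains:

```python
import socket

def whois(domain, server="whois.internic.net", port=43):
    # The whois protocol: open a TCP connection, send the query followed by
    # CRLF, then read the plain-text response until the server closes it.
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois("example.com"))
# For a country-code TLD such as .co.za, point the query at that registry's
# own whois server instead of InterNIC.
```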

On the face of it, not much can be done with all of this information. However, in the reconnaissance phase of an attack, any information is good information. The more you have, the more you can use to infer useful plans of attack.

11 Jan 2014

The Internet Archive

The internet has seen many resources come and go since its inception, but where do these resources go when they are long forgotten? I stumbled across the answer (to a certain extent) a while back, and decided to share it. Archive.org is an internet library of web resources, and is a pretty cool website to play around on.

The Internet Archive

At first, I was looking for something that archived web pages. Sure, there is Google's page cache, but this only reveals the last snapshot, which may or may not be of any use. What I ended up finding was The way back machine. More on the way back machine later. For now, I would like to go over some of the other cool features of The Internet Archive.

The way back machine

The Internet Archive turned out to be an archive of not only websites, but also video, audio, text, and software. Things I had not seen for years were right there, available for perusal. Founded in 1996, The Internet Archive has been receiving data donations for almost twenty years. Just like other libraries, The Internet Archive also provides facilities for disabled users, maximising its audience.

Video

The video collection is a catalogue of present and past clips, videos, and even full-length feature films (most of which are from the golden days). The content ranges from animation to community videos, as well as educational and music videos. If you recall a video you once watched and would like to find it again, this could be the place you will find it.

Computer security video, 1984

Audio

Likewise with audio, the range of available resources are vast: audio books, poetry, music, podcasts and more.

Software

In the software category, the main attraction was the games archive. Not only is there an abundance of the ol' computer games available, but there are also console games. Some of these older games even have a built-in emulator for you to play them through your browser:
Historical games
Games aside, there is also a shareware CD archive, and a whole host of console emulators.

Text

The text section has over 5 million books and articles from over 1500 curated collections. The variety available is immense, and I think every possible category is covered.
Example book

The way back machine

Last, but not least, is The wayback machine. You may wonder what relevance this has to security, but it does indeed have its place. During the reconnaissance phase of an attack, adversaries will attempt to learn as many details about their target as possible. One of these methods is called footprinting. No amount of information is ever too much, as each piece of information may contribute to the inference of new ideas to tackle the problem. In this case, we have a mechanism that has archived website snapshots over a long period of time. Information a company may have had on its website in the past could help build a profile of the company, its technologies, or its employees. This information may since have been removed from the website, once it became known that it could be harmful to the company. I won't provide an example of such information, but I have no problem showing the capability of the way back machine with this blast from the past:

Netscape website, on 18 February 1999

The way back machine is very useful for getting some historical context about a website, as well as the historical information that comes with it. Any snapshot ever taken by the way back machine is available for viewing by selecting it on the time slider:

Netscape histogram: 4,821 snapshots from 1996 to 2014
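For scripted reconnaissance, the Wayback Machine also exposes an availability API that returns the snapshot closest to a given date. A minimal sketch (the availability endpoint is as I understand it; verify against archive.org's documentation):

```python
import requests

def closest_snapshot(url, timestamp="19990218"):
    # Ask the Wayback Machine for the archived snapshot closest to the
    # requested date (YYYYMMDD); returns its URL, or None if nothing exists.
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(closest_snapshot("netscape.com"))
```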

 

To conclude, I hope you check archive.org out, and trigger a bit of nostalgia. I am sure there is something there you will be interested to find.
