Duncan's Security Blog
An enthusiast's musings


New year, new resolutions, old habits?

Sigh. So I know I have made promises before, but this time it's a New Year's resolution (if that means anything). Yes, I plan to be posting more frequently. I have already scheduled deadlines in my calendar, so that should help me keep on track. In addition to the reminders, I have also put together a plan for at least eight posts.


So before I get carried away with my new commitments, I think I should mention a few things about the passing of 2013. 2013 turned out to be a fantastic year for me, in pretty much every way. A new work opportunity came up which I was not going to let slip past. I attended a day of the ITWeb Security Summit, where it was great to see some really high-quality presentations from international speakers. I hope to go this year too, budget willing.

I also attended my third ZACon, which is always an awesome con to attend. It started off with an evening of soldering my own con badge, and then the next day was a tight schedule of presentations. The organisers always put together a good programme, with a mixed batch of presentations from all forms of hacking. I was particularly interested in the presentations about Markov chains in hash cracking, mains signalling, mobile advert framework (in)security, and directional antenna design.

ZACon V "build a badge"



So enough about the past. What will I be posting on my blog in 2014?

I had a few technical articles that I wanted to post last year, but never got around to it. I still plan on putting them together, but for starters I am going to run with a series of posts about some interesting websites and web services that can be used in the security space. My plan is to select a few good services and blog about them, so that they can all be found in the same place. Personally, I tend to find cool stuff and then forget about it.

So I am hoping that my posts will serve as a store for both me and anyone else interested.



Update: posts will be coming more frequently in the future

My, it has been a long time since I posted last. So to fill you in quickly: I contributed to the book mentioned in my last post, and have received my signed copy with my name printed among the contributors. I hope I will be contributing to more books in the future.

So a lot has happened to me over the last few months, most notably the beginning of my career in the information security industry. I anticipate that I will be making more frequent posts on my blog, and I already have a few topics lined up.

So keep checking back! I promise the topics are interesting and worth a read.


ITWeb Security Summit 2013

Hello everyone,

I was lucky enough to be able to go to day 2 of ITWeb's Security Summit. I have always wanted to go and check it out, and this year was my lucky year.

I started off day two by browsing the exhibition, checking out various things on show, before heading off to the presentations.

I think the presentation I enjoyed most was the first one I went to, given by Richard Bejtlich of Mandiant in the USA. He presented a very interesting topic, detailing how a typical breach situation goes down. Of particular note were his comments on Mandiant's APT1 report. Do yourself a favour and browse through it (available on Mandiant's website), if you have not already.


Overall I really enjoyed the event, and I will most certainly attend the next one.



ZaCon IV

ZaCon 2012 was, as always, well worth the attendance. The organisers put together a schedule with presenters from all walks of the hacker domain, ranging from android vulnerabilities to physical security and hardware hacking.

Of particular interest to me were the presentations on game hacking, physical security, Android penetration testing and HTML5 exploits.

Video recordings of the presentations may be viewed here.



Proofreading

So I have been proofreading for Hakin9 magazine for a while now, and today a great opportunity arose in that area. William Stallings has put together a new edition of his book Cryptography and Network Security, and I have been approached to proofread a chapter of it.

As a result, I will be sent a copy of the book when it goes to print, and possibly have my name printed in the preface 🙂


HTTPS security


HTTPS is a secure layering of the HTTP protocol used for communication over a computer network, most notably the internet. It achieves this security by using the SSL/TLS protocol, which is the standard as far as securing web applications goes. In particular, HTTPS is used by banks, social networks, live streaming services, email, instant messaging and more. SSL/TLS extends HTTP by providing a secure tunnel through which a web browser and web server communicate. By encrypting transmissions, SSL/TLS provides confidentiality and prevents unauthorised and undetected modifications, which preserves integrity.

Authenticity is ensured with a digital certificate, which establishes a binding between a public key and an entity (e.g. an address, company name, person's name, hostname, etc.). In the case of HTTPS, the public key is used by SSL/TLS to negotiate a session key between the browser and server. The certificate contains the entity along with its key, and is digitally signed by a certification authority (CA).

The CA is responsible for checking that the public key and entity really belong together. The entity itself can be a certificate, so the original entity can be wrapped up in certificates signed by multiple CAs. In general, a certificate can be considered valid by a web browser if there is a chain of certificates from a CA the web browser trusts to the certificate being checked.
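To make the chain-of-trust idea a little more concrete, here is a minimal Python sketch of what a client does when it connects over HTTPS: the standard library's ssl module builds a chain from the system's trusted CA list down to the server's certificate, and refuses the connection if it cannot. The hostname example.org is just a placeholder.

import socket, ssl

host = "example.org"  # placeholder hostname, purely for illustration
ctx = ssl.create_default_context()  # loads the platform's trusted CA list

with socket.create_connection((host, 443)) as sock:
    # The TLS handshake fails here if no chain of trust can be built from
    # a trusted CA down to the server's certificate, or if the certificate
    # does not match the hostname we asked for.
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print(cert["subject"])  # the entity bound to the public key
        print(cert["issuer"])   # the CA that signed the certificate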

Public Key Infrastructure, in a nutshell
By default, web browsers ship with a list of CAs which they consider trustworthy. These CAs have been added to the list by the browser's creators, once they have reviewed them. Each CA is audited to see whether the standards it imposes are sufficient (as far as verifying key-entity relationships is concerned). It is also important that the CA's systems are secure and well tested. The audit should ideally be redone annually, to ensure that the standards are maintained and that new security developments are accommodated. Of course, the browser vendor has an interest in keeping its list of trusted CAs up to date, or else it will expose its users to potential security risks. Ultimately, a certificate is accepted by the browser if the browser can establish at least one chain of trust from that certificate to a CA it trusts.

Public key infrastructure failures

VeriSign-Microsoft certificate
In 2001, an anonymous attacker managed to persuade VeriSign (a major CA) that he was an employee of Microsoft. As a result, he was granted several certificates (containing his public key). VeriSign's checks in 2001 were clearly not good enough to ensure a secure infrastructure.

MD5 collisions
Cryptographic hashes of the certificate are signed instead of signing the entire certificate (for various technical reasons). In the early days, the most commonly used hash function was MD5. Many weaknesses have since been found in MD5, and its use is therefore discouraged these days (in favour of one of the SHA hash functions).

An MD5 collision occurs when two different certificates have the same hash. In this case, a signature on either certificate would also be valid for the other. This was demonstrated in 2008, when security researchers managed to generate their own sub-CA certificate using an MD5 collision. They requested a signature from a CA for a harmless certificate, and then copied the signature received into their colliding sub-CA certificate. This meant that they were able to operate their own CA, and issue arbitrary certificates for any hostname and email address they wanted to. Soon after this proof of concept, CAs stopped using MD5.
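The underlying issue is that the CA's signature covers only a digest of the certificate, not the certificate itself, so any two inputs with the same digest share a valid signature. A tiny Python sketch (with placeholder byte strings rather than real certificates) shows the digests involved:

import hashlib

cert_a = b"legitimate certificate request"  # placeholder bytes, not a real certificate
cert_b = b"rogue sub-CA certificate"        # placeholder bytes, not a real certificate

# The CA signs only the digest of the certificate. If two inputs collide
# (same MD5 digest), a signature over one is automatically valid for the
# other. SHA-256 is the usual replacement, since no such collisions are known for it.
print(hashlib.md5(cert_a).hexdigest(), hashlib.md5(cert_b).hexdigest())
print(hashlib.sha256(cert_a).hexdigest(), hashlib.sha256(cert_b).hexdigest())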

DigiNotar compromise
In 2011, the Dutch CA DigiNotar was compromised when an attacker gained access to their systems and issued a certificate for *.google.com to himself, subsequently launching a man-in-the-middle attack on Gmail. DigiNotar was removed from all browser vendors' trusted lists, and soon afterwards went bankrupt. The attacker who claimed responsibility remarked that other CAs were susceptible to attack, and it is therefore only a matter of time before a similar attack happens again.

Preventative measures available
Three measures are available that can be implemented by the website administrator; they do not require CA involvement, nor do they require modifications to certificates.

HSTS (HTTP Strict Transport Security)
The default assumed protocol for web browsers is HTTP. If a user types an address into their browser, the browser will send an HTTP request, which is then redirected to HTTPS if the website uses HTTPS. The problem is that the initial request and redirect are not protected, and can be exploited via a man-in-the-middle attack which suppresses the redirect to HTTPS and returns a bogus page to the user, in the hope that the user enters valuable details which can be recorded.

HSTS was designed to avoid this, by specifying that every subsequent request sent to the web server shall be made over HTTPS, and that the web browser should never use plain HTTP on that particular website. Although still in its infancy, HSTS is used by some high-profile websites, such as PayPal.
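On the server side, HSTS amounts to sending one response header. As an illustrative sketch only (assuming a Flask application; the same header works with any framework or web server):

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Tells the browser to use HTTPS for this site (and its subdomains)
    # for the next year; the browser remembers this and refuses to fall
    # back to plain HTTP even if the user types an http:// address.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response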

Public key pinning
Public key pinning works by the web server specifying which public keys may be used in the certificate chain for a particular website. When a browser connects to the website a second time, the public keys offered in the server's certificate chain are compared to the list of allowed public keys for that website. If no key in the chain matches at least one allowed public key, the connection is terminated.
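A pin is typically the base64-encoded SHA-256 hash of the certificate's subject public key info. As a rough sketch of how one might compute such a pin (assuming the third-party cryptography package and a hypothetical server.pem file):

import base64, hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# "server.pem" is a hypothetical path to the server's certificate.
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Hash the DER-encoded SubjectPublicKeyInfo structure and base64-encode it.
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

# The pin would then be advertised to browsers in a response header, e.g.:
print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)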

DANE (DNS-Based Authentication of Named Entities)
If a CA is compromised, public key pinning won't help a first-time visitor, who can still be impersonated. DANE can protect even the first visit to a website. Instead of pinning a public key on the first visit, the pins are published in the domain's DNS records. Rather than a chain of trust from an arbitrary CA to a certificate for your hostname, a chain of trust is established from the operator of the DNS root zone to the DNS records for your hostname. The benefit of such a system is that there are over 1000 CAs which can issue valid certificates for any hostname on the internet, but there is only one authority responsible for a top-level domain. DANE is also not limited to HTTPS, and may be used with any SSL/TLS connection. As with public key pinning and HSTS, DANE is still in draft stages.
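DANE publishes these certificate associations as TLSA records in DNS. A small sketch of looking one up (assuming the third-party dnspython package and a placeholder hostname):

import dns.resolver  # third-party dnspython package, assumed for illustration

# TLSA records live under a name derived from the port and protocol,
# e.g. _443._tcp.<hostname> for HTTPS.
answers = dns.resolver.resolve("_443._tcp.example.org", "TLSA")
for record in answers:
    # usage/selector/mtype describe what the hash covers and how to match it.
    print(record.usage, record.selector, record.mtype, record.cert.hex())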

Presently, a combination of HSTS and public key pinning is the best solution for securing an HTTPS web server against compromised CAs or other attacks on CAs. DANE's added complexity is justified when a higher level of security is required, but it is presently poorly supported by browsers.



Hi readers,

So I have been lax in my posts lately, but this has been because I have been preparing for exams (which I am now writing).

But be assured that as soon as exams are done, the posts shall commence.

Thank you for understanding


Session riding

I decided to make this post about web application session riding, known more formally as cross-site request forgery (CSRF), following a presentation about JavaScript malware given at one of the ISG Durban meetings. There are many ways in which a web application can be designed insecurely, and many more ways in which to exploit them.

Web application security has a lot of followers, and their respective communities. One such community is OWASP, the Open Web Application Security Project; its website is perhaps the most beneficial resource for anyone looking for information about web security.

Vulnerabilities like XSS or SQL injection are widely used by attackers and security professionals alike, and are the most popular methods of attack. Session riding, however, although not as well known, is used just as frequently.

Why is it such a threat?

Vulnerable web applications do not determine whether a logged-in user's action was actually intended; instead they trust the browser's instructions and consider the requests legitimate. A valid session is all that is required by the web application to confirm the validity of an action.

So how does it work?

The web uses the HTTP protocol to deliver information to the user. Every HTTP request made to the web server from the same source is treated as unrelated to any previous or future requests. Neglecting keepalive connections, this is the way a browser interacts with the web server. In some circumstances the web server is required to remember previous requests, for example when shopping online. This activity needs to be kept track of, and kept separate from the activities of other users. As a result, cookies were introduced by developers to do just that.

Cookies are bits of information sent by the application on the server to the user via HTTP response headers. Browsers are designed to remember these cookies, and then send them back with every subsequent request to pages belonging to the same application.

Sessions of different users are maintained by random and unique cookie strings, known as session tokens (or session IDs). The randomness and unpredictability of the session token is what allows the server-side application to know who it is communicating with.
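As a rough illustration (not tied to any particular framework), a server might issue a session token like this, using Python's secrets module; the sessions dictionary is a hypothetical in-memory store:

import secrets

sessions = {}  # hypothetical in-memory store mapping tokens to usernames

def start_session(username):
    token = secrets.token_urlsafe(32)  # long, random and unpredictable
    sessions[token] = username
    # Sent to the browser in the HTTP response; the browser returns it
    # with every subsequent request to the same site.
    return "Set-Cookie: session=%s; HttpOnly; Secure" % token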

The trouble comes in when a web application evaluates the origin and validity of actions based solely on a valid session token. A clever attacker only needs a pre-made URL performing some action on the target web application, a way to present it to the user in a recognisable form, and the hope that the user is logged into the legitimate web application at the time. This URL could be sent to the user in a number of ways, including via email. When the victim opens it, their browser automatically attaches the valid session cookie, so the action is carried out with the victim's privileges, without the attacker ever needing to learn the token itself.

Such an attack could even disable a network. Take, for instance, a home ADSL router. These devices use the HTTP protocol to expose a control panel. Riding an administrator's session would give the attacker access to the network infrastructure, and could lead to network failure if the attacker changes settings (or perhaps even all the passwords).

As more and more devices use such interfaces, more and more session riding becomes possible. The sky is the limit, as they say. What the attacker really needs is a clever way to deliver the prepared HTML or JavaScript to their target's browser (email is but one method). Even standard HTML tags can be used: an <img> tag can point at the crafted URL so the browser issues the request while trying to load an image, the URL could be opened in an <iframe> tag, and even <script> tags may be used. Links such as these can easily be displayed in web applications such as Facebook, where every visitor could potentially click on them.

But it doesn't stop there

To make things worse, when session riding, the attacks are always performed from the victim's IP address. Thus, according to the web server logs, everything appears to have been done by the victim.

Enough with the doom and gloom

There are, however, some limitations to session riding. If the web application checks the Referer header, the attack will usually fail, because the origin of the request is usually not the same as the target. However, very few web applications perform this check, and it can still be unreliable if a proxy, web filtering software or other such "man in the middle" is in use.

So what measures can be taken to prevent such an attack?

The most common advice to web application developers is to add a random challenge string to each request. This random string is tied to the user's session and is different for every new login, which prevents the attacker from constructing a valid request in advance. It is also good practice to put a lease on the lifetime of a session, so that it expires after a defined period of time.
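As a minimal, framework-agnostic sketch of that advice (the session argument is assumed to be a per-user dictionary), the challenge string can be generated with Python's secrets module and compared in constant time:

import hmac
import secrets

def issue_csrf_token(session):
    # Generated once per login and embedded in every form the server renders.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def check_csrf_token(session, submitted_token):
    # An attacker cannot know this value in advance, so a pre-made URL or
    # form submitted from another site will fail this comparison.
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token)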

These measures, however, require the developers to do additional work on the application. In the event that the application is already complete, the redesign costs are not favourable (and would use up too many human resources), and thus the applications remain vulnerable.

On the client side, a user should be very careful about which links they open, especially if the link has been provided in an email. Browser plugins such as NoScript for Firefox will also go a long way toward detecting such attacks, especially where JavaScript malware is concerned.


First post, ZaCon III report back

I have finally decided to get my act together and begin posting on my blog. The blog is still in need of a great deal of design work, but I will make this post now and make good on the design promises soon.

So to kick off the blog, I will report back on the ZaCon III InfoSec conference (www.zacon.org.za), which I attended about a week ago. Almost straight after beginning my attendance at ISG Durban (Information Security Group), I heard about ZaCon, and as fate would have it, I would be in Johannesburg on the scheduled weekend.

With great excitement, off I went to attend presentations on both days. The first evening started a little slowly, but after some drinks and socialising with the attendees, the first presentation began. Footprinting has always been a source of fun for me, and over the years I have developed a few methods of manually gathering information about certain targets. So it came as a great surprise that the first presentation was about Maltego, footprinting software that I was unfamiliar with. I was enthralled by the presentation, which demonstrated exactly those skills I have developed. The second (and last) presentation of the evening was about pickle exploitation in the Python programming language. Not being familiar with Python, I found the presentation mostly lost on me.

The next day was the main attraction, and more "conners" were in attendance. After some coffee, the conners filtered into the lecture theatre for the first presentation of the day: real-world SOC. Being a student, it was good to hear about how the industry handles security-related issues. Next up was a presentation on hash cracking, a subject I know a great deal about. While the theory behind it all was mostly already known to me, I was interested in the statistics and specific applications which the presenter included. The presenter himself was a pleasure to listen to, given his accomplishments in the field. Following this was a presentation on NNTP cache enumeration and poisoning. At first I just glossed over the name of the presentation, but after it began, I realised it was something very close to home. Needless to say, the presenter had my undivided attention as soon as I realised the implications for my use of Usenet.

After a short tea break, presentations resumed with a talk by a PhD student from Rhodes on functional programming. I have heard a bit about functional programming through my years of studying computer science, but have never actively practised it, due in large part to its unproven status as a programming paradigm, I suppose. The presenter proceeded to demonstrate just how effective functional programming is, from a security point of view as well as a programming point of view in general. Following this were presentations on OS X sandboxing and system application proxies.

After lunch, the first presentation was interesting to all those in attendance, entitled "Can I go to jail if...". The presenters selected computing practices (particularly to do with hacking) and reasoned about the laws surrounding them. The next presentation was given by a master's student (I think it was a master's anyway) from UJ, who presented his research into rootkits. His methods were interesting, if somewhat disturbing, especially since my netbook began showing signs he mentioned were side effects of being infected with his rootkit (mainly BSODs). However, I later discovered that my netbook troubles had nothing to do with a rootkit, but rather a hard drive that was about to fail.

Next up was a relaxing presentation on a different side of hacking: lockpicking. This is another topic I have dabbled in over the years, so the theory was known to me, but that didn't stop me from remaining captivated (it was nice to know I am not the only one who is interested in such things). Following another tea break were the last three presentations, and the keynote via Skype.

"The protocol trench" was the next presentation, which was mainly about access restrictions to clients on the network (for their own safety of course), after which was a presentation (or more a plea) for security enthusiasts such as the audience, to be "builders", and not "breakers" of security measures. The last of the in house presentations was about enterprise security.

Lastly came the keynote speech, delivered via Skype by Richard Thieme. His speech was very interesting, and posed many questions regarding the current state of affairs, but more importantly what may arise in the future. I was particularly pleased that he mentioned threats on which I have had lectures from researchers at MIT (who are actively involved in the relevant development work), namely the use of cellular devices as a means of tracking and recording data (and the associated AI systems).

So here ends my first post. I hope that I shall not be as lazy with the posts which are to follow.
