Heartbleed has captured the public’s imagination like no other security bug and has drawn lots of attention to open source, some of it positive and some negative.
Half a million of the web's secure servers certified by trusted authorities were believed to be vulnerable to the attack.
Joseph Steinberg, cyber security columnist for Forbes, even commented that “some might argue that Heartbleed is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet”.
The potential for damage was truly catastrophic, but the actual impact was, in context, not nearly as negative as it could have been.
Heartbleed was made possible by a flaw in OpenSSL, an open source implementation of the SSL and TLS protocols.
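At its heart, the flaw was a single missing bounds check. The sketch below is purely illustrative C – not the actual OpenSSL source, whose structures and names differ – showing how a heartbeat handler that trusts the length field supplied by the peer ends up copying neighbouring memory into its reply; the hypothetical build_reply_fixed() shows the one check that closes the hole.

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative sketch of a Heartbleed-style flaw; not the real OpenSSL code. */
    struct heartbeat {
        unsigned short claimed_len;  /* length field supplied by the peer */
        unsigned char  payload[];    /* payload bytes actually received */
    };

    unsigned char *build_reply_vulnerable(const struct heartbeat *req)
    {
        unsigned char *reply = malloc(req->claimed_len);
        if (reply == NULL)
            return NULL;
        /* BUG: copies claimed_len bytes even if the peer sent fewer,
           so adjacent heap memory leaks into the reply. */
        memcpy(reply, req->payload, req->claimed_len);
        return reply;
    }

    unsigned char *build_reply_fixed(const struct heartbeat *req, size_t actual_len)
    {
        /* FIX: discard requests whose claimed length exceeds the
           payload that actually arrived. */
        if (req->claimed_len > actual_len)
            return NULL;
        unsigned char *reply = malloc(req->claimed_len);
        if (reply == NULL)
            return NULL;
        memcpy(reply, req->payload, req->claimed_len);
        return reply;
    }

The actual patch did essentially this: it silently discards heartbeat requests whose stated payload length exceeds the message actually received.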
Despite being relied upon by three quarters of the world's web servers, the OpenSSL project ran on a meagre operational budget of $1 million.
As such, it seems unfair to try to pin the blame on the fact that it was an open source project. The fact of the matter is that open source did not cause the bug. As with all bugs, the cause was the same as it always is – people.
The key to understanding the fallout around Heartbleed lies in identifying the lessons to be learned. We must appreciate just how much the open source approach has limited the damage caused by humans being human.
OpenSSL has a core team of eleven members, and with such limited testing resources it's hardly surprising that the bug went unnoticed for as long as it did.
Simply put, the amount of testing was not adequate given how widely the library was used. That would have been true whether the project was open source or proprietary – but the flexible, proactive approach synonymous with open source projects ensured the flaw was fixed incredibly quickly.
Heartbleed has shown us that we need a sense of urgency about scrupulously testing the security measures we all rely on, rather than taking them for granted.
People are fallible and always will be. Therefore, we need to find ways to cater for that fallibility: to implement preventative methods and, most importantly, to apply them properly.
One solution that should be considered is the ever-evolving science of software testing.
Historically, developers simply responded to users' bug reports and fixed issues as they arose.
That approach becomes logistically untenable as the size of the codebase and the number of developers working on a project grow.
Similarly, the days of using users as testers are for the most part behind us; nowadays, robust testing is focussed on bug prevention rather than bug fixing. Problems arise when software is inadequately tested – and Heartbleed falls very much into this category.
Tech companies often employ over-arching strategies such as Test-Driven Development (TDD) and unit testing, as well as functional and user testing.
However, none of these methods would have exposed Heartbleed; what may be required for high-security software is lengthy fuzz testing.
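To see why, consider a hypothetical unit test for the vulnerable heartbeat sketch above. A conventional test feeds the handler a well-formed request and checks the echo, so it passes while the over-read sits untouched:

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical unit test for build_reply_vulnerable() from the sketch
       above: it exercises only the honest case, where claimed_len matches
       the payload actually sent, so the over-read is never triggered. */
    int main(void)
    {
        struct heartbeat *req = malloc(sizeof(*req) + 4);
        req->claimed_len = 4;
        memcpy(req->payload, "ping", 4);

        unsigned char *reply = build_reply_vulnerable(req);
        assert(reply != NULL);
        assert(memcmp(reply, "ping", 4) == 0);  /* passes; the bug remains */

        free(reply);
        free(req);
        return 0;
    }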
Fuzz testing is often automated or semi-automated, and involves intentionally barraging the software's inputs with large amounts of unexpected and invalid data.
The software is then monitored for crashes, memory leaks, and, in Heartbleed's case, unexpected output.
Because it makes no assumptions about how the code is written, fuzz testing is typically associated with closed-source software – but that same black-box quality makes it just as applicable to open source projects like OpenSSL.
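As a minimal sketch of the idea – again using the hypothetical build_reply_vulnerable() handler from earlier, where a real project would reach for a tool such as AFL or libFuzzer and run under AddressSanitizer – a fuzzing loop generates requests with randomised, frequently invalid length fields and flags any reply that contains bytes the peer never sent:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Minimal fuzzing loop for the heartbeat sketch above. Built with
       AddressSanitizer, the out-of-bounds memcpy in the vulnerable
       handler aborts the run on the first oversized length field. */
    int main(void)
    {
        srand((unsigned)time(NULL));

        for (int i = 0; i < 100000; i++) {
            /* A small genuine payload... */
            size_t actual_len = (size_t)(rand() % 16);
            struct heartbeat *req = malloc(sizeof(*req) + actual_len);
            memset(req->payload, 'A', actual_len);

            /* ...paired with a random, frequently oversized length field. */
            req->claimed_len = (unsigned short)(rand() % 65536);

            unsigned char *reply = build_reply_vulnerable(req);

            /* Oracle check: a reply longer than the data we sent means the
               handler has echoed back memory we never supplied. */
            if (reply != NULL && req->claimed_len > actual_len)
                printf("leak: claimed %u bytes, sent %zu\n",
                       (unsigned)req->claimed_len, actual_len);

            free(reply);
            free(req);
        }
        return 0;
    }

A server answering with more data than it was sent is exactly the kind of unexpected output described above, and is what this sort of bombardment is designed to surface.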
Discoveries of severe bugs like Heartbleed have increased as fuzz testing has become more commonplace, not least because attackers often use the same techniques of random bombardment themselves.
Many testing and fault conditions are difficult, if not impossible, for humans to impose. It wouldn't matter whether you had ten or a hundred testers working to find and fix bugs; there will always be instances where a Heartbleed-esque bug can take advantage of our human fallibility.
Development is undoubtedly important, but testing is equally crucial, if not more so.
Fuzz testing shows the ability of automation to match and counter the erratic, random nature of bugs in a way that would be untenable for human testers.
If there is anything that Heartbleed has truly exposed, it is that as long as humans are writing code, we will always need machines to test it.
Sourced from Steve Nice, CTO, Reconnix