The risk might be mission-critical, such as software on a scientific robot crawling another planet, or it might involve sensitive financial information.
In the first example the integrity of the software is paramount: it is hard to fix something on another planet. In the second, both quality and security are important, with security perhaps paramount.
There’s also a fundamental difference in how quality and security are each regarded. A quality assurance test at the end of a production cycle will tell you whether a software product is stable enough for release – a simple “Yes” or “No.” A security test, by contrast, is vaguer – “It depends” – and in the race to market that result may be overridden by management. Quality code may not always be secure, but secure code must always be quality code.
Software quality
Software developers can’t escape code quality – either the code compiles or it does not. That is why it is important to have a robust software development lifecycle, one gated with software sign-offs along the way. If the code fails a test early on, it is both cost-effective and time-efficient to fix it then rather than later in the process.
So quality is the baseline. If the final production code hangs or crashes, that’s not good. Yet it is entirely possible to create stable, quality code that’s still vulnerable.
Finding common ground
For example, there may be implementation flaws that never cause the application to crash yet still lead to vulnerabilities. There may also be architectural flaws, which are subtler and, again, won’t cause a crash but may be exploitable.
Without an external pen test or security review, these problems might make it through QA testing only to show up later.
Heartbleed, for example, was an implementation flaw in the SSL heartbeat function within OpenSSL. It existed in the wild for nearly two years before anyone caught it.
CWEs
How do we identify these software weaknesses? The MITRE organisation maintains the Common Weakness Enumeration (CWE), a database of weakness categories such as improper input validation, uncontrolled resource consumption (‘resource exhaustion’), and integer overflow or wraparound.
Some of these are rolled up into Top N lists such as the OWASP Top 10 or the SANS Top 25. Checking code against these lists is a good first step.
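To make this concrete, here is a hypothetical sketch of one of the weakness classes named above – improper input validation (CWE-20) – in Python. The record format and function names are invented for illustration; the point is that the unsafe version trusts an attacker-controlled length field, the same class of mistake behind Heartbleed’s over-read.

```python
# Hypothetical length-prefixed record: 2-byte big-endian length, then body.

def read_record_unsafe(payload: bytes) -> bytes:
    # CWE-20: trusts the length field blindly. A claimed length larger
    # than the actual data silently returns a short slice instead of
    # rejecting the malformed record.
    claimed_len = int.from_bytes(payload[:2], "big")
    return payload[2:2 + claimed_len]

def read_record_safe(payload: bytes) -> bytes:
    # Remediation: validate the length field against the data received.
    if len(payload) < 2:
        raise ValueError("truncated header")
    claimed_len = int.from_bytes(payload[:2], "big")
    body = payload[2:]
    if claimed_len > len(body):
        raise ValueError("length field exceeds available data")
    return body[:claimed_len]
```

Both versions are stable – neither crashes on a lying length field – which is exactly why this kind of flaw can pass a quality gate while failing a security review.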
A good automated static analysis tool will pinpoint most CWEs within the code and allow the developer to evaluate and remediate them as needed.
Checking against various CWEs can also be a step toward achieving industry compliance. And CWEs can be associated with Common Vulnerabilities and Exposures (CVEs), another intersection between quality and security.
An automated software composition analysis (SCA) tool – one that breaks software down into its individual components and produces a bill of materials – should be able to match a codebase against the latest CVEs as maintained in the National Vulnerability Database. A good SCA tool should also alert developers whenever new CVEs are released that impact the managed code.
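The matching step an SCA tool performs can be sketched in a few lines: compare a bill of materials against a vulnerability feed and flag components older than the first fixed version. The package names, CVE identifiers, and simple dotted version scheme below are all invented for illustration; a real tool would pull advisories from the National Vulnerability Database and handle far messier version strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    cve_id: str
    package: str
    fixed_in: tuple  # first version that is no longer affected

def parse_version(v: str) -> tuple:
    # Toy scheme: purely numeric dotted versions, e.g. "2.4.1".
    return tuple(int(part) for part in v.split("."))

def match_bom(bom: dict, feed: list) -> list:
    """Return (package, cve_id) pairs for components older than the fix."""
    findings = []
    for adv in feed:
        installed = bom.get(adv.package)
        if installed is not None and parse_version(installed) < adv.fixed_in:
            findings.append((adv.package, adv.cve_id))
    return findings
```

For example, a bill of materials listing `libexample 2.4.1` against an advisory fixed in `2.5.0` would be flagged, while a component already at or past its fixed version would not.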
CVEs by themselves are not always exploitable; they are only vulnerabilities. Yet there is a way to rank their relative severity, with or without a known exploit. FIRST.org maintains the Common Vulnerability Scoring System (CVSS), which provides a general benchmark risk score based on common criteria.
Many organisations treat a CVSS score of 5 or higher as grounds for further investigation. It is also possible to personalise a CVSS score; an online calculator can factor a specific industry’s or environment’s risk into the benchmark.
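The base score behind those numbers is an open formula. As a sketch, here is the CVSS v3.1 base-score calculation for Scope: Unchanged vectors, using the metric weights and rounding rule published by FIRST.org; real calculators also handle Scope: Changed and the temporal and environmental metric groups.

```python
# CVSS v3.1 metric weights (Scope: Unchanged) from the FIRST.org spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    # Spec-defined rounding: smallest value with one decimal place >= x.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

Heartbleed’s vector (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N) works out to 7.5 under this formula – above the 5-or-higher threshold many organisations use.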
Composition matters
As mentioned, a good SCA tool should be able to tell you what’s inside an application. Software today is composed of many components, a majority of them open source. Let’s say you’re invited to a security bake-off – do you know right now what’s in your software?
Maybe at the time of release you had the latest and greatest versions of all the open source libraries in your code – but are those libraries still current today?
As software ages it is subject to software decay, where code libraries, particularly open source ones, rapidly fall out of date. Unless the software is well maintained, with updates applied when necessary, quality code can become increasingly vulnerable.
Managing a product against software decay can be a nightmare, but again a good SCA tool should take care of that: it should notify developers whenever a new version of an open source library becomes available.
Fuzz testing
Just knowing and removing known vulnerabilities isn’t enough either. Security is a moving target: what’s secure today may not be secure tomorrow. So how can you guard against the future and test against the infinite space of unknown unknowns?
Automated fuzz testing feeds malformed input (also called negative input) to a program to see what causes it to crash. Fuzz testing is the method by which both Heartbleed and Cloudbleed were detected. Today, some commercial software vendors require a minimum number of hours of fuzz testing without failure before they will release a product.
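The core loop is simple enough to sketch. The toy harness below exhaustively tries every single-byte mutation of a valid seed input against a hypothetical parser with a planted bug; both the parser and the bug are invented for illustration. Production fuzzers (AFL, libFuzzer, the protocol fuzzers that caught Heartbleed) mutate randomly, use coverage feedback, and run for hours rather than enumerating mutations, but the idea – malformed input in, unexpected failures out – is the same.

```python
def parse_record(payload: bytes) -> bytes:
    # Hypothetical target: a length-prefixed record parser with a
    # planted bug -- it assumes the body is never empty.
    length = payload[0]
    body = payload[1:1 + length]
    if len(body) != length:
        raise ValueError("truncated record")
    _ = body[0]  # planted bug: IndexError when a mutation sets length to 0
    return body

def fuzz_one_byte(target, seed: bytes):
    """Try every single-byte mutation of a valid seed and collect inputs
    that raise anything other than the parser's own ValueError."""
    crashes = []
    for pos in range(len(seed)):
        for value in range(256):
            data = bytearray(seed)
            data[pos] = value
            try:
                target(bytes(data))
            except ValueError:
                pass  # graceful rejection of malformed input is fine
            except Exception as exc:
                crashes.append((bytes(data), type(exc).__name__))
    return crashes
```

Run against a valid seed such as a length byte of 4 followed by `spam`, the harness surfaces exactly the input the developer never tested: a record that claims a zero-length body.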
Conclusion
A simple way to think about all this is that quality is binary – the software either works or it doesn’t – whereas security is not – the software may be secure today but not tomorrow. It is important to have quality code, but quality code may not be secure. Then again, secure code must always be quality code.
Producing software free of known CWEs and CVEs makes for quality code. But that is not enough: maintaining the code with the latest updates to its individual components and hardening it with fuzz testing against future threats are also vital. Both are necessary for secure software applications.
Sourced by Robert Vamosi, security strategist at Synopsys