It is undeniable that modern society is deeply dependent on software.
Governments and employers rely on databases that keep sensitive and personal information. Hospitals rely on machines and software that provide life support and complex semi-automated treatments like radiotherapy. Banks rely on software that manages money and on databases that store financial data.
On a more individual level, most people cannot even imagine life without their favourite mobile apps, their email client or their social media presence on sites like Facebook, Twitter, Instagram, Pinterest or LinkedIn.
Some of the statistics are impressive. In the UK alone, there are at least 38 million active social media users, approximately 58% of the entire population.
Software technology is so popular and widespread because it improves our lives.
But it has been known for many decades that software failures can cause serious problems. A well-known example is Therac-25, a radiation therapy machine involved in at least six accidents between 1985 and 1987. Because of software errors, the machine gave patients radiation doses hundreds of times greater than intended, resulting in death or serious injury.
Given the massive technological developments of the last few decades, one would expect that software failures like the ones that affected Therac-25 would not occur anymore.
The reality is that, even though software engineers have learned substantially from past mistakes and software engineering practice is now more effective and reliable, serious problems caused by software failures still occur regularly.
Two illustrative examples reported by the media in the last year are the software problems that caused HSBC substantial financial losses and the software glitch that resulted in 3,200 US prisoners being released earlier than expected.
These examples demonstrate clearly that the current state of software reliability can improve, and must do so to keep pace with recent developments.
Today, software security, and more generally cybersecurity, is on everyone’s lips. At the start of November the UK government announced plans to spend £1.9 billion to retaliate against cyber-attacks, tackle cyber-scammers and defend businesses.
Many organisations miss a crucial point about ‘cybersecurity’: the first step to a safe and resilient system is software analysis and verification.
Whilst it is impossible to create a system that is absolutely secure, discussions around information security and risk management are often approached incorrectly because they do not effectively address the underlying ‘science’.
Over the past 10 years, it has become clear that mathematical approaches can improve the current scientific standards in algorithm and software design.
At Teesside University, tools and methods have been developed that enable and support the construction of reliable, safe and secure software systems.
Teesside University applies formal methods to the specification, development and verification of software because these mathematically-based techniques can deliver strong, provable guarantees about systems.
One of the key ideas in the research at Teesside is that, by treating computer programs as mathematical entities, one can reason about them in a very precise way.
In some of the most recent work, for example, Teesside University used a mathematical theory called separation logic to verify properties of FreeRTOS, one of the most popular operating systems for embedded devices.
Verification tools based on separation logic enable reasoning about small, independent parts of the program memory, rather than having to consider its entire state. This makes the verification process manageable and scalable, so it can be applied to large software systems.
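To give a flavour of this style of reasoning, here is the central ‘frame rule’ of separation logic in its standard textbook form (a general property of the logic, not a detail specific to the Teesside tools). A Hoare triple {P} C {Q} states that if the precondition P holds, then running the program C establishes the postcondition Q:

    {P} C {Q}
    ─────────────────
    {P ∗ R} C {Q ∗ R}

provided C does not modify any variable mentioned in R. The separating conjunction P ∗ R says that P and R hold of disjoint parts of memory, so a proof about the small part of memory that C actually touches carries over, unchanged, to the rest of the heap.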
Despite the increasing popularity of verification tools that prove the absence of errors, most software companies still rely solely on software testing to determine whether their products are reliable.
Testing is an important step in the development of software, but, as the famous Dutch computer scientist Edsger Dijkstra once put it, “program testing can be used to show the presence of bugs, but never to show their absence”.
Indeed, software testing produces weaker guarantees than verification tools that prove mathematically the absence of errors.
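To see why, consider a deliberately contrived sketch in Python (an illustration of Dijkstra’s point, not code from any of the systems discussed here):

def is_leap_year(year: int) -> bool:
    # Buggy on purpose: the 400-year exception is missing,
    # so century years such as 2000 are misclassified.
    return year % 4 == 0 and year % 100 != 0

# A plausible hand-written test suite, all of which passes:
assert is_leap_year(2004)        # divisible by 4: leap
assert not is_leap_year(2019)    # not divisible by 4: not leap
assert not is_leap_year(1900)    # century year: not leap

# Yet the defect survives: 2000 is a leap year, but
# is_leap_year(2000) returns False.

Every test passes, so testing shows the absence of bugs on these three inputs only; a proof covering all inputs would be needed to show there are none at all.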
However, testing can be a cheaper process, is required by many industry standards and, when performed well, can considerably increase the quality of software.
An example is the test-generation software being developed at Teesside University as part of a two-year knowledge transfer partnership (KTP) funded by Innovate UK with Applied Integration, an independent UK systems integrator specialising in complex safety-critical control systems.
Applied Integration designs these systems for a wide range of customers across a number of industrial sectors, including automated systems for nuclear attack submarines and for the petrochemical and oil and gas industries.
Typically, for each project, engineers interpret briefs and capture requirements in individual ways, often producing very large documents manually. If a client’s requirements have been misunderstood, all those documents then need to be retrospectively amended.
As part of this KTP project, the university is developing a new standard methodology for capturing client requirements for new control systems, which will allow more consistency across projects.
Moreover, testing the systems developed is a very expensive process, often accounting for 60% of a project’s total cost.
The project will develop and implement mathematically-based techniques that allow test sets to be generated automatically from the captured user requirements, and then run automatically; a rough sketch of the underlying idea follows.
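Property-based testing tools such as the open-source Hypothesis library for Python already work along these lines, turning a requirement stated as a property into a large generated test set (the function and requirement below are invented for illustration and are not Applied Integration’s actual tooling or domain):

from hypothesis import given, strategies as st

def valve_opening(demand: float) -> float:
    """Map a raw demand signal to a valve opening, clamped to 0-100%."""
    return max(0.0, min(100.0, demand))

# Requirement, stated as a property: "the commanded opening is
# always between 0% and 100%, whatever the demand signal".
# Hypothesis turns this single property into hundreds of
# automatically generated and executed test cases.
@given(st.floats(allow_nan=False, allow_infinity=False))
def test_opening_is_always_in_range(demand: float) -> None:
    assert 0.0 <= valve_opening(demand) <= 100.0

Running this under pytest generates and checks the cases automatically, including awkward boundary values a human tester might not think to write down.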
These techniques will give Applied Integration a unique advantage over global competitors.
The company estimates that the positive commercial impact of the KTP will be worth over £5 million over the next three years. This encompasses increases in turnover, new efficiencies in capturing user requirements, and the savings associated with automated testing, an exciting innovation.
As a university, Teesside is committed to improving the current status of software reliability by developing innovative research, by transferring knowledge into industry, and by adequately training the next generation of software engineers.
Teesside University has extensive experience of working with companies, providing expert advice and helping them attract external funding to solve their software challenges.
There is also a plan to launch a cyber security clinic that will provide free cyber security services to the community. Services will include advice and security assessments on software safety and other security topics.
The issue of effective software or cyber security strategy for businesses may seem simple on paper, but getting the right software verification systems in place in the first instance can save a lot of future headaches.
Sourced by Dr João Ferreira, a computer scientist working in the School of Computing at Teesside University