It was inevitable that the demise of Code Spaces at the hands of a hacker would see the wagging of the proverbial finger. Hindsight is a wonderful thing.
But in among the articles lamenting the closure of the business was an interesting piece covering a very similar case that happened the very same week.
One More Cloud, a hosted search-as-a-service provider, also suffered an attack but received less coverage because it was able to rally its defences and survive. So perhaps rather than hold up the attacks as an example of the insecurity of the cloud, we should be regarding them as an indicator of the need for change in how the cloud is managed.
The two hacks had much in common: both happened on AWS EC2, both saw their GitHub repositories targeted, and both saw the compromise of customer data.
Indeed, the Code Spaces hack was in some ways more complex, as it involved a DDoS attack, ransom demand, and deletion of data via the control panel.
In contrast, One More Cloud made just one potentially fatal error: an old, mislabelled API key, believed to have been accessed or leaked via a connection with a third-party collaborator. Investigations were still ongoing at the time of writing.
And yet the two saw very different outcomes. Code Spaces, a code hosting and software collaboration platform, suffered severe data losses due to poor back-up, separation and crisis response, ultimately resulting in the closure of the company.
One More Cloud, on the other hand, faced a week-long fight to rescue customer data but was able to resume full service. The experience revealed problems with its customer communication strategy, but its ability to bounce back through an isolation strategy that protected accounts stood it in good stead.
What they had in common was a confused security structure and an inadequate response plan. It’s important to emphasise that both companies did have security measures in place. The trouble was there were chinks in the armour that the attacker was able to exploit.
Whether we like it or not, the hands of the cloud service provider in any similar scenario remain clean. Access management, back-up and the way data is separated are still very much the user’s responsibility. Security tools are increasingly being offered by the cloud service provider but whether you choose to use them is entirely up to you.
Unfortunately, for many businesses, that takes them into uncharted territory and robs time and resource from their primary line of business. Many of those moving to the cloud can ill afford to sacrifice either, leaving them exposed.
This creates a real crisis for the cloud. Security has become a hot potato that neither the cloud service provider nor the business is willing or able to grapple with. Both make token advances while secretly hoping the hackers don’t come calling.
But rather than perceiving Code Spaces and One More Cloud as a direct hit and near miss, perhaps we should be heeding the attacks as a warning and a catalyst for change. The current ‘hands-off’ strategy is not working and there is a real need for the adoption of some basic security principles across the board.
In essence, a cloud network is no different to any other form of network architecture. There will be inherent risks that need to be assessed, with associated impacts which can be used to identify the risk appetite of the business.
There will still need to be a tried and tested incident response plan, effective lines of communication both internally and externally, and a detailed recovery strategy to resume critical business operations. Yet, at some point, there can and should be liaison with the cloud service provider to coordinate the implementation of these steps. For example, being able to identify access to the management console and view and terminate active sessions must come from the provider.
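By way of illustration, the sketch below uses AWS CloudTrail to pull the last day’s management-console sign-ins so that unexpected sessions can be spotted and escalated to the provider for termination. This is a minimal sketch, not either company’s actual tooling, and the region is a placeholder.

```python
# A minimal sketch, assuming an AWS estate with CloudTrail enabled: list
# recent management-console sign-ins so unexpected sessions can be spotted
# and raised with the provider for termination.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")  # region is an assumption

now = datetime.now(timezone.utc)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
)

for event in events["Events"]:
    # Each record carries the time and user behind a console sign-in.
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```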
In recent days, it has transpired that one of the very tools with which developers have sought to circumvent these hurdles, Elasticsearch, has itself been used to launch DDoS attacks in the cloud.
Elasticsearch can be used to perform searches of, among other things, log files across cloud environments, including AWS EC2. Users have been urged to update their software to fix the vulnerability but, because responsibility for application updates rests purely with them, there is a risk that the update will only be implemented piecemeal. This perpetuates the culture of self-reliance, where security is applied ad hoc, with ignorance and confusion allowing attacks to continue unabated.
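For those who cannot upgrade immediately, a stop-gap exists in the server’s own configuration. The snippet below is a sketch for the Elasticsearch 1.x releases targeted by these attacks, where the hole lay in dynamic scripting (CVE-2014-3120); setting names vary between versions, so check the advisory for your release.

```yaml
# elasticsearch.yml -- interim hardening for 1.x if an upgrade must wait
script.disable_dynamic: true   # close the dynamic-scripting remote-execution hole
network.host: 127.0.0.1        # and stop exposing the HTTP API to the internet
```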
Clearly, there are still security blindspots in the cloud but these can be mitigated. Organisations must ensure they have the rudiments in place: role-based access control, two-factor authentication, encrypted key stores, and remote, offline back-up.
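Checking that such basics are actually in force need not be onerous. As a minimal sketch, assuming an AWS estate and the boto3 SDK, the following flags any IAM user with no two-factor (MFA) device enrolled:

```python
# Read-only two-factor authentication audit sketch: list every IAM user
# and report those with no MFA device enrolled.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"{user['UserName']} has no MFA device enrolled")
```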
There must be vigilance, with activity monitored and anomalies reported in line with the incident response plan, and regular security audits performed to ensure sufficient controls are in place. Organisations can and should seek external assistance with these elements but the organisation should also be brave enough to enter into some difficult discussions with the cloud provider.
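That vigilance can also be automated. The sketch below, again assuming AWS, counts unauthorised API calls recorded by CloudTrail and raises an alarm to the response team; the log group name and SNS topic ARN are placeholders that must already exist.

```python
# A sketch of automated anomaly reporting: turn AccessDenied /
# UnauthorizedOperation errors in a CloudTrail log group into a custom
# metric, then alarm on spikes in line with the incident response plan.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"                             # placeholder
ALARM_TOPIC = "arn:aws:sns:eu-west-1:123456789012:security-alerts"   # placeholder

# Count unauthorised API calls as they land in CloudWatch Logs.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "AccessDenied") || ($.errorCode = "*UnauthorizedOperation") }',
    metricTransformations=[{
        "metricName": "UnauthorizedAPICalls",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Notify the response team when the count spikes.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    MetricName="UnauthorizedAPICalls",
    Namespace="Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[ALARM_TOPIC],
)
```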
Ascertain whether the CSP holds any accreditations, specifically ISO 27001, ISO 9001 and ISO 20000, and check whether the services offered fall within the scope of any such certifications. Does the CSP have its own DDoS mitigation solution or does it rely on an ISP? What firewall capabilities does it have, and do these extend to web application firewalls that can be tailored to the user’s applications and business needs? Does the CSP offer secure VPN access with multifactor authentication? What vulnerability scanning takes place and how is this monitored?
Don’t be afraid to ask where data will be stored (geographically and in terms of separation), how active sessions will be tracked (in terms of recording and termination), and communication maintained and redress sought, should the provider be in breach of service.
In the case of the latter, a specialised standard such as BS 10008:2008 can provide additional protection in the form of its ‘evidential weight and legal admissibility of electronic information’ specification, if the provider adheres to it.
The spate of recent attacks is the proof many have been waiting for to condemn the cloud. But the de-perimeterisation of many networks means we are all now effectively working with, or operating in, a virtual environment at some time. We cannot turn back the clock, and the cloud is here to stay.
Entering the cloud without adequate security is like failing to keep a spare key, or worse, neglecting to lock the door altogether. It’s true, you may never be burgled, but are you willing to take that chance? To continue the analogy, in addition to an effective lock and key, the door has to be fit for purpose, so there has to be a security relationship between the user and the cloud provider.
These sad and cautionary tales serve to remind us that security needs to be implemented effectively, with responsibility designated and documented, and data backed up and held in multiple locations to spread the risk. And, when things go wrong, there is no substitute for a tried and tested response strategy.
But let’s not lose sight of the fact the cloud is a meritocracy that has enabled organisations to scale and compete on equal grounds. What is at stake is no less than people’s livelihoods and future economic growth and that has to be worth defending.
Sourced from Jamal Elmellas, technical director, Auriga