From pet care to pedometers, and from medication reminders to pacemakers, the potential for the Internet of Things (IoT) to impact our daily lives is massive – and to date we’ve only really scratched the surface of its potential.
According to Gartner, IoT will support total services to the value of $235 billion in 2016. But when you also consider that by 2020, 20.8 billion ‘things’ will be connected to the Internet – up from 4.9 billion in 2015 – the possibilities are seemingly endless.
When implemented appropriately, IoT can substantially increase our productivity and enhance our quality of life.
A household can be controlled with a touch of a button, even while the homeowner is in another country.
The healthcare sector has also been quick to adopt IoT to track health information – including foetal heartbeats and blood glucose levels, which has helped reduce the need for direct patient-physician interaction.
For businesses, IoT solutions and services also offer the possibility of new revenue streams.
But as we become more dependent on IoT, connected devices and connected services, the pressure increases on organisations to ensure those services go uninterrupted.
To be reliable, services must deliver 24/7/365 availability, without exception.
Failure to deliver always-on access to IoT services, particularly in its early stages, could severely hamper the progress of the sector.
Even in the seemingly more light-hearted and jovial examples of IoT deployments, poor service and connectivity will make consumers think twice about paying for, and placing trust in, connected ‘things’.
Counting the cost
Financial loss is just one threat for those who cannot guarantee service availability.
According to the Veeam Availability Report, downtime costs enterprise-sized organisations an average of $16 million annually.
Meanwhile, 68% of IT decision-makers acknowledge that the impact of downtime can affect customers’ confidence in the organisation and the brand.
The biggest challenge for IoT is meeting the high expectation that customer data, which is increasingly stored in the cloud, will be available to users wherever and whenever they want it.
A recent tale of woe arose when PetNet's automatic pet feeder system suffered a third-party server outage, causing the intelligent feeding product to malfunction and leaving many pets without food at the height of the summer holidays.
It was a 10-hour outage that affected about 10% of PetNet’s user base.
Mind the gap
This failure to recover quickly highlights how essential it is for businesses adopting IoT to have the right disaster recovery and backup solutions in place to ensure availability in this increasingly connected, digital age.
Disruption can be minimised by having appropriate solutions in place – some of which can reduce downtime to a matter of minutes rather than 10 hours.
An obvious way to stop this happening to your business is to ensure you have regular backups and snapshots from which you can restore.
However, it isn't as simple as just having a backup. The backup and replication standards in most businesses today mean they settle for a low-performing, legacy backup solution that can't keep up with the multiple environments in which their data sits – let alone test those backups to ensure they are actually of a quality you can use.
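As a minimal illustration of that last point – testing that a backup is actually usable – a restore check can be as simple as comparing a checksum of the backup against the original. This is a purely hypothetical sketch, not any particular vendor's tool:

```python
import hashlib


def sha256_of(path, chunk_size=65536):
    """Return the SHA-256 digest of a file, read in chunks to handle large backups."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backup(original_path, backup_path):
    """A backup only counts if it matches the data it claims to protect."""
    return sha256_of(original_path) == sha256_of(backup_path)
```

In practice, real verification goes further – actually booting or mounting the restored copy – but even a checksum test catches the silently corrupted backup that would otherwise only be discovered during a crisis.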
From a security perspective, the increased flow of connected device data must also be protected against loss and unauthorised access.
To avoid the threat of data loss, businesses must deploy near-continuous data protection, verify their protection to guarantee recovery, and use appropriate encryption tools to protect against unauthorised access.
Veeam research has found that, while organisations have to some extent tightened their service level requirements over the past two years – to minimise application downtime (96%) or guarantee access to data (94%) – many are still making costly errors.
With IoT becoming an important asset in people’s lives, failure to recover data in a timely manner is not only inconvenient, but also potentially fatal.
Reconnecting users to their devices and services must be the number one priority in such instances.
Getting a grip on availability
Forward-thinking businesses are now incorporating availability into their data centre strategies and modernisation plans.
Virtualisation is now mainstream and few applications aren’t deemed ‘mission-critical’ by a business or its customers.
In the modern data centre, that often means protecting data using the 3-2-1 rule: keep 3 copies of your data, on 2 different media, with 1 of those copies stored off-site.
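As a rough sketch (purely illustrative, in Python), the 3-2-1 rule can be expressed as a simple check over an inventory of backup copies:

```python
from dataclasses import dataclass


@dataclass
class BackupCopy:
    media: str      # e.g. "disk", "tape", "cloud object storage"
    offsite: bool   # stored away from the primary site?


def satisfies_3_2_1(copies):
    """3-2-1 rule: at least 3 copies, on at least 2 media types, at least 1 off-site."""
    media_types = {copy.media for copy in copies}
    return (
        len(copies) >= 3
        and len(media_types) >= 2
        and any(copy.offsite for copy in copies)
    )
```

Three copies all sitting on the same disk array in the same building would fail this check, which is exactly the kind of gap – a single point of failure – that the rule is designed to expose.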
As IoT continues to transform industries, making experiences at home and work more efficient, businesses must ensure availability is built into the planning of these deployments.
Such technologies are enabling highly disruptive business models and, along with blockchain and AI platforms, show promise in delivering competitive advantages. However, these advantages mean nothing if organisations cannot adequately support the exponential rise in data that comes with them.
IT leaders need to properly assess these platforms and ensure a solid availability strategy is in place to underpin their digital goals.
With availability at the centre of an IoT strategy, innovation can occur, consumer trust will build, your business will reap the benefit of digitisation, and yes, the dog will get fed on time.
Sourced by Richard Agnew, VP NW EMEA, Veeam