“I believe that ‘one size does not fit all’… in every vertical market I can think of, there is a way to beat legacy relational DBMSs by 1-2 orders of magnitude” – Michael Stonebraker, database guru, founder of Ingres, Vertica and VoltDB, and former CTO of Informix.
Things are really heating up in the database industry; the last ten years have seen an explosion in the number of products and vendors (analyst firm 451 Research now tracks 386 different products on its popular ‘tube-map’ of the database landscape).
The rapid evolution of the internet and the move towards data-driven decisions across all industries have created a rich environment for database innovation.
The worldwide adoption of services from Google and Facebook has accelerated this trend, creating a need for backend database technology that operates at web scale: millions or even billions of users accessing services whenever they want them, through mobile apps that rely on real-time decision making and instant responses.
For over thirty years things were pretty simple: relational database technology was ‘good enough’ for almost everything and became the dominant storage model for transactional platforms and enterprise applications. You could use the same standard database technology for everything from managing customer data to processing transactions to tracking telco usage records.
These applications could be deployed and run across businesses without too many surprises, as volumes grew at a reasonable, fairly predictable pace. The core technology was typically mainframe or client-server based and sized for known transaction rates and numbers of users, and no one ever got fired for choosing a relational database.
The internet changed everything for databases: suddenly companies could sell directly to consumers across the globe. Traditional relational databases were not equipped to handle this shift, which demanded rapid, unplanned adoption, flexible data sets for new use cases, and support for new types of users as business models evolved.
The pressure for increased performance, combined with the falling cost of memory, opened the door to optimisation with in-memory database technology. But ‘falling’ does not mean ‘cheap’, and in-memory systems remained a luxury for key platforms that could justify the cost.
The launch of the iPhone and the global take-up of social media and other apps created a new problem in the late 2000s. Facebook and other internet giants were trying to figure out how they could handle billions of users and still give rapid responses to queries.
Facebook’s engineers developed their own technology, Cassandra, to power these ‘web-scale’ requirements: large-scale data storage, high availability, and a cost-efficient hardware model that could scale to unprecedented volumes.
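To make that trade-off concrete, here is a minimal sketch of the model Cassandra popularised, using the open-source DataStax Python driver. The contact point, keyspace and table names below are illustrative assumptions, not details of Facebook’s deployment: data is replicated across several commodity nodes for availability, and consistency is tuned per query rather than enforced globally as in a traditional relational database.

```python
# A minimal sketch, assuming a locally reachable Cassandra cluster.
# Contact points, keyspace and table names are illustrative only.
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])  # any node in the cluster can serve requests
session = cluster.connect()

# Replicate every row to 3 commodity nodes: losing one node loses no data,
# and the cluster keeps serving reads and writes.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS user_events
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("user_events")
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        user_id text, event_time timestamp, payload text,
        PRIMARY KEY (user_id, event_time)
    )
""")

# Consistency is chosen per statement: ONE favours latency and availability,
# QUORUM favours stronger consistency at the cost of extra round trips.
insert = SimpleStatement(
    "INSERT INTO events (user_id, event_time, payload) "
    "VALUES (%s, toTimestamp(now()), %s)",
    consistency_level=ConsistencyLevel.ONE,
)
session.execute(insert, ("user42", "clicked_buy"))
```

Lowering the consistency level keeps responses fast even when individual nodes fail, which is precisely the availability-over-strict-consistency choice that the relational systems of the time could not offer.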
Cassandra is one of the earliest high-profile examples of a company whose unique vertical requirements could not be met by existing generic technologies, forcing it to build a specialised solution.
The switch to on-demand services across many industries, following the success of Uber, Netflix and others, has driven the latest wave of database innovation. This transition requires that companies be able to extract insights and make decisions in real time despite massive volumes of dynamic data.
The need for real-time analytics is creating opportunities for vertical databases in mobile, financial services, advertising and other industries to meet specialised requirements around performance, scale and consistency.
The next part of this series will discuss what’s so different about the needs of a telco, and why existing database technologies struggle to meet them.
Sourced by Dave Labuda, founder, CEO and CTO of MATRIXX Software