As big data becomes further entrenched as a mainstream challenge facing organisations both large and small, much of the industry discussion continues to revolve around two critical business needs: the need to find more effective ways to manage the explosion of data and data types, and the need to better capitalise on the opportunity this proliferation presents to make smarter, more insightful decisions.
Amid this trend, which has received so much industry attention, another critical need is often overlooked: the need to build an ethical big data practice, one with proper sensitivity to the privacy concerns of customers.
The ethics of big data is a complicated subject, with much of the complexity stemming from the quirks inherent within the concept of privacy itself. What exactly is privacy, and more importantly, what are an individual’s rights to privacy?
The boundaries of privacy differ across cultures, and while it’s generally understood that individuals are entitled to some level of privacy, the question of whose responsibility it is to protect that privacy has no single, clearly definable answer.
The issue becomes even murkier when dealing with the nuances of information exchanged between customer and business. In some cases, namely with data disclosed to service providers such as doctors and lawyers, the onus has always been on the receiver to protect the privacy of that information.
In the case of customer-business relationships that are more transactional in nature, however, such as with a retailer, the protection of private information has historically been seen as the responsibility of the discloser. If people don’t want information exposed to a given business, they simply shouldn’t provide those details in the course of their dealings with that business.
There was a time in which that concept made sense. In the big data era, however, that’s no longer the case. Digital information is fluid, its exchange is simple, and its distribution is instantaneous, global, and increasingly essential.
Anyone in marketing will attest to the importance of understanding individual consumer identity and patterns; obtaining detailed information about a given customer’s background and behaviour has become critical to the survival of many modern businesses. In other words, businesses aren’t just receiving information anymore; they’re using information, with a purpose.
The sea change inherent in businesses’ need to benefit from personal information means the responsibility for ensuring privacy has permanently shifted from the discloser to the user. Organisations must therefore take immediate steps to ensure their burgeoning big data programmes are implemented with the ethics of privacy in mind.
Though the blueprint for doing so is not nearly complete, here are four steps that can help businesses move in the right direction:
1. Understand the risk factors
Many big data privacy breaches are unintentional, occurring without the analyst even being aware of them. That’s why it’s critical for everyone involved in a big data initiative to understand the risks associated with the handling of customer information.
Perhaps the biggest risk lies in the merging of bought data with other pattern data to infer or detect non-disclosed information – information that may be private in the eyes of the customer.
That’s precisely what happened in the case of one name-brand retailer when it sent coupons for pregnancy-related items to a teenager, prompting an angry father, unaware his daughter was in fact pregnant, to berate a local store manager. To keep the practice ethical, target customers only on the basis of information disclosed within a single transfer of data, not on attributes inferred by combining sources.
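To make the risk concrete, here is a minimal, hypothetical Python sketch of the mechanism described above: applying bought “pattern” rules to ordinary transaction data surfaces a trait the customer never disclosed. The customer names, items and inference rule are all invented for illustration.

```python
# Hypothetical sketch: merging bought pattern data with transaction data
# can infer attributes the customer never disclosed.

# Data each customer knowingly disclosed in a single transfer
transactions = {
    "customer_1": ["unscented lotion", "vitamin supplements"],
    "customer_2": ["garden hose", "dog food"],
}

# Third-party pattern data bought from a broker: item -> inferred trait
bought_patterns = {"unscented lotion": "possible_pregnancy"}

def inferred_traits(basket):
    """Traits implied by merging a basket with the bought pattern data."""
    return {bought_patterns[item] for item in basket if item in bought_patterns}

for customer, basket in transactions.items():
    traits = inferred_traits(basket)
    if traits:
        # customer_1 is now tagged 'possible_pregnancy' -- a fact that was
        # never disclosed in any single transfer of data
        print(f"{customer}: inferred (non-disclosed) traits {traits}")

# The ethical rule above: build campaign lists only from what was actually
# disclosed, i.e. ignore inferred_traits() entirely when targeting.
```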
2. Educate users
Putting policies in place that prevent employees from detecting non-disclosed information is only part of the equation. Employees must still take personal responsibility for what they do with customer data – even data obtained in an ethical manner.
It’s thus imperative for a company to educate its staff on what level of targeting is acceptable, and, conversely, where the grey areas are that can potentially be harmful to both the customer and the company’s brand.
For example, if data shows that a specific customer has a heightened cancer risk, should a company treat him as such in its promotional materials and product offerings, despite the fact that he may be perfectly healthy today? Employees need clear guidelines as to how specific situations should be handled.
3. Design and use tools with privacy in mind
This applies to both the vendors creating analytical tools and the analysts using them. Today’s big data analytics technologies are undeniably powerful and only getting better, enabling analysts to reach further than they ever could under the do-not-disclose privacy paradigm.
As the saying goes, with great power comes great responsibility. Without careful consideration by both the designer and the user of a given analytical system, acceptable boundaries can and will be overstepped.
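As one hypothetical illustration of what building “privacy in mind” into a tool can look like, the Python sketch below hard-codes a guardrail into the analytics layer itself, so the boundary is enforced by the system rather than left to individual judgement. The field names, lists and exception are invented for illustration.

```python
# Hypothetical "privacy by design" guardrail: an analytics helper that
# only releases fields the customer explicitly disclosed.

DISCLOSED_FIELDS = {"name", "email", "purchase_history"}  # customer provided
DERIVED_FIELDS = {"pregnancy_score", "cancer_risk"}       # inferred, off-limits

class PrivacyViolation(Exception):
    """Raised when a query requests non-disclosed, inferred attributes."""

def select(record: dict, fields: set) -> dict:
    """Return only disclosed fields; refuse any query touching derived ones."""
    blocked = fields & DERIVED_FIELDS
    if blocked:
        raise PrivacyViolation(f"Query requests inferred fields: {blocked}")
    return {f: record[f] for f in fields & DISCLOSED_FIELDS}

# Usage: an analyst asking for an inferred attribute is stopped by the
# tool itself, before any judgement call is needed.
customer = {"name": "A. Smith", "email": "a@example.com",
            "purchase_history": ["lotion"], "cancer_risk": 0.7}
print(select(customer, {"name", "purchase_history"}))  # allowed
# select(customer, {"cancer_risk"})                    # raises PrivacyViolation
```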
4. Lead from the top down
Employees take their cues from company leaders, and big data is no exception in this regard. Top-level executives and business leaders must make clear to the company’s analysts, marketers and line-of-business managers that it is not acceptable to achieve company objectives at the expense of customer privacy.
The need to derive business value from big data is paramount, but in a world where technology regularly outpaces social norms, and customers haven’t yet had time to establish their comfort zones around the use of their personal and usage-pattern data, the onus is on business leaders to demand that employees follow an ethical path forward.
Sourced from Matt Wolken, VP and GM, information management, Dell Software