The past several years have seen conversations around consumer privacy in tech become increasingly prominent. Whether it be concerns about data collection practices, the clash between child safeguarding and privacy, the developing regulatory environment around tech privacy, or how some states are undermining privacy via tech: privacy is a major commercial, regulatory, and social concern.
Yet, despite our collective woes regarding privacy, many of us continue to voluntarily give our data away. On the surface, this merely seems like a sign that convenience continues to beat out privacy concerns, regardless of how many of us fret at the same time about the broader social or philosophical problems this may cause. But I think it's a sign of something else – ‘privacy’ is a far less binary concept than many of us claim.
What’s often missing from the constellation of rhetoric around privacy is a clear understanding that there are actually multiple types of privacy concerns, such that consumer privacy can be jeopardised in one regard but not in another. We’re not inherently opposed to the idea of our data being collected, retained, and used: instead, we’re concerned with the people who we give it to being transparent, fair, and consistent with its use.
That’s why I’ve come to see privacy in tech as referring to three dimensions of data use concerns faced by consumers: freedom of information vs control of information; tracking vs anonymity; and controlled vs surrendered personal data. Through making sense of each of these dimensions, we can make a great deal of headway in discussions regarding consumer privacy.
Axis 1: Freedom of information vs control of information
Our first axis of consumer privacy is that of freedom of information vs control of information. This dimension considers the legal situation consumers find themselves in when it comes to their freedom to exchange information.
The freedom of information axis most obviously relates to how much a state tolerates open and free communications: a maximally free state on this axis is one that pursues no control over speech and data sharing, whereas a minimally free one sees the state heavily regulate what information can be shared between citizens.
But alongside the question of direct state intervention, the freedom of information question also covers people’s freedom to communicate and share data without fear of reprisal from private actors. In a state that is more willing to permit frivolous lawsuits, for example, the government may never intervene directly, yet many people may still find their speech stifled by legal action from aggrieved individuals and businesses.
This also extends to how much autonomy online platforms have to enforce draconian codes of conduct – if all the main platforms people use to communicate regularly ban users at a moment’s notice with no appeals process, then freedom to communicate is severely restricted in practice.
Axis 2: Tracking vs anonymity
Our second axis of privacy relates to that of the technological environment consumers find themselves in – tracking vs anonymity. This dimension looks at how extensively the services used by consumers log their traffic or input for future reference.
For many services and products, tracking is a prerequisite to being viable in the first place. Whether it be Google AdSense, Facebook’s ad feed, or Amazon’s product recommendations, the core business models of many large tech platforms depend on tracking user activity.
Some are less invasive than others, however, and are more transparent about what information they collect and track. For example, DuckDuckGo, Neeva, and Apple have all distinguished themselves in the public eye by radically limiting tracking or making it very clear what data is being tracked on their platforms.
Axis 3: Controlled vs surrendered data
Our third and final axis of privacy looks at how much control and autonomy users have over the data they delegate to a platform or service.
Think of this axis as demarcating the difference between Facebook, which despite being aggressive with its tracking has extensive settings to allow users to control the collection and use of their data, and a platform such as Google which gives users little discretion as to how their data is collected or used.
What does this mean for privacy?
It’s true that many people will happily exchange privacy for convenience, but each person’s appetite for reduced privacy is different, and their weighting of the three axes will vary: some users may only care that they’re safe under the law, others may only care about tracking by platforms, while many more are deeply concerned about all three privacy axes at once.
All this means that there’s no one-size-fits-all response to privacy concerns. What considering the three axes shows us, rather, is that companies and platforms wanting to connect with consumers should treat privacy as a differentiator. Rather than a bolt-on privacy policy, companies should treat concerns about privacy, data, and security as fundamental questions that help build and refine their identity, culture, and product fit, so they can better identify their niche in the market and tap it appropriately.
At the heart of it, many people won’t have a problem surrendering some privacy if they feel companies are truly transparent about it. The public wants personalisation, cheaper goods and services, and products that meet its needs. We want products that make our lives more convenient – we just want to know the truth about what companies are doing and what they stand for.
Through better understanding a product’s place on the axes of privacy through clear communications from its creators, consumers can have that transparency. And that will help us shift the conversation around privacy away from one of simplistic good and bad to one of what fits us, our needs, and our identity.