As long-time advocates of privacy on the web, we at eyeo want to mark the upcoming Data Protection Day by shining a light on concerns and trends in online privacy, which we see as a societal issue. Thus, we sat down with our Group Data Protection Officer, Cornelius Witt, to discuss the state of affairs, solutions and complications prevalent in the field.
Data Protection Day – can you explain what this day means and where the idea comes from?
Originally, the Council of Europe initiated Data Protection Day in 2006. Back then, polls showed that 70% of European citizens did not understand how their personal data was being protected. As a reaction to this lack of awareness, Data Protection Day was born.
The 28th of January was chosen as it was the date when the Council of Europe’s data protection convention, known as “Convention 108”, was opened for signature. The “Convention 108” is the first international treaty that aims to protect the personal data of individuals. By now, 55 countries have signed and ratified the Convention.
But what does Data Protection Day stand for today?
Well, over time, Data Protection Day has become a global phenomenon. Organizations across the globe celebrate Data Protection Day – or Data Protection Week, even – for instance with companies releasing new privacy-enhancing features or NGOs initiating awareness campaigns on the importance of privacy.
And what is meant specifically by “privacy” in our current online world? Sometimes it’s used as a catchall.
I do think there are two dimensions to this answer: on the one hand, from a more legal point of view, privacy is a fundamental right. The United Nations' Universal Declaration of Human Rights defines privacy and the protection against attacks on or arbitrary interference with one's private life as a universal right, similar to other individual rights, such as the right to freedom of expression or the right to seek asylum. And this fundamental right is then manifested in more concrete laws, such as the GDPR in Europe.
On the other hand, privacy has also become a more critical topic from a societal and business perspective: more and more laws provide legal protection for citizens, while at the same time many businesses are shifting their stance on privacy as well. One example of this is the privacy tech sector, meaning technologies that preserve or enhance privacy by having fundamental data protection principles built in. This market is booming right now, with privacy startups receiving more than $4 billion in investment across more than 500 funding rounds.
So, how has privacy become also a socio-political issue? What implications does it have on our lives and society in general?
It’s interesting to think about this question from a historical perspective: the very first attempts to legally protect an individual’s privacy aimed to safeguard citizens against interference from government or rulers. Nowadays, at least in most parts of the world, the focus has shifted to protect the privacy of users against the practices of private corporations, even though government access practices remain a data-protection issue as well.
A very current example of the societal implications of privacy is the use of facial recognition software: many organizations and politicians are calling for a general ban on facial recognition, arguing that this technology fundamentally undermines privacy and, when adopted in public places for instance, creates de facto surveillance. Others see benefits in this technology, for example in law enforcement. And, no matter which side of the issue you stand on, this crossroads of privacy and technology clearly shows the huge societal implications we are witnessing.
As you mentioned the role of the user – what is your take on the so-called “privacy paradox”?
The term “privacy paradox” was coined in 2001 by researcher Barry Brown, then at HP, who found a disconnect between what users say about privacy and how they actually behave. In a nutshell, the paradox holds that users say they care about their privacy, for instance in surveys, but do not actually make privacy-friendly choices regarding the services or apps they use.
For a long time, I was a strong supporter of this hypothesis, as so much evidence pointed in that direction: while studies often recorded that users say they care about their privacy, the actual user behavior showed the opposite, and people did not seem to, in truth, care if services were infringing upon their privacy.
However, my view has changed: my hypothesis now is that the “privacy paradox” is becoming less and less applicable. There is ever more evidence suggesting that actual, tangible changes in user behavior are happening.
What can businesses do to protect consumers’ data?
In general, I think it’s crucial that companies fully implement the core privacy principles, such as collecting as little user data as possible and only to the extent that is really necessary. This is not something particularly new, but I see so many online services that do not follow these principles, e.g., by requiring users to set up an account and provide personal information that is not relevant to the service as such. Also, I believe that companies should have a very good understanding of data governance, meaning that organizations should be fully aware of all personal data they hold and how this information is processed. Especially for data-driven companies working across many teams, this can be challenging, but it is an absolute necessity, and more and more privacy tech providers offer innovative solutions for it.
Lastly, I think the potential of data anonymization and pseudonymization is not yet fully realized in the market; there are many ways in which data can be used compliantly when it is properly anonymized or pseudonymized.