When Mario Costeja González googled himself in 2009, two prominent results appeared: home-foreclosure notices from 1998, when he was in temporary financial trouble. The notices had been published in the Spanish newspaper La Vanguardia and recently digitised, but their original purpose – attracting buyers to auction – had lapsed a decade earlier, as had the debt. Costeja González asked the newspaper to remove them. When that was unsuccessful, he challenged Google, and the case was eventually elevated to the European Court of Justice, Europe’s highest court. It took five years to have those 18 words delisted from Google search results for his name.
Forgetting and remembering are complex, messy, human processes. Our minds reconstruct, layer, contextualise and sediment. The worldwide web is different. As Google founders Sergey Brin and Larry Page described in their original Stanford research paper, the web is “a vast collection of completely uncontrolled heterogeneous documents”.
And search engines take that corpus and give it perpetual, decontextualised freshness. Vast catalogues of human sentiments and stories get served up at the mercurial whims of black box algorithms. Brin and Page themselves warned that advertising-funded search engines would be “inherently biased towards the advertisers and away from the needs of the consumers”, in a way that is “difficult even for experts to evaluate” and therefore “particularly insidious”.
The crude, timeless nature of digital memory – and the unquestioned power of private, commercially motivated companies that control it – was a challenge that 59-year-old Costeja González decided to tackle directly.
In May 2014, the ECJ found against Google. It recognised that when we enter someone’s name as a search query, scattered moments of their life are presented mechanistically, with a significance distorted by lack of context, building a detailed but selective profile. So what are the rights of the individuals to whom those profiles relate? And what are the rights of those seeking information?
The question produces an interesting philosophical divide. One position is that once online, information should stay online (except when unlawful under defamation, copyright or criminal law). This is the starting point for most US internet companies, free speech organisations and the media – a typical view among those raised on US First Amendment thinking.
On the other hand, there are all manner of reasons to remove data, other than being compelled by law. One might want to remove information for emotional reasons, ethical reasons, or “just because”, when there is no countervailing interest. Some recent removal requests approved by Google included patient medical histories; intimate private photos; and old threads in private group conversations that ended up online.
They include prominent reminders that an individual was the victim of rape, assault or other criminal acts; that they were once an incidental witness to tragedy; that those close to them – a partner or child – were murdered. The original sources are often many years or decades old. They are static, unrepresentative reminders of lives past, lacking the dynamic of reality.
In the real world, information sediments over time, affording people the capacity to move on, remembering but not being burdened by their past. Offline, we communicate in different ways with different “publics” and purposes.
And this was the view of the court in Luxembourg, which drew its justification from EU data protection law. The court ruled that personal data should be removed from search results on a person’s name when outdated, inaccurate, inadequate, irrelevant, or devoid of purpose, and when there is no public interest.
‘Right to be forgotten’
A public interest demands that relevant information remain accessible – which is why pertinent information about elected politicians, public officials, professionals and criminals, and even just bad reviews, rightly remains accessible, and why Google rejects such requests. Importantly, nothing in the ruling suggested that source material should be deleted: it concerned solely the prominence of information in search engine results.
The phrase “right to be forgotten” was mentioned only briefly in the judgment but was immediately seized upon by the media, Google and regulators. Though now replaced by the more accurate “right to delist”, the impact of the label “right to be forgotten” was to force the debate into binaries: forgetting vs remembering, privacy vs freedom of expression, censorship vs truth or history. These are false dichotomies, insufficiently nuanced to cope with the reality of our lives and the complexities of human existence.
The point of having rights against search engines is not to manipulate memory or eliminate information, but to make it less prominent, where justified, and to combat the side-effects of the uniquely modern phenomenon of information that is instantly, globally and perpetually accessible.
Since when has the internet become “truth”, or “memory”? And since when has “history” been reduced to Google’s commercially prioritised list of an imperfect collection of digital traces? Such elisions ignore the nuance of forgiveness and understanding, in conjunction with memory itself, in building truth and justice. They undervalue privacy and autonomy, at the price of near-total transparency, in building community and security.
The all-or-nothing framings imposed on this case constrain, influence and shape the narrative of a much broader war: the struggle for our digital identities. We have reached a critical moment. Control over our personal data has been all but lost online: lost to corporations, to governments; lost to each other. How can we, as individuals, be empowered by the huge benefits of digital connectivity and global information flows, yet still retain some personal control over the way our identities are represented and traded online? Costeja González’s case is a small but critical battle on that broader terrain.
Nine months after the European ruling, it is clear that Google’s implementation has been fast and idiosyncratic, allowing the company to shape interpretation to its own ends and to gain an advantage over competitors and regulators forced into reactive mode. It has avoided a broader and much deeper reflection on digital public space, information sedimentation, and the exploration of collaborative solutions between public and private actors – such as a joint request service across different search engines, with processes for obtaining confidential advice from publishers and public officials.
Veneer of authenticity
A little more than two weeks after the ruling, Google launched an online form for citizens to identify search result links about themselves that are “irrelevant, outdated or otherwise objectionable” – only a partial reading of the governing law, which also includes “incorrect, inadequate or misleading”. It started to remove links one month later. At the time of writing, the company had received 218,427 requests, comprising a total of 789,496 links. It had reached a decision on 83% of the links and had actually removed 264,450 of them, or 34%. Yet all this has been done without disclosing its internal processes, removal criteria or how it is prioritising cases.
Google established an impressive “advisory council” of formidable experts, insulating its processes with a veneer of authenticity and respectability – even though the council has been excluded from any real knowledge of what Google is doing internally, since the company has revealed so few details of the cases it is processing. Created to compete with democratically legitimate expert regulatory bodies, the council’s work culminated in an “independent” report released in February 2015, which set out recommendations based on seven recent consultations across Europe.
When the ECJ announced its ruling, Google criticised it as “disappointing”, “striking the wrong balance” and “going too far”. Yet despite highlighting the “many open questions” of the ruling, Google chose not to wait for guidance from the regulators, which emerged eventually in late November (and which Google has since ignored). It has taken every opportunity to passively promote its role as a “truth” engine while avoiding discussion of the deficiencies of search: algorithmic bias, incomplete coverage, murky reputation management practices and heavy cultural bias.
Most controversially, Google’s interpretation is that successful requests are removed only under European Google domains such as google.fr, google.co.uk and google.de. In contrast, requests alleging copyright infringement – which outnumber privacy requests by 1000:1 – are implemented under US law on all its domains worldwide, including the largest, google.com.
Google’s decision to remove links only on its European domains was “unacceptable” and left a trivial workaround that undermines the ruling, according to the November report by regulators from 28 European countries and the European Commission. But Google’s decision, backed by its advisory council, was tactical: it shifted discussion away from the core issue of establishing and protecting digital rights, and instead encouraged conflict between apparent European and US viewpoints.
A line repeatedly used by executive chairman Eric Schmidt and chief legal officer David Drummond is that Google has always seen itself as a “card index” for the web – an oddly archaic analogy that implies objectivity, memory and public record. Yet Google can and does curate its search results, including material it judges to be promoting terrorism or child abuse.
Google refuses to countenance the possibility of an algorithmic flaw, yet the question remains why Costeja González’s old debts – and the private, incorrect or outdated information of others who have made requests to Google since – featured so prominently in search results.
While Google previously removed personal information in limited cases of clear and imminent harm, such as identity theft or financial fraud, this case represents the first generally accessible speed-bump on what has been an open road for Google to aggregate and proliferate publicly accessible personal information.
The right to delist forces us to look at the privatised reality of digital life, and to take responsibility for what we see. Internet companies have been successful in making us believe that the internet is “public space” when, in reality, it is an aggregation of privately owned services: not public parks, nor a Greek agora where politics is made, but a long run of amusement parks. Yet the notion of public space is fundamental to democratic, community-orientated rule.
Asymmetries of power
So, if we concede that the internet is public space, that the web is the public record, then Google, on its logic, is the custodian and indexer of our personal records. We must be careful to distinguish the offerings of a handful of internet services from the real public record guaranteed by law, from archives, and even from human memory itself – which will all continue to be available when the amusement park closes.
Citizens have unwittingly come to comprehensively rely on privately owned, culturally biased, black-box services in navigating the digital ecosystem. Google has benefited vastly from this custom, creating enormous asymmetries of power when compared to the creators, subjects and consumers of digital content.
The selective information flowing from Google’s sophisticated PR machine has seen little pushback from publishers. After the web form for requests was introduced, the press was told of tens of thousands of requests, many of them said to concern criminals, paedophiles, dodgy doctors and politicians.
The first removal was made public not by Google, with a clear breakdown of what was being delisted and why, but by prominent journalists reacting to alarming emails sent to their webmasters headed “Notice of removal from Google search”.
By selectively covering the most sensational removal requests, the media created the false impression that most of those seeking delisting are “bad people”. The concern that they might be allowed to cover their tracks is understandable, but the broader implication is dangerously misleading.
Regulators need to take a more active and central role in these kinds of legal and ethical debates, but have struggled to keep pace with technology. Most of Europe’s 31 national data protection authorities are cumbersome, under-resourced bureaucracies, issuing occasional, seemingly random fines and reacting only when a court clarifies the law. Europe’s data protection laws need to be deconstructed, simplified and rebuilt into a more workable form. Nevertheless, their aspirations are critically important.
The same regulators should be encouraging a more nuanced and transparent discussion with Google and other search engines, confronting complex issues such as how the ruling sits alongside laws on processing sensitive data, the role of the media and of sources. Yet regulators have been far too inactive, tolerating misrepresentation of removal requests by the press and failing to insist on greater transparency, both of which have undermined the protection of individuals’ data.
Nuance, empathy and respect
Publishers, too, have a case to answer. Was La Vanguardia right to republish its entire archive, or was it careless? Balancing transparency and the protection of the individual, publishers should consider tailored responses: removing the article at source or pseudonymising the subject; removing data from the search engine; geo-filtering; a right to reply; or updated contextual information.
We need more sophisticated technical processes to improve how personal data is handled, flagging data as sensitive so that search engines and data processors apply data protection principles in a more intelligent way – with the nuance, empathy and respect individuals command in real life.
As information security expert Dan Geer characterised it, the right to delist is “the only check on the tidal wave of observability that a ubiquitous sensor fabric is birthing now, observability that changes the very quality of what ‘in public’ means”.
This struggle for freedom, autonomy and control is unfolding within a digital ecosystem defined by surveillance. We are already a long way along the path of a parasitic system that offers ‘free’ services in return for the exploitation of personal data. But this should not mean that we throw up our hands in despair, abandoning responsibility for our digital identities and accepting the permanent circulation of personal data beyond all control.
Blunt, binary logic might work for machines, but it doesn’t work for humans. Our right, and our basic human need, to disclose, seek, find, transform, and distribute information must be reconciled with our equal right and need to be left alone. We have a right to decide to withhold, to remain silent, to resist. This is what is at stake here: our own rightful sovereignty over our life stories, our personal narratives, our communications and even our very memories themselves.