There’s a fascinating article in the Guardian about how Berlin has become a centre for “digital exiles”: people, mainly Americans, whose online activism has put them in the crosshairs of various security services, leading to low-level harassment or, occasionally, high-level harassment, such as this frightening story:
Anne Roth, a political scientist who’s now a researcher on the German NSA inquiry, tells me perhaps the most chilling story. How she and her husband and their two children – then aged two and four – were caught in a “data mesh”. How an algorithm identified her husband, an academic sociologist who specialises in issues such as gentrification, as a terrorist suspect on the basis of seven words he’d used in various academic papers.
Seven words? “Identification was one. Framework was another. Marxist-Leninist was another, but you know he’s a sociologist… ” It was enough for them to be placed under surveillance for a year. And then, at dawn, one day in 2007, armed police burst into their Berlin home and arrested him on suspicion of carrying out terrorist attacks.
But what was the evidence, I say? And Roth tells me. “It was his metadata. It was who he called. It was the fact that he was a political activist. That he used encryption techniques – this was seen as highly suspicious. That sometimes he would go out and not take his cellphone with him… ”
He was freed three weeks later after an international outcry, but the episode has left its marks. “Even in the bathroom, I’d be wondering: is there a camera in here?”
This highlights a dichotomy that I’ve never seen well formulated, one that pertains to many legal questions about the damage inflicted by publishing or withholding information: are we worried about true information or false information? Is it more disturbing to think that governments are collecting vast amounts of private and intimate information about our lives, or to think that much of that information (or the inferences drawn from it, which also count as information) is wrong?
As long as the security services are still in their Keystone Cops phase, and haven’t really figured out how to deploy the information effectively, it’s easier to get exercised by the errors, as in the story above. Once they have learned to apply the information without conspicuous blunders, the real damage will be done by the ruthless application of broadly correct knowledge of everyone’s private business, and by the crushing certainty that we have no privacy.
It’s probably a theorem that there is a maximally awful level of inaccuracy. If the information is completely accurate, then at least we avoid the injustice of false accusation. If the information is all bogus, then people will ignore it. Somewhere in between, people get used to trusting the information, and will act crushingly on the spurious as well as the accurate indications. What is that level? It’s actually amazing how much tolerance people have for errors in an information source before they will ignore it — cf. tabloid newspapers, astrology, economic forecasts — particularly if it’s a secret source that seems to give them some private inside knowledge.
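The intuition can be made concrete with a toy model (my own illustration, not anything from the Guardian piece): suppose trust in a source grows with its accuracy, and the harm done by false accusations is trust multiplied by the error rate. Under the simplest assumption, that trust is just proportional to accuracy, the worst case falls exactly in the middle:

```python
def harm(accuracy: float) -> float:
    """Harm from acted-upon errors: how much the source is trusted,
    times how often it is wrong. Trust-tracks-accuracy is a deliberate
    simplification; the text suggests real trust is far more forgiving."""
    trust = accuracy
    error_rate = 1.0 - accuracy
    return trust * error_rate

# Search a grid of accuracy levels for the maximally awful one.
grid = [i / 1000 for i in range(1001)]
worst = max(grid, key=harm)
print(worst)  # 0.5 — under this model, half-right sources do the most damage
```

If trust rises faster than accuracy, as the tabloid and astrology examples suggest it does, the maximally awful point shifts toward even lower accuracy; the interior maximum is the point, not its exact location.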
On a somewhat related note, Chris Bertram at Crooked Timber has given concise expression to a reaction that I think many people have had to the revelations of pervasive electronic espionage by Western democratic governments against their own citizens:
It isn’t long since the comprehensive surveillance of citizens… was emblematic of how communist states would trample on the inalienable rights of people in pursuit of state security. Today we know that our states do the same. I’m not making the argument that Western liberal democracies are “as bad” as those states were,… but I note that these kinds of violations were not seen back then as being impermissible because those states were so bad in other ways — undemocratic, dirigiste — but rather were portrayed to exemplify exactly why those regimes were unacceptable.