Thursday, 23 February 2012

Big Brother is watching you - and so are his commercial partners

Today, President Obama unveiled a proposal for an internet 'bill of rights':


“American consumers can’t wait any longer for clear rules of the road that ensure their personal information is safe online,” said Mr. Obama.

In a lot of ways, this is to be applauded. The idea, as reported in the media, is to "give consumers greater online privacy protection", which for privacy advocates and researchers such as myself is of course a most laudable aim. Why, then, am I somewhat wary of what is being proposed? Anyone who works in the field is naturally sceptical - but there's more to it than that. There's one word in Obama's statement, repeated without real comment in the media reports that I've read, that is crucial. That word is 'consumers'.

Consumers, citizens or human beings?

The use of the word 'consumer' has two key implications. First of all, it betrays an attitude to the internet and to the people who use it. If we're consumers, that makes the net a kind of 'product' to be consumed. It makes us passive rather than active. It means we don't play a part in the creation of the net - and it means that the net is all about money and the economy, rather than about communication, about (free) expression, about social interaction, about democratic discourse and participation. It downplays the political role that the net can play - and misunderstands the transformations that have gone on in the online world over the last decades. The net isn't just another part of the great spectrum of 'entertainment' - much though the 'entertainment' industry might like to think it is, and hence have free rein to enforce intellectual property rights above all else.

That's not to downplay the role of economic forces on the net - indeed, as I've argued many times before, business has driven many of the most important developments on the net, and the vast expansion and wonderful services we all enjoy have come from business. Without Google, Facebook and the like, the internet would be a vastly less rich environment than it is - but that's not all... and treating users merely as 'consumers' implies that it is.

The second, perhaps more sinister side to portraying us all as consumers rather than citizens - or even human beings - is that it neatly sidesteps the role that governments have in invading rather than protecting our privacy. Treating us as consumers, and privacy as a 'consumer right', makes it look as though the government are the 'good guys' protecting us from the 'bad' businesses - and tries to stop us even thinking about the invasions of privacy, the snooping, the monitoring, the data gathering and retention, done by governments and their agencies.

Big Brother is watching you...

The reality is, of course, that governments do snoop, they do gather information, they do monitor our activities on social networks and so forth. What's more, we should be worried about it, and we should be careful about how much we 'let' them do it. We need protection from government snooping - we need privacy rights not just as consumers, but as citizens. Further, as I've argued elsewhere, rights to privacy (and other rights) on the internet can be viewed as human rights - indeed I believe they should be viewed as human rights. From an American perspective, this is problematic - but it should at least be possible to cast privacy rights on the net as civil rights rather than consumer rights.

...and so are his commercial partners

At the same time, however, Obama is right that we need protection from the invasions of privacy perpetrated by businesses. For that reason, his initiative should be applauded, though his claiming of credit for the idea should be treated with scepticism, as similar ideas have been floating around the net for a long time - better late than never, though.

There is another side to it that may be even more important - the relationship between businesses and governments. They don't just snoop on us or invade our privacy independently - in practice, and in effect, the biggest problems can come when they work together. Facebook gathers the data, encourages us to 'share' information, to 'self-profile' - and then governments use the information that Facebook has gathered. Email systems, telephone services, ISPs and the like may well gather information for their own purposes - but through data retention they're required not only to keep that information for longer than they might wish to, but to make it available to authorities when the 'need' arises.

Worse, authorities may encourage or even force companies to build 'back-doors' into their products so that 'when needed' the authorities can use them to tap into our conversations, or to discover who we've been socialising with. They may require that photos on networks are subject to facial recognition analysis to hunt down people they wish to find for some reason or other - legitimate or otherwise. Facebook may well build their facial recognition systems for purely commercial reasons - but that doesn't mean that others, including the authorities, won't use them for more clearly malign purposes.

We need protection from both

So what's the conclusion? Yes, Obama's right, we need protection from commercial intrusions into our privacy. That, however, is just a small part of what we need. We need protection as human beings, as citizens, AND as consumers. Let's not be distracted by looking at just a small part of the picture.

Sunday, 12 February 2012

What Muad’Dib can teach us about personal data…


With all the current debate about the so-called 'right to be forgotten', I thought I'd post one of my earlier, somewhat less than serious takes on the matter. A geeky take. A science fiction take...

I've written about it before in more serious ways - both in blogs (such as the two-part piece on the INFORRM blog, part 1 here and part 2 here) and in an academic paper (here, in the European Journal of Law and Technology) - and I've ranted about it on this blog too ('Crazy Europeans!?!').

This, however, is a very different take - one I presented at the GiKii conference in Gothenburg last summer. In it I look back at that classic of science fiction, Dune. There's a key point in the book - a key issue - that has direct relevance to the issue of personal data. As the protagonist, Paul-Muad'Dib, puts it:

“The power to destroy a thing is the absolute control over it.”

In the book, Muad'Dib has the power to destroy the supply of the spice 'Melange', the most valuable commodity in the Dune universe. In a similar manner, if a way can be found for individuals to claim the right to delete personal data, control over that data can begin to shift from businesses and governments back to the individuals.

Here's an animated version of the presentation I gave at GiKii...



This is what it's supposed to suggest...

Melange in Dune

In Frank Herbert’s Dune series, the most essential and valuable commodity in the universe is melange, a geriatric drug that gives the user a longer life span, greater vitality, and heightened awareness; it can also unlock prescience in some humans, depending upon the dosage and the consumer's physiology. This prescience-enhancing property makes safe and accurate interstellar travel possible. Melange comes with a steep price, however: it is addictive, and withdrawal is fatal.

Personal data in the online world

In our modern online world, personal data plays a similar role to the spice melange. It is the most essential and valuable commodity in the online world. It can give those who gather and control it heightened awareness, and can unlock prescience (through predictive profiling). This prescience enhancing property makes all kinds of things possible. It too comes with a steep price, however: it is addictive, and withdrawal can be fatal – businesses and governments are increasingly dependent on their gathering, processing and holding of personal data.

What we can learn from Muad’Dib

For Muad'Dib to achieve ascendancy, he had to assert control over the spice - we as individuals need to assert the same control over personal data. We need to assert our rights over the data - both over its 'production' and over its existence afterwards. The most important of these rights, the absolute control over it, is the right to destroy it – the right to delete personal data. That's what the right to be forgotten is about - and what, in my opinion, it should be called. If we have the right to delete data - and the mechanisms to make that right a reality - then businesses and governments need to take what we say and want into account before they gather, hold or use our data. If they ride roughshod over our views, we'll have a tool to hold them to account...

The final solution, as for Arrakis, the proper name for the planet known as 'Dune', should be a balance. Production of personal data should still proceed, just as production of spice on Arrakis could still proceed, but on our own terms, and to mutual benefit. Most people don't want a Jihad, just as Paul Atreides didn't want a Jihad – though some may seek confrontation with the authorities and businesses rather than cooperation with them. In Dune, Paul Muad’Dib was not strong enough to prevent that Jihad – and though there has certainly been a ramping up of activism and antagonism over the last year or two, it should be possible to prevent it. If that is to happen, an assertion of rights, and in particular rights over the control over personal data, could be a key step.

A question of control - not of censorship

Looked at from this direction, the right to be forgotten (which I still believe is better understood as a right to delete) is not, as some suggest, about censorship, or about restricting free expression. Instead, it should be seen as a salvo in a conflict over control – a move towards giving netizens more power over the behemoths who currently hold sway.

If people are too concerned about the potential censorship issues - and personally I don't think they should be, but I understand why they are - then perhaps they can suggest other ways to give people more control over what's happening. Right now, as things like the Facebook 'deleted' photos issue I blogged about last week suggest, those who are in control don't seem to be doing much to address our genuine concerns....

Otherwise, they might have to deal with the growing power of the internet community...




Tuesday, 7 February 2012

Do you want a camera in your kid's bedroom??

This morning's disturbing privacy story is the revelation that live feeds from thousands of 'home security cameras' made by the US company Trendnet have been 'breached', allowing anyone on the net access to the video feeds, without the need for a password. It was reported by the BBC here, by their technology reporter Leo Kelion.

It's a disturbing tale. As Kelion describes it:

"Internet addresses which link to the video streams have been posted to a variety of popular messageboard sites. Users have expressed concern after finding they could view children's bedrooms among other locations. US-based Trendnet says it is in the process of releasing updates to correct a coding error introduced in 2010."

The internet being what it is, news of the problem seems to have spread faster than Trendnet has been able to control it. This is from Kelion's piece again:

"Within two days a list of 679 web addresses had been posted to one site, and others followed - in some cases listing the alleged Google Maps locations associated with each camera. Messages on one forum included: "someone caught a guy in denmark (traced to ip) getting naked in the bathroom." Another said: "I think this guy is doing situps."


One user wrote "Baby Spotted," causing another to comment "I feel like a pedophile watching this".

A cautionary tale, one might think, and to privacy people like me a lot of questions immediately come to mind. Many of them, particularly the technical ones, are answered in the piece itself. There is one question, however, that is conspicuous by its absence from Kelion's otherwise excellent report: what are the cameras doing in children's bedrooms in the first place? Is it normal, now, to have this level of surveillance in our private homes? In our children's bedrooms?

I asked Kelion about this on Twitter, and his initial (and admirably instant) response was that security cameras were nothing new, but the breach in the feeds was. That was news; the presence of the cameras was not. That set me thinking - and made me write this blog. Is he right? Should we just 'accept' the presence of surveillance even in our most intimate and private places? The success of companies like Trendnet suggests that many thousands of people do accept it - but I hope that millions more don't. I also hope that an affair like this will make some people think twice before installing their own 'private' big brother system.

Surveillance is a double-edged sword. Just as any data on the internet is ultimately vulnerable, so is any data feed - the only way for data not to be vulnerable is for it not to exist. Those parents wanting to protect their children from being watched over the internet have a simple solution: don't install the cameras in the first place!
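
To make the point concrete, here's a minimal sketch of how one might check whether a networked camera on their own home network will serve its stream without asking for credentials. This is not Trendnet's actual interface - the address and path are hypothetical placeholders - and it should only ever be pointed at a device you own.

```python
# Minimal sketch: ask a camera on your own network for its stream without
# supplying any credentials, and see whether it objects. The address and
# path below are hypothetical placeholders - substitute your own device's.
import requests

CAMERA_URL = "http://192.168.1.50/videostream.cgi"  # hypothetical local camera endpoint

try:
    response = requests.get(CAMERA_URL, timeout=5, stream=True)
    if response.status_code == 200:
        print("Stream served with no password at all - anyone who finds this address can watch.")
    elif response.status_code in (401, 403):
        print("The camera asked for authentication - at least a password stands in the way.")
    else:
        print(f"Unexpected response: HTTP {response.status_code}")
except requests.RequestException as exc:
    print(f"Could not reach the camera: {exc}")
```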

It's the same story over and over again in the world of privacy and surveillance. We build systems, gather data, set up infrastructures and then seem shocked and amazed when they prove vulnerable. It shouldn't be a surprise... we should think before we build, think before we design, think before we install...

Monday, 6 February 2012

Facebook, Photos and the Right to be Forgotten

Another day, another story about the right to be forgotten. This time it's another revelation about how hard it is to delete stuff from Facebook. In this case it's photos - with Ars Technica giving an update on their original story from 2009 about how 'deleted' photos weren't really deleted. Now, according to their new story, three years later, the photos they tried to remove back then are STILL there.

The Ars Technica story gives a lot more detail - and does suggest that Facebook are at least trying to do something about the problem, though without much real impact at this stage. As Ars Technica puts it:

"....with the process not expected to be finished until a couple months from now—and unfortunately, with a company history of stretching the truth when asked about this topic—we'll have to see it before we believe it."

I'm not going to try to analyse why Facebook has been so slow at dealing with this - there are lots of potential reasons, from the technical to the political and economic - but from the perspective of someone who's been watching developments over the years one thing is very important to understand: this slowness and apparent unwillingness (or even indifference) has had implications. Indeed, it can be seen as one of the main drivers behind the push by the European Union to bring in a 'right to be forgotten'.

I've written (and most recently ranted in my blog 'Crazy Europeans') about the subject many times before, but I think it bears repeating. This kind of legislative approach, which seems to make some people in the field very unhappy, doesn't arise from nothing, just materialising at the whim of a few out-of-touch privacy advocates or power-hungry bureaucrats. It emerges from a real concern, from the real worries of real people. As the Ars Technica article puts it:

"That's when the reader stories started pouring in: we were told horror stories about online harassment using photos that were allegedly deleted years ago, and users who were asked to take down photos of friends that they had put online. There were plenty of stories in between as well, and panicked Facebook users continue to e-mail me, asking if we have heard of any new way to ensure that their deleted photos are, well, deleted."


When people's real concerns aren't being addressed - and when people feel that their real concerns aren't being addressed - then things start to happen. Privacy advocates bleat - and those in charge of regulation think about changing that regulation. In Europe we seem to be more willing to regulate than in the US, but with Facebook facing regular privacy audits from the FTC in the US, they're going to have to start to face up to the problem, to take it more seriously.

There's something in it for Facebook too. It's in Facebook's interest that people are confident that their needs will be met. What's more, if they want to encourage sharing, particularly immediate, instinctive, impulsive sharing, they need to understand that when people share like that they can and do make mistakes – and would like the opportunity to rectify those mistakes. Awareness of the risks appears to be growing among users of these kinds of system – and privacy is now starting to become a real selling point on the net. Google and Microsoft's recent advertising campaigns on privacy are testament to that - and Google's attempts to portray its new privacy policy as something positive are quite intense.

That in itself is a good sign, and with Facebook trying to milk as much as they can from the upcoming IPO, they might start to take privacy with the seriousness that their users want and need. Taking down photos when people want them taken down - and not keeping them for years after the event - would be a good start. If it doesn't happen soon, and isn't done well, then Facebook can expect an even stronger push behind regulation like the Right to be Forgotten. If they don't want this kind of thing, then they need to pre-empt it by implementing better privacy, better user rights, themselves.