There was an error in this gadget

Tuesday, 6 March 2012

This blog is moving....

...to Wordpress. Anyone who's followed the Google Privacy Policy debate will understand some of the reasons...

The address of the new site is:

http://paulbernal.wordpress.com/

...and the first post solely on that site is

"Infamy, Infamy, they've all got it in for me"

Please follow me there!

Thanks!

Paul

Thursday, 1 March 2012

Ready to Rumble?


This morning I attended a lecture given by European Commissioner Viviane Reding – and I have to say I was impressed. The lecture was at my old alma mater, the LSE, with the estimable Professor Andrew Murray in the chair, and was officially about the importance of data protection in keeping businesses competitive – but in practice it turned out to be a vigorous defence of the new Data Protection Regulation. Commissioner Reding was robust, forthright – and remarkably straightforward for someone in her position.

Her speech started off by looking at the changes that have taken place since the original Data Protection Directive, which was introduced in 1995. She didn’t waste much time – most of the changes are pretty much self-evident to anyone who’s paid much attention, and she knew that her audience wasn’t the kind that would need to be told. The key, though, was that she was looking from the perspective of business. The needs of businesses have changed – and as she put it, the new regulation was designed to meet those needs.

The key points from this perspective will be familiar to most who have studied the planned regulation. First and foremost, because it is a regulation rather than a directive, it applies uniformly throughout the EU, creating both a level playing field and a degree of certainty. Secondly, it is intended to remove ‘red tape’ – multinational companies will only have to deal with the data protection authorities in the country that is their primary base, rather than having to deal with a separate authority for each country they operate in. Taken together, she said that the administrative burden for companies would go down by €2.3 billion a year. It was very direct and clear – she certainly seems to believe what she’s saying.

She also made the point (which she’s made before) that the right to be forgotten, which has received a lot of press, and which I’ve written about before (ad nauseam I suspect), is NOT a threat to free expression, and not a tool for censorship, regardless of how that point seems to be misunderstood or misrepresented. The key, as she described, is to understand that no rights are absolute, and that they have to compete with other rights – and they certainly don’t override them. As I’ve also noted before, this is something that isn’t really understood in the US as well as it is in Europe – the American ‘take’ on rights is much more absolutist, which is one of the reasons they accept as ‘rights’ a much narrower range of things than most of the rest of the world.

I doubt her words on the right to be forgotten will cut much ice with the critics of the right on either side of the Atlantic – but I’m not sure that will matter that much to Commissioner Reding. She’s ready for a fight on this, it seems to me, and for quite a lot else besides. Those who might be expecting her to back down, to compromise, I think are in for a surprise. She’s ready to rumble…

The first and biggest opponent she’s ready to take on looks like being Google. She name-checked them several times both in the speech and in her answers to questions. She talked specifically about the new Google privacy policy – coming into force today – and in answer to a question I asked about the apparent resistance of US companies to data protection she freely admitted that part of the reason for the form and content of the regulation is to give the Commission teeth in its dealings with companies like Google. Now, she said, there was little that Europe could do to Google. Each of the individual countries in the EU could challenge Google, and each could potentially fine Google. ‘Peanuts’ was the word that she used about these fines, freely acknowledging that she didn’t have the weapons with which to fight. With the new Regulation, however, they could fine Google 2% of their worldwide revenue. €560 million was the figure she quoted: enough to get even Google to stand up and take notice.
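As a back-of-the-envelope check on those two figures – a sketch only: the 2% cap and the €560 million quote are from the speech, while backing out the implied revenue is my own arithmetic, not anything the Commissioner said:

```python
# The proposed Regulation caps fines at 2% of worldwide revenue;
# Commissioner Reding quoted 560 million euro as the potential Google fine.
FINE_RATE = 0.02
quoted_fine_eur = 560_000_000

# The worldwide revenue figure implied by those two numbers together:
implied_revenue_eur = quoted_fine_eur / FINE_RATE
print(f"Implied worldwide revenue: EUR {implied_revenue_eur:,.0f}")
# Implied worldwide revenue: EUR 28,000,000,000
```

That implied figure of roughly €28 billion is broadly in line with Google's reported revenue for 2011, which suggests the quoted fine was simply the 2% cap applied to current revenue rather than a separately negotiated number.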

She showed no sign of backing down on cookies either – reiterating the need for explicit, informed consent whenever data is gathered, including details of the purposes to which the data is to be put. She seemed ready for a fight on that as well.

Overall, it was a combative Commissioner that took to the lectern this morning – and I was impressed. She’s ready for the fight, whether businesses and governments want it or not. As I’ve blogged elsewhere, the UK government doesn’t share her enthusiasm for a strengthening of data protection, and the reaction from the US has been far from entirely positive either. Commissioner Reding had a few words for the US too, applauding Obama’s moves for online privacy (about which I've blogged here) but suggesting that the US is a good way behind the EU in dealing with privacy. They’re still playing catch-up, talking about it and suggesting ideas, but not ready to take the bull by the horns yet. We may yet lead them to the promised land, seemed to be the message…. and only with her tongue half in her cheek.

She's not going to give up - and neither should she, in my opinion. This is important stuff, and it needs fighting for. She's one of the 'Crazy Europeans' about which I've written before - but we need them. As @spinzo tweeted to me there's 'nothing more frightening than a self-righteous regulator backed by federal fiat and federal coffers' - but I'd LIKE some of the companies involved in privacy-invasive practices around the net to be frightened. If they behaved in a bit more of a privacy-friendly way we wouldn't need the likes of Commissioner Reding to be ready to rumble. They don't - and we do!

Thursday, 23 February 2012

Big Brother is watching you - and so are his commercial partners

Today, President Obama unveiled a proposal for an internet 'bill of rights':


“American consumers can’t wait any longer for clear rules of the road that ensure their personal information is safe online,” said Mr. Obama.

In a lot of ways, this is to be applauded. The idea, as reported in the media, is to "give consumers greater online privacy protection", which for privacy advocates and researchers such as myself is of course a most laudable aim. Why, then, am I somewhat wary of what is being proposed? Anyone who works in the field is of course naturally sceptical - but there's more to it than that. There's one word in Obama's statement, repeated without real comment in the media reports that I've read, that is crucial. That word is 'consumers'.

Consumers, citizens or human beings?

The use of the word 'consumer' has two key implications. First of all, it betrays an attitude to the internet and to the people who use it. If we're consumers, that makes the net a kind of 'product' to be consumed. It makes us passive rather than active. It means we don't play a part in the creation of the net - and it means that the net is all about money and the economy, rather than about communication, about (free) expression, about social interaction, about democratic discourse and participation. It downplays the political role that the net can play - and misunderstands the transformations that have gone on in the online world over the last decades. The net isn't just another part of the great spectrum of 'entertainment' - much though the 'entertainment' industry might like to think it is, and hence have free rein to enforce intellectual property rights above all else.

That's not to downplay the role of economic forces on the net - indeed, as I've argued many times before, business has driven many of the most important developments on the net, and the vast expansion and wonderful services we all enjoy have come from business. Without Google, Facebook and the like, the internet would be a vastly less rich environment than it is - but that's not all... and treating users merely as 'consumers' implies that it is.

The second, perhaps more sinister side to portraying us all as consumers rather than citizens - or even human beings - is that it neatly sidesteps the role that governments have in invading rather than protecting our privacy. Treating us as consumers, and privacy as a 'consumer right', makes it look as though the government are the 'good guys' protecting us from the 'bad' businesses - and tries to stop us even thinking about the invasions of privacy, the snooping, the monitoring, the data gathering and retention, done by governments and their agencies.

Big Brother is watching you...

The reality is, of course, that governments do snoop, they do gather information, they do monitor our activities on social networks and so forth. What's more, we should be worried about it, and we should be careful about how much we 'let' them do it. We need protection from government snooping - we need privacy rights not just as consumers, but as citizens. Further, as I've argued elsewhere, rights to privacy (and other rights) on the internet can be viewed as human rights - indeed I believe they should be viewed as human rights. From an American perspective, this is problematic - but it should at least be possible to cast privacy rights on the net as civil rights rather than consumer rights.

...and so are his commercial partners

At the same time, however, Obama is right that we need protection from the invasions of privacy perpetrated by businesses. For that reason, his initiative should be applauded, though his claiming of credit for the idea should be treated with scepticism, as similar ideas have been floating around the net for a long time - better late than never, though.

There is another side to it that may be even more important - the relationship between businesses and governments. They're not snooping on us, or invading our privacy independently - in practice, and in effect, the biggest problems can come when they work together. Facebook gathers the data, encourages us to 'share' information, to 'self-profile' - and then governments use the information that Facebook has gathered. Email systems, telephone services, ISPs and the like may well gather information for their own purposes - but through data retention they're required not only to keep that information for longer than they might wish to, but to make it available to authorities when the 'need' arises.

Worse, authorities may encourage or even force companies to build 'back-doors' into their products so that 'when needed' the authorities can use them to tap into our conversations, or to discover who we've been socialising with. They may require that photos on networks are subject to facial recognition analysis to hunt down people they wish to find for some reason or other - legitimate or otherwise. Facebook may well build their facial recognition systems for purely commercial reasons - but that doesn't mean that others, including the authorities, won't use them for more clearly malign purposes.

We need protection from both

So what's the conclusion? Yes, Obama's right, we need protection from commercial intrusions into our privacy. That, however, is just a small part of what we need. We need protection as human beings, as citizens, AND as consumers. Don't let's be distracted by looking at just a small part of the picture.

Sunday, 12 February 2012

What Muad’Dib can teach us about personal data…


With all the current debate about the so-called 'right to be forgotten', I thought I'd post one of my earlier, somewhat less than serious takes on the matter. A geeky take. A science fiction take...

I've written about it before in more serious ways - both in blogs (such as the two part one on the INFORRM blog, part 1 here and part 2 here) and in an academic paper (here, in the European Journal of Law and Technology) - and I've ranted about it on this blog too ('Crazy Europeans!?!').

This, however, is a very different take - one I presented at the GiKii conference in Gothenburg last summer. In it I look back at that classic of science fiction, Dune. There's a key point in the book, a key issue in the book, that has direct relevance to the issue of personal data. As the protagonist, Paul-Muad'Dib, puts it:

“The power to destroy a thing is the absolute control over it."

In the book, Muad'Dib has the power to destroy the supply of the spice 'Melange', the most valuable commodity in the Dune universe. In a similar manner, if a way can be found for individuals to claim the right to delete personal data, control over that data can begin to shift from businesses and governments back to the individuals.

Here's an animated version of the presentation I gave at Gikii...

[embedded video]


This is what it's supposed to suggest...

Melange in Dune

In Frank Herbert’s Dune series, the most essential and valuable commodity in the universe is melange, a geriatric drug that gives the user a longer life span, greater vitality, and heightened awareness; it can also unlock prescience in some humans, depending upon the dosage and the consumer's physiology. This prescience-enhancing property makes safe and accurate interstellar travel possible. Melange comes with a steep price, however: it is addictive, and withdrawal is fatal.

Personal data in the online world

In our modern online world, personal data plays a similar role to the spice melange. It is the most essential and valuable commodity in the online world. It can give those who gather and control it heightened awareness, and can unlock prescience (through predictive profiling). This prescience enhancing property makes all kinds of things possible. It too comes with a steep price, however: it is addictive, and withdrawal can be fatal – businesses and governments are increasingly dependent on their gathering, processing and holding of personal data.

What we can learn from Muad’Dib

For Muad'Dib to achieve ascendancy, he had to assert control over the spice - we as individuals need to assert the same control over personal data. We need to assert our rights over the data - both over its 'production' and over its existence afterwards. The most important of these rights, the absolute control over it, is the right to destroy it – the right to delete personal data. That's what the right to be forgotten is about - and what, in my opinion, it should be called. If we have the right to delete data - and the mechanisms to make that right a reality - then businesses and governments need to take what we say and want into account before they gather, hold or use our data. If they ride roughshod over our views, we'll have a tool to hold them to account...

The eventual solution, as for Arrakis, the proper name for the planet known as 'Dune', should be a balance. Production of personal data should still proceed, just as production of spice on Arrakis could still proceed, but on our own terms, and to mutual benefit. Most people don't want a Jihad, just as Paul Atreides didn't want a Jihad – though some may seek confrontation with the authorities and businesses rather than cooperation with them. In Dune, Paul Muad’Dib was not strong enough to prevent that Jihad – and though there has certainly been a ramping up of activism and antagonism over the last year or two, it should be possible to prevent it. If that is to happen, an assertion of rights, and in particular rights over the control over personal data, could be a key step.

A question of control - not of censorship

Looked at from this direction, the right to be forgotten (which I still believe is better understood as a right to delete) is not, as some suggest, about censorship, or about restricting free expression. Instead, it should be seen as a salvo in a conflict over control – a move towards giving netizens more power over the behemoths who currently hold sway.

If people are too concerned about the potential censorship issues - and personally I don't think they should be, but I understand why they are - then perhaps they can suggest other ways to give people more control over what's happening. Right now, as things like the Facebook 'deleted' photos issue I blogged about last week suggest, those who are in control don't seem to be doing much to address our genuine concerns....

Otherwise, they might have to deal with the growing power of the internet community...




Tuesday, 7 February 2012

Do you want a camera in your kid's bedroom??

This morning's disturbing privacy story is the revelation that live feeds from thousands of 'home security cameras' run by the US company Trendnet have been 'breached', allowing anyone on the net access to video feeds, without the need for a password. It was reported by the BBC here, by their technology reporter Leo Kelion.

It's a disturbing tale. As Kelion describes it:

"Internet addresses which link to the video streams have been posted to a variety of popular messageboard sites. Users have expressed concern after finding they could view children's bedrooms among other locations. US-based Trendnet says it is in the process of releasing updates to correct a coding error introduced in 2010."

The internet being what it is, news of the problem seems to have spread faster than Trendnet has been able to control it. This is from Kelion's piece again:

"Within two days a list of 679 web addresses had been posted to one site, and others followed - in some cases listing the alleged Google Maps locations associated with each camera. Messages on one forum included: "someone caught a guy in denmark (traced to ip) getting naked in the bathroom." Another said: "I think this guy is doing situps."


One user wrote "Baby Spotted," causing another to comment "I feel like a pedophile watching this".

A cautionary tale, one might think, and to privacy people like me a lot of questions immediately come to mind. Many of them, particularly the technical ones, have been answered in Kelion's piece. There is one question, however, that is conspicuous by its absence from Kelion's otherwise excellent piece: what are the cameras doing in children's bedrooms in the first place? Is it normal, now, to have this level of surveillance in our private homes? In our children's bedrooms?

I asked Kelion about this on Twitter, and his initial (and admirably instant) response was that security cameras were nothing new, but the breach in the feeds was. That was news, the presence of the cameras was not. That set me thinking - and made me write this blog. Is he right? Should we just 'accept' the presence of surveillance even in our most intimate and private places? The success of companies like Trendnet suggests that many thousands of people do accept it - but I hope that millions more don't. I also hope that an affair like this will make some people think twice before installing their own 'private' big brother system.

Surveillance is a double-edged sword. Just as any data on the internet is ultimately vulnerable, so is any data feed - the only way for data not to be vulnerable is for it not to exist. Those parents wanting to protect their children from being watched on the internet have a simple solution: don't install the cameras in the first place!

It's the same story over and over again in the world of privacy and surveillance. We build systems, gather data, set up infrastructures and then seem shocked and amazed when they prove vulnerable. It shouldn't be a surprise... we should think before we build, think before we design, think before we install...

Monday, 6 February 2012

Facebook, Photos and the Right to be Forgotten

Another day, another story about the right to be forgotten. This time it's another revelation about how hard it is to delete stuff from Facebook. In this case it's photos - with Ars Technica giving an update on their original story from 2009 about how 'deleted' photos weren't really deleted. Now, according to their new story, three years later, the photos they tried to remove back then are STILL there.

The Ars Technica story gives a lot more detail - and does suggest that Facebook are at least trying to do something about the problem, though without much real impact at this stage. As Ars Technica puts it:

"....with the process not expected to be finished until a couple months from now—and unfortunately, with a company history of stretching the truth when asked about this topic—we'll have to see it before we believe it."

I'm not going to try to analyse why Facebook has been so slow at dealing with this - there are lots of potential reasons, from the technical to the political and economic - but from the perspective of someone who's been watching developments over the years one thing is very important to understand: this slowness and apparent unwillingness (or even lack of interest) has had implications. Indeed, it can be seen as one of the main drivers behind the push by the European Union to bring in a 'right to be forgotten'.

I've written (and most recently ranted in my blog 'Crazy Europeans') about the subject many times before, but I think it bears repeating. This kind of legislative approach, which seems to make some people in the field very unhappy, doesn't arise from nothing, just materialising at the whim of a few out-of-touch privacy advocates or power-hungry bureaucrats. It emerges from a real concern, from the real worries of real people. As the Ars Technica article puts it:

"That's when the reader stories started pouring in: we were told horror stories about online harassment using photos that were allegedly deleted years ago, and users who were asked to take down photos of friends that they had put online. There were plenty of stories in between as well, and panicked Facebook users continue to e-mail me, asking if we have heard of any new way to ensure that their deleted photos are, well, deleted."


When people's real concerns aren't being addressed - and when people feel that their real concerns aren't being addressed - then things start to happen. Privacy advocates bleat - and those in charge of regulation think about changing that regulation. In Europe we seem to be more willing to regulate than in the US, but with Facebook facing regular privacy audits from the FTC in the US, they're going to have to start to face up to the problem, to take it more seriously.

There's something in it for Facebook too. It's in Facebook's interest that people are confident that their needs will be met. What's more, if they want to encourage sharing, particularly immediate, instinctive, impulsive sharing, they need to understand that when people do that kind of thing they can and do make mistakes – and they would like the opportunity to rectify those mistakes. Awareness of the risks appears to be growing among users of these kinds of systems – and privacy is now starting to become a real selling point on the net. Google and Microsoft's recent advertising campaigns on privacy are testament to that - and Google's attempts to portray its new privacy policy as something positive are quite intense.

That in itself is a good sign, and with Facebook trying to milk as much as they can from the upcoming IPO, they might start to take privacy with the seriousness that their users want and need. Taking down photos when people want them taken down - and not keeping them for years after the event - would be a good start. If it doesn't happen soon, and isn't done well, then Facebook can expect an even stronger push behind regulation like the Right to be Forgotten. If they don't want this kind of thing, then they need to pre-empt it by implementing better privacy, better user rights, themselves.

Saturday, 28 January 2012

Phorm - a chapter closes?

Another chapter of the long-running Phorm saga seems to have come to a close, with the announcement by the European Commission that it has closed the infringement case against the UK over its implementation of the rules on privacy in electronic communications. In order to get this closure, the UK had, in the words of the Commission press release:

'amended its national legislation so as not to allow interception of users' electronic communications without their explicit consent, and established an additional sanction and supervisory mechanism to deal with breaches of confidentiality in electronic communications.'

This case came about as a result of the big mess that the UK government got into over Phorm - something which I've written about both academically and in blogs on more than one occasion before. In essence, the government decided to back Phorm, a business which privacy advocates and others had been telling them from the very beginning was deeply problematic, and that decision backfired spectacularly - the amount of egg that ended up on government faces as a result was remarkable. The action of the Commission was a direct result of the admirable work of campaigners like Alexander Hanff at Privacy International, drawing on the excellent investigative analysis by the University of Cambridge Computer Lab's Richard Clayton and the legal work of Nicholas Bohm for the Foundation for Information Policy Research - work that was effectively in direct opposition to the government. This work led to questions to the Commission, upon which the Commission acted, as well as, more directly, to the collapse of the Phorm business model as its business allies deserted it and public opposition became clearer and clearer.

Phorm's business model was particularly pernicious from a privacy perspective. They took behavioural advertising (which is problematic in most of its forms) to an extreme, monitoring people's entire browsing behaviour by intercepting each and every click made as they browsed, in order to build up a profile which they then used to target advertising. All this without real consent from the users, or at least so it appeared, and indeed without the consent of the owners of the websites for whom those intercepted requests were intended. As a model it appeared to break not only laws but people's sense of what it means to be under surveillance - Orwellian in the extreme. It failed here - thanks to the resistance noted above - and has since failed in South Korea, and appears to be failing in Romania (about which I've blogged before) and Brazil, the three places that Phorm's backers have tried it since. In each case, it looks as though people's resistance has been key....

There are lessons to learn for all concerned:

1) Those of us advocating and campaigning for privacy can take a good deal of heart from the whole affair - essentially, we won, stopping the pernicious Phorm business model and forcing the UK government not just to back down but to change the law in ways that, ultimately, are more 'privacy-friendly'. 'People power' proved too strong for both business and government forces in this case - and it may be possible again. We certainly shouldn't give up!

2) Businesses need to take note: privacy-invasive business models will face opposition, and that opposition is more powerful than you might imagine. From the perspective of the symbiotic web (my underlying theory, more about which can be found here), if a privacy-invasive model is to succeed, it must give something back to those whose privacy is invaded, something of sufficient value to compensate for the privacy that is either lost or compromised. In Phorm's case, there was very little benefit to the people being monitored - the benefit was all for Phorm or Phorm's advertising partners. That sort of model isn't going to succeed nearly as easily as businesses might think - people will fight, and fight well! Businesses would do better to build more privacy-friendly models from the outset...

3) Governments need to understand the needs and abilities of the people - as well as the needs of businesses and business lobby groups. People are getting more and more aware and more and more able to articulate their needs and make their views known - and to wield powers beyond the understanding of most governments. The recent resistance to SOPA and PIPA in the US is perhaps another example - though the fact that people's interests coincided with those of internet powerhouses like Wikipedia and Google may have been even more important.

This last point is perhaps the most important. Governments all over the world seem to be massively underestimating the influence and power of people, particularly people on the internet. People will fight for what they want - and, more often than governments realise, they will find ways to win those fights. There needs to be a significant shift in the attitude of those governments if we are not to have more conflicts of the sort that caused such a mess over Phorm. There are more conflicts already on the horizon - from the judicial review of the Digital Economy Act to the shady agreement that is ACTA. There will be a lot of mess, I suspect, much of which could be avoided if 'authorities' understood what we wanted a bit more.  The people of the net are starting to get mad, and they're not going to take it anymore.