Tuesday, 6 March 2012

This blog is moving....

...to WordPress. Anyone who's followed the Google Privacy Policy debate will understand some of the reasons...

The address of the new site is:


...and the first post solely on that site is

"Infamy, Infamy, they've all got it in for me"

Please follow me there!



Thursday, 1 March 2012

Ready to Rumble?

This morning I attended a lecture given by European Commissioner Viviane Reding – and I have to say I was impressed. The lecture was at my old Alma Mater, the LSE, with the estimable Professor Andrew Murray in the chair, and was officially about the importance of data protection in keeping businesses competitive – but in practice it turned out to be a vigorous defence of the new Data Protection Regulation. Commissioner Reding was robust, forthright – and remarkably straightforward for someone in her position.

Her speech started off by looking at the changes that have taken place since the original Data Protection Directive – which was brought in in 1995. She didn’t waste much time – most of the changes are pretty much self-evident to anyone who’s paid much attention, and she knew that her audience wasn’t the kind that would need to be told. The key, though, was that she was looking from the perspective of business. The needs of businesses have changed – and as she put it, the new regulation was designed to meet those needs.

The key points from this perspective will be familiar to most who have studied the planned regulation. First and foremost, because it is a regulation rather than a directive, it applies uniformly throughout the EU, creating both an even playing field and a degree of certainty. Secondly, it is intended to remove ‘red tape’ – multinational companies will only have to deal with the data protection authorities in the country that is their primary base, rather than having to deal with a separate authority for each country they operate in. Taken together, she said that the administrative burden for companies would go down by 2.3 billion Euro a year. It was very direct and clear – she certainly seems to believe what she’s saying.

She also made the point (which she’s made before) that the right to be forgotten, which has received a lot of press, and which I’ve written about before (ad nauseam I suspect), is NOT a threat to free expression, and not a tool for censorship, regardless of how that point seems to be misunderstood or misrepresented. The key, as she described it, is to understand that no rights are absolute, and that they have to compete with other rights – and they certainly don’t override them. As I’ve also noted before, this is something that isn’t really understood in the US as well as it is in Europe – the American ‘take’ on rights is much more absolutist, which is one of the reasons they accept as ‘rights’ a much narrower range of things than most of the rest of the world.

I doubt her words on the right to be forgotten will cut much ice with the critics of the right on either side of the Atlantic – but I’m not sure that will matter that much to Commissioner Reding. She’s ready for a fight on this, it seems to me, and for quite a lot else besides. Those who might be expecting her to back down, to compromise, I think are in for a surprise. She’s ready to rumble…

The first and biggest opponent she’s ready to take on looks like being Google. She name-checked them several times both in the speech and in her answers to questions. She talked specifically about the new Google privacy policy – coming into force today – and in answer to a question I asked about the apparent resistance of US companies to data protection she freely admitted that part of the reason for the form and content of the regulation is to give the Commission teeth in its dealings with companies like Google. Now, she said, there was little that Europe could do to Google. Each of the individual countries in the EU could challenge Google, and each could potentially fine Google. ‘Peanuts’ was the word that she used about these fines, freely acknowledging that she didn’t have the weapons with which to fight. With the new regulations, however, they could fine Google 2% of their worldwide revenue. 560 million euro was the figure she quoted: enough to get even Google to stand up and take notice.

She showed no sign of backing down on cookies either – reiterating the need for explicit, informed consent whenever data is gathered, including details of the purposes to which the data is to be put. She seemed ready for a fight on that as well.

Overall, it was a combative Commissioner that took to the lectern this morning – and I was impressed. She’s ready for the fight, whether businesses and governments want it or not. As I’ve blogged elsewhere, the UK government doesn’t share her enthusiasm for a strengthening of data protection, and the reaction from the US has been far from entirely positive either. Commissioner Reding had a few words for the US too, applauding Obama’s moves for online privacy (about which I've blogged here) but suggesting that the US is a good way behind the EU in dealing with privacy. They’re still playing catch-up, talking about it and suggesting ideas, but not ready to take the bull by the horns yet. We may yet lead them to the promised land, seemed to be the message…. and her tongue was only half in her cheek.

She's not going to give up - and neither should she, in my opinion. This is important stuff, and it needs fighting for. She's one of the 'Crazy Europeans' about which I've written before - but we need them. As @spinzo tweeted to me there's 'nothing more frightening than a self-righteous regulator backed by federal fiat and federal coffers' - but I'd LIKE some of the companies involved in privacy invasive practices around the net to be frightened. If they behaved in a bit more of a privacy friendly way we wouldn't need the likes of Commissioner Reding to be ready to rumble. They don't - and we do!

Thursday, 23 February 2012

Big Brother is watching you - and so are his commercial partners

Today, President Obama unveiled a proposal for an internet 'bill of rights':

“American consumers can’t wait any longer for clear rules of the road that ensure their personal information is safe online,” said Mr. Obama.

In a lot of ways, this is to be applauded. The idea, as reported in the media, is to "give consumers greater online privacy protection", which for privacy advocates and researchers such as myself is of course a most laudable aim. Why, then, am I somewhat wary of what is being proposed? Anyone who works in the field is of course naturally sceptical - but there's more to it than that. There's one word in Obama's statement, repeated without real comment in the media reports that I've read, that is crucial. That word is 'consumers'.

Consumers, citizens or human beings?

The use of the word 'consumer' has two key implications. First of all, it betrays an attitude to the internet and to the people who use it. If we're consumers, that makes the net a kind of 'product' to be consumed. It makes us passive rather than active. It means we don't play a part in the creation of the net - and it means that the net is all about money and the economy, rather than about communication, about (free) expression, about social interaction, about democratic discourse and participation. It downplays the political role that the net can play - and misunderstands the transformations that have gone on in the online world over the last decades. The net isn't just another part of the great spectrum of 'entertainment' - much though the 'entertainment' industry might like to think it is, and hence have free rein to enforce intellectual property rights over anything else.

That's not to downplay the role of economic forces on the net - indeed, as I've argued many times before, business has driven many of the most important developments on the net, and the vast expansion and wonderful services we all enjoy have come from business. Without Google, Facebook and the like, the internet would be a vastly less rich environment than it is - but that's not all... and treating users merely as 'consumers' implies that it is.

The second, perhaps more sinister side to portraying us all as consumers rather than citizens - or even human beings - is that it neatly sidesteps the role that governments have in invading rather than protecting our privacy. Treating us as consumers, and privacy as a 'consumer right', makes it look as though the government are the 'good guys' protecting us from the 'bad' businesses - and tries to stop us even thinking about the invasions of privacy, the snooping, the monitoring, the data gathering and retention, done by governments and their agencies.

Big Brother is watching you...

The reality is, of course, that governments do snoop, they do gather information, they do monitor our activities on social networks and so forth. What's more, we should be worried about it, and we should be careful about how much we 'let' them do it. We need protection from government snooping - we need privacy rights not just as consumers, but as citizens. Further, as I've argued elsewhere, rights to privacy (and other rights) on the internet can be viewed as human rights - indeed I believe they should be viewed as human rights. From an American perspective, this is problematic - but it should at least be possible to cast privacy rights on the net as civil rights rather than consumer rights.

...and so are his commercial partners

At the same time, however, Obama is right that we need protection from the invasions of privacy perpetrated by businesses. For that reason, his initiative should be applauded, though his claiming of credit for the idea should be treated with scepticism, as similar ideas have been floating around the net for a long time - better late than never, though.

There is another side to it that may be even more important - the relationship between businesses and governments. They're not snooping on us, or invading our privacy independently - in practice, and in effect, the biggest problems can come when they work together. Facebook gathers the data, encourages us to 'share' information, to 'self-profile' - and then governments use the information that Facebook has gathered. Email systems, telephone services, ISPs and the like may well gather information for their own purposes - but through data retention they're required not only to keep that information for longer than they might wish to, but to make it available to authorities when the 'need' arises.

Worse, authorities may encourage or even force companies to build 'back-doors' into their products so that 'when needed' the authorities can use them to tap into our conversations, or to discover who we've been socialising with. They may require that photos on networks are subject to facial recognition analysis to hunt down people they wish to find for some reason or other - legitimate or otherwise. Facebook may well build their facial recognition systems for purely commercial reasons - but that doesn't mean that others, including the authorities, won't use them for more clearly malign purposes.

We need protection from both

So what's the conclusion? Yes, Obama's right, we need protection from commercial intrusions into our privacy. That, however, is just a small part of what we need. We need protection as human beings, as citizens, AND as consumers. Don't let's be distracted by looking at just a small part of the picture.

Sunday, 12 February 2012

What Muad’Dib can teach us about personal data…

With all the current debate about the so-called 'right to be forgotten', I thought I'd post one of my earlier, somewhat less than serious takes on the matter. A geeky take. A science fiction take...

I've written about it before in more serious ways - both in blogs (such as the two part one on the INFORRM blog, part 1 here and part 2 here) and in an academic paper (here, in the European Journal of Law and Technology) - and I've ranted about it on this blog too ('Crazy Europeans!?!').

This, however, is a very different take - one I presented at the GiKii conference in Gothenburg last summer. In it I look back at that classic of science fiction, Dune. There's a key point in the book that has direct relevance to the issue of personal data. As the protagonist, Paul-Muad'Dib, puts it:

“The power to destroy a thing is the absolute control over it."

In the book, Muad'Dib has the power to destroy the supply of the spice 'Melange', the most valuable commodity in the Dune universe. In a similar manner, if a way can be found for individuals to claim the right to delete personal data, control over that data can begin to shift from businesses and governments back to the individuals.

Here's an animated version of the presentation I gave at Gikii...

This is what it's supposed to suggest...

Melange in Dune

In Frank Herbert’s Dune series, the most essential and valuable commodity in the universe is melange, a geriatric drug that gives the user a longer life span, greater vitality, and heightened awareness; it can also unlock prescience in some humans, depending upon the dosage and the consumer's physiology. This prescience-enhancing property makes safe and accurate interstellar travel possible. Melange comes with a steep price, however: it is addictive, and withdrawal is fatal.

Personal data in the online world

In our modern online world, personal data plays a similar role to the spice melange. It is the most essential and valuable commodity in the online world. It can give those who gather and control it heightened awareness, and can unlock prescience (through predictive profiling). This prescience enhancing property makes all kinds of things possible. It too comes with a steep price, however: it is addictive, and withdrawal can be fatal – businesses and governments are increasingly dependent on their gathering, processing and holding of personal data.

What we can learn from Muad’Dib

For Muad'Dib to achieve ascendancy, he had to assert control over the spice - we as individuals need to assert the same control over personal data. We need to assert our rights over the data - both over its 'production' and over its existence afterwards. The most important of these rights, the absolute control over it, is the right to destroy it – the right to delete personal data. That's what the right to be forgotten is about - and what, in my opinion, it should be called. If we have the right to delete data - and the mechanisms to make that right a reality - then businesses and governments need to take what we say and want into account before they gather, hold or use our data. If they ride roughshod over our views, we'll have a tool to hold them to account...

The eventual solution, as for Arrakis, the proper name for the planet known as 'Dune', should be a balance. Production of personal data should still proceed, just as production of spice on Arrakis could still proceed, but on our own terms, and to mutual benefit. Most people don't want a Jihad, just as Paul Atreides didn't want a Jihad – though some may seek confrontation with the authorities and businesses rather than cooperation with them. In Dune, Paul Muad’Dib was not strong enough to prevent that Jihad – and though there has certainly been a ramping up of activism and antagonism over the last year or two, it should be possible to prevent it. If that is to happen, an assertion of rights, and in particular rights over the control of personal data, could be a key step.

A question of control - not of censorship

Looked at from this direction, the right to be forgotten (which I still believe is better understood as a right to delete) is not, as some suggest, about censorship, or about restricting free expression. Instead, it should be seen as a salvo in a conflict over control – a move towards giving netizens more power over the behemoths who currently hold sway.

If people are too concerned about the potential censorship issues - and personally I don't think they should be, but I understand why they are - then perhaps they can suggest other ways to give people more control over what's happening. Right now, as things like the Facebook 'deleted' photos issue I blogged about last week suggest, those who are in control don't seem to be doing much to address our genuine concerns....

Otherwise, they might have to deal with the growing power of the internet community...

Tuesday, 7 February 2012

Do you want a camera in your kid's bedroom??

This morning's disturbing privacy story is the revelation that live feeds from thousands of 'home security cameras' run by the US company Trendnet have been 'breached', allowing anyone on the net access to video feeds, without the need for a password. It was reported on the BBC here by their technology reporter Leo Kelion.

It's a disturbing tale. As Kelion describes it:

"Internet addresses which link to the video streams have been posted to a variety of popular messageboard sites. Users have expressed concern after finding they could view children's bedrooms among other locations. US-based Trendnet says it is in the process of releasing updates to correct a coding error introduced in 2010."

The internet being what it is, news of the problem seems to have spread faster than Trendnet has been able to control it. This is from Kelion's piece again:

"Within two days a list of 679 web addresses had been posted to one site, and others followed - in some cases listing the alleged Google Maps locations associated with each camera. Messages on one forum included: "someone caught a guy in denmark (traced to ip) getting naked in the bathroom." Another said: "I think this guy is doing situps."

One user wrote "Baby Spotted," causing another to comment "I feel like a pedophile watching this".

A cautionary tale, one might think, and to privacy people like me a lot of questions immediately come to mind. Many of them, particularly the technical ones, have been answered in Kelion's piece. There is one question, however, that is conspicuous by its absence from Kelion's otherwise excellent piece: what are the cameras doing in children's bedrooms in the first place? Is it normal, now, to have this level of surveillance in our private homes? In our children's bedrooms?

I asked Kelion about this on twitter, and his initial (and admirably instant) response was that security cameras were nothing new, but the breach in the feeds was. That was news, the presence of the cameras was not. That set me thinking - and made me write this blog. Is he right? Should we just 'accept' the presence of surveillance even in our most intimate and private places? The success of companies like Trendnet suggests that many thousands of people do accept it - but I hope that millions more don't. I also hope that an affair like this will make some people think twice before installing their own 'private' big brother system.

Surveillance is a double-edged sword. Just as any data on the internet is ultimately vulnerable, so is any data feed - the only way for data not to be vulnerable is for it not to exist. Those parents wanting to protect their children from being watched over the internet have a simple solution: don't install the cameras in the first place!

It's the same story over and over again in the world of privacy and surveillance. We build systems, gather data, set up infrastructures and then seem shocked and amazed when they prove vulnerable. It shouldn't be a surprise... we should think before we build, think before we design, think before we install...

Monday, 6 February 2012

Facebook, Photos and the Right to be Forgotten

Another day, another story about the right to be forgotten. This time it's another revelation about how hard it is to delete stuff from Facebook. In this case it's photos - with Ars Technica giving an update on their original story from 2009 about how 'deleted' photos weren't really deleted. Now, according to their new story, three years later, the photos they tried to remove back then are STILL there.

The Ars Technica story gives a lot more detail - and does suggest that Facebook are at least trying to do something about the problem, though without much real impact at this stage. As Ars Technica puts it:

"....with the process not expected to be finished until a couple months from now—and unfortunately, with a company history of stretching the truth when asked about this topic—we'll have to see it before we believe it."

I'm not going to try to analyse why Facebook has been so slow at dealing with this - there are lots of potential reasons, from the technical to the political and economic - but from the perspective of someone who's been watching developments over the years one thing is very important to understand: this slowness and apparent unwillingness (or even indifference) have had implications. Indeed, it can be seen as one of the main drivers behind the push by the European Union to bring in a 'right to be forgotten'.

I've written (and most recently ranted in my blog 'Crazy Europeans') about the subject many times before, but I think it bears repeating. This kind of legislative approach, which seems to make some people in the field very unhappy, doesn't arise from nothing, just materialising at the whim of a few out-of-touch privacy advocates or power-hungry bureaucrats. It emerges from a real concern, from the real worries of real people. As the Ars Technica article puts it:

"That's when the reader stories started pouring in: we were told horror stories about online harassment using photos that were allegedly deleted years ago, and users who were asked to take down photos of friends that they had put online. There were plenty of stories in between as well, and panicked Facebook users continue to e-mail me, asking if we have heard of any new way to ensure that their deleted photos are, well, deleted."

When people's real concerns aren't being addressed - and when people feel that their real concerns aren't being addressed - then things start to happen. Privacy advocates bleat - and those in charge of regulation think about changing that regulation. In Europe we seem to be more willing to regulate than in the US, but with Facebook facing regular privacy audits from the FTC in the US, they're going to have to start to face up to the problem, to take it more seriously.

There's something in it for Facebook too. It's in Facebook's interest that people are confident that their needs will be met.  What's more, if they want to encourage sharing, particularly immediate, instinctive, impulsive sharing, they need to understand that when people do that kind of thing they can and do make mistakes – and they would like the opportunity to rectify those mistakes. Awareness of the risks appears to be growing among users of these kinds of system – and privacy is now starting to become a real selling point on the net. Google and Microsoft's recent advertising campaigns on privacy are testament to that - and Google's attempts to portray its new privacy policy as something positive are quite intense.

That in itself is a good sign, and with Facebook trying to milk as much as they can from the upcoming IPO, they might start to take privacy with the seriousness that their users want and need. Taking down photos when people want them taken down - and not keeping them for years after the event - would be a good start. If it doesn't happen soon, and isn't done well, then Facebook can expect an even stronger push behind regulation like the Right to be Forgotten. If they don't want this kind of thing, then they need to pre-empt it by implementing better privacy, better user rights, themselves.

Saturday, 28 January 2012

Phorm - a chapter closes?

Another chapter of the long-running Phorm saga seems to have come to a close, with the announcement by the European Commission that they have closed the infringement case with the UK over its implementation of rules on privacy in electronic communications. In order to get this closure, the UK had, in the words of the Commission press release:

'amended its national legislation so as not to allow interception of users' electronic communications without their explicit consent, and established an additional sanction and supervisory mechanism to deal with breaches of confidentiality in electronic communications.'

This case came about as a result of the big mess that the UK government got into over Phorm - something which I've written about both academically and in blogs on more than one occasion before. In essence, the government decided to back Phorm, a business which privacy advocates and others had been telling them from the very beginning was deeply problematic, and that decision backfired spectacularly, leaving a remarkable amount of egg on government faces. The action of the Commission was a direct result of the admirable work of campaigners like Alexander Hanff at Privacy International, drawing on the excellent investigatory analysis by the University of Cambridge Computer Lab's Richard Clayton and the legal work of Nicholas Bohm for the Foundation for Information Policy Research - work that was effectively in direct opposition to the government. This work led to questions to the Commission, upon which the Commission acted, as well as, more directly, to the collapse of the Phorm business model as its business allies deserted it and opposition from the public became clearer and clearer.

Phorm's business model was particularly pernicious from a privacy perspective. They took behavioural advertising (which is problematic in most of its forms) to an extreme, monitoring people's entire browsing behaviour by intercepting each and every click made as they browsed, in order to build up a profile which they then used to target advertising. All this without real consent from the user, or at least so it appeared, and indeed without the consent of the owners of the websites to whom these intercepted requests were intended to be sent. As a model it appeared to break not only laws but also people's sense of what surveillance they would tolerate - Orwellian in the extreme. It failed here - thanks to the resistance noted above - and has since failed again in South Korea, and appears to be failing in Romania (about which I've blogged before) and Brazil, the three places that Phorm's backers have tried it since. In each case, it looks as though people's resistance has been key....

There are lessons to learn for all concerned:

1) Those of us advocating and campaigning for privacy can take a good deal of heart from the whole affair - essentially, we won, stopping the pernicious Phorm business model and forcing the UK government not just to back down but to change the law in ways that, ultimately, are more 'privacy-friendly'. 'People power' proved too strong for both business and government forces in this case - and it may be possible again. We certainly shouldn't give up!

2) Businesses need to take note: privacy-invasive business models will face opposition, and that opposition is more powerful than you might imagine. From the perspective of the symbiotic web (my underlying theory, more about which can be found here), if a privacy-invasive model is to succeed, it must give something back to those whose privacy is invaded, something of sufficient value to compensate for the privacy that is either lost or compromised. In Phorm's case, there was very little benefit to the people being monitored - the benefit was all for Phorm or Phorm's advertising partners. That sort of model isn't going to succeed nearly as easily as businesses might think - people will fight, and fight well! Businesses would do better to build more privacy-friendly models from the outset...

3) Governments need to understand the needs and abilities of the people - as well as the needs of businesses and business lobby groups. People are getting more and more aware and more and more able to articulate their needs and make their views known - and to wield powers beyond the understanding of most governments. The recent resistance to SOPA and PIPA in the US is perhaps another example - though the fact that people's interests coincided with those of internet powerhouses like Wikipedia and Google may have been even more important.

This last point is perhaps the most important. Governments all over the world seem to be massively underestimating the influence and power of people, particularly people on the internet. People will fight for what they want - and, more often than governments realise, they will find ways to win those fights. There needs to be a significant shift in the attitude of those governments if we are not to have more conflicts of the sort that caused such a mess over Phorm. There are more conflicts already on the horizon - from the judicial review of the Digital Economy Act to the shady agreement that is ACTA. There will be a lot of mess, I suspect, much of which could be avoided if 'authorities' understood what we wanted a bit more.  The people of the net are starting to get mad, and they're not going to take it anymore.

Thursday, 26 January 2012

Crazy Europeans!?!

As anyone who pays attention to the world of data - and data privacy in particular - cannot help but be aware, those crazy Europeans are pushing some more of their mad data protection laws (a good summary of which can be found here) including the clearly completely insane 'right to be forgotten'. Reactions have been pretty varied in Europe, but in the US they seem to have been pretty consistent, and can largely be boiled down to two points:

1) These Europeans are crazy!
2) This will all be a huge imposition on business - No fair!!!

There have been a fair few similar reactions in the UK too, and there will probably be more once the more rabidly anti-European parts of the popular press actually notice what's going on. As I've blogged before, the likes of Ken Clarke have spoken up against this kind of thing before.

So I think we need to ask ourselves one question: why ARE these crazy Europeans doing all this mad stuff?

Well, to be frank, the Internet 'industry' has only got itself to blame. This is an industry that has developed the surreptitious gathering of people's personal data into an art form, yet an industry that can't keep its data safe from hackers and won't keep it safe from government agencies. This is an industry that tracks our every move on the web and gets stroppy if we want to know when it's happening and why. This is an industry that makes privacy policies ridiculously hard to read whilst at the same time working brilliantly on making other aspects of their services more and more user-friendly. Why not do the same to the privacy settings? This is an industry that makes account deletion close to impossible (yes, I'm talking to you, Facebook) and pulls out all the stops to keep us 'logged in' at all times. This is an industry that tells us that WE should be completely transparent while remaining as obscure and opaque as possible themselves. This is an industry that often seems to regard privacy as just a little problem that needs to be sidestepped - or something that is 'no longer a social norm' (and yes, I'm talking to you, Facebook again).....

So.... If the internet 'industry', particularly in the US, doesn't want this kind of regulation, this kind of 'interference' with its business models, the answer's actually really simple: build better business models, models that respect people's privacy! Stop riding rough-shod over what we, particularly in Europe, but certainly in the US too, care deeply about. Use your brilliance in both business and technology to find a better way, rather than just moaning that we're interfering with what you want to do. When fighting against SOPA and PIPA (and I hope ACTA too in the near future), most of the industry championed the people admirably - perhaps because the people's interests coincided with their own. In privacy, the same is actually true, however much it may seem the other way around. In the end, the internet industry will be better off if it takes privacy seriously.

Regulation doesn't happen just because a bunch of faceless Belgian bureaucrats have too much power and too little to do - it happens when there's a real problem to solve. Oh, they may well go over the top, they may well use crude regulatory sledgehammers where delicate rapiers would do the job better, but they do at least try, which seems more than much of the industry does...

So don't blame the crazy Europeans. Take a closer look in the mirror...

Wednesday, 25 January 2012

Players and Pawns in the Game of Privacy

Privacy is pretty constantly in the news at the moment. People like me can hardly take their eye off the news for a moment. This morning I was trying to do three things at once: follow David Allen Green's evidence at the Leveson inquiry (where amongst other things he was talking about the NightJack story, which has significant privacy implications), listen to Viviane Reding talking about the new reforms to the data protection regime in Europe, and discover what was going on in the emerging story of O2's apparent sending of people's mobile numbers to websites visited via their mobile phones....

Big issues... and lots of media coverage... and lots of opportunities for academics, advocates of one position or other, technical experts and so forth to write/talk/tweet/blog etc on the subject. And many of us are taking the opportunity to say our bit, as we like to do. A good thing? Yes, in general - because perhaps the biggest change I've seen over the years I've been researching into the field is that the debate is wider, bringing in more people and more subjects, and getting more public attention - which must, overall, be a good thing. The more the issues are debated and thought about, the more chance there is that we can get better understanding, some sort of consensus, and find better solutions. And yet there are dangers attached to the process - because as well as the people who have valuable things to say and good, strong ethical positions to support their case, there are others with much more questionable agendas, often hidden, who would like to use others for their own purposes. Advocates, academics and experts need to guard against being used by others with very different motives.

There are particular examples happening right now. One subject that particularly interests me, about which I've blogged and written many times before, is the right to be forgotten. Viviane Reding has talked about it in the last few days - and there have been reactions in both directions. Both sides, it seems to me, need to be wary of being used in ways that they don't intend:

i) Those who oppose a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used as ‘cover’ for those whose business models depend on the holding and using of personal data. The right to delete is a threat to their business models, and they can (and probably will) use all the tools at their disposal to oppose it, including using 'experts' and academics. The valid concerns about censorship/free expression aren't what those people care about - they want to be able to continue to use people's personal data to make money. Advocates for free expression etc need to be careful that they're not being used in that kind of way.

ii) Conversely, those who (like me) advocate for a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used by those who wish to censor and control - because there IS a danger that a poorly written and executed right to be forgotten could be set up in that kind of way. I don't believe that's what's intended by the current version, nor do I believe that this is how it would or could be used, but it's certainly possible, and people on 'my' side of the argument need to be vigilant that it doesn't go that way.

Similar arguments can be used in other fields - for example about the question of the right to anonymity. Those who (like me) espouse a right to anonymity need to be careful about not providing unfettered opportunities for those who wish to bully, to defame etc., while those who support the reverse – an internet with real name/identification systems throughout, to control access to age-sensitive sites, to deal with copyright infringement etc – need to be very careful not to be used as an excuse for setting up systems which allow control and ultimately oppression.

So what does this all mean? Should academics and other 'experts' simply keep out of the blogosphere and the media, and leave their musings for academic journals and unreadable books? Certainly not - but we do need to be a little more thoughtful about the agendas of those who might use us, who might misquote us, who might take us out of context and so forth. I suspect that this might have been what happened to Vint Cerf when he wrote a short while ago suggesting that internet access was not a human right. Others might well have been trying to use him... as they might well try to use any of those who write in this kind of a field. However clever we might think we are, we're very often pawns in the game, not players.

Thursday, 19 January 2012

Same as it ever was... privacy in history!

Earlier today, Eastman Kodak filed for Chapter 11 Bankruptcy protection. It might well signal the end for a company which was perhaps the single most important player in an industry that revolutionised the world in many ways: the photographic industry. Kodak has been in existence for 131 years, and in that time the world has changed dramatically in many ways - but perhaps not in as many ways as we might think. Kodak was crucial in the history of photography - but it was also crucial in the history of privacy.

Back in the late 19th century, when Kodak introduced the first hand-held camera, that new technology scared a lot of people - and inspired a whole new phase in the legal understanding of privacy. Amongst those alarmed by it were young lawyers Samuel Warren and Louis Brandeis - who went on to write a seminal piece for the Harvard Law Review: "The Right to Privacy". It was a remarkable piece of work and set into motion a train of legal thought that is still chuffing away to this very day. I remember when I first read it I assumed the date was a misprint: 1890. Surely that must mean 1980? Here's an extract:

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasion upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

The same debate rages now - and the 'enterprise and invention' that was 'modern' in 1890 is every bit as prevalent now. Have things really changed? Are the attacks on privacy a 'modern' crisis in the 21st century - or are things just the same as they ever were? Here's some more of Warren and Brandeis:

"Gossip is no longer the resource of the idle and the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. To occupy the indolent, column upon column is filled with idle gossip, which can only be procured by intrusion upon the domestic circle."

Lord Justice Leveson might well say something very similar when his inquiry into the culture, ethics and practice of the press comes to its conclusion. Phone hacking may be the latest form of 'intrusion upon the domestic circle' but in many ways it's not that different from the tactics that have been used by the press (and others) for well over a century, as Warren and Brandeis made very clear.

So has much changed? Or is this all just human nature, so that we need to 'grin and bear it'? Has the technological development of the last 120+ years had a significant effect? Here's a little more of Warren and Brandeis:

"Even gossip apparently harmless, when widely and persistently circulated, is potent for evil."

The internet, by its very nature, gives a far greater opportunity for wide and persistent circulation of gossip - but once again, it's not qualitatively different from what Warren and Brandeis were concerned about. The tools are more efficient, the mechanisms more generally available, and the scale larger, but isn't it the same problem, just writ a bit larger? The other side of the coin, however, is also, in my opinion, true. Privacy isn't a problem that's going away - and it's not, despite the suggestions of the likes of Mark Zuckerberg, something that's no longer a social norm. The way in which Warren and Brandeis's piece, written more than 120 years ago, fits so well with current practices and current concerns suggests precisely the opposite. Privacy is still an issue - and it will in all likelihood remain an issue forever. They were right to be concerned about it - and right, in my opinion, that we have a right to privacy. We had it then, and we have it now - not an absolute right, not a right that overrides other competing rights such as freedom of expression, but a right that needs to be considered, and needs to be fought for. That fight will go on... as it always has.

Thursday, 12 January 2012

10 things I hate about the ICO

With apologies to William Shakespeare, Elizabeth Barrett Browning, Heath Ledger, Julia Stiles and many more…

10 things I hate about the ICO

I hate the way you ask for teeth but seem afraid to bite
I hate the way you think the press are far too big to fight
I hate the way you always think that business matters most
Leaving all our online rights, our privacy, as toast

I hate the way you keep your fines for councils and their kind
While leaving business all alone, in case the poor dears mind
I hate the way you take the rules that Europe writes quite well
And turn them into nothing much, as far as we can tell

I hate the way that your advice on cookies was so vague
Could it possibly have been, you were a touch afraid?
I hate the way you talked so tough to old ACS Law
But when it came to action, it didn’t hurt for sure

I hate the way it always seems that others take the fore
While you sit back and wait until the interest is no more
I hate that your investigations all stop far too soon
As PlusNet, Google and BT have all found to their boon

I hate the way you tried your best to hide your own report
‘Bury it on a busy day’; a desperate resort!
You should be open, clear and fair, not secretive and poor
We’ll hold you up for all to see – we expect so much more!

I hated how when Google’s cars were taking all our stuff
You hardly seemed to care at all – that wasn’t near’ enough
Even when you knew the truth, you knew not what to do
It took the likes of good PI to show you where to go…

I hated how my bugbears Phorm, didn’t get condemned
Even when their every deed could not help but offend
You let them off with gentle words, ‘must try harder’ you just said
Some of us, who cared a lot, almost wished you dead

You tease us, tempt us, give us hope – then let us down so flat
We think you’re on our side – you’re not – and maybe that is that!
Will all these bad things ever change? We can but hope and dream
That matters at the ICO aren’t quite as they might seem.

We need you, dearest ICO, far more than we should
We’d love you if you only tried to do the job you could
We’d love you if you stood up tall, and faced our common foes
Until you do, sad though it is, then hatred’s how it goes.

P.S. I don’t really hate the ICO at all.... this is 'poetic' licence!

Wednesday, 11 January 2012

The Internet IS a (Human) Right...

It isn’t often that I find myself disagreeing with something that Vint Cerf, one of the ‘fathers of the internet’, has said, but when I read his much publicised Op Ed piece in the New York Times, I did.

First of all, and perhaps most importantly, I didn’t like the headline, which stated baldly and boldly that ‘Internet Access is not a Human Right’. Regardless of whether you agree or disagree with that statement, the piece said a great deal more than that – indeed, the main thrust of the argument was about the importance of the internet, and of internet access, to human rights. Many people will have just read the headline – or even read the many tweets which stated just that headline and a link – and drawn conclusions very different to those which Cerf might like. The headline, of course, may well have been the choice of the editorial team at the New York Times, rather than Cerf himself, but either he was OK with it or he allowed himself to be led in a particular direction.

Secondly, I think the point that he makes leading to this headline, and to his conclusions, reflects a particularly US perspective on 'human rights' - a minimalist approach which emphasises civil and political rights and downplays (or even denies) economic and social rights amongst others. Most of the rest of the world takes a broader view of human rights: the International Covenant on Economic, Social and Cultural Rights was introduced in 1966, and has been ratified by the vast majority of the members of the UN – but not by the US. The covenant includes such rights as the right to work, the right to social security, rights to family life, right to health, to education and so forth - and it isn't too much of a stretch to see that right to internet access might fit within this spectrum.

That Cerf doesn't see it this way is not surprising given that he is American - but I think his argument is weaker than that. In the piece, Cerf gives the example of a man not having a right to a horse. He talks about how a horse was at one time crucial to ‘make a living’, and that means that the ‘human right’ isn’t a right to have a horse, but a right to ‘make a living’. However, even that’s based on assumptions to do with our time and system. Do you ‘need’ to ‘make a living’ if your society isn’t based on capitalism? Non-capitalist societies have existed in the past - and indeed exist on small scales in various places around the world today. Can we really assume that they will never exist in the future? It is a bold assumption to make - but not, I think, one that needs to be made.

We need to be very careful about the assumptions we make about any human right – and remember that, in practice, many of what we consider to be human rights are instrumental, qualified, or contextual rather than absolute, pure and simple. Another example from the legal field: do we have a ‘right to a fair trial’ – or a right to justice? Trial by jury may be the best way we know now of assuring justice, but might there not be other ways?

What does this mean? Well, primarily, to me, it means we need to be less 'purist' about the terms we use, and more pragmatic - and to understand that we live in a particular time, where particular things matter. Moreover, the language that is currently used in most parts of the world is one in which the term 'human right' has power - and we should not be afraid to use that power. Right now, to flourish in a 'free', developed society, internet access is crucial. Perhaps even more to the point, internet access has shown itself to have a potential for liberation even in places less 'free' and less 'developed'. I'm not a cyber-utopian - and I fully acknowledge the strengths of the arguments of Morozov about the potential of the internet for control as much as for liberation - but for me that actually makes it even more important that we look at the internet from a rights perspective: if we have a right to internet access then it's much easier to argue that we have rights (such as privacy rights) while we use the internet, and those rights are critical for supporting the more liberating aspects of the internet.

That's another thing that disappoints me about Cerf's Op Ed piece. He doesn’t mention privacy, he doesn’t mention freedom from censorship, he doesn’t mention freedom from surveillance – I wish he would, because next after access these are the crucial enablers to human rights, to use his terms. I’d put it in stronger terms myself. I’d say we have rights to privacy online, rights to freedom from censorship, and rights to freedom from surveillance. If you don’t want to call them human rights, that’s fine by me – but right now, right here, in the world that we live in, we need these rights. The fact that we need them means that we should claim them, and that governments, businesses and yes, engineers, should be doing what they can to ensure that we get them.

Finally, going back to the headline itself, I think Cerf, and other seminal figures in the history and development of the internet, have got to be careful about not letting themselves be used by those who'd like to restrict internet access and freedom: there are others with very dubious agendas who would like to push the 'internet access not a human right' point. When one of the fathers of the internet writes that internet access is not a human right, regardless of the details below, there is a significant chance that it will be latched onto by those who would like to restrict our freedoms, whether to enforce copyright, to 'fight' terrorism or online crime, or for other purposes. That is something that we should be careful to avoid.

ADDENDUM (15/1/2012)

There have been a number of other interesting blogs/responses on the subject. Here are links to a few of them:

Adam Wagner's UK Human Rights Blog
Frank Pasquale on madisonian.net
Amnesty International's Scott Edwards blog post on HUMAN RIGHTS NOW
Sherif Elsayed-Ali in Egypt Independent

All well worth a read!

Thursday, 5 January 2012

Personalisation and politics

I have to admit to following the Republican party's presidential candidate race with some fascination. It's a slightly ghoulish fascination - there's often a touch of fear when I listen to some of the candidates, and there's always the underlying question of 'how low can they go'. There's comedy, tragedy, a bit of historical eccentricity, and often a good deal of farce. It's also, however, revealing of some of the issues that we should take seriously in terms of how our politics, our democratic politics, functions - and in particular, how it might function in the future.

One particular aspect came to the fore for me in the recent Iowa Caucus: the role of advertising in politics. We haven't developed it to nearly the same degree in the UK as the US, though every successful politician this side of the pond has tried to follow Thatcher's hugely effective use of Saatchi & Saatchi. In the US, though, it's a highly developed art form - and is only likely to become more so. In Iowa, an orchestrated advertising campaign against the surging Newt Gingrich sent him down from first to fourth place (and nearly out of the race) in a matter of days. Advertising works, or at least appears to - and politicians know it, and know it well.

What might this mean for the future? I've written about advertising many times before, both in academic papers and in blogs. The internet is changing advertising - and we need to be aware of how that change might have an impact not only on our commercial behaviour but on our political behaviour: on politics itself. There are two trends in internet advertising that are particularly relevant and worth thinking about here: behavioural profiling and personalisation. People browsing the internet can be (and are) profiled according to their online behaviour, from the search terms they use and the links they follow to the friends they have on social media sites, the music they listen to, movies they watch and so forth. That profiling is generally used to target advertising - advertising more suited to their personal needs and desires. My last blog, Privacy and the Phantom Tollbooth, talked about some of the risks of this kind of thing - but when looked at from a political perspective the risks are even more sinister.

Through profiling, it is possible to make good guesses - sometimes very good guesses - as to which political issues matter to someone and which ones don't. With just a little bit of work, the vast majority of which could be entirely automatic, it could become possible to create tailored political advertisements designed to highlight the policies or features of a particular candidate or party that are of specific interest to an individual - and to omit anything that might detract from their attraction. And, given the US experience in particular, to do the reverse for any opponents - automatically pick out the things that will make a particular voter see them in the most negative light possible.

Taking this a few steps further, these ads could include background music that the advertiser knows that you particularly like, and even voice-overs by an actor that they know you admire - they could even choose the colours, styles and typefaces to suit your 'known' preferences. Of course they wouldn't do this for everyone, at least not at first, but it wouldn't take that much effort to produce a range of options (a handful of different actors, soundtracks etc would do the job) that would cover most of the key, swing voters. Political advertising in its current form is already persuasive - how much more persuasive could it be in this kind of form? And remember that with behavioural targeting in the hands of relatively few advertising organisations, these advertisements can be sent to a vast number of different websites that you visit. They can be sent to you in emails. They can be inserted at the beginnings of videos that you watch online.... the possibilities are endless.
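To make the mechanics concrete, here's a minimal sketch of how a prepared set of ad variants might be matched against a behavioural profile. Every field, issue and name here is invented purely for illustration - real ad-targeting systems are vastly more elaborate - but the underlying matching logic really is this simple.

```python
# Hypothetical sketch: selecting a tailored political ad from a
# behavioural profile. All profile fields, issues, soundtracks and
# names are invented for illustration.

def pick_ad_variant(profile, variants):
    """Score each prepared ad variant against a voter's inferred
    interests and return the best-scoring one."""
    def score(variant):
        s = 0
        # Reward variants that lead on issues the profile cares about...
        s += sum(profile["issues"].get(issue, 0) for issue in variant["issues"])
        # ...and that use a soundtrack or narrator the profile is known to like.
        if variant["soundtrack"] in profile["liked_music"]:
            s += 2
        if variant["narrator"] in profile["admired_actors"]:
            s += 2
        return s
    return max(variants, key=score)

profile = {
    "issues": {"economy": 3, "healthcare": 1},  # inferred from browsing/searches
    "liked_music": {"country"},                 # inferred from listening habits
    "admired_actors": {"Actor A"},              # inferred from viewing habits
}
variants = [
    {"name": "ad-economy", "issues": ["economy"],
     "soundtrack": "country", "narrator": "Actor A"},
    {"name": "ad-environment", "issues": ["environment"],
     "soundtrack": "classical", "narrator": "Actor B"},
]
print(pick_ad_variant(profile, variants)["name"])  # prints "ad-economy"
```

A handful of variants and a scoring function like this is all it would take to cover most swing voters - which is precisely why the scenario above is not far-fetched.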

Is this far-fetched? A nightmare scenario beyond the realms of possibility? Spend a little time watching US elections and I don't think you'll feel that way. It's just the logical extension of existing advertising and political trends. It is important to remember, too, that this kind of thing requires money - and money already talks enormously in politics. The power of personalised advertising can very easily become just one more tool in the hands of those who already wield excessive power over the political domain.

What can be done? Well, the first thing is a matter of awareness. The impact of behavioural advertising goes beyond the commercial sphere, and we need to understand this. It's not just a matter of deciding which deodorant or drink we choose - potentially it's about our whole lives. We ignore its importance at our peril - so things like 'do not track' really matter, and the European 'Cookie Directive' should not be dismissed as a legalistic impediment to good business. They may not be perfect tools - indeed, it seems clear that they aren't - but they're being pushed for very good reasons. Tracking on the internet should not be the default, accepted without a thought. The risks are far greater than most people realise.
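As a footnote on 'do not track': the mechanism itself is technically trivial. A compliant browser simply sends an HTTP request header, DNT: 1, with each request, and a server that wanted to honour it would need no more than a couple of lines - the function below is a hypothetical sketch, not any real site's code. The whole argument is about willingness, not technology.

```python
# 'Do Not Track' is signalled by the browser as a single HTTP request
# header: "DNT: 1". Honouring it server-side is trivial (hypothetical
# sketch - not any real site's code).

def should_track(headers):
    """Return False if the visitor has asked not to be tracked."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # prints False - user opted out
print(should_track({}))            # prints True - no preference expressed
```

The point of the sketch is the contrast: the opt-out signal costs almost nothing to respect, and yet tracking remains the default.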