Last night I was reading my daughter's bedtime story from that classic of American children's literature, The Phantom Tollbooth, when I came across a passage that brilliantly sets out the problems that can arise from the gathering and use of private data. Bear in mind that The Phantom Tollbooth was first published in 1961: Norton Juster didn't have the benefit of seeing how what can now loosely be described as 'big data' operates - but he did have an understanding of how our information can be used against us, even when we have 'nothing to hide'.
To set the scene: Milo the boy, Tock the Watchdog and the Humbug, a huge insect, are on the final stages of their mission to rescue the princesses Rhyme and Reason from the Castle in the Air. They reach the bottom of the final staircase, pursued by demons, and fail to notice a little round man sleeping peacefully on a very large ledger. The next part I'll simply quote:
------------------------------------------
"NAMES?" the little man called out briskly, just as the startled bug reached for the first step. He sat up quickly, pulled the book out from under him, put on a green eyeshade, and waited with his pen poised in the air.
"Well, I..." stammered the bug.
"NAMES?" he cried again, and as he did, he opened the book to page 512 and began to write furiously. The quill made horrible scratching noises, and the point, which was continuously catching on the paper, flicked tiny inkblots all over him. As they called out their names, he noted them carefully in alphabetical order.
"Splendid, splendid, splendid," he muttered to himself. "I haven't had an M for ages."
"What do you want our names for?" asked Milo, looking anxiously over his shoulder. "We're in a bit of a hurry."
"Oh, this won't take a minute," the man assured them. "I'm just the official Senses Taker, and I must have some information before I can take your senses. Now, if you'll just tell me when you were born, where you were born, why you were born, how old you are now, how old you were then, how old you'll be in a little while, your mother's name, your father's name, your aunt's name, your uncle's name, your cousin's name, where you live, how long you've lived there, the schools you've attended, the schools you haven't attended, your hobbies, your telephone number, your shoe size, shirt size, collar size, hat size, and the names and addresses of six people who can verify all this information, we'll get started."
------------------------------------------
These days, of course, there wouldn't need to be a 'senses taker' to get most of that information - 800 million or so of us have already 'volunteered' much of it to Facebook, while much of the rest of it (the sensible bits anyway) can be gathered reasonably directly from other sources. Anyway, the Senses Taker proceeds to gather all this and more, before Milo quite reasonably suggests that they need to get a move on, and can they just proceed. At that point, the Senses Taker demands to know their destination.
------------------------------------------
"The Castle in the Air," said Milo impatiently.
"Why bother?" said the Senses Taker, pointing to the distance. "I'm sure you'd rather see what I have to show you."
As he spoke, they all looked up, but only Milo could see the gay and exciting circus there on the horizon. There were tents and side shows and rides and even wild animals - everything a little boy could spend hours watching.
"And wouldn't you enjoy a more pleasant aroma?" he said, turning to Tock.
Almost immediately the dog smelt a wonderful smell that no-one but he could smell. It was made up of all the marvellous things that had ever delighted his curious nose.
"And here's something I know you'll enjoy hearing," he assured the Humbug.
The bug listened with rapt attention to something he alone could hear - the shouts and applause of an enormous crowd, all cheering for him.
They each stood as if in a trance, looking, smelling, and listening to the very special things that the Senses Taker had provided for them, forgetting completely about where they were going and who, with evil intent, was coming up behind them.
The Senses Taker sat back with a satisfied smile on his puffy little face as the demons came closer and closer, until less than a minute separated them from their helpless victims.
But Milo was too engrossed in the circus to notice, and Tock had closed his eyes, the better to smell, and the bug bowing and waving, stood with a look of sheer bliss on his face, interested only in the wild ovation.
------------------------------------------
Of course Milo, Tock and the Humbug do eventually escape, and the Senses Taker's true nature is revealed: he is a demon himself:
------------------------------------------
"I warned you; I warned you I was the Senses Taker," sneered the Senses Taker. "I help people find what they're not looking for, hear what they're not listening for, run after what isn't there. And, furthermore," he cackled, hopping around gleefully on his stubby legs, "I'll steal your sense of purpose, take your sense of duty, destroy your sense of proportion..."
------------------------------------------
It's as good a description of the dangers of the personalisation of the internet - which I've written about before, and which is inherent in the Symbiotic Web model that underlies a lot of my work - as you might find. The Senses Taker's process - gather all the data you can, then use it to work out how each individual might be seduced into doing something to the benefit of the Senses Taker rather than to the benefit of the individual - is pretty much exactly what behavioural advertising does, what Facebook does, and what many other kinds of privacy-invasive, profile-based systems do. And the Senses Taker is a demon...
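For the technically minded, the parallel can be made concrete. Below is a minimal sketch, in Python, of the loop that profile-based personalisation runs - gather whatever trivial signals are available, build a profile, then serve whatever is predicted to hold the user's attention for the system's benefit. Every name, weight and number in it is invented for illustration; no real system's workings are implied.
------------------------------------------
# A hypothetical sketch of profile-based personalisation - the Senses Taker's
# method in code. All data, weights and names are invented for illustration.
from collections import Counter

def build_profile(events):
    """Aggregate trivial observations (clicks, searches, 'likes') into a profile."""
    profile = Counter()
    for event in events:
        profile[event["topic"]] += event.get("weight", 1)
    return profile

def pick_content(profile, inventory):
    """Choose the item predicted to hold this user's attention -
    optimised for the platform's benefit, not necessarily the user's."""
    return max(inventory,
               key=lambda item: profile[item["topic"]] * item["revenue_per_view"])

# Milo's circus on the horizon, in data form:
events = [{"topic": "circus", "weight": 5}, {"topic": "homework", "weight": 1}]
inventory = [
    {"name": "circus video", "topic": "circus", "revenue_per_view": 0.02},
    {"name": "maths lesson", "topic": "homework", "revenue_per_view": 0.001},
]

print(pick_content(build_profile(events), inventory)["name"])  # the circus wins, every time
------------------------------------------
The point of the sketch is the objective function: the content is chosen to maximise the platform's return, not the user's wellbeing - just as the Senses Taker's circus, smells and applause serve the Senses Taker.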
P.S. If you haven't read The Phantom Tollbooth, you should! It's a brilliant book - lots of fun and, at the same time, actually quite deep!
Sunday, 18 December 2011
12 wishes for online privacy....
It's that time of year for lists, predictions and so forth. I don't want to make predictions myself - I know all too well how hard it is to predict anything in this world, and even more so in the online world. I do, however, have wishes. Many of these are pipe dreams, I'm afraid, but some of them do have some small hope of coming true. So here they are, my twelve wishes for online privacy…
- That I don’t hear the ‘if you’ve got nothing to hide…’ argument against privacy ever again...
- That governments worldwide begin to listen more to individuals and to advocacy groups and less to the industry lobby groups, particularly those of the copyright and security industries
- That privacy problems continue to grab the headlines – so that privacy starts to be something of a selling point, and companies compete to become the most ‘privacy-friendly’ rather than just paying lip service to privacy
- That the small signs I’ve been seeing that Google might be ‘getting’ privacy do not turn out to be illusions. Go on, Google, go on!
- That my ‘gut feeling’ that 2012 could be the peak year for Facebook turns out to be true. Not because I particularly dislike Facebook – I can see the benefits and strengths of its system – but because the kind of domination and centralisation it represents can’t be good for privacy in the end, and I don't believe that the man who said that privacy was no longer a 'social norm' has really changed his spots
- That the ICO grows some cojones, and starts understanding that it’s supposed to represent us, not just find ways for businesses to get around data protection regulations…
- That the media (and yes, I’m talking to YOU, BBC), whenever they get told about a new technical innovation, don’t just talk about how wonderful and exciting it is, but think a little more critically, and particularly about privacy
- That the revision to the Data Protection Directive (or perhaps Regulation) turns into something that is both helpful and workable – and not by compromising privacy to the wishes of business interests.
- That neither SOPA nor PIPA gets passed in the US…
- That the right to be forgotten, something I’ve written about a number of times before, is discussed for what it is, not what people assume it must be based solely on the misleading name. It’s not about censorship or rewriting history. It really isn’t! It’s about people having rights over their own data! Whose data? Our data!
- That the Labour Party begins to put together a progressive digital policy, and says sorry for ever having listened to the copyright lobby in introducing the Digital Economy Act!
- That we start thinking more about the ordinary privacy of ordinary people, not just that of celebrities and politicians…
These are of course just a sample of the things I could say - but if even a few of them start to become true, it would be a really good start. Here's wishing....
Thursday, 8 December 2011
Privacy is not the enemy...
I attended the Oxford Institute event 'Anonymity, Privacy and Open Data' yesterday, notable amongst other things for Professor Ross Anderson's systematic and incredibly powerful destruction of the argument in favour of 'anonymisation' as a protection for privacy. It was a remarkable event, with excellent speakers talking on the most pertinent subjects of the day in terms of data privacy: compelling stuff, and good to see so many interesting people working in the privacy and related fields.
And yet, at one point, one of the audience asked whether a group like this was not too narrow - whether, by focussing on privacy, we were losing sight of other 'goods'. He was thinking particularly of medical goods, as 'privacy' was seen as threatening the possibility of sharing medical data. I understood his point - and I understood his difficulty, as he was in a room largely full of people interested in privacy (hardly surprising given the title of the event). Privacy advocates are often used to the reverse position - trying to 'shout out' about privacy to a room full of avid data-sharers or supporters of business innovation above all things. A lot of antagonism. A lot of feelings of being 'threatened'. And yet I believe that many of those who feel threatened are missing the point about privacy. Just as Guido Fawkes is wrong to characterise privacy as merely a 'euphemism for censorship' (as I've written about before) and Paul McMullan is wrong to suggest that 'privacy is for paedos', the idea that privacy is the 'enemy' of so many things is fundamentally misconceived. To a great extent the opposite is true.
Privacy is not the enemy of free expression - indeed, as Jo Glanville of Index on Censorship has argued, privacy is essential for free expression. Without the protection provided by privacy, people are shackled by the risk that their enemies - those who would censor them, arrest them or worse - can uncover their identities, find them and do their worst. Without privacy, there is no free expression.
Privacy is not the enemy of 'publicness' - in a similar way, to be truly 'public', people need to be able to protect what is private. They need to be able to have at least some control over what they share, what they put into the public. If they have no privacy, no control at all, how can they know what to share?
Privacy is not the enemy of law enforcement - privacy is sometimes suggested to be a tool for criminals, something behind which they can hide. The old argument that 'if you've got nothing to hide, you've got nothing to fear' has been exposed as a fallacy many times - perhaps most notably by Daniel Solove (e.g. here) - but there is another side to the argument. Criminals will use whatever tools you present them with. If you provide an internet with privacy and anonymity, they'll use that privacy and anonymity - but if you provide an internet without privacy, they'll exploit that lack of privacy. Many scams related to identity theft are based on taking advantage of that lack of privacy. It would perhaps be stretching a point to suggest that privacy is a friend to law enforcement - but it is as much of an enemy to criminals as it is to law enforcement agencies. Properly implemented, privacy can protect us from crime.
Privacy is not the enemy of security - in a similar way, terrorists and those behind what's loosely described as cyberwarfare will exploit whatever environment they are provided with. If Western law enforcement agencies demand that social networks install 'back doors' to allow them to pursue terrorists and criminals, you can be sure that those back doors will be used by their enemies - terrorists, criminals, agents of enemy states and so forth. This last week has seen Privacy International launch their 'Big Brother Inc' database, revealing the extent to which surveillance products developed in the West are being sold to despotic and oppressive regimes. It's systematic, and understandable. Surveillance is a double-edged sword - and privacy is a shield which faces many ways (to stretch a metaphor beyond its limits!). Proper privacy protection works against the 'bad guys' as well as the 'good'. It's a supporter of security, not an enemy.
Privacy is not the enemy of business - though it is the enemy of certain particular business models, just as 'health' is the enemy of the tobacco industry. Ultimately, privacy is a supporter of business, because better privacy increases trust, and trust helps business. Governments need to start to be clear that this is the case - and that by undermining privacy (for example through the oppressive and disproportionate attempts to control copyright infringement) they undermine trust, both in businesses and in themselves as governments. Privacy is certainly a challenge to business - but that's merely reflective of the challenges that all businesses face (and should face) in developing products and services that people want to use and are willing to pay for.
Privacy is not the enemy of open data - indeed, precisely the opposite. First of all, privacy should make it clear which data should be shared, and how. 'Public' data doesn't infringe privacy - from bus timetables to meteorological records, from public accounts to parliamentary voting records. Personal data is just that - personal - and sharing it should happen with real consent. When is that consent likely to be given? When people trust that their data will be used appropriately. When will they trust? When privacy is generally in place. Better privacy means better data sharing.
All this is without addressing the question of whether (and to what extent) privacy is a fundamental right. I won't get into that here - it's a philosophical question and one of great interest to me, but the arguments in favour of privacy are highly practical as well as philosophical. Privacy shouldn't be the enemy - it should be seen as something positive, something that can assist and support. Privacy builds trust, and trust helps everyone.
Saturday, 26 November 2011
Heroes and villains?
I wrote a piece a little while ago about Julian Assange - you can find it here - which amongst other things suggested that just because you consider someone a hero for one part of their life doesn't mean they can't be something rather different in another. Events this week have reminded me of the other side of that coin: just because someone might be seen as a villain in one way doesn't mean that everything about them is despicable. What's more, if we believe in human rights, it doesn't mean that 'villains' shouldn't have those human rights. One particular such 'villain' has been in the news these last few days: Max Mosley.
Before I say anything more, I need to make it clear that my background is very left wing - I have grandparents, step-grandparents and great aunts who were communists. I myself had the nickname 'commie bastard' at college - though all that really meant is that I was the only member of the Labour Party at what was then the extremely right-wing Pembroke College Cambridge. As such, Max Mosley is someone who I 'instinctively' look on with extreme distaste. His father, Oswald Mosley, was a particular figure of hate for my family - in case anyone is unaware, Oswald Mosley was the founder and leader of the British Union of Fascists, and a supporter of Hitler. Hitler was a guest at his wedding. I still consider myself to be very much on the political left. Max Mosley, not just as his father's son, but as someone who represents extreme wealth and the excesses connected with it, is not someone that I have anything but instinctive dislike for.
...but just as even 'heroes' like Assange need to be subject to the law when appropriate (as I argued before), even those you dislike intensely need to be accorded rights. Indeed, one of the key tests of whether you really believe in human rights is whether you really grant them to those you dislike. Many people have been tested on those grounds over the last months and hardly come up smelling of roses - the attitude to the death of Gaddafi is perhaps the most extreme example. For Max Mosley, the test is simpler and should be less taxing. However much I might dislike what he seems to represent, he still deserves privacy. What the newspapers did to him was, in my view, unacceptable - and he was right to fight against them. Personally I thought he came across well in the Leveson inquiry. It wasn't Mosley that looked like the villain here - and his work in supporting other victims of phone hacking is something to be applauded too.
...which brings me onto the other 'heroes' and 'villains' of the last week: the press. If you listened as I did to the testimony of the many witnesses to the Leveson inquiry, from Mosley himself to celebs like Hugh Grant, Steve Coogan and Sienna Miller, to JK Rowling, to the families of Milly Dowler and Madeleine McCann, and to Margaret Watson, it's hard not to see the press as venal, vicious, unprincipled and unfair. The instinctive reaction again is to punish them, to clamp down on them, to restrict them. And yet that's not the whole story either - because we also have to remember how the story itself broke, through the work of the Guardian. We have to remember how the MPs' expenses scandal was revealed by the Telegraph. How the cricket match-fixing scandal was uncovered by the now-departed News of the World. Just as Assange and Mosley could be heroes in one way and might be villains in another, so it is with the press. We need to look at the balance, and remember both sides of all their stories.
How is that balance maintained? The most important thing to remember is that it's a dynamic balance, and that we have to remain vigilant. Don't overreact - an easy temptation, particularly in relation to the press; if the stories about Max Mosley planning to sue Google are true, they would be a prime example of such overreaction, and something I plan to write about separately - but don't be afraid of action either. Even in terms of the press, there are two very different things going on right now. At the same time as any action emerging from Leveson might produce restrictions on press activity in relation to privacy, the potential changes to the draft defamation bill might produce greater freedom for the press in relation to defamation. Instinctively, again, that might seem right to people of my political perspective - defamation law has often been seen as a tool for the rich, while privacy should be (though often isn't, as I've argued before) something of as much interest to the 'insignificant' as to the rich and famous. Both of the potential shifts in balance, from Leveson and from changes to libel law, could well be appropriate. Let's hope it works out that way.
Saturday, 19 November 2011
Whose data? Our data!!!
There’s a slogan echoing around the streets of major cities across the globe at the moment: ‘Whose streets? Our streets!’ It’s the mantra of the ‘occupy’ movement, expressing frustration at injustice – particularly economic injustice – and the sense that all kinds of things that should be ‘ours’ have been taken out of ‘our’ control.
The same could – and should – be said about personal data. The mantra of the occupy movement has a very direct parallel in the world of data, which is why I think we should be saying, loud and proud, ‘Whose data? Our data!’
Just as for the occupy movement (which I’ve written about before), the chances of getting everything that we want in relation to data are slim – but the chances of changing the agenda are far better, and the chances of bringing about some real changes in the medium and long term better still. The occupy movement, particularly in the US, have brought some ideas that were previously hardly talked about in the media, like wage and wealth inequality, close to the top of the agenda. They may even have moved them high enough that politicians feel the need to do something about them – I certainly hope so.
The personal data agenda.
Can we do the same for personal data? One of the current points of discussion is the idea of a ‘right to be forgotten’ – something that relates directly to the question of whether personal data is ‘ours’ in any meaningful way. I’ve spoken and written about it a lot before – my academic article setting out my take on it, ‘a right to delete?’, can be found online here, while I’ve also blogged on the subject on the INFORRM blog. It’s currently under discussion as part of the forthcoming revision to the Data Protection Directive, against considerable resistance from the UK. The latest manifestation of this resistance has come from the ICO, which suggests that the right to be forgotten should not be included as it would be unenforceable, and that its inclusion would give people unrealistic expectations, as well as potentially interfering with free speech. Effectively, they seem to be suggesting that including it would send out the wrong message. This pronouncement echoes previous statements by Ken Clarke in May, and Ed Vaizey a couple of weeks ago – it looks like part of a campaign to rein in the attempts by Europe to give more weight to privacy and user rights in the balancing exercise with business use of personal data.
Are the ICO right?
I believe that the ICO are wrong about this in a number of ways. First of all, I think they’re wrong about the unenforceability issue – at least to a great extent. At the Mexico City conference on data protection earlier this month, even Google admitted that they could do their part, but that it would be expensive. That's very different from saying that it is unenforceable. What’s more, the right doesn’t have to be perfectly implemented in order to benefit people – if, for example, it allowed people to delete, easily, simply and quickly, their Facebook profiles or the data held on them by Tesco, that could be significant. It could also, as I’ve argued in my article, help persuade businesses to develop business models less dependent on the gathering and holding of massive amounts of personal data – if they know that such data might be ‘deletable’.
Secondly, I believe they’re quite wrong about the free speech issue – again, as I outline in my paper, if proper exceptions are put in place to allow archives to be kept, then free speech isn’t affected at all. The idea is not to be able to delete a record of what school you went to – but to be able to delete records of what breakfast cereal you bought, or profiles created based on surveillance of your internet activity.
Thirdly, and perhaps most importantly, I think they’re wrong about the message being sent out – profoundly wrong. The message that the ICO is sending out is that business matters more than people’s rights – and it’s a message that has echoes throughout the world at the moment, echoes of the kind that have provoked the anger, in so many people, that lies behind the ‘occupy’ movement. It’s the same logic that supports bankers’ bonuses over benefits for the disabled, and looks for tax cuts for the rich whilst enforcing austerity measures that cut public services to the bone and beyond. Even more importantly, it suggests that the ICO does not see its role as protecting individual rights over data – but as supporting the government’s business agenda.
Whose data? Our data!
The actions and messages of the ICO are essentially saying that this is too difficult to do, so we shouldn’t even try. It reminds me very much of the arguments against the idea of smoke-free restaurants and pubs – a lot of people said it would be impossible, that it would drive restaurants and pubs out of business. Further back, similar arguments have been made throughout history – most dramatically against the abolition of slavery. We shouldn’t let this kind of logic stop us from doing what is right – we should find a way. And we can find a way, if only we can find the will. The ICO needs to be stronger, to understand that it has to serve us, not just business or the government. Privacy International asked in February whether the ICO was fit for purpose – and the answer increasingly seems to be a clear ‘no’. We need to remind them what their purpose should be – and that, more than anything else, is to represent us, the people. We need to remind them whose data they’re supposed to be protecting. Whose data? Our data!
Tuesday, 15 November 2011
The significance of the insignificant
I watched yesterday’s parliamentary committee session on Privacy and Injunctions with some interest – after all, privacy is one of my subjects. The excellent David Allen Green (of Jack of Kent fame) gave the committee a number of lessons both in law and technology, and Guido Fawkes (Paul Staines) tormented them with the reality of the modern world. It was entertaining stuff – and yet the more I watched, the less it seemed to be connected with what I see as the biggest and fastest growing problem that the internet in particular represents in terms of privacy.
That came to a head when Guido Fawkes made the remark that ‘privacy is just a euphemism for censorship’. It was a good soundbite – and fitted some excellent subsequent tweets – and he certainly had a point when considering the way that privacy has been used to protect the rich, the famous and the influential, particularly in relation to super-injunctions, one of the key subjects being discussed by the committee. As a football fan, I’ve hardly been able to blink this year without hearing another piece of gossip that I’m not allowed to know, let alone talk about. However, there’s another side to privacy, one to which neither the committee nor the witnesses before them seemed to pay any attention. The side of the insignificant.
Insignificant people have the right to privacy too
The focus of both the committee and the witnesses, entirely understandably given their remit, was on the privacy of what might loosely be described as ‘significant’ people. And yet ordinary people, ‘insignificant’ people, have a right to privacy too. Protecting their privacy, except in unusual circumstances, isn’t anything to do with censorship. It’s about autonomy. It’s about the right, as Warren and Brandeis put it so long ago, to be let alone. The right to live, and to enjoy the fruits of our modern society freely and without excessive interference.
By focussing on privacy as protecting significant information about ‘significant’ people, we miss what is, in many ways, the far more important issue: the lack of control over the gathering of insignificant information about ‘insignificant’ people.
The result is that what is seen as ‘privacy’ – insofar as it is protected by law at all (and David Allen Green gave yesterday’s committee an excellent exposition of the inadequacies of that law) – very often ends up protecting the ‘wrong’ people in the wrong ways, and failing to protect the right people in the right ways.
Insignificant invasions of privacy matter
Protection against the significant stuff, particularly for significant people, is already provided. The law protects against defamation – perhaps excessively, at least in the eyes of the supporters of libel reform – and ‘significant’ people can and do use that law for protection. It provides little, however, in the way of protection against ‘insignificant’ invasions of privacy.
Why is this? To a great extent it is because these ‘insignificant’ breaches of privacy are seen as, well, insignificant. On their own, that may even be appropriate. What does it matter if someone knows what I had for breakfast this morning, or what kind of music I’m listening to as I type this blog? Each individual fact gathered this way doesn’t seem to matter at all – and yet they do matter. They matter philosophically – they’re really my business, and no one else’s – but they also matter in a much more important way. In this digital world of ours, they’re used to profile me, to categorise me, to determine what advertisements I receive on the internet, perhaps what content I’m shown, what links I’m provided with. They might determine what prices I’m offered for insurance, for plane tickets and so forth. They might be used to ‘rate’ me (I’m not even going to start on Klout) in other ways. They might be used to assess my likely political leanings – perhaps just for advertising at the moment, but after that….
…and yet far less attention is paid to them than to the ‘obvious’ side of privacy. Even on social networking sites like Facebook, attention is paid to the ‘significant’ privacy problems – compromising or clearly embarrassing photographs, for example – rather than to the much more financially important detailed profiling and social mapping data that are the basis of Facebook’s business model. Do the compromising photos matter? Yes, of course they do, but ways are already being found to deal with them, through education of the users, or at least greater understanding from the users – something which has at least some chance of succeeding. As for the profiling data, few people seem to care much at all.
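To see how the individually trivial becomes collectively significant, here is a toy sketch in Python. Everything in it – the ‘signals’, the weights, the pricing rule – is invented; the point is simply that a handful of ‘insignificant’ facts can quietly become a judgement about you, and a price.
------------------------------------------
# Hypothetical sketch: individually 'insignificant' facts combined into a
# significant inference. All signals, weights and prices are invented.

observations = {
    "breakfast": "premium organic muesli",
    "music_while_browsing": "classical",
    "device": "new laptop, latest browser",
    "postcode_area": "leafy suburb",
}

# Each fact alone says little; a crude scoring model turns them into a
# 'willingness to pay' estimate that can silently adjust the prices shown.
AFFLUENCE_SIGNALS = {
    "premium organic muesli": 2,
    "classical": 1,
    "new laptop, latest browser": 2,
    "leafy suburb": 3,
}

score = sum(AFFLUENCE_SIGNALS.get(value, 0) for value in observations.values())

base_price = 120.00                      # the price an 'unknown' visitor would see
quoted_price = base_price * (1 + 0.05 * score)
print(f"Price quoted to this 'insignificant' user: £{quoted_price:.2f}")
------------------------------------------
No single observation would be worth protecting on its own; it is the aggregation, and the decisions silently built on it, that matter.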
Changes are needed
There are all sorts of legal problems in dealing with the insignificant stuff. There is a need to show damage – and individually insignificant facts aren’t damaging; even profiling isn’t necessarily directly ‘damaging’ in financial terms. There is the thorny issue of consent – do we consent to all this data gathering and use through the various terms and conditions we never read? Do we, as the recent Wikileaks/Twitter ruling suggests, have no real expectation of privacy in our internet dealings?
As it stands, there is little to help. Law doesn’t seem to cut it – for all the valiant efforts of the Article 29 Working Party and others. Politicians in general seem neither to understand nor to care. Business models, particularly on the internet, almost rely on these invasions of privacy. We need to change that. To protect the insignificant, we need a change in approach, a change in infrastructure, and a change in business plans. We need to understand and control online tracking. We need opt-in, not opt-out, we need explanations that actually explain, and we need a whole lot more. Most of all, we need better understanding that privacy is more than just a way for the rich and powerful to protect themselves. It's about all of us.
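The opt-in versus opt-out point is, at root, a point about defaults – and defaults can be shown in a few lines. A toy sketch in Python, with hypothetical settings and no real service implied:
------------------------------------------
# Hypothetical sketch of why defaults matter. No real service's settings implied.

def may_track(user_settings, default):
    """Tracking is allowed only if the effective setting is True."""
    return user_settings.get("tracking_consent", default)

untouched_settings = {}   # most users never change the defaults

# Opt-out world: tracking is on unless the user finds the switch and turns it off.
print(may_track(untouched_settings, default=True))    # True

# Opt-in world: tracking is off unless the user actively agrees.
print(may_track(untouched_settings, default=False))   # False
------------------------------------------
Most users never change the defaults, so whoever chooses the default effectively chooses the outcome – which is why the opt-in/opt-out distinction is anything but a technicality.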
The privacy of the insignificant hasn’t needed protecting before – only in this digital age can their insignificant events be gathered, or processed into something significant – so the law hasn’t been needed to protect them, and hasn’t developed a form that can protect them. It needs to now.
Thursday, 10 November 2011
The beginning or the end of cyberlaw?
From time to time I have described myself as a ‘cyberlawyer’. When I’ve done so, I’ve had three kinds of reaction: the positive, the negative and the dumbfounded. Some people find the idea of cyberlaw almost exciting – looking to the future in a kind of William Gibson-esque way. Others look at it with derision – Easterbrook’s comparison of it with the non-existent law of the horse back in 1996 is one that echoes still. Some simply don’t understand what cyberlaw is, or what it might be.
For a long time I’ve taken the side of the first – indeed, my enjoyment of science fiction was certainly part of what led me down the path of cyberlaw – but I’m beginning to think that the other two reactions are perhaps more appropriate – though not necessarily for the reasons that proponents of either argument might have made. It’s not, as Easterbrook suggested, that cyberlaw is too much of a niche subject, nor that ‘cyberspace’ is something only of interest to geeks and nerds. The opposite. Increasingly it seems that almost all lawyers will have to learn cyberlaw – and that almost all people are becoming citizens of cyberspace.
The significance of cyberlaw within the legal community seems to be growing. The first time I went to the cyberlaw section of the Society of Legal Scholars conference, at the LSE in 2008, I sat through sessions with just a handful of other scholars – making even a small seminar room feel empty. This year, at Downing College Cambridge, it was standing room only as pretty much every session was packed beyond the capacity of the room. We had to borrow chairs from other far less popular sessions, and even thought of moving to one of the bigger venues. In other ways, too, cyberlaw seems to be becoming more mainstream. Over the last month or so I’ve been lucky enough to make contributions to two high-quality blogs well outside the realms of cyberlaw – most recently writing about web-blocking for the UK Constitutional Law Group blog, and before that writing about the ‘right to be forgotten’ for the excellent INFORRM media law blog. Whilst I would like to pretend that I’ve been asked to make these contributions because of my individual brilliance, I have a feeling it’s much more of a reflection of the way that cyberlaw now impacts upon almost every aspect of law – and not just media and constitutional law.
Media lawyers need to understand the ‘new media’. Constitutional lawyers need to think about the impact of the cross-border nature of the internet on sovereignty, and the way that rights function online. Employment lawyers need to consider how social media impacts upon things like hiring and firing. Commercial lawyers need to understand electronic contracting. Intellectual property lawyers may well spend more time dealing with digital IP than anything else. Tax lawyers have to grapple with the complex issues of jurisdiction and so forth. Criminal lawyers have to look at how the rules of evidence apply to digital records, and think carefully about the legality of electronic investigatory methods. Human rights lawyers – and I consider my field to be as much human rights as cyberlaw – need to understand both the opportunities for and threats to human rights that arise as a result of the internet. And for each branch of law these are just some of the more obvious and superficial ways in which the digital world has to be taken into account – there are few areas of law where the internet doesn’t have a significant impact.
So what does this mean? Does the increasing importance of cyberlaw mean that we all have to become cyberlawyers – and hence that the whole idea of cyberlaw disappears? Will every lawyer be a cyberlawyer? Ultimately that may be so – but there’s a long way to go before that happens. The law is still finding it hard to come to terms with the internet, for all the efforts of the pioneering cyberlawyers – and the politicians are even further behind, with a few honourable exceptions. There’s also a significant rump of the legal ‘establishment’ that may have to be dragged kicking and screaming into the brave new world where ‘reality’ and ‘cyberspace’ are increasingly integrated. It’s coming, though, and faster, I suspect, than even people like me imagine.
Thursday, 3 November 2011
Assange - keeping the issues separate
Yesterday, as most people interested in the subject know, Assange lost his appeal against extradition to Sweden to face accusations of sexual misconduct. He lost on all four counts of his appeal, and lost so convincingly that many commentators have suggested that his chances of success in one, final appeal to the Supreme Court are very slim indeed. He has not yet, at the time of writing, decided whether or not to make such an appeal.
It’s not the facts of what happened yesterday that matter to me, but the implications – and in particular, the reactions from so many people interested in Assange, in Wikileaks, in freedom of information, in combating secrecy, in the potential liberating power of the internet and so forth. For far too many of them, in my opinion, all these issues have been far too closely linked. We need to separate out the different issues. Julian Assange is not Wikileaks, and Wikileaks is not Julian Assange. Freedom of information and the fight against government and corporate secrecy and power is not dependent on Wikileaks, let alone on Julian Assange himself. We need to be able to separate the issues, and to think clearly about them. We need to be able to fight the right battles, not the wrong ones.
There are many people who, like me, are very much in support of the aims of Wikileaks, and who see the liberating potential of the internet as one of the most important things to emerge in recent times (without understating the reverse – the potential for the internet to be used for oppression and control, as so ably set out by Evgeny Morozov and others), but who, at the same time, support the concept of the rule of law, where that law is both appropriate and proportionate. I want open government, liberal government, accountable government – not no government at all. I don’t want personality cults, I don’t want anyone to be above the law, whether they are ‘good guys’ or ‘bad guys’. For me, that means I want Assange to face his accusers, and I want to be able to find out whether he is guilty or not.
Assange has already lost a lot of supporters in Sweden – as this Swedish commentator points out – by attacking both their legal system in relation to sexual offences and their apparent willingness to extradite easily to the US. For me, both of these accusations need to be looked at very carefully. Most people who have studied the way that sexual offences – and in particular accusations of rape – have been treated historically in the courts should recognise that women have generally got a very raw deal indeed. The way that the Swedish system has attempted at least to start to redress this imbalance is one that should be applauded and supported, not attacked or vilified, as some supporters of Assange have done – ‘the Saudi Arabia of Feminism’ is one of the descriptions put forward. Such attacks are not justified or in any way appropriate – at least not to me.
And is Sweden really more likely to extradite Assange to the US than we are in the UK? It seems unlikely, as Andy Greenberg’s report in Forbes suggests. The UK doesn’t have a good record in resisting such requests – and given all the publicity, it seems highly unlikely that the Swedish authorities would let such a thing happen on their watch. Moreover, the Swedish system would require dual criminality for an extradition to occur – that is, the offence committed has to be a crime both in the country seeking extradition and in Sweden itself. Assange’s ‘offences’ would not easily be shoehorned into that description. Either way, it’s hard to see an extradition occurring from Sweden – extradition from the UK seems far more likely.
There's one further point about the Swedish system - one that seems to have been missed by many of his supporters. It’s not really true that ‘no charges’ have been brought. As the judge pointed out in yesterday’s ruling, the Swedish system is different to that in the UK, and ‘charges’ are only brought at a very late stage, with a trial following almost immediately. The Swedish investigation has gone past the point at which, in a UK, US or Australian investigation, charges would have been brought. Implications that the opposite is true are really not helpful.
When I’ve suggested either that Assange was likely to get a fair trial in Sweden or that extradition to the US was unlikely, many people have shot me down, suggesting that there would be a stitch up between the Swedish and US authorities, that the charges were trumped up to start with – ultimately that there is a great conspiracy to bring Assange down. I don’t find the latter that difficult to believe – there are certainly some very bad things happening in relation to Wikileaks, and the approach used to try to squeeze the life out of them through the financial blockade is one of the most reprehensible and dangerous developments of recent years. However, if that conspiracy extends to ‘trumped up’ charges of rape and sexual assault on Assange, then for me that actually provides an opportunity, not a threat.
That’s where the rub comes. If Assange is guilty, then he should face the charges and receive appropriate punishment. If he’s innocent – and in particular if he’s the victim of a conspiracy-based set-up – then by facing the charges, by going through a legal process, he can prove that, and even expose the conspiracy. I’m not saying that I believe either way – neither I, nor the vast majority of either his supporters or his enemies know enough to know that. If he’s guilty, he wouldn’t be the first man to have abused his position of celebrity and power to behave inappropriately. If he’s innocent, he wouldn’t be the first innocent man accused in this way – or the first set up by his enemies.
For me, though, if you support the kinds of things that Wikileaks supports – exposing the truth, holding the powerful to account, moving towards a better, more open, more liberal future – you should want all this to be out in the open too. That means letting Assange go to Sweden, and it means refraining from using against the Swedish judicial system the very smear tactics that his opponents use against him. There are many, many things to be concerned about in relation to the treatment of Wikileaks, and indeed of Assange – but yesterday’s ruling, almost certainly correct from a legal perspective as bloggers like the excellent Adam Wagner have made clear, is not one of them.
Whether Assange is guilty or not, and whether he’s found guilty or not, supporters of freedom of information – and supporters of Wikileaks – should try not to tie his personal issues to the broader, more important issues that Wikileaks has raised. They’re not intrinsically and inextricably linked – and if we let them become so, we’re playing into the hands of the very groups that we should be opposing.
Tuesday, 25 October 2011
Search Engines, Search Engine Optimisation - and us!
Last week, Google announced that it was making SSL encryption the default on all searches for ‘signed-in’ users. They announced it as a move towards better security and privacy, and some people (myself included) saw it as a small but potentially significant step in the right direction. Almost as soon as the announcement was out, however, stories saying exactly the opposite began to appear: the blogosphere was abuzz. One of the more notable – one that was tweeted around what might loosely be described as ‘privacy circles’ – came in the Telegraph. “Google is selling your privacy at a price” was the scary headline.
So who was right? Was it a positive move for privacy, or another demonstration that Google doesn’t follow its own ‘don’t be evil’ mantra? Perhaps, when you look a little deeper, it was neither – and both Google and those who wrote stories like the one in the Telegraph have another agenda. Perhaps it’s not what happened with SSL, but that agenda, that we should be concerned about. The clue comes from looking a bit closer at who wrote the story in the Telegraph: Rob Jackson, who is described as ‘the MD of Elisa DBI, a digital business measurement and optimisation consultancy’. That is, he comes from the Search Engine Optimisation (SEO) industry. What’s happening here isn’t really much to do with privacy, as far as either Google or the SEO industry is concerned – it’s just another episode in the cat-and-mouse story between search engines and those who want to ‘manipulate’ them, a story that’s been going on since search engines first appeared. The question is, how do we, the ordinary citizens of cyberspace, fit into that story? Do we benefit from the ongoing conflict and tension between the two, a tension which brings about developments on both a technological and a business level – or are we, as some think is true of much of what goes on in cyberspace, just being used to make money by all concerned, with our privacy and autonomy neither here nor there?
What’s really going on?
As far as I can see, the most direct effect of the implementation of SSL encryption is that Google are preventing webmasters of sites reached through a Google search – and SEOs – from seeing the search term used to find them. Whether those webmasters – let alone the SEOs – have any kind of ‘right’ to know how they were found is an unanswered question, but for the webmasters it is an annoyance at least. For SEOs, on the other hand, it could be a major blow, as it undermines a fundamental part of the way that they work. That, it seems to me, is why they’re so incensed by the move – it makes their job far harder to do. Without having at least some knowledge of which search term produces which result, how can they help sites to be easier to find? How can they get your site higher up the search results, as they often claim to be able to do?
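For those who want the mechanics: before this change, the address of the Google results page – with the search term sitting in its ‘q’ parameter – was passed to the destination site in the HTTP referrer, where webmasters and SEO tools could read it. With searches served over SSL for signed-in users, that query no longer arrives. A rough Python sketch of the kind of parsing that stops working (the URLs are invented examples):
------------------------------------------
# Rough sketch of how a site could read the search term from the HTTP referrer
# before Google's move to SSL search. The URLs below are invented examples.
from urllib.parse import urlparse, parse_qs

def search_term_from_referrer(referrer):
    """Return the 'q' query parameter from a Google referrer, if present."""
    params = parse_qs(urlparse(referrer).query)
    terms = params.get("q")
    return terms[0] if terms else None

# Pre-SSL: the full results-page URL, search term included, reached the site.
print(search_term_from_referrer("http://www.google.com/search?q=privacy+law+blog"))
# -> 'privacy law blog'

# Post-SSL (signed-in users): the site sees only that the visit came from Google,
# with no query attached - Google Analytics, for example, reports it as '(not provided)'.
print(search_term_from_referrer("https://www.google.com/"))
# -> None
------------------------------------------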
I have little doubt that they’ll find a way – historically they always have. With every new development of search there’s been a corresponding development by those who wish to get their sites – or more directly the sites of their clients – higher up the lists, from choosing particular words on the sites to the use of metatags right up to today’s sophisticated SEOs. Still, it’s interesting that the story that they’ve been pushing out is that Google is ‘selling your privacy for a price’. That in itself is somewhat misleading. A more honest headline might have been:
‘Google is STILL selling your privacy for a price, but now they’re trying to stop us selling it too!’
Google has, in many ways, always been selling your private information – that’s how their business model works, using the terms you use to search in order to target their advertising – but with the SSL move they’ve made it harder for others to use that information too. They themselves will still know the search terms, and seem still to be ‘selling’ the terms to those using their AdWords system – but that’s what they’ve pretty much always done, even if many people have remained blissfully unaware that this was what was happening.
There’s another key difference between Google and the SEOs – from Google, we do at least get an excellent service in exchange for letting them use our search terms to make money. Anyone who remembers the way we used to navigate the web before Google should acknowledge that what they do makes our online lives much faster and easier. There’s an exchange going on, an exchange that is at least to an extent mutually beneficial. It's part of the symbiotic relationship between the people using the internet and the businesses who run the fundamental services of the internet that is described in my theory of The Symbiotic Web. With SEOs, the question is whether we – particularly in our capacity as searchers – are actually benefiting at all.
The business of Search Engine Optimisation
Who DOES benefit from the work of SEOs? Their claims are bold. As Rob Jackson puts it in the Telegraph article:
“One leading SEO professional told me that Google is essentially reverse-engineered by the SEO professionals around the world. If they were all to stop at once, Google wouldn't be able to find its nose.”
It’s a bold claim, but I suspect that people within Google would be amused rather than alarmed by the idea. Do we, as users, benefit from the operations of SEOs? On the face of it, it appears unlikely: searchers want to find the sites most relevant and useful to them, not the sites whose webmasters have employed the best SEOs to optimise their sites. Excellent and relevant sites and services get pushed down the search list by less good and less helpful sites that have used the most advanced and effective SEO techniques. And it’s our information, our search terms, that are being used by the SEOs.
There is, however, another side to the business, and one that’s growing in significance all the time. The idea that we are just ‘searchers’ looking round the web for information and interesting things is outdated, at least for a fair number of us. We also blog, we have our own private sites – and often our own ‘business’ sites. And we want our blogs to be read, our sites to be found – and how can this happen unless there is a way for them to be found?
SEOs might say that this is where they come in, this is where they can help us – and this might well be true to an extent. I for one, however, would like my sites to be judged on their merits, read because they’re worth reading and not just because I’ve employed a bit of a wizard to do the optimisation. I’d like search to be fair – I don’t want my services to be at a disadvantage either to those who have a commercial tie-in with Google or to those who are paying a better SEO than mine. I want a right to be found – when I want to be found.
Do I have a right like that? Should I have a right like that? Cases like the Foundem case have asked that question, but I don’t think we yet have an answer – or at least, the answers we do have have been inconclusive and barely heard. Perhaps we should be asking the question a bit more loudly.
Thursday, 20 October 2011
Goo goo google's tiny steps towards privacy...
Things seem to be hotting up in the battle for privacy on the internet. Over the last few days, Google have made three separate moves which look, on the surface at least, as though they're heading, finally, in the right direction as far as privacy is concerned. Each of the moves could have some significance, and each has some notable drawbacks - but to me at least, it's what lies behind them that really matters.
The first of the three moves was the announcement, on October 19th, that for signed-in users Google was now adding end-to-end (SSL) encryption for search. I'll leave the technical analysis of this to those much more technologically capable than me, but the essence of the move is that it adds a little security for users, making it harder to eavesdrop on a user's search activity - and meaning that when someone arrives at a website after following a Google search, the webmaster of the site arrived at will know that the person arrived via Google, but not the search term used to find them. There are limitations, of course, and Google themselves still gather and store the information for their own purposes, but it is still a step forward, albeit small. It does, however, only apply to 'signed in' users - which cynics might say is even more of a drawback, because by signing in a user is effectively consenting to the holding, use and aggregation of their data by Google. The Article 29 Working Party, the EU body responsible for overseeing the data protection regime, differentiates very clearly between signed-in and 'anonymous' (!) users of the service in terms of complying with consent requirements - Google would doubtless very much like more and more users to be signed in when they use the service, if only to head off any future legal conflicts. Nonetheless, the implementation of SSL should be seen as a positive step - the more that SSL is implemented in all aspects of the internet, the better. It's a step forward - but a small one.
There have also been suggestions (e.g. in this article in the Telegraph) that the move is motivated only by profit, and in particular to make Google's AdWords more effective at the expense of techniques used by Search Engine Optimisers, who with the new system will be less able to analyse and hence optimise. There is something to this, no doubt - but it must also be remembered first of all that pretty much every move of Google is motivated by profit, that's the nature of the beast, and secondly that a lot of the complaints (including the Telegraph article) come from those with a vested interest in the status quo - the Search Engine Optimisers themselves. Of course profit is the prime motivation - but if profit motives drive businesses to do more privacy-friendly things, so much the better. That, as will be discussed below, is one of the keys to improving things for privacy.
The second of the moves was the launch of Google's 'Good to know', a 'privacy resource centre', intended to help guide users in how to find out what's happening to their data, and to use tools to control that data use. Quite how effective it will be has yet to be seen - but it is an interesting move, particularly in terms of how Google is positioning itself in relation to privacy. It follows from the much quieter and less user-friendly Google Dashboard and Google AdPreferences, which technically gave users quite a lot of information and even some control, but were so hard to find that, to all intents and purposes, they appeared to exist only to satisfy the demands of privacy advocates, and not to do anything at all for ordinary users. 'Good to know' looks like a step forward, albeit a small and fairly insubstantial one.
The third move is the one that has sparked the most interest - the announcement by Google executive Vic Gundotra that social networking service Google+ will 'begin supporting pseudonyms and other types of identity.' The Electronic Frontier Foundation immediately claimed 'victory in the nymwars', suggesting that Google had 'surrendered'. Others have taken a very different view - as we shall see. The 'nymwars', as they've been dubbed, concern the current policies of both Facebook and Google to require a 'real' identity in order to maintain an account with them - a practice which many (myself definitely included) think is pernicious and goes against the very things which have made the internet such a success, as well as potentially putting many people at real risk in the real world. The Mexican blogger who was killed and decapitated by drugs cartels after posting on an anti-drugs website is perhaps the most dramatic example of this, but the number of people at risk from criminals, authoritarian governments and others is significant. To many (again, myself firmly included), the issue of who controls links between 'real' and 'online' identities is one of the most important on the internet in its current state. The 'nymwars' are of fundamental importance - and so, to me, is Google's announcement.
Some have greeted it with cynicism and anger. One blogger put it bluntly:
"Google's statement is obvious bullshit, and here's why. The way you "support" pseudonyms is as follows: Stop deleting peoples' accounts when you suspect that the name they are using is not their legal name.
There is no step 2."
The EFF's claim of 'victory' in the nymwars is perhaps overstated - but Google's move isn't entirely meaningless, nor is it necessarily cynical. Time will tell exactly what Google means by 'supporting pseudonyms', and whether it will really start to deal with the problems brought about by a blanket requirement for 'real' identities - but this isn't the first time that someone within Google has been thinking about these issues. Back in February, Google's 'Director of Privacy, Product and Engineering', Alma Whitten, wrote a post for the Google Policy Blog called 'The freedom to be who you want to be...', in which she said that Google recognised three kinds of user: 'unidentified', 'pseudonymous' and 'identified'. It's a good piece, and well worth a read, and it shows that these debates must have been going on within Google for a while, because the 'real identity' approach for Google+ has, at least in the past, been directly contrary to what Whitten was saying in that post.
That's one of the reasons I think Vic Gundotra's announcement is important - it suggests that the 'privacy friendly' people within Google are having more say, and perhaps even winning the arguments. When you combine it with the other two moves mentioned above, that seems even more likely. Google may be starting to position itself more firmly on the 'privacy' side of the fence, and using privacy to differentiate itself from the others in the field - most notably Facebook. To many people, privacy has often seemed like the last thing that Google would think about - that may be finally changing.
4Chan's Chris Poole, in a brilliant speech to the Web 2.0 conference on Monday, challenged Facebook, Google and others to start thinking of identity in a more complex, nuanced way, and suggested that Facebook and Google, with their focus on real identities, had got it fundamentally wrong. I agreed with almost everything he said - and so, I suspect, did some of the people at Google. The tiny steps we've seen over the last few days may be the start of their finding a way to make that understanding into something real. At the very least, Google seem to be making a point of saying so.
That, for me, is the final and most important point. While Google and Facebook, the two most important players in the field, stood side by side in agreement about the need for 'real' identities, it was hard to see a way to 'defeat' that concept, and it felt almost as though victory for the 'real' identities side was inevitable, regardless of all the problems that would entail, and regardless of the wailing and gnashing of teeth of the privacy advocates, hackers and so forth about how wrong it was. If the two monoliths no longer stand together, that victory seems far less assured. If we can persuade Google to make a point of privacy, and if that point becomes something that brings Google benefits, then we all could benefit in the end. The nymwars certainly aren't over, but there are signs that the 'good guys' might not be doomed to defeat.
Google is still a bit of a baby as far as privacy is concerned, making tiny steps but not really walking yet, let alone running. In my opinion, we need to encourage it to keep on making those tiny steps, applaud those steps, and it might eventually grow up...
UPDATED TO INCLUDE REFERENCE TO SEOS...
Tuesday, 18 October 2011
Privacy is personal...
My real interest in privacy - and specifically internet privacy - arose a little over ten years ago. Something happened to me that changed the way I thought about the whole issue - something personal, something direct. Up until that point I hadn't really thought much about privacy, though I'd been involved with the online world from a very early stage, setting up projects to provide rural communities with access to information, and trying to provide online education to housebound children in the mid 1990s - not exactly cutting edge stuff, but not too far from it. I'd also been involved in human rights work - most directly children's rights - but I'd never thought much about privacy. To me, then, just as to many people now, it just didn't feel important, particularly compared to the problems happening all over the world. 9/11 had just happened, and war was in the air.
I was living in New Zealand when the US invaded Afghanistan - and I was deeply concerned about the consequences of that action. I wrote about my concern in an email to a friend, also in New Zealand, and in that email I was at least partially critical of US foreign policy. I even mentioned Israel at one point. Some time over the next three hours, my email account became inaccessible.
At the time I was using a free email account - one of the big ones - that I had set up whilst in the US a few years earlier. A '.com' email account. As I was living in a very isolated part of New Zealand, this email account was one of my few links to the outside world. It had all my contacts' details, and all the messages I had sent and received for a long time - and I had been foolish enough not to keep written records elsewhere of a lot of the details. At first I thought it was just a blip, an accident - and I set up another email account and wrote to the service provider asking what had happened to my account, whether the password had been accidentally reset or something else. I was met with terse replies saying that the account had been terminated for a breach of contract terms. Friends told me to give up, and go with the new account - but I'm not that kind of person. I kept on badgering them, trying to find out what was going on. I hadn't yet thought that it might be connected with the email that I'd sent. Eventually I got a message saying that I had been using the email for commercial purposes, which is why it had been cancelled - which was absurd, as anyone who knew my financial position at the time would know. Then, about six months later, they reinstated the account, minus all the content, contacts and so forth.
Now of course I have no evidence to prove that the account was cancelled because of that particular email - it may indeed just have been a mistake, the account may even have been hacked into (though such things were much rarer in those days), but even the suspicion was enough to disturb me enormously, and set me on the path that I'm still on today. I started asking how it could have happened, what happens to emails, how easily they can be read, how my privacy might have been invaded. The more I investigated, the more I uncovered, the more interested I became - and it ended up changing my whole life. The perceived invasion of privacy - in a sense it doesn't even matter if it was real - was so personal that it cut me to the quick.
Back then I had had very little to do with the law - my degree was in mathematics, I qualified as an accountant and worked with technology, not the law. Now, as a result of following this path, I'm a lecturer in a law school at a good university, have published research and submitted a PhD on the subject of data privacy - and it seems even more relevant than it did ten years ago, as the online world has expanded and become more and more intrinsically linked with everything we do. Invasions of privacy do matter - whatever the likes of Mark Zuckerberg might think - and they matter because they're deeply personal, and touch the parts of us that we really care about.
Friday, 14 October 2011
Business and Privacy: Evidence and Assumptions?
I came across a couple of stories yesterday that at first glance appeared unconnected, dealing with different aspects of the current privacy debates concerning the internet. One comes from one side of the Atlantic, the other from the other. One deals with the 'fight' against piracy, the other with the current favourite of the online advertising industry, behavioural targeting. Very different issues - but they do have something in common: an inherent assumption that business success should take precedence over individual rights and freedoms.
The first issue was the revelation, through a Freedom of Information Request by the admirable Open Rights Group, that the Department of Culture, Media and Sport had no evidence to support their strategies to reduce the infringement of copyright by websites - you can see their report on the issue here.
The second came from my following of the House Energy and Commerce Committee hearing in Washington, about consumer privacy and online behavioural advertising - a hearing at least on the surface intended to consider consumer concerns, but which by the sound of it had a lot more to do with industry putting their case to avoid regulation. I followed on twitter, and remember one particular call from a regular and respected tweeter from the US who demanded evidence before regulation is considered. Specifically, he wanted evidence as to how much of the advertising economy depended on behavioural targeting - the underlying suggestion being, presumably, that we shouldn't regulate if it would have too significant an impact on revenue streams.
There are two different ways to look at the two stories. You can look at them as a reflection of the different attitudes to regulation on the two sides of the Atlantic - in England we're rushing to regulate, while in the US regulation is to be avoided unless absolutely necessary. Alternatively, however, you can look at them as a reflection of the way that business needs are set above individual rights and freedoms.
Copyright and piracy....
The Open Rights Group's request was in relation to the proposals in the Digital Economy Act, but that Act is just one of many measures introduced over the years to combat 'piracy', although the evidence in support of any of them has generally been conspicuous by its absence. That applies both to evidence to suggest that the problem is as bad as the industry suggests and to the efficacy of the measures being proposed to combat it. Does piracy cause a massive loss of revenue to rights holders? Perhaps, but the suggestion made over the years that every illegally downloaded song is a lost sale is far from convincing, and the idea that listening to something illegally might even lead to further legal sales seems to have merit too. The massive success of iTunes suggests that carrots rather than sticks might be more effective - indeed, recent reports from Sweden showing that piracy reduced as Spotify was introduced add weight to this idea.
The Open Rights Group's FOI request was about the effectiveness of the proposals - and the DCMS effectively acknowledged that they have no evidence about it. So we have proposals for measures about which there is no evidence, to address an issue about which evidence is scanty to say the least... and yet on that basis we're willing to put restrictions on individuals' freedoms, potentially apply censorship, and even cut off people's internet access as a result. That same internet access that is increasingly regarded as a human right.
The Digital Economy Act is one thing, but there's something else looming on the horizon of even more concern: the Anti-Counterfeiting Trade Agreement (ACTA), whose measures are potentially even more draconian than those in the DEA, and whose scope is even more all-encompassing. The US has already signed it - somewhat against the suggestion that the US prefers not to regulate where possible - and the EU may well sign it soon, though it still needs to pass through the European Parliament, and lobbying of MEPs is underway on both sides.
Behavioural advertising...
Legislation on behavioural advertising has already taken place in Europe, with the notorious 'Cookies Directive', about which I've written before - but the implementation, enforcement and acceptance of that directive has proved troublesome from the outset, and whether it ends up being at all meaningful has yet to be seen. Legislation in the US is what is currently under discussion, and what is being keenly resisted by the advertising industry and others. 'Show us the evidence' is the call - and until that evidence is shown, advertisers should be able to do whatever they want.
Evidence in relation to privacy is a contentious issue in lots of ways. Demonstrating 'harm' from an invasion of privacy is difficult, partly because each individual invasion isn't likely to be significant - particularly in respect of mundane tracking of websites browsed and so forth - and partly because the 'harm' is generally intangible, and far from easily turned into something easily quantifiable. Some people suggest that we should treat our personal information like a commodity, akin in some ways to intellectual property, but for me that fails to capture the real essence of privacy. I don't want to put a 'value' on my personal data, any more than I want to put a value on each of my fingers, or on my relationships with my friends and family. It's something different, and needs protecting as something different. I shouldn't need to prove the 'harm' done by that data being at risk - the loss of it, or loss of control over it, is a harm in itself.
That isn't all - not only does there appear to be an expectation that we should prove harm, but also an expectation that, even if there IS harm, we've got to prove that we wouldn't be damaging the advertisers' businesses too much. If their businesses would be harmed too much, the argument goes, we shouldn't put regulations in place....
Two different situations - but the same assumptions
In the copyright scenario, we're having our freedom restricted and our privacy invaded without real evidence to support what's happening. In the behavioural advertising scenario, we're having our privacy invaded and we're being asked to prove that there's a problem before any restrictions are placed - and, what's more, we're being asked to prove that we wouldn't damage business too much.
In both cases, it's the individuals who lose out. Business takes priority, and individuals' rights, particularly in respect of privacy, are overridden. Where businesses perceive there are problems (as in the copyright scenario), they're not asked for proof - but where individuals perceive there are problems, they're asked for proof in ways that are inappropriate and unattainable. Shouldn't the situation be exactly the other way around? Shouldn't individuals' rights be considered above the business models of corporations? Shouldn't the burden of proof work in favour of individuals against businesses, rather than the other way around? Of course that's a difficult argument to make in economically troubled times - but it's an argument that in my opinion needs to be made, and made strongly.
Tuesday, 11 October 2011
Privacy, Parenting and Porn
One of the stories doing the media rounds today concerned the latest pronouncements from the Prime Minister about porn on the internet. Two of my most commonly used news sources, the BBC and the Guardian, had very different takes on it. The BBC suggested that internet providers were offering parents an opportunity to block porn (and 'opt-in' to website blocking) while the Guardian took it exactly the other way - suggesting that users would have to opt out of the blocking - or, to be more direct, to 'opt-in' to being able to receive porn.
Fool that I am, I fell for the Guardian's version of the story (as did a lot of people, from the buzz on twitter) which seems now to have been thoroughly debunked, with the main ISPs saying that the new system would make no difference, and bloggers like the excellent David Meyer of ZDNet making it clear that the BBC was a lot closer to the truth. The idea would be that parents would be given the choice as to whether to accept the filtering/blocking system, which, on the face of it, seems much more sensible.
Even so, the whole thing sets off a series of alarm bells. Why does this sort of thing seem worrying? The first angle that bothers me is the censorship one - who is it that decides what is filtered and what is not? Where do the boundaries lie? One person's porn is another person's art - and standards are constantly changing. Cultural and religious attitudes all come into play. Now I'm not an expert in this area - and there are plenty of people who have written and said a great deal about it, far more eloquently than me - but at the very least it appears clear that there are no universal standards, and that decisions as to what should or should not be put on 'block lists' need to be made very carefully, with transparency about the process and accountability from those who make the decisions. There needs to be a proper notification and appeals process - because decisions made can have a huge impact. None of that appears true about most 'porn-blocking' systems, including the UK's Internet Watch Foundation, often very misleadingly portrayed as an example of how this kind of thing should be done.
The censorship side of things, however, is not the angle that interests me the most. Two others are of far more interest: the parenting angle, and the privacy angle. As a father myself, of course I want to protect my child - but children need independence and privacy, and need to learn how to protect themselves. The more we try to wrap them in cotton wool, to make their world risk-free, the less able they are to learn how to judge for themselves, and to protect themselves. If I expect technology, the prime minister, the Internet Watch Foundation to do all the work for me, not only am I abdicating responsibility as a parent but I'm denying my child the opportunity to learn and to develop. The existence of schemes like the one planned could work both ways at once: it could make parents think that their parenting job is done for them, and it could also reduce children's chances to learn to discriminate, to decide, and to develop their moral judgment....
....but that is, of course, a very personal view. Other parents might view it very differently - what we need is some kind of balance, and, as noted above, proper transparency and accountability.
The other angle is that of privacy. Systems like this have huge potential impacts on privacy, in many different ways. One, however, is of particular concern to me. First of all, suppose the Guardian was right, and you had to 'opt-in' to be able to view the 'uncensored internet'. That would create a database of people who might be considered 'people who want to watch porn'. How long before that becomes something that can be searched when looking for potential sex offenders? If I want an uncensored internet, does that make me a potential paedophile? Now the Guardian appears to be wrong, and instead we're going to have to opt-in to accept the filtering system - so there won't be a list of people who want to watch porn, instead a list of people who want to block porn. It wouldn't take much work, however, on the customer database of a participating ISP to select all those users who had the option to choose the blocking system, and didn't take it. Again, you have a database of people who, if looked at from this perspective, want to watch porn....
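Just to show how little work that selection would take, here is a toy sketch in Python. The 'customer table' and its field names are entirely invented for illustration - I'm not suggesting any ISP holds exactly this data in exactly this form:

```python
# A toy, wholly hypothetical illustration: given a customer table that
# records who was offered the filtering option and who accepted it,
# deriving a list of those who declined is a one-line selection.
customers = [
    {"id": 1, "offered_filter": True,  "accepted_filter": True},
    {"id": 2, "offered_filter": True,  "accepted_filter": False},
    {"id": 3, "offered_filter": False, "accepted_filter": False},  # never offered
]

declined_filter = [
    c["id"] for c in customers
    if c["offered_filter"] and not c["accepted_filter"]
]

# Customer 2 ends up on a de facto list of 'people who chose not to block porn'.
print(declined_filter)  # -> [2]
```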
Now maybe I'm overreacting, maybe I'm thinking too much about what might happen rather than what will happen - but slippery slopes and function creep are far from rare in this kind of a field. I always think of the words of Bruce Schneier, on a related subject:
"It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state"
Now I'm not suggesting that this kind of thing would work like this - but the more 'lists' and 'databases' we have of people who don't do what's 'expected' of them, or what society deems 'normal', the more opportunities we create for potential abuse. We should be very careful...
Monday, 10 October 2011
Privacy - and Occupy Wall Street?
One of the tweeters I follow, the estimable @privacycamp, asked a question on twitter last night: is there a privacy take on 'Occupy Wall Street'? I immediately fired off a quick response - of course there is - but it started me off on a train of thought that's still chugging along. That's brought about this somewhat rambling blog-post, a bit different from anything I've done before - and I'd like to stress that even more than usual these are my personal musings!
Many people in the UK may not even have noticed Occupy Wall Street - it certainly hasn't had a lot of mainstream media coverage over here - but it seems to me to be something worthy of a lot of attention. A large number of people - exactly how many is difficult to be sure about - have been 'occupying' Liberty Square near Wall Street, the financial heart of New York - indeed, some might call it the financial heart of the modern capitalist world. Precisely what they're protesting against is hard to pin down but not at all hard to understand. As it's described on occupywallst.org, it is a:
"leaderless resistance movement with people of many colors, genders and political persuasions. The one thing we all have in common is that We Are The 99% that will no longer tolerate the greed and corruption of the 1%."
That isn't in any sense an 'official' definition - because there's nothing 'official' about occupy wall street. The movement has spread - the Guardian, one of the UK newspapers to give it proper coverage, talks about it reaching 70 US cities - and has lasted over three weeks so far, with little sign of flagging despite poor media coverage, strong-arm police tactics and a perceived lack of focus.
So what has this got to do with privacy? Or, perhaps more pertinently, what has this kind of a struggle got in common with the struggle for privacy? Why do people like me, whose work is concerned with internet privacy, find ourselves instinctively both supporting and admiring the people occupying Wall Street? Well, the two struggles do have a lot more in common than might appear at first glance. They're both struggles for the 'ordinary' people - for the 'little' people - against a huge and often seemingly irresistible 'machine'. Where Occupy Wall Street is faced by an array of banks with huge political and financial influence, internet privacy advocates are faced by the monoliths of the internet industry - Google, Facebook, Amazon, Microsoft, Apple etc - whose political and financial influence is beginning to rival that of the banks. Both Occupy Wall Street and internet privacy advocates are faced by systems and structures that seem to have no alternatives, and institutions which appear so entrenched as to be impossible to stand against.
Further to that, both the banks and the big players of the internet can claim with justification that over the years they've provided huge benefits to all of us, and that we wouldn't be enjoying the pleasures and benefits of our modern society but for their innovation and enterprise - I'm writing this blog on a system owned by Google, on a computer made by Apple, and bought through a credit card provided by one of the big banks. Does this mean, however, that I should accept everything that those big players - either financial or technological - give me, and accept it uncritically? Does it mean that the people occupying Wall Street should shuffle off home and accept that Wall Street, warts and all, cannot be stood up against - and should be supported, not challenged? I don't think so.
Of course there are ways in which the two struggles are radically different. The damage done to peoples' lives by the financial crisis which is the core of the protest against Wall Street is huge - far greater than the material damage done by all the privacy-intrusive practices performed on the internet. People have lost their livelihoods, their houses, their families - perhaps even their futures - as a result. The damage from privacy intrusions is less material, harder to pin down, harder to see, harder to prove. It is, however, very important - and is likely to become more important in the future. Ultimately it has an effect on our autonomy - and that's where the real parallels with Occupy Wall Street lie. Both movements are about people wanting more control over their lives. Both are about people standing up and saying 'enough is enough,' and 'we don't want to take this any more'.
Occupy Wall Street may well fizzle out soon. I hope not - because I'd love to see it have a lasting influence, and help change the political landscape. The odds are stacked against them in more ways than I can count - but I didn't think they'd last as long as they have, so who knows what will happen? The struggle for privacy faces qualitatively different challenges, but at times it seems as though the odds are stacked just as much in favour of those who would like the whole idea of privacy to be abandoned. Even if that is the case, it's still a fight that I believe needs fighting.
Monday, 3 October 2011
The privacy race to the bottom
I tend to be a ‘glass-half-full’ sort of person, seeing the positive side of any problem. In terms of privacy, however, that has been very hard recently. For some reason, most of the ‘big guns’ of the internet world have chosen the last few weeks to try to out-do each other in their privacy-intrusiveness. One after the other, Google, Facebook and Amazon have made moves that have had such huge implications for privacy that it’s hard to keep positive. It feels like a massive privacy 'race to the bottom'.
Taking Google first, it wasn’t exactly that any particular new service or product hit privacy, but more the sense of what lies ahead that was chilling, with Google’s VP of Products, Bradley Horowitz, talking about how ‘Google+ was Google itself’. As Horowitz put it in an interview for Wired last week:
"But Google+ is Google itself. We're extending it across all that we do — search, ads, Chrome, Android, Maps, YouTube — so that each of those services contributes to our understanding of who you are."
Our understanding of who you are. Hmmm. The privacy alarm bells are ringing, and ringing loud. Lots of questions arise, most directly to do with consent, understanding and choice. Do people using Google Maps, or browsing with Chrome, or even using search, know, understand and accept that their actions are being used to build up profiles so that Google can understand 'who they are'? Do they have any choice about whether their data is gathered or used, or how or whether their profile is being generated? The assumption seems to be that they just 'want' it, and will appreciate it when it happens.
Mind you, Facebook are doing their very best to beat Google in the anti-privacy race. The recent upgrade announced by Facebook has had massive coverage, not least for its privacy intrusiveness, from Timeline to Open Graph. Once again it appears that Mark Zuckerberg is making his old assumption that privacy is no longer a social norm, and that we all want to be more open and share everything. Effectively, he seems to be saying that privacy is dead - and if it isn't quite yet, he'll apply the coup-de-grace.
That, however, is only part of the story. The other side is a bit less expected, and a bit more sinister. Thanks to the work of Australian hacker/blogger Nik Cubrilovic, it was revealed that Facebook's cookies 'might' be continuing to track us after we log out of Facebook. At first Facebook denied this; then they claimed it was a glitch and did something to change it. All the time, Facebook tried to portray themselves as innocent - even as the 'good guys' in the story. A Facebook engineer – identifying himself as staffer Gregg Stefancik – said that “our cookies aren’t used for tracking”, and that “most of the cookies you highlight have benign names and values”. He went on to make what seemed to be a very reassuring suggestion, quoted in The Register:
"Generally, unlike other major internet companies, we have no interest in tracking people."
How, then, does this square with the discovery that a couple of weeks ago Facebook appears to have applied for a patent to do precisely that? The patent itself is chilling reading. Amongst the gems in the abstract is the following:
"The method additionally includes receiving one or more communications from a third-party website having a different domain than the social network system, each message communicating an action taken by a user of the social networking system on the third-party website"
Not only do they want to track us, but they don't want us to know about it, telling us they have no interest in tracking.
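For anyone wondering how that kind of tracking works in practice, here is a conceptual sketch of the mechanism the patent abstract seems to describe: a third-party page calls back to the social network, the browser attaches the user's social-network cookie, and the network can log the action against that user. It's purely illustrative - this is not Facebook's code, and all the names are invented:

```python
# Purely illustrative sketch of 'like button'-style cross-site reporting,
# with invented names - not Facebook's actual implementation.
from datetime import datetime, timezone

action_log = []  # (user_id, third_party_site, action, timestamp)

def handle_widget_request(cookies: dict, referring_site: str, action: str) -> None:
    """Simulate the social network's side of a widget embedded on another site."""
    user_id = cookies.get("session_user")  # set when the user logged in to the network
    if user_id is None:
        return  # a genuinely cookie-free browser can't be linked to a profile
    action_log.append((user_id, referring_site, action,
                       datetime.now(timezone.utc)))

# A user who is (or recently was) logged in to the network visits an
# unrelated news site that embeds the widget; the request carries their cookie:
handle_widget_request({"session_user": "user_12345"},
                      "news.example.com", "viewed_article")
print(action_log)
```

The crucial point is that the user never visits the social network at all in that exchange - the embedded widget does it for them.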
OK, so that's Google and Facebook, with Facebook probably edging slightly ahead in their privacy-intrusiveness. But who is this coming fast on the outside? Another big gun, but a somewhat unexpected one: Amazon. The new Kindle Fire, a very sexy bit of kit, takes the Kindle and transforms the screen into something beautiful and colourful. It also adds a web-browsing capability, using a new browser Amazon calls Silk. All fine, so far, but the kicker is that Silk appears to track your every action on the web and pass it on to Amazon. Take that, Google, take that Facebook! Could Amazon beat both of them in the race to the bottom? They're certainly giving it a go.
All pretty depressing reading for those of us interested in privacy. And the trio could easily be joined by another of the big guns when Apple launches its new 'iCloud' service, due this week. I can't say I'm expecting something very positive from a service which might put all your content in the cloud....
...and yet, somehow, I DO remain positive. Though the big guns all seem to be racing the same way, there has at least been a serious outcry about most of it, and it's making headline news not just in what might loosely be described as the 'geek press'. Facebook seemed alarmed enough by Nik Cubrilovic's discoveries to react swiftly, even if a touch disingenuously. We all need to keep talking about this, we all need to keep challenging the assumption that privacy doesn't matter. We need to somehow start to shift the debate, to move things so that companies compete to be the most privacy-friendly rather than the most privacy-intrusive. If we don't, there's only one outcome. The only people who really lose in the privacy race-to-the-bottom are us....
Friday, 30 September 2011
Romanian re-Phorm-ation?
News has emerged this week that Phorm, the online-behavioural-advertising company about which a great deal has been written (including by me), has targeted a new country for its latest attempt to track internet users’ every move: Romania.
Having been kicked out of the UK after a huge struggle a couple of years ago – a struggle from which civil society came out with a lot of credit, not least the Foundation for Information Policy Research and in particular the work of Richard Clayton and Nicholas Böhm, while the UK government came out with a severe amount of egg on its face – Phorm has tried to relaunch its services in a number of other countries. South Korea was the first, then Brazil, both without much sign of success, before the current efforts in Romania.
As a reminder, what Phorm’s services essentially do is ‘intercept’ the instructions a user sends as he or she browses the web – every site visited, every link followed, every click – and use that information to build up a profile of the user, mostly to enable it to target advertising as accurately as possible but potentially (at least according to the publicity put out by Phorm during their attempts to launch in the UK) to tailor content. In a lot of ways Phorm’s system is only a logical extension of what many other advertisers on the web do – almost everyone’s at it, from Google to Facebook to Amazon (particularly if the stories emerging about the Kindle Fire are true). There are significant differences, however, from even the most privacy-invasive services offered by the others. The most important of these is that it covers ALL your activity on the web: even the latest furore about Facebook tracking you when you’re logged out didn’t get close to that, since it only potentially tracks you when you visit sites with Facebook links or ‘like’ buttons.
The second difference, almost as important, is that in exchange for these immense invasions of privacy, Phorm offers you nothing except better-targeted advertising – something that few people would value very much. All the others give you something quite significant in exchange for gathering your data: Google offers you very effective search engines, mapping systems, blogging services (including the one on which this blog is hosted) and much more; Facebook provides a social networking service of huge functionality; and Amazon’s Kindle is a lovely bit of kit for a remarkably small price, one that many people enjoy. There’s a ‘bargain’ going on for your data, even if few people fully grasp that the exchange is taking place. With Phorm there’s nothing – essentially, they just spy on you for their own benefit, and give you nothing in return. Indeed, they might even harm your experience, as the ‘interception’ process can potentially slow down your web browsing.
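To see why that interception matters so much, here is a deliberately simplified, hypothetical sketch of what 'building a profile' from a user's intercepted browsing might look like. The categories, keywords and scoring are invented; Phorm's real system is doubtless far more sophisticated, but the principle - every URL you request quietly feeds a per-user interest profile used to target adverts - is the same:

```typescript
// Hypothetical sketch of profile-building from intercepted browsing.
// Categories, keywords and scoring are invented for illustration only.

const INTEREST_KEYWORDS: Record<string, string[]> = {
  travel:   ["flight", "hotel", "holiday"],
  finance:  ["loan", "mortgage", "savings"],
  motoring: ["car", "tyre", "breakdown"],
};

type Profile = Map<string, number>;   // interest category -> score

// Called once for every page request the ISP-level interceptor sees
// for a given user, nudging that user's interest profile.
function updateProfile(profile: Profile, interceptedUrl: string): void {
  const url = interceptedUrl.toLowerCase();
  for (const [category, keywords] of Object.entries(INTEREST_KEYWORDS)) {
    if (keywords.some((kw) => url.includes(kw))) {
      profile.set(category, (profile.get(category) ?? 0) + 1);
    }
  }
}

const profile: Profile = new Map();
[
  "https://example.com/cheap-flight-deals",
  "https://example.org/compare-mortgage-rates",
].forEach((u) => updateProfile(profile, u));
// profile is now { travel: 1, finance: 1 } - ready for ad targeting.
```

The sketch only scores URLs, but a system sitting inside the ISP sees the full request stream, so nothing you do on the web is out of its reach.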
Phorm failed in the UK, and I for one am very glad that they did. I hope the same happens in Romania, unless they’ve changed their practices significantly. The signs so far, sketchy though they are, do not suggest that this is very likely. Just as they did in the UK, they’ve done a deal with one of the big ISPs, Romtelecom, a partly state-owned telecoms and internet company, and are looking for business partners. Their product appears to be pretty much the same as it was before, though they do at least mention the word ‘choose’ in terms of customer actions. That ‘choice’ does not seem to amount to much in reality, and indeed there seems to be another twist: they’ve added flash cookies to the system, with the express intention of using them to re-spawn their own status cookies in case you ‘accidentally’ delete them. The precise technical details have not yet emerged: I am looking forward to finding out whether they’ve learned the lessons of their previous failures and decided to do something that actually respects individual users and gives them some kind of real consent process. I’m not exactly waiting with bated breath…
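For those wondering what 'using flash cookies to re-spawn status cookies' might look like in practice, here is a conceptual sketch of the re-spawn pattern. Phorm's version reportedly uses a Flash Local Shared Object as the backup store; below, a generic backup store stands in for it, and the cookie name is invented - the point is the logic, not the storage technology:

```typescript
// Conceptual sketch of cookie "re-spawning". A generic backup store stands in
// for the Flash Local Shared Object reportedly used in practice; the cookie
// name is invented for illustration.

interface BackupStore {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function respawnStatusCookie(store: BackupStore): void {
  const name = "tracking_status";            // hypothetical cookie name
  const currentValue = readCookie(name);

  if (currentValue !== null) {
    // Cookie still present: refresh the backup copy.
    store.set(name, currentValue);
  } else {
    // Cookie deleted by the user: quietly restore it from the backup,
    // undoing their attempt to opt out of tracking.
    const backup = store.get(name);
    if (backup !== null) {
      document.cookie =
        `${name}=${encodeURIComponent(backup)}; path=/; max-age=31536000`;
    }
  }
}
```

The practical consequence is that simply clearing your browser's cookies is not, on its own, an effective way of opting out of a system built this way.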
I have a personal connection with Romania – my wife’s Romanian – and that country has experienced far too much surveillance and invasion of privacy in the past. Indeed, Romania was one of the first countries to hit out against the privacy-invasive Data Retention Directive, its Constitutional Court striking down the law implementing the Directive as unconstitutional in 2009. I am fully confident that Romanians will find a way to fight against this latest intrusion into their privacy. Phorm may have chosen Romania as a ‘soft target’. I suspect they’ll find the reality quite different, unless they’ve seriously changed their spots….