Tuesday, 25 October 2011

Search Engines, Search Engine Optimisation - and us!


Last week, Google announced that it was making SSL encryption the default on all searches for ‘signed in’ people. They announced it as a move towards better security and privacy, and some people (myself included) saw it as a small but potentially significant step in the right direction. Almost as soon as the announcement was out, however, stories saying exactly the opposite began to appear: the blogosphere was abuzz. One of the more notable – one that was tweeted around what might loosely be described as ‘privacy circles’ – came in the Telegraph. “Google is selling your privacy at a price” was the scary headline.

So who was right? Was it a positive move for privacy, or another demonstration that Google doesn’t follow its own mantra about doing evil? Perhaps, when you look a little deeper, it was neither – and both Google and those who wrote stories like that in the Telegraph have another agenda. Perhaps it’s not what happened with SSL, but that agenda that we should be concerned about. The clue comes from looking a bit closer at who wrote the story in the Telegraph: Rob Jackson, who is described as ‘the MD of Elisa DBI, a digital business measurement and optimisation consultancy’. That is, he comes from the Search Engine Optimisation (SEO) industry. What’s happening here isn’t really much to do with privacy, as far as either Google or the SEO industry are concerned – it’s just another episode in the cat-and-mouse story between search engines and those who want to ‘manipulate’ them, a story that’s been going on since search engines first appeared. The question is, how do we, the ordinary citizens of cyberspace, fit into that story? Do we benefit from the ongoing conflict and tension between the two, a tension which brings about developments on both a technological and a business level – or are we, as some think is true of much of what goes on in cyberspace, just being used to make money by all concerned, our privacy and autonomy neither here nor there?

What’s really going on?

As far as I can see, the most direct implication of the implementation of SSL encryption is that Google are preventing webmasters of sites reached through a Google search – and SEOs – from seeing the search term used to find them. Whether those webmasters – let alone the SEOs – have any kind of ‘right’ to know how they were found is an unanswered question, but for the webmasters it is an annoyance at least. For SEOs, on the other hand, it could be a major blow, as it undermines a fundamental part of the way that they work. That, it seems to me, is why they’re so incensed by the move – it makes their job far harder to do. Without having at least some knowledge of which search term produces which result, how can they help sites to be easier to find? How can they get your site higher on the search results, as they often claim to be able to do?

I have little doubt that they’ll find a way – historically they always have. With every new development of search there’s been a corresponding development by those who wish to get their sites – or more directly the sites of their clients – higher up the lists, from choosing particular words on the sites to the use of metatags right up to today’s sophisticated SEOs. Still, it’s interesting that the story that they’ve been pushing out is that Google is ‘selling your privacy for a price’. That in itself is somewhat misleading. A more honest headline might have been:

‘Google is STILL selling your privacy for a price, but now they’re trying to stop us selling it too!’

Google has, in many ways, always been selling your private information – that’s how their business model works, using the terms you use to search in order to target their advertising – but with the SSL move they’ve made it harder for others to use that information too. They themselves will still know the search terms, and seem to still be ‘selling’ the terms to those using their AdWords system – but that’s what they’ve pretty much always done, even if many people have remained blissfully unaware that this was what was happening.

There’s another key difference between Google and the SEOs – from Google, we do at least get an excellent service in exchange for letting them use our search terms to make money. Anyone who remembers the way we used to navigate the web before Google should acknowledge that what they do makes our online lives much faster and easier. There’s an exchange going on, an exchange that is at least to an extent mutually beneficial. It's part of the symbiotic relationship between the people using the internet and the businesses who run the fundamental services of the internet that is described in my theory of The Symbiotic Web. With SEOs, the question is whether we – particularly in our capacity as searchers – are actually benefiting at all.

The business of Search Engine Optimisation

Who DOES benefit from the work of SEOs? Their claims are bold. As Rob Jackson puts it in the Telegraph article:

“One leading SEO professional told me that Google is essentially reverse-engineered by the SEO professionals around the world. If they were all to stop at once, Google wouldn't be able to find its nose.”

It’s a bold claim, but I suspect that people within Google would be amused rather than alarmed by the idea. Do we, as users, benefit from the operations of SEOs? On the face of it, it appears unlikely: searchers want to find the sites most relevant and useful to them, not the sites whose webmasters have employed the best SEOs to optimise their sites. Excellent and relevant sites and services get pushed down the search list by less good and less helpful sites who have used the most advanced and effective SEO techniques. And it’s our information, our search terms, that are being used by the SEOs.

There is, however, another side to the business, and one that’s growing in significance all the time. The idea that we are just ‘searchers’ looking round the web for information and interesting things is outdated, at least for a fair number of us. We also blog, we have our own private sites – and often our own ‘business’ sites. And we want our blogs to be read, our sites to be found – and how can this happen unless there is a way for them to be found?

SEOs might say that this is where they come in, this is where they can help us – and this might well be true to an extent. I for one, however, would like my sites to be judged on their merits, read because they’re worth reading and not just because I’ve employed a bit of a wizard to do the optimisation. I’d like search to be fair – I don’t want my services to be at a disadvantage either to those who have a commercial tie-in with Google or to those who are paying a better SEO than mine. I want a right to be found – when I want to be found.

Do I have a right like that? Should I have a right like that? Cases like the Foundem case have asked that, but I don’t think we yet have an answer – or at least, the answers we do have are inconclusive, and have barely been heard. Perhaps we should be asking it a bit more loudly.

Thursday, 20 October 2011

Goo goo google's tiny steps towards privacy...

Things seem to be hotting up in the battle for privacy on the internet. Over the last few days, Google have made three separate moves which look, on the surface at least, as though they're heading, finally, in the right direction as far as privacy is concerned. Each of the moves could have some significance, and each has some notable drawbacks - but to me at least, it's what lies behind them that really matters.

The first of the three moves was the announcement on October 19th that, for signed in users, Google was now adding end-to-end (SSL) encryption for search. I'll leave the technical analysis of this to those much more technologically capable than me, but the essence of the move is that it adds a little security for users, making it harder to eavesdrop on a user's search activities - and meaning that when someone arrives at a website after following a Google search, the webmaster of the site arrived at will know that the person arrived via Google, but not the search term used to find them. There are limitations, of course, and Google themselves still gather and store the information for their own purposes, but it is still a step forward, albeit small. It does, however, only apply to 'signed in' users - which cynics might say is even more of a drawback, because by signing in a user is effectively consenting to the holding, use and aggregation of their data by Google. The Article 29 Working Party, the EU body responsible for overseeing the data protection regime, differentiates very clearly between signed-in and 'anonymous' (!) users of the service in terms of complying with consent requirements - Google would doubtless very much like more and more users to be signed in when they use the service, if only to head off any future legal conflicts. Nonetheless, the implementation of SSL should be seen as a positive step - the more that SSL is implemented in all aspects of the internet, the better. It's a step forward - but a small one.
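To make the mechanics concrete, here's a minimal, purely illustrative sketch of what webmasters lose. Before the change, the browser's Referer header typically carried the full Google search URL, query and all, so a site could parse the search term straight out of it; after the change, the query is gone. The function name and example URLs below are my own invention for illustration - this isn't code that Google or any analytics package actually ships:

```python
from urllib.parse import urlparse, parse_qs

def search_term_from_referer(referer):
    """Return the Google search term a visitor used, if the
    Referer header still carries it; None otherwise."""
    if not referer:
        return None
    parsed = urlparse(referer)
    if "google." not in parsed.netloc:
        return None
    # Before the SSL change, the query string held the term under 'q'.
    terms = parse_qs(parsed.query).get("q")
    return terms[0] if terms else None

# Old-style referer: the webmaster can recover the search term.
print(search_term_from_referer("http://www.google.com/search?q=privacy+law"))
# New-style referer: the query is stripped, so nothing is recoverable.
print(search_term_from_referer("https://www.google.com/"))
```

A further wrinkle, for what it's worth: browsers generally omit the Referer header altogether when following a link from an HTTPS page to a plain HTTP one, so a non-SSL site may see no referrer at all, not even the fact that the visit came from Google.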

There have also been suggestions (e.g. in this article in the Telegraph) that the move is motivated only by profit, and in particular to make Google's AdWords more effective at the expense of techniques used by Search Engine Optimisers, who with the new system will be less able to analyse and hence optimise. There is something to this, no doubt - but it must also be remembered first of all that pretty much every move of Google is motivated by profit, that's the nature of the beast, and secondly that a lot of the complaints (including the Telegraph article) come from those with a vested interest in the status quo - the Search Engine Optimisers themselves. Of course profit is the prime motivation - but if profit motives drive businesses to do more privacy-friendly things, so much the better. That, as will be discussed below, is one of the keys to improving things for privacy.

The second of the moves was the launch of Google's 'Good to know', a 'privacy resource centre', intended to help guide users in how to find out what's happening to their data, and to use tools to control that data use. Quite how effective it will be has yet to be seen - but it is an interesting move, particularly in terms of how Google is positioning itself in relation to privacy. It follows from the much quieter and less user-friendly Google Dashboard and Google AdPreferences, which technically gave users quite a lot of information and even some control, but were so hard to find that to all intents and purposes they appeared to exist only to satisfy the demands of privacy advocates, and not to do anything at all for ordinary users. 'Good to know' looks like a step forward, albeit a small and fairly insubstantial one.

The third move is the one that has sparked the most interest - the announcement by Google executive Vic Gundotra that social networking service Google+ will 'begin supporting pseudonyms and other types of identity.' The Electronic Frontier Foundation immediately claimed 'victory in the nymwars', suggesting that Google had 'surrendered'. Others have taken a very different view - as we shall see. The 'nymwars', as they've been dubbed, concern the current policies of both Facebook and Google to require a 'real' identity in order to maintain an account with them - a practice which many (myself definitely included) think is pernicious and goes against the very things which have made the internet such a success, as well as potentially putting many people at real risk in the real world. The Mexican blogger who was killed and decapitated by drugs cartels after posting on an anti-drugs website is perhaps the most dramatic example of this, but the number of people at risk from criminals, authoritarian governments and others is significant. To many (again, myself firmly included), the issue of who controls links between 'real' and 'online' identities is one of the most important on the internet in its current state. The 'nymwars' are of fundamental importance - and so, to me, is Google's announcement.

Some have greeted it with cynicism and anger. One blogger put it bluntly:

"Google's statement is obvious bullshit, and here's why. The way you "support" pseudonyms is as follows: Stop deleting peoples' accounts when you suspect that the name they are using is not their legal name.

There is no step 2."

The EFF's claim of 'victory' in the nymwars is perhaps overstated - but Google's move isn't entirely meaningless, nor is it necessarily cynical. Time will tell exactly what Google means by 'supporting pseudonyms', and whether it will really start to deal with the problems brought about by a blanket requirement for 'real' identities - but this isn't the first time that someone within Google has been thinking about these issues. Back in February, Google's Director of Privacy, Product and Engineering, Alma Whitten, wrote a post for the Google Policy Blog called 'The freedom to be who you want to be...', in which she said that Google recognised three kinds of user: 'unidentified', pseudonymous and identified. It's a good piece, and well worth a read, and shows that within Google these debates must have been going on for a while, because the 'real identity' approach for Google Plus has at least in the past been directly contrary to what Whitten was saying in the blog.

That's one of the reasons I think Vic Gundotra's announcement is important - it suggests that the 'privacy friendly' people within Google are having more say, and perhaps even winning the arguments. When you combine it with the other two moves mentioned above, that seems even more likely. Google may be starting to position itself more firmly on the 'privacy' side of the fence, and using privacy to differentiate itself from the others in the field - most notably Facebook. To many people, privacy has often seemed like the last thing that Google would think about - that may be finally changing.

4Chan's Chris Poole, in a brilliant speech to the Web 2.0 conference on Monday, challenged Facebook, Google and others to start thinking of identity in a more complex, nuanced way, and suggested that Facebook and Google, with their focus on real identities, had got it fundamentally wrong. I agreed with almost everything he said - and so, I suspect, did some of the people at Google. The tiny steps we've seen over the last few days may be the start of their finding a way to make that understanding into something real. At the very least, Google seem to be making a point of saying so.

That, for me, is the final and most important point. While Google and Facebook, the two most important players in the field, stood side by side in agreement about the need for 'real' identities, it was hard to see a way to 'defeat' that concept, and it felt almost as though victory for the 'real' identities side was inevitable, regardless of all the problems that would entail, and regardless of the wailing and gnashing of teeth of the privacy advocates, hackers and so forth about how wrong it was. If the two monoliths no longer stand together, that victory seems far less assured. If we can persuade Google to make a point of privacy, and if that point becomes something that brings Google benefits, then we all could benefit in the end. The nymwars certainly aren't over, but there are signs that the 'good guys' might not be doomed to defeat.

Google is still a bit of a baby as far as privacy is concerned, making tiny steps but not really walking yet, let alone running. In my opinion, we need to encourage it to keep on making those tiny steps, applaud those steps, and it might eventually grow up...

UPDATED TO INCLUDE REFERENCE TO SEOS...

Tuesday, 18 October 2011

Privacy is personal...

My real interest in privacy - and specifically internet privacy - arose a little over ten years ago. Something happened to me that changed the way I thought about the whole issue - something personal, something direct. Up until that point I hadn't really thought much about privacy, though I'd been involved with the online world from a very early stage, setting up projects to provide rural communities with access to information, and trying to provide online education to housebound children in the mid 1990s - not exactly cutting edge stuff, but not too far from it. I'd also been involved in human rights work - most directly children's rights - but I'd never thought much about privacy. To me, then, just as to many people now, it just didn't feel important, particularly compared to the problems happening all over the world. 9/11 had just happened, and war was in the air.

I was living in New Zealand when the US invaded Afghanistan - and I was deeply concerned about the consequences of that action. I wrote about my concern in an email to a friend, also in New Zealand, and in that email I was at least partially critical of US foreign policy. I even mentioned Israel at one point. Some time over the next three hours, my email account became inaccessible.

At the time I was using a free email account - one of the big ones - that I had set up whilst in the US a few years earlier. A '.com' email account. As I was living in a very isolated part of New Zealand, this email account was one of my few links to the outside world. It had all my contacts' details, and all the messages I had sent and received for a long time - and I had been foolish enough not to keep written records elsewhere of a lot of the details. At first I thought it was just a blip, an accident - and I set up another email account and wrote to the service provider asking what had happened to my account, whether the password had been accidentally reset or something else. I was met with terse replies saying that the account had been terminated for a breach of contract terms. Friends told me to give up, and go with the new account - but I'm not that kind of person. I kept on badgering them, trying to find out what was going on. I hadn't yet thought that it might be connected with the email that I'd sent. Eventually I got a message saying that I had been using the email for commercial purposes, which is why it had been cancelled - which was absurd, as anyone who knew my financial position at the time would know. Then, about six months later, they reinstated the account, minus all the content, contacts and so forth.

Now of course I have no evidence to prove that the account was cancelled because of that particular email - it may indeed just have been a mistake, the account may even have been hacked into (though such things were much rarer in those days), but even the suspicion was enough to disturb me enormously, and set me on the path that I'm still on today. I started asking how it could have happened, what happens to emails, how easily they can be read, how my privacy might have been invaded. The more I investigated, the more I uncovered, the more interested I became - and it ended up changing my whole life. The perceived invasion of privacy - in a sense it doesn't even matter if it was real - was so personal that it cut me to the quick.

Back then I had had very little to do with the law - my degree was in mathematics, I qualified as an accountant and worked with technology, not the law. Now, as a result of following this path, I'm a lecturer in a law school at a good university, have published research and submitted a PhD on the subject of data privacy - and it seems even more relevant than it did ten years ago, as the online world has expanded and become more and more intrinsically linked with everything we do. Invasions of privacy do matter - whatever the likes of Mark Zuckerberg might think - and they matter because they're deeply personal, and touch the parts of us that we really care about.

Friday, 14 October 2011

Business and Privacy: Evidence and Assumptions?

I came across a couple of stories yesterday that at first glance appeared unconnected, dealing with different aspects of the current privacy debates concerning the internet. One comes from one side of the Atlantic, the other from the other. One deals with the 'fight' against piracy, the other with the current favourite of the online advertising industry, behavioural targeting. Very different issues - but they do have something in common: an inherent assumption that business success should take precedence over individual rights and freedoms.

The first issue was the revelation, through a Freedom of Information Request by the admirable Open Rights Group, that the Department of Culture, Media and Sport had no evidence to support their strategies to reduce the infringement of copyright by websites - you can see their report on the issue here.

The second came from my following of the House Energy and Commerce Committee hearing in Washington, about consumer privacy and online behavioural advertising - a hearing at least on the surface intended to consider consumer concerns, but which by the sound of it had a lot more to do with industry putting their case to avoid regulation. I followed on twitter, and remember one particular call from a regular and respected tweeter from the US who demanded evidence before regulation is considered. Specifically, he wanted evidence as to how much of the advertising economy depended on behavioural targeting - the underlying suggestion being, presumably, that we shouldn't regulate if it would have too significant an impact on revenue streams.

There are two different ways to look at the two stories. You can look at them as a reflection of the different attitudes to regulation on the two sides of the Atlantic - in England we're rushing to regulate, while in the US regulation is to be avoided unless absolutely necessary.  Alternatively, however, you can look at them as a reflection of the way that business needs are set above individual rights and freedoms.

Copyright and piracy....

The Open Rights Group's request was in relation to the proposals in the Digital Economy Act, but that Act is just one of many measures introduced over the years to combat 'piracy', although the evidence in support of any of them has generally been conspicuous by its absence. That applies both to evidence to suggest that the problem is as bad as the industry suggests and to the efficacy of the measures being proposed to combat it. Does piracy cause a massive loss of revenue to rights holders? Perhaps, but the suggestion over the years that every illegally downloaded song is a lost sale is far from convincing, and the idea that listening to something illegally might even lead to further legal sales seems to have merit too. The massive success of iTunes suggests that carrots rather than sticks might be more effective - indeed, recent reports from Sweden showing that piracy had reduced as Spotify was introduced add weight to this idea.

The Open Rights Group's FOI request was about the effectiveness of the proposals - and the DCMS effectively acknowledged that they have no evidence about it. So we have proposals for measures about which there is no evidence, to address an issue about which evidence is scanty to say the least... and yet on that basis we're willing to put restrictions on individuals' freedoms, potentially apply censorship, and even cut off people's internet access as a result. That same internet access that is increasingly regarded as a human right.

The Digital Economy Act is one thing, but there's something else looming on the horizon of even more concern: the Anti-Counterfeiting Trade Agreement (ACTA), whose measures are potentially even more draconian than those in the DEA, and whose scope is even more all-encompassing. The US has already signed it - somewhat against the suggestion that the US prefers not to regulate where possible - and the EU may well sign it soon, though it still needs to pass through the European Parliament, and lobbying of MEPs is underway on both sides.

Behavioural advertising...

Legislation on behavioural advertising has already taken place in Europe, with the notorious 'Cookies Directive', about which I've written before - but the implementation, enforcement and acceptance of that directive has proved troublesome from the outset, and whether it ends up being at all meaningful has yet to be seen. Legislation in the US is what is currently under discussion, and what is being keenly resisted by the advertising industry and others. 'Show us the evidence' is the call - and until that evidence is shown, advertisers should be able to do whatever they want.

Evidence in relation to privacy is a contentious issue in lots of ways. Demonstrating 'harm' from an invasion of privacy is difficult, partly because each individual invasion isn't likely to be significant - particularly in respect of mundane tracking of websites browsed and so forth - and partly because the 'harm' is generally intangible, and far from easily turned into something easily quantifiable. Some people suggest that we should treat our personal information like a commodity, akin in some ways to intellectual property, but for me that fails to capture the real essence of privacy. I don't want to put a 'value' on my personal data, any more than I want to put a value on each of my fingers, or on my relationships with my friends and family. It's something different, and needs protecting as something different. I shouldn't need to prove the 'harm' done by that data being at risk - the loss of it, or loss of control over it, is a harm in itself.

That isn't all - not only does there appear to be an expectation that we should prove harm, but also that even if there IS harm, we've got to prove that we wouldn't be damaging the advertisers' businesses too much. If their businesses would be harmed too much, we shouldn't put regulations in place....

Two different situations - but the same assumptions

In the copyright scenario, we're having our freedom restricted and our privacy invaded without real evidence to support what's happening. In the behavioural advertising scenario, we're having our privacy invaded and we're being asked to prove that there's a problem before any restrictions are placed - and, what's more, we're being asked to prove that we wouldn't damage business too much.

In both cases, it's the individuals who lose out. Business takes priority, and individuals' rights, particularly in respect of privacy, are overridden. Where businesses perceive there are problems (as in the copyright scenario), they're not asked for proof - but where individuals perceive there are problems, they're asked for proof in ways that are inappropriate and unattainable. Shouldn't the situation be exactly the other way around? Shouldn't individuals' rights be considered above the business models of corporations? Shouldn't the burden of proof work in favour of individuals against businesses, rather than the other way around? Of course that's a difficult argument to make in economically troubled times - but it's an argument that in my opinion needs to be made, and made strongly.

Tuesday, 11 October 2011

Privacy, Parenting and Porn

One of the stories doing the media rounds today surrounded the latest pronouncements from the Prime Minister concerning porn on the internet. Two of my most commonly used news sources, the BBC and the Guardian, had very different takes on it. The BBC suggested that internet providers were offering parents an opportunity to block porn (and 'opt-in' to website blocking) while the Guardian took it exactly the other way - suggesting that users would have to opt out of the blocking - or, to be more direct, to 'opt-in' to being able to receive porn.

Fool that I am, I fell for the Guardian's version of the story (as did a lot of people, from the buzz on twitter) which seems now to have been thoroughly debunked, with the main ISPs saying that the new system would make no difference, and bloggers like the excellent David Meyer of ZDNet making it clear that the BBC was a lot closer to the truth. The idea would be that parents would be given the choice as to whether to accept the filtering/blocking system, which, on the face of it, seems much more sensible.

Even so, the whole thing sets off a series of alarm bells. Why does this sort of thing seem worrying? The first angle that bothers me is the censorship one - who is it that decides what is filtered and what is not? Where do the boundaries lie? One person's porn is another person's art - and standards are constantly changing. Cultural and religious attitudes all come into play. Now I'm not an expert in this area - and there are plenty of people who have written and said a great deal about it, far more eloquently than me - but at the very least it appears clear that there are no universal standards, and that decisions as to what should or should not be put on 'block lists' need to be made very carefully, with transparency about the process and accountability from those who make the decisions. There needs to be a proper notification and appeals process - because decisions made can have a huge impact. None of that appears true about most 'porn-blocking' systems, including the UK's Internet Watch Foundation, often very misleadingly portrayed as an example of how this kind of thing should be done.

The censorship side of things, however, is not the angle that interests me the most. Two others are of far more interest: the parenting angle, and the privacy angle. As a father myself, of course I want to protect my child - but children need independence and privacy, and need to learn how to protect themselves. The more we try to wrap them in cotton wool, to make their world risk-free, the less able they are to learn how to judge for themselves, and to protect themselves. If I expect technology, the prime minister, or the Internet Watch Foundation to do all the work for me, not only am I abdicating responsibility as a parent but I'm denying my child the opportunity to learn and to develop. The existence of schemes like the one planned could work both ways at once: it could make parents think that their parenting job is done for them, and it could also reduce children's chances to learn to discriminate, to decide, and to develop their moral judgment....

....but that is, of course, a very personal view. Other parents might view it very differently - what we need is some kind of balance, and, as noted above, proper transparency and accountability.

The other angle is that of privacy. Systems like this have huge potential impacts on privacy, in many different ways. One, however, is of particular concern to me. First of all, suppose the Guardian was right, and you had to 'opt-in' to be able to view the 'uncensored internet'. That would create a database of people who might be considered 'people who want to watch porn'. How long before that becomes something that can be searched when looking for potential sex offenders? If I want an uncensored internet, does that make me a potential paedophile? Now the Guardian appears to be wrong, and instead we're going to have to opt-in to accept the filtering system - so there won't be a list of people who want to watch porn, but instead a list of people who want to block porn. It wouldn't take much work, however, on the customer database of a participating ISP to select all those users who had the option to choose the blocking system, and didn't take it. Again, you have a database of people who, if looked at from this perspective, want to watch porn....
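Just how little work that selection would take can be shown with a toy sketch. The field names and records here are entirely hypothetical, invented for illustration - no real ISP schema is implied:

```python
# Hypothetical customer records: who was offered the filter,
# and who accepted it. (Invented data and field names.)
customers = [
    {"name": "A", "offered_filter": True,  "accepted_filter": True},
    {"name": "B", "offered_filter": True,  "accepted_filter": False},
    {"name": "C", "offered_filter": False, "accepted_filter": False},
]

# The worrying list - offered the block, chose not to take it -
# falls out of a one-line filter over the existing data.
declined = [c["name"] for c in customers
            if c["offered_filter"] and not c["accepted_filter"]]
print(declined)  # just customer B
```

The point is simply that the 'declined' list exists implicitly in the data from the moment the offer is recorded, whether or not anyone ever intends to compile it.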

Now maybe I'm overreacting, maybe I'm thinking too much about what might happen rather than what will happen - but slippery slopes and function creep are far from rare in this kind of a field. I always think of the words of Bruce Schneier, on a related subject:

"It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state"

Now I'm not suggesting that this kind of thing would work like this - but the more 'lists' and 'databases' we have of people who don't do what's 'expected' of them, or what society deems 'normal', the more opportunities we create for potential abuse. We should be very careful...

Monday, 10 October 2011

Privacy - and Occupy Wall Street?

One of the tweeters I follow, the estimable @privacycamp, asked a question on Twitter last night: is there a privacy take on 'Occupy Wall Street'? I immediately fired off a quick response - of course there is - but it started me off on a train of thought that's still chugging along. That's brought about this somewhat rambling blog-post, a bit different from anything I've done before - and I'd like to stress that even more than usual these are my personal musings!

Many people in the UK may not even have noticed Occupy Wall Street - it certainly hasn't had a lot of mainstream media coverage over here - but it seems to me to be something worthy of a lot of attention. A large number of people - exactly how many is difficult to be sure about - have been 'occupying' Liberty Square near Wall Street, the financial heart of New York - indeed, some might call it the financial heart of the modern capitalist world. Precisely what they're protesting against is hard to pin down but not at all hard to understand. As it's described on occupywallst.org, it is a:

"leaderless resistance movement with people of many colors, genders and political persuasions. The one thing we all have in common is that We Are The 99% that will no longer tolerate the greed and corruption of the 1%."

That isn't in any sense an 'official' definition - because there's nothing 'official' about Occupy Wall Street. The movement has spread - the Guardian, one of the UK newspapers to give it proper coverage, talks about it reaching 70 US cities - and has lasted over three weeks so far, with little sign of flagging despite poor media coverage, strong-arm police tactics and a perceived lack of focus.

So what has this got to do with privacy? Or, perhaps more pertinently, what has this kind of a struggle got in common with the struggle for privacy? Why do people like me, whose work is concerned with internet privacy, find ourselves instinctively both supporting and admiring the people occupying Wall Street? Well, the two struggles do have a lot more in common than might appear at first glance. They're both struggles for the 'ordinary' people - for the 'little' people - against a huge and often seemingly irresistible 'machine'. Where Occupy Wall Street is faced by an array of banks with huge political and financial influence, internet privacy advocates are faced by the monoliths of the internet industry - Google, Facebook, Amazon, Microsoft, Apple etc. - whose political and financial influence is beginning to rival that of the banks. Both Occupy Wall Street and internet privacy advocates are faced by systems and structures that seem to have no alternatives, and institutions which appear so entrenched as to be impossible to stand against.

Further to that, both the banks and the big players of the internet can claim with justification that over the years they've provided huge benefits to all of us, and that we wouldn't be enjoying the pleasures and benefits of our modern society but for their innovation and enterprise - I'm writing this blog on a system owned by Google, on a computer made by Apple, and paid for with a credit card provided by one of the big banks. Does this mean, however, that I should accept everything that those big players - either financial or technological - give me, and accept it uncritically? Does it mean that the people occupying Wall Street should shuffle off home and accept that Wall Street, warts and all, cannot be stood up against - and should be supported, not challenged? I don't think so.

Of course there are ways in which the two struggles are radically different. The damage done to people's lives by the financial crisis which is the core of the protest against Wall Street is huge - far greater than the material damage done by all the privacy-intrusive practices performed on the internet. People have lost their livelihoods, their houses, their families - perhaps even their futures - as a result. The damage from privacy intrusions is less material, harder to pin down, harder to see, harder to prove. It is, however, very important - and is likely to become more important in the future. Ultimately it has an effect on our autonomy - and that's where the real parallels with Occupy Wall Street lie. Both movements are about people wanting more control over their lives. Both are about people standing up and saying 'enough is enough,' and 'we don't want to take this any more'.

Occupy Wall Street may well fizzle out soon. I hope not - because I'd love to see it have a lasting influence, and help change the political landscape. The odds are stacked against them in more ways than I can count - but I didn't think they'd last as long as they have, so who knows what will happen? The struggle for privacy faces qualitatively different challenges, but at times it seems as though the odds are stacked just as much in favour of those who would like the whole idea of privacy to be abandoned. Even if that is the case, it's still a fight that I believe needs fighting.

Monday, 3 October 2011

The privacy race to the bottom


I tend to be a ‘glass-half-full’ sort of person, seeing the positive side of any problem. In terms of privacy, however, this has been very hard over the last few weeks. For some reason, most of the ‘big guns’ of the internet world have chosen the last few weeks to try to out-do each other in their privacy-intrusiveness. One after the other, Google, Facebook and Amazon have made moves that have had such huge implications for privacy that it’s hard to keep positive. It feels like a massive privacy 'race to the bottom'.

Taking Google first, it wasn’t exactly that any particular new service or product hit privacy, but more the sense of what lies ahead that was chilling, with Google’s VP of Products, Bradley Horowitz, talking about how ‘Google+ was Google itself’. As Horowitz put it in an interview for Wired last week:

"But Google+ is Google itself. We're extending it across all that we do — search, ads, Chrome, Android, Maps, YouTube — so that each of those services contributes to our understanding of who you are."

Our understanding of who you are. Hmmm. The privacy alarm bells are ringing, and ringing loud. Lots of questions arise, most directly to do with consent, understanding and choice. Do people using Google Maps, or browsing with Chrome, or even using search, know, understand and accept that their actions are being used to build up profiles so that Google can understand 'who they are'? Do they have any choice about whether their data is gathered or used, or how or whether their profile is being generated? The assumption seems to be that they just 'want' it, and will appreciate it when it happens.

Mind you, Facebook are doing their very best to beat Google in the anti-privacy race. The recent upgrade announced by Facebook has had massive coverage, not least for its privacy intrusiveness, from Timeline to Open Graph. Once again it appears that Mark Zuckerberg is making his old assumption that privacy is no longer a social norm, and that we all want to be more open and share everything. Effectively, he seems to be saying that privacy is dead - and if it isn't quite yet, he'll apply the coup de grâce.

That, however, is only part of the story. The other side is a bit less expected, and a bit more sinister. Thanks to the work of Australian hacker/blogger Nik Cubrilovic, it was revealed that Facebook's cookies 'might' be continuing to track us after we log out of Facebook. Now first of all Facebook denied this, then they claimed it was a glitch and did something to change it. All the time, Facebook tried to portray themselves as innocent - even as the 'good guys' in the story. A Facebook engineer – identifying himself as staffer Gregg Stefancik – said that “our cookies aren’t used for tracking”, and that “most of the cookies you highlight have benign names and values”. He went on to make what seemed to be a very reassuring suggestion quoted in The Register:

"Generally, unlike other major internet companies, we have no interest in tracking people." 


How, then, does this square with the discovery, a couple of weeks ago, that Facebook appears to have applied for a patent to do precisely that? The patent itself is chilling reading. Amongst the gems in the abstract is the following:

"The method additionally includes receiving one or more communications from a third-party website having a different domain than the social network system, each message communicating an action taken by a user of the social networking system on the third-party website"

Not only do they want to track us, but they don't want us to know about it, telling us they have no interest in tracking.
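What Cubrilovic described is easy to sketch in outline. A toy model of a browser's cookie jar, assuming (as his write-up suggested) that logging out clears the session cookie but leaves a long-lived identifier in place - the cookie names below are illustrative, not Facebook's actual ones:

```python
# Toy model of a browser's cookie jar for one domain.
cookie_jar = {}

def log_in(user_id):
    cookie_jar["session"] = f"session-for-{user_id}"  # cleared on logout
    cookie_jar["device_id"] = f"device-{user_id}"     # long-lived identifier

def log_out():
    # Logging out removes the session cookie only; the long-lived
    # identifier survives - which was the core of the complaint.
    cookie_jar.pop("session", None)

def request_to_social_widget():
    # Any third-party page embedding the network's widget triggers a
    # request to the network's domain, and the browser attaches every
    # surviving cookie for that domain.
    return dict(cookie_jar)

log_in("alice")
log_out()
sent = request_to_social_widget()
print("session" in sent, "device_id" in sent)  # False True
```

The point of the sketch is that 'logged out' and 'unidentifiable' are not the same thing: as long as any persistent identifier rides along with requests to embedded widgets, the capability to track is there, whatever it is or isn't currently used for.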

OK, so that's Google and Facebook, with Facebook probably edging slightly ahead in their privacy-intrusiveness. But who is this coming fast on the outside? Another big gun, but a somewhat unexpected one: Amazon. The new Kindle Fire, a very sexy bit of kit, takes the Kindle and transforms the screen into something beautiful and colourful. It also adds a web-browsing capability, using a new browser Amazon calls Silk. All fine, so far, but the kicker is that Silk appears to track your every action on the web and pass it on to Amazon. Take that, Google, take that Facebook! Could Amazon beat both of them in the race to the bottom? They're certainly giving it a go.

All pretty depressing reading for those of us interested in privacy. And the trio could easily be joined by another of the big guns when Apple launches its new 'iCloud' service, due this week. I can't say I'm expecting something very positive from a service which might put all your content in the cloud....

...and yet, somehow, I DO remain positive. Though the big guns all seem to be racing the same way, there has at least been a serious outcry about most of it, and it's making headline news not just in what might loosely be described as the 'geek press'. Facebook seemed alarmed enough by Nik Cubrilovic's discoveries to react swiftly, even if a touch disingenuously. We all need to keep talking about this, we all need to keep challenging the assumption that privacy doesn't matter. We need to somehow start to shift the debate, to move things so that companies compete to be the most privacy-friendly rather than the most privacy-intrusive. If we don't, there's only one outcome. The only people who really lose in the privacy race-to-the-bottom are us....