....but it certainly seems to be newsworthy at the moment. In the last two weeks there has been a deluge of news stories.
1) On the 28th October, there was a call for an 'internet bill of rights' in a parliamentary debate.
2) On the 2nd November, business minister Ed Vaizey suggested a 'mediation service' to deal with disputes about personal data held on the net.
3) Also on the 2nd November, the US Supreme Court began a hearing about violent computer games...
4) ....and Google put forward its proposed settlement over the Google Buzz privacy issue
5) ....and the busy Ed Vaizey put forward the suggestion of a new 'privacy code' for online businesses like Google and Facebook
6) ...while the case of the stabbing of MP Stephen Timms by a young woman who had been 'radicalised' by watching videos on YouTube sparked a little furore about why such things should be allowed online - this is just one piece about it from The Telegraph.
7) On the 3rd November the ICO issued its response to the Google Street View data gathering fiasco
8) ...followed almost immediately by a statement from five civil liberties groups (Privacy International, NO2ID, Big Brother Watch, Action on Rights for Children, and Open Rights Group) suggesting that the ICO's action on this issue (and indeed on many others) makes it 'not fit for purpose'.
9) On the 4th November, Prime Minister David Cameron spoke in East London about making that region the 'new silicon valley' - and, amongst other things, about making our copyright laws 'fit for the internet age'.
10) ...and the European Commission launched its proposals for a 'comprehensive approach on personal data protection in the European Union'.
11) On the 8th November, Google announced it was shutting off its data feeds to Facebook...
12) And yesterday it was announced that BT and Talk Talk had been successful in getting a judicial review of the Digital Economy Act.
Lots of news - but what does it all mean? Firstly, that the subject really is current, and of increasing importance. In these two weeks we've had a statement from the prime minister, we've had a hearing in the US Supreme Court, we've had one of the biggest players in the internet world, Google, involved in three different ways - four if you blame their YouTube service for hosting the radicalising videos - and we've had the European Commission making what could be a very significant statement.
Secondly, that the situation is far from simple - and that the 'regulatory matrix' is complex. The differing relationships between the different interested parties have all come into play. We've had civil liberties groups challenging a regulatory body, we've had companies challenging the law, we've had questions in parliament, we've had a spat between probably the two biggest players in the internet world, we've had a class action against a company (Google), we've had interventions from regulatory bodies and politicians.
This lack of simplicity is the key - as Andrew Murray highlights in his theory of Symbiotic Regulation (in his excellent book The Regulation of Cyberspace). All these different relationships - between politicians, the judiciary, companies, civil liberties groups, and, of course, individuals - have their part to play in what happens on the internet from a regulatory perspective. It makes it complex - but it makes it interesting. And, at times like these, it makes it news!
Thursday, 11 November 2010
Thursday, 21 October 2010
Opting out of Street View....
Nearly 250,000 Germans have 'opted out' of having their homes visible when Google's Street View comes online, some time in the near future - though Andreas Türk, Product Manager for Street View in Germany, has admitted that some of those homes will still be visible at launch, as the process is complex and not all the instructions submitted were clear. His blog here provides the explanations.
It's an interesting figure - is 250,000 (or, to be more precise, 244,237) a large number? As Andreas Türk says, it amounts to 2.89% of those who could have objected, and the argument can be made both ways. Google might argue that it means that the vast, vast majority don't object to Street View, so their service has some kind of overall 'acceptance' or even 'support' by the populace. Privacy advocates might say the converse - in absolute terms, 250,000 is a LOT of people. If you had 250,000 people marching on the streets with banners saying 'NO TO STREET VIEW' it would make headline news, certainly in Germany, and probably throughout Europe.
Both sides have a point: 2.89% isn't a very large proportion, but 250,000 is a lot of people, and when you look closer at the process I suspect that the privacy advocates have a stronger position. Given that the opt-out required an active process (and Google say that 2/3 of those who objected used their own online tool to do so) it does suggest that quite a lot of people care about this. If the reverse system had been in place - and you had to actively choose to HAVE your home 'unblurred' on Street View, what kind of figures would you get? Would more than 250,000 have gone through a process to make their houses visible? I doubt it....
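The figures above invite a quick back-of-envelope check. This sketch takes the 244,237 opt-outs and the 2.89% rate from Google's announcement; the implied size of the eligible population is my own inference, not a figure from Google.

```python
# Back-of-envelope check of the Street View opt-out figures.
# 244,237 opt-outs and the 2.89% rate are from Google's announcement;
# the implied eligible population is inferred from them.
opt_outs = 244_237
opt_out_rate = 0.0289  # 2.89%

eligible = opt_outs / opt_out_rate
print(f"Implied eligible population: {eligible:,.0f}")
# Roughly 8.45 million -- far smaller than Germany's ~82 million,
# so the percentage is presumably measured against the areas
# actually photographed for launch, not the whole country.
```

If that inference is right, the 2.89% understates nothing: it is already a percentage of a restricted population, which makes the absolute number all the more striking.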
...and what of the rest of us? Germans got a choice because their government made a point about it, and demanded that Google give them the choice before the service went active. As the BBC reports, other governments have made other kinds of objections, but none have been given the choice that the Germans have had. As I've blogged before, Germany has a pretty active privacy lobby, so it's not surprising that they are the country that has taken this step - what would the result have been if the option had been given in the UK? Or the US? Probably not as dramatic as the German result - which makes me wonder whether Google has missed a trick by not providing the option elsewhere. If they did so, and an even tinier fraction than the 2.9% in privacy-aware Germany objected, they might be able to be even bolder about proclaiming that people love Street View.....
Thursday, 7 October 2010
How personal is personal?
The Register is reporting that the ICO wants a clearer definition of what constitutes 'personal data' - and it is indeed a crucial question, particularly under the current data protection regime. The issue has come up in the ICO's response to the Government consultation on the review of the Data Protection Directive - and one of the key points is that there is a difference between how personal data is defined in the directive and how it is defined in the UK Data Protection Act. That difference gives scope for lots of legal argument - and is one of many factors that help to turn the data protection regime from something that should be about rights and personal protection into something often hideously technical and legalistic. The ICO, fortunately, seems to recognise this. As quoted in The Register, ICO Deputy Director David Smith says:
"We need to ensure that people have real protection for their personal information, not just protection on paper and that we are not distracted by arguments over interpretations of the Data Protection Act."

That's the crux of it - right now, people don't really have as much real protection as they should. Will any new version of the directive (and then the DPA) be any better? It would be excellent if it were, but right now it's hard to imagine that it will be, unless there is a fundamental shift in attitudes.
There's another area, however, that just makes it into the end of the Register's article, that may be even more important - the question of what constitutes 'sensitive personal data'. Here, again, the ICO is on the ball - this is again from the Register:
"The current distinction between sensitive and non-sensitive categories of personal data does not work well in practice," said the submission. "The Directive’s special categories of data may not match what individuals themselves consider to be ‘sensitive’ – for example their financial status or geo-location data about them."

The ICO go on to suggest not a broadening of the definition of sensitive personal data, but a more 'flexible and contextual approach' to it - and they're right. Data can be sensitive in one context, not sensitive in another. However, I would suggest that they're not going nearly far enough. The problem is that the idea of the 'context' of any particular data is so broad as to be unmanageable. What matters isn't just who has got the data and what they might do with it, but a whole lot of other things concerning the data subject, the data holder, any other potential data user and so on.
For instance, consider data about someone's membership of the Barbra Streisand fan club. Sensitive data? In most situations, people might consider it not to be sensitive at all - who cares what kind of music someone listens to? However, liking Barbra Streisand might mean a very different thing for a 22 year old man than it does for a 56 year old woman. Extra inferences might be drawn if the data gatherer has also learned that the data subject has been searching for holidays only in San Francisco and Sydney, or spends a lot of time looking at hairdressing websites. Add to that the real 'geo-tag' kind of information about where people actually go, and you can build up quite detailed profiles without ever touching what others might consider sensitive. When you have all that information, even supposedly trivial information like favourite colours or favourite items in your Tesco online shopping could end up being sensitive - as an extra item in a profile that 'confirms' or 'denies' (according to the kinds of probabilistic analyses that are used for behavioural profiling) that a person fits into a particular category.
What does all this mean? Essentially that ANY data that can be linked to a person can become sensitive - and that judging the context is so difficult that it is almost impossible. Ultimately, if we believe that sensitive data needs particular protection, then we should apply that kind of protection to ALL personal data, regardless of how apparently sensitive it is....
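The 'confirms or denies' point can be made concrete with a toy Bayesian update. Every number and category below is invented for illustration - the point is only the mechanism: each apparently trivial data point nudges the probability that a profile fits a category.

```python
# Toy Bayesian update: how individually 'trivial' data points shift
# the probability that a person fits a profiling category.
# All categories and likelihoods here are invented for illustration.
def bayes_update(prior, likelihood_if_in, likelihood_if_not):
    """Posterior P(category | evidence) via Bayes' theorem."""
    joint_in = prior * likelihood_if_in
    joint_not = (1 - prior) * likelihood_if_not
    return joint_in / (joint_in + joint_not)

# Start with a 5% prior that a user fits some marketing category.
p = 0.05
# Each tuple: (P(evidence | in category), P(evidence | not in category))
evidence = [
    (0.60, 0.10),  # fan-club membership
    (0.50, 0.15),  # holiday searches for particular cities
    (0.40, 0.20),  # time spent on certain websites
]
for like_in, like_not in evidence:
    p = bayes_update(p, like_in, like_not)
    print(f"posterior now {p:.2f}")
# Three unremarkable data points take the estimate from 5% to roughly 68%.
```

This is, in crude form, what behavioural profiling does at scale - which is why no single data point needs to look 'sensitive' for the resulting profile to be.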
Thursday, 30 September 2010
Every which way to lose your data...
The ACS:Law 'data leak' story that's been emerging fairly dramatically over the last couple of days has got pretty much everything you could hope for in this kind of story: a bit of porn, a bit of piracy, some hacking, threats of huge fines, legal action and so on. It's already been widely reported on - Andrew Murray's blog on the subject gives an excellent description of what ACS:Law do, and how this whole thing has to a great extent blown up in their face. As he explains, it's a prime example of how symbiotic regulation works - and why the law is not the only thing that matters when regulating the internet.
There is, however, something else that is very graphically demonstrated by the whole saga - how many different ways your personal data can be at risk. This small story alone demonstrates at least five different ways that personal data can be vulnerable:
- To monitoring and tracking - the initial data about the supposed copyright infringers was obtained by monitoring traffic on the internet.
- To 'legal' attack - ACS initially got a court order to demand that the ISPs involved (we know about BT, Sky and PlusNet in this case) disclose the personal details of the account holders suspected of copyright infringement, based upon this monitoring.
- To human error - BT have admitted that they sent this personal data on an unencrypted Excel file attached to an ordinary email, in breach of their official policies and practices.
- To hacking - or at least this is part of what ACS have claimed: that their systems were hacked into to obtain the data that was then leaked.
- To deliberate leaking - precisely who did the leaking, and who wanted the data leaked, is far from clear, but there is certainly a possibility that someone wanted the names out in the public domain.
Of course the data itself is far from reliable. It is just the details of the account holders whose accounts are suspected of being used to share illegal content, without there being any direct evidence that the people themselves did the sharing - which brings even more dimensions of vulnerability into play: confusion, mistaken identity, even things like defamation by implication could come into play. If your name is on the list, you're not only being labelled a lawbreaker but a consumer of porn - and it might very easily not have been you doing it at all. Other people might be using your account, perhaps without your knowledge, perhaps without your permission, perhaps without your understanding.
Simon Davies, of Privacy International, quoted in the BBC, said that 'You rarely find an aspect where almost every aspect of the Data Protection Act (DPA) has been breached, but this is one of them'. It's also true that almost every aspect of data vulnerability has been demonstrated in one fell swoop.
Perhaps an even more important point, however, is the way that personal data - and individuals' privacy - is viewed almost as 'collateral damage' in the ongoing battle between the entertainment industry (and their hired guns like ACS:Law) and the 'pirates'. From the outside it looks as though, as far as the 4chan hackers and ACS:Law are concerned, it's that battle that matters. ACS:Law wants to 'get' the pirates, while the 4chan hackers want to 'get' ACS:Law and to 'win' the war with the entertainment industry for the 'cause' of free and unfettered file-sharing. The fact that some 13,000 individuals have had their personal data released into the public domain and face all kinds of possible consequences from embarrassment (or humiliation) to legal action onwards seems somehow less important. Sadly it often seems to be that way. Privacy is squeezed by politics, law, business and a whole lot more. Every which way, privacy loses.
Saturday, 18 September 2010
No more place for privacy?
With the launch of Facebook Places in the UK, 'location' services have really hit the mainstream. With Facebook Places, people can 'check in' to indicate exactly where they are to their 'friends' (and probably quite a lot of others too, unless they're very careful). It's another step - and perhaps a very big one - along a path that some might suggest has an inevitable outcome: the end of privacy, at least as we know it.
Scott McNealy, CEO of Sun Microsystems, told journalists way back in 1998 that “You have zero privacy anyway, get over it.” Others, most recently and persistently Mark Zuckerberg, co-founder and CEO of Facebook, have suggested that the whole idea of privacy is simply outdated and now irrelevant – people just don't care about it anymore.
Are they right? Is privacy dead - or at least dying? Should we just 'get over it', join all those many millions of happy Facebook customers who don't care about privacy, and start enjoying all the advantages of having a truly 'transparent' life? Embrace such wonders as Facebook Places, and enjoy the pleasures of meeting people for coffee in unexpected places just through the medium of our smartphones - after all, it's so much more convenient than having to call and arrange things. Of course there's an obvious possible downside - but burglary's not much of a danger as long as you have state of the art security systems, or a ravenous Rottweiler, or employ someone to housesit whenever you're out.
That, however, is just the simplest and most obvious problem. The other, less obvious, but ultimately more important issue is what happens to all the data about where you are, where you've been, and so forth. The possibilities of using this data for profiling - and eventually predictive profiling - are immense, which presumably is why Facebook and many others are introducing products like this. They'll be able to learn even more about you than they already can.
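To see why location data is such rich profiling material, consider how little analysis it takes to recover someone's routine from 'check-in' records. The check-ins below are entirely invented, and the method is the crudest imaginable - just counting - yet it already yields a plausible guess at where someone lives.

```python
# Toy profiling from invented 'check-in' data: even naive counting
# recovers parts of a person's routine.
from collections import Counter

# (hour of day, place) pairs, as a location service might log them
checkins = [
    (8, "cafe_a"), (9, "office"), (13, "cafe_a"), (18, "gym"),
    (9, "office"), (13, "cafe_a"), (19, "home"), (23, "home"),
    (9, "office"), (12, "cafe_a"), (23, "home"), (7, "home"),
]

# Bucket each check-in as 'night' (10pm-7am) or 'day'
by_bucket = Counter()
for hour, place in checkins:
    bucket = "night" if hour >= 22 or hour < 7 else "day"
    by_bucket[(bucket, place)] += 1

# The most common night-time place is a fair guess at 'home'.
night = Counter({p: c for (b, p), c in by_bucket.items() if b == "night"})
print("likely home:", night.most_common(1)[0][0])
```

Real profiling systems are vastly more sophisticated, of course - but if a dozen data points and a counter can locate 'home', it's not hard to see what millions of check-ins enable.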
Do we care? Zuckerberg would suggest not, but there isn't much evidence to back up his claims. McNealy would say that it doesn't matter whether or not we care, there's nothing we can do about it. Personally I don't think either of them are right. Events like the fall of Phorm and Facebook's own forced abandonment of their Beacon system, and the 30,000+ Germans who put their names to a challenge to data retention legislation, all suggest that there is still an appetite for privacy - and for some more control over what's going on.
Will Facebook Places be a huge success? Will people just embrace it, without considering the downsides? It will be an interesting test....
Tuesday, 10 August 2010
A creditable approach?
Is the new UK government 'privacy friendly' after all? Some of the early signs have been very promising - from the headline-grabbing cancellation of the ID card programme onward - but the latest news out of Downing Street should start certain alarm bells ringing.
David Cameron's announcement of a new 'crackdown' on benefit fraud might be politically simple and far from contentious on the surface - indeed, the early reports in various news sources focussed on the ideas that few could complain about, as 'everyone' knows that benefit fraud is 'a bad thing' - but the ideas that lie beneath the surface are potentially far more contentious, even dangerous. The key is the way that the 'crackdown' is to be performed: through the use of data from credit agencies. As Cameron put it, "Why should government not use the same tools that are available to independent organisations?" Why indeed? Well, one question begs another - are those tools, available to and used by independent organisations, tools that should be used at all?
Credit agencies gather data on people and use that data to help other organisations make decisions that have a real, concrete impact on those people - and yet we really know very little about how they work and have very little control over how they work. What is clear is that they work through the gathering and analysing of data - data gathered from a whole variety of sources. Whether and how that data should be gathered and used is something that has not really been up for debate on a serious scale - and here we have David Cameron's government simply assuming that the use of the data is OK, and indeed endorsing its use. More than that, they're offering a potentially very lucrative contract to the credit agencies, offering even more incentive for them to gather more and more data about more and more people. Is that something that should be encouraged?
Benefit fraud has always been an easy target - one popular with politicians and tabloids alike - but is this just a starting point for more government use of this kind of data? And other kinds of data? A government that looked (and proclaimed itself to be) in favour of privacy and autonomy is taking quite the opposite approach with this announcement. Not a creditable approach at all.
Sunday, 11 July 2010
Quality matters!
Momentum seems to be building for the idea that internet access is a universal right - and more than that, that high quality internet access is a universal right. As seems often to be the case in the digital world, the lead is coming from Scandinavia - Finland have made broadband a 'legal' right, according to a report in the BBC. From the 1st of July 2010, every Finn has the right to access to a 1Mbps (megabit per second) broadband connection. As reported by the BBC, Finland's communication minister Suvi Linden said that "We considered the role of the internet in Finns' everyday life. Internet services are no longer just for entertainment."
That much is becoming clearer and clearer. We need internet access for proper access to government services, we need internet access to get the best prices for goods and services - indeed, there are some goods and services that are almost impossible to get without access to the net. We need internet access for access to information and news - and we need information and news if we are to fully participate in our society. What the Finnish government have realised is that it's not just 'access' that matters, but the quality of that access, if some of the 'digital divide' issues are to be dealt with - and that, surely, is what really matters.
From a human rights perspective, what is needed is an infrastructure that allows all people to fully participate in society. Making access to broadband a legal right doesn't just mean giving people the right to download music or watch YouTube videos fast, it means that they have an opportunity to take advantage of the huge benefits that the internet can bring - benefits that those on the 'advantaged' side of the digital divide are already enjoying. Try searching for legal advice as to your rights as an employee when your job is under threat - as so many are in the current economic climate - and you soon discover why broadband is important. If you have to sit there waiting and waiting when you don't even know what you're waiting for, it's all too easy to give up - and hence not to discover what your rights might be.
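The gap between 'access' and 'quality access' is easy to quantify. The page size below is an assumption of mine, not a figure from the Finnish legislation, but the arithmetic shows why a guaranteed 1Mbps minimum matters for the scenario just described.

```python
# Rough download times for a 2 MB web page (an assumed size) at
# dial-up speed versus Finland's guaranteed 1 Mbps minimum.
page_bits = 2 * 8 * 1_000_000  # 2 MB expressed in bits (decimal MB)

for label, bits_per_second in [("56k dial-up", 56_000),
                               ("1 Mbps broadband", 1_000_000)]:
    seconds = page_bits / bits_per_second
    print(f"{label}: {seconds:.0f} seconds")
# Dial-up takes nearly five minutes per page; at 1 Mbps it is 16 seconds.
```

Multiply that difference across a session of searching, clicking and backtracking for legal advice, and the 'waiting and waiting' problem becomes obvious.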
The Finns have taken the lead - but others will follow, and it is to be hoped that they will follow not just with bland statements or aspirations, but legal rights.
Saturday, 22 May 2010
Why make privacy complicated?
The current 'row' about Facebook's privacy settings, and the similar 'affair' about privacy on Google Buzz raise one significant question: why do companies like Facebook and Google make privacy so complicated? That, it seems, is one of the key problems, particularly in Facebook's case. According to the New York Times, Facebook's privacy policy has 50 different settings and 170 options, and the policy is longer than the US Constitution - closing in on 6,000 words.
Why? Is it complicated simply because privacy itself is complicated? Well, it's certainly true that privacy isn't as simple and clear cut as some might imagine, but does that really mean that privacy policies, and privacy options need to be so complex as to require a law degree to even begin to understand? It's hard to justify - and for companies that demonstrate immense creativity when it comes to designing new products and services, and excellent ways to make those products and services simple to use and easy to understand, it does seem quite surprising that they can't make their privacy policies easy to understand and their privacy options simple to use. They have the experience and the expertise to find a way - if they really want to.
So why don't they? Two reasons immediately spring to mind, one simple and in some ways reasonable, the other much more pernicious. The first is that until recently they simply didn't care enough about it - and didn't think their users cared enough about it. A privacy policy was something that only concerned lawyers (to cover their potential liabilities) and geeks (the sort who bleat on about privacy), and lawyers and geeks don't need things to be simple to understand and use - they need things to cover all the relevant issues in a logical and coherent fashion.... which leads to documents the size of the US constitution and 170 options and 50 different settings. What's more, they want their creative minds and experienced programmers to be working on the 'important stuff', not wasting time and money on something like privacy policies that no-one really cares about. So, from a business point of view, putting effort into making privacy simple and understandable would be wasteful. And boring, too, for the creative people.
The second possible reason is far more shady - maybe they want to make privacy complicated because they don't want people to know what they do and what the implications are? If an ordinary user has to wade through a document the size of the US constitution, and spend their time choosing between 170 options and 50 settings, the chances are that they simply won't bother. And if they don't bother, and leave the settings on what Facebook choose as the defaults, then everything's much happier, at least for Facebook.
I wouldn't like to suggest that the second is true - the first is far more likely. However, if the second does have an element of truth to it, we might start to see that over the next year or two. Public interest in privacy appears to be growing - the question is how companies like Facebook respond to it. If things change, and change quickly, that would tell us a lot. If they don't, and if there is more prevarication and less action, that would tell us something else entirely.
Monday, 10 May 2010
The politics of privacy - does privacy matter?
A few weeks ago I attended Privacy International's 20th anniversary party - a fascinating event, celebrating a truly admirable organisation which has done sterling work over the last twenty years, from a time when privacy seemed to be very much a 'niche' subject, one that most people didn't think mattered much at all. Over the last few years, however, that seems to have changed - privacy issues regularly make headlines, from lost data to the sell-out of Chinese dissidents, from ID cards to data retention. Emphasising that, one of the two keynote speakers at the party was Nick Clegg - and this was BEFORE the first of the UK's leadership debates, so before Clegg had etched himself on the public consciousness. He spoke powerfully, quite eloquently, and fairly passionately about privacy - and at the same time, since he and we all knew that the election was just around the corner, he used the occasion as an 'electoral address', suggesting that his party, the Lib Dems, was the best party for privacy, and would protect all our rights much better than the other two. No to ID cards. No to centralised databases for Data Retention. No to fingerprinting our children....
....well, now he's become the 'kingmaker', it will be interesting to see how high up his agenda privacy really is. Is it one of the points he makes to his potential coalition partners? Will he get his way? It's a very interesting test of both his political will and his judgment as to the views of his supporters. We should know in a week or two....
Tuesday, 6 April 2010
The business of rights...
Today sees the second reading in the House of Commons of the Digital Economy Bill, something I've mentioned in my blog more than once before. It is a bill that has been much discussed by privacy advocates and in the media, but that, to the frustration (or even fury) of many, is likely to get very little discussion time in the House of Commons, being rushed through instead in the run-up to the coming General Election. Even so, it is a very hot topic, and is very much in the news - and in the newspapers. Today, in advance of that second reading in the house, both 'sides' of the debate have taken out full page advertisements in the UK's national press. The trade union-led Creative Coalition Campaign (CCC) has come out in support of the bill, with their full page advertisement in the Guardian talking of job losses if the bill isn't passed, while the Open Rights Group and digital campaigners 38 Degrees have taken out their own ads, funded by donation, in both the Guardian and the Times, in opposition to the bill. A big fight - with the industry taking the canny approach of using the 'jobs' argument and leaving it to the unions, rather than being seen so directly as the big, bad, business wolf, the pantomime villains of the piece, the enemies of 'rights'.
It does seem that all too often the positions become polarised, and what should be negotiation for mutual benefit ends up in conflict. The story surrounding music is a prime example - the digital revolution should (and does) represent a vast opportunity for both the music industry and for individual consumers and creators of music, and yet what we have is a series of law suits and a big and often very antagonistic conflict. What's happening with music is echoed almost everywhere else - with privacy advocates in conflict with the big boys of the internet world like Google, Microsoft and Yahoo over their data gathering and retention practices as another clear example. Does it have to be that way? Of course a certain degree of tension is inevitable - and indeed in many ways beneficial - but does business really have to be 'an enemy' of individual rights? Sometimes it seems that way. I recently attended a lecture given by an expert practitioner in the field of Data Protection - he was talking to students about the realities of a career as a data protection lawyer. He was in most ways a very good and very positive person - and yet, listening to him, it was clear that, from the perspective of both businesses and the lawyers who work for them, data protection was seen as a barrier to be overcome (or even sidestepped or avoided) rather than in any sense a set of positive principles that could (or should) be for the benefit of the individual data subjects or indeed for society as a whole. From his perspective, rights weren't a beneficial thing so much as something that gets in the way of an enterprise's opportunity to make a profit.
From the other point of view, privacy advocates sometimes seem to take an equally antagonistic approach - and if you view someone as an enemy, they may find it all too easy to slip into that role. Attack someone and they are quite likely to defend themselves. It may seem necessary - indeed in the short term and in specific situations it may BE necessary - but in the long term, surely both sides would be better off looking at it from a more cooperative and positive perspective. Advocates might find it a better way to get what they want - and industries might find themselves new, better and more sustainable ways to build their businesses.
How to do this is the big question - we seem to know very well how NOT to do it, but not have much of a clue of the reverse. The starting point, however, has to be more talking. The idea of rushing through the Digital Economy Bill is the exact opposite of that. The government is effectively saying that enough discussion happened in the House of Lords, and so we don't need to talk more about it in the Commons. That can't be right, can it? We have two Houses for a reason - and the Commons is supposed to be the place where the people are represented. If the music industry wants these proposals to work, it ultimately needs to get the people on its side - and if it wants that, it needs to be willing to talk about it, and to see things a little more from the people's point of view. That, at the very least, would be a start.
Tuesday, 23 March 2010
Consent: a red herring?
I asked Peter Fleischer, Google's Global Privacy Counsel, a question about 'opt-in' or 'opt-out', in a panel session at the Computers, Privacy and Data Protection Conference in Brussels in January, to which he gave an interesting answer, but one that was greeted with more than a little dismay. In essence, his answer was that the whole question of 'opt-in/opt-out', and by implication the whole issue of consent, was a bit of a red herring. Unsurprisingly, that was not a popular view at a conference where many of the delegates were privacy advocates - but he did and does have a very good point. He went on to explain, quite reasonably, that if someone wants something online, they'll just consent to anything - scrolling down through whatever legalese is put in the consent form without reading it, then clicking OK without a second thought, just to get at the service or website they want. And he's right, isn't he? That IS what we all do, except in the most exceptional circumstances.
The question, then, is what can or should be done about it. Peter Fleischer's implication - one shared, it appears, by most in the industry - is that we should recognise the emptiness and unhelpfulness of consent, and not bang on so much about 'opt-in' or 'opt-out'. We're missing the point, and barking up the wrong tree. And, to a certain extent, I'm sure he's right. As things stand, consent and opt-in are not really very helpful. However, it seems to me that he's also missing the point - whether deliberately, as it suits the interests of his employers to have opt-out systems and allow such things as browse-wrap consent on the net, or because he thinks there's no alternative, I wouldn't like to say - in the conclusions that he draws, and the suggestions as to what we do next.
If consent, in its current form on the net, is next to meaningless, rather than abandoning the concept as useless wouldn't it be better to find a way to make it more meaningful? This is something that many people are wrestling with - including the EnCoRe (Ensuring Consent & Revocation) group - and something I shall be presenting a paper about at the BILETA conference in Vienna next week. The way I see it, the internet offers unprecedented opportunities for real-time communication and interaction, for supplying information and for allowing users choices and options - shouldn't there be a way to harness these opportunities to make the consent process more communicative, more interactive, more 'real-time', and to give users more choice and more options?
Peter Fleischer's employers, Google, actually do some really interesting and positive things in this field - the Google Dashboard and Google's AdPreferences both provide information and allow options and choices for people whose data is being gathered and used. The next stage is for these to be given more prominence, for right now they're pretty hidden away, and it's mostly just the hackers and privacy advocates that even know they exist, let alone use them well. If they do that, perhaps Google can help consent to become much more than a red herring, and instead part of the basic process of the internet.
Thursday, 18 March 2010
Now we're all at it... especially the good guys...
It's not just the German government who are using illegally acquired data to root out tax evaders - the latest revelation is that both the French and the UK governments are doing it too. A report from the Sunday Times, available online here, has revealed much more detail - and in particular that HMRC in the UK is very enthusiastic about getting hold of this illegally acquired data. A senior tax official is quoted as saying "It's fair to say that the prospect of getting hold of this information has generated some excitement here."
The whole thing raises a lot of issues - some of which I mentioned in my post of 7th March - but the German, French and UK governments are all seemingly happy to do it, and at least so far there seems to be very little resistance or outcry about their tactics. The ends justify the means, perhaps. Personally, I don't think so, and an experience I had in the classes I teach (Information Technology & the Law) suggested to me why. The class was about surveillance in the digital environment, and we were discussing the nature of enhanced CCTV, and how it, combined with information from systems like Oyster Cards, could allow coordinated tracking of individuals. I teach three classes, with a mix of different individuals with very different backgrounds. In the first class, the reaction to this kind of tracking could be described as general interest, but nothing more. In the second, it might even be described as enthusiastic - with some agreement with the view of a Police CCTV Liaison Officer that "The cameras are there to help the police and to protect the community. There is no way anybody should be afraid of them unless they have something to hide."
The third class was different - the first person to speak had a reaction that I hadn't really heard in the first two classes. His immediate response was that he didn't want the government to be able to track him - and when asked why, he almost laughed, because to him it was so obvious. Why was it obvious to him, and not to the others in the previous classes? Because he happened to have experience of living in a country with what is close to an authoritarian regime. People who live in those circumstances are naturally and appropriately more likely to be suspicious and distrustful of government motives.
Here in the 'safe' West, where the governments are suspected much more of incompetence than evil, we don't really seem to care that much about things like this. Right now, we seem to mostly 'trust' our governments, and imagine that they will only use the powers we grant them (or allow them to take for themselves) for good purposes - like catching tax evaders, or tracking terrorists. We rarely imagine that they might end up using them for entirely different purposes, purposes for which we would have much less sympathy. What would it take to make us realise the risks, let alone take them seriously? It would be nice to think that we could do so before they are taken too far.
Tuesday, 16 March 2010
Digital Economy Bill passes the Lords...
Just a brief note - further to last week's post, the Digital Economy Bill has now passed its third reading in the House of Lords, and is expected to be rushed through the Commons before the election (see the BBC report here). Do people really understand what's happening here? And more to the point, even if they do, do they care? There will be active campaigning against it for sure - not least by the Open Rights Group - and it will be interesting to see how much opposition to the disconnection provisions can be raised in the face of the Government's clear desire to get it done quickly. Will the UK demonstrate the kind of 'active community' that worked so well in Germany to deal with their data retention laws, as I mentioned a couple of weeks ago?
I certainly hope so - and at a time when an election is looming, the government should certainly be responsive to signs of popular resistance. Are we in the UK ready to stand up for freedom on and with the internet? Time will tell...
Thursday, 11 March 2010
All hail the Internet?
Two stories this week have emphasised the importance of the Internet in today's world.
The most recent, and perhaps the strangest, is the news that the Internet has been nominated for the Nobel Peace Prize, in a campaign mounted by Wired Italy - this is how the English language version of Wired is reporting it. Of course there have been stranger (and much more controversial) nominations over the years, but even so it does seem an unusual, though far from unwelcome, suggestion. The Internet can be (and at times has been) a wonderful tool for peace. As Riccardo Luna, editor-in-chief of the Italian edition of Wired magazine, puts it: "The internet can be considered the first weapon of mass construction, which we can deploy to destroy hate and conflict and to propagate peace and democracy. What happened in Iran after the latest election, and the role the web played in spreading information that would otherwise have been censored, are only the newest examples of how the internet can become a weapon of global hope."
The second story comes from the BBC World Service, who commissioned a poll covering more than 27,000 people in 26 countries across the digital divide, which came up with some headline-grabbing statistics, the most notable of which was that across the world, almost 80% of people now regard Internet access as a basic human right. There are many highly revealing findings, both on a country-by-country basis and giving more of a global picture, but the headline figure is certainly something about which we should stop and think. Internet access a basic human right, comparable with electricity and water? And this is something believed not just in technologically advanced countries, but right across the digital divide - countries such as Mexico, Brazil and Turkey most strongly supporting the idea of net access as a right.
So, two stories, one suggesting that the Internet should be considered for the Nobel Peace Prize, the other suggesting that access to the Internet is a fundamental human right - and what do we have happening in the UK, and seemingly quite likely to become law, but the idea of restricting or even cutting off internet access for people caught illegally file-sharing, in the shape of the Digital Economy Bill. Cutting off a fundamental human right, for something that, though illegal, is hardly the most egregious of crimes, doesn't exactly seem proportionate. Though people like Ian Livingston, British Telecom's Chief Executive, who has publicly raised his concerns about the Bill, along with various other industry leaders (including representatives of BT, Virgin Media, Carphone Warehouse and Orange), may have a clear vested interest in opposing these terms within the Bill, it is certainly something that many more of us should be concerned about.
Sunday, 7 March 2010
The good, the bad and the ugly side of privacy in Germany
Privacy advocates in the UK sometimes look across at Germany in wistful admiration - but is the story quite as rosy for privacy in Germany as it sometimes appears? Perhaps not, for though one recent event has shown Germany in its best light, as a beacon for privacy rights across Europe, another has demonstrated the opposite. Even Germany has an ugly side to how it deals with privacy.
First for the good. As reported widely (and in this case in out-law.com), this last week Germany's highest court has suspended that country's implementation of the EU Data Retention Directive by ruling that it violates citizens' rights to privacy. This suspension comes after a class action suit brought by 35,000 German citizens - a level of citizen activity that would be close to miraculous in the UK, particularly for an issue such as privacy. The law by which the German government implemented the Data Retention Directive has been found unconstitutional, failing to include the safeguards for individual privacy that are required under Germany's constitution. A victory for privacy, albeit neither a complete nor a permanent one, since the court did not say that it would be impossible to implement the Data Retention Directive in a constitutionally acceptable way, just that this particular implementation was unconstitutional. Nonetheless, it is something of which German privacy advocates will feel justifiably proud - and something many in other European countries will hope signals changes elsewhere. It is hard to imagine, however, that it will be possible to achieve a similar result in the UK.
Then for the bad - or at least the ugly. A story reported far less widely, at least in the UK, concerns the German government's use of data about German citizens' use of Swiss banks for the purposes of tax evasion. This data has been acquired through various methods, most of which would probably be considered illegal - certainly from the perspective of the Swiss banks. Reuters has reported on the subject - it is a somewhat complex story, but the essence is that private data detailing the banking activities of German citizens has been offered for sale to a number of German states. Some of that data may have come from insider whistle-blowers, but some has also come from hackers - and earlier this year the German Federal Government gave states the go-ahead to buy the data if they wish, whether or not it was obtained legally. At least one state, North Rhine-Westphalia, has bought the data and is using it to flush out tax evaders. As Reuters reports, nearly 6,000 German tax evaders have owned up to their tax evasion as a result of this evidence - and more could still come out of the woodwork. "If we get a signal from the politicians that it'll only be possible for people to come clean this year, then we could have another 5,000 doing so with corresponding additional revenues," DSTG head Dieter Ondracek told Reuters. "Then a billion euros could be possible."
This is not the first time Germany has bought illegally acquired private data. Two years ago, something similar happened with bank data from Liechtenstein, effectively forcing the principality to relax its previously stringent bank secrecy laws. The current affair over Swiss banking data might have a similar effect on banking rules in Switzerland, though that could be a long way off - yet already the Swiss have complied with a US request over tax evasion, and, as reported by Reuters, Switzerland's justice minister questioned on Sunday whether tax evasion should continue to be treated as a misdemeanour rather than a crime.
It is hard, of course, to generate much sympathy for people evading tax through bank accounts in Switzerland - but that should not blind us to the significance of the events that are taking place. It is not so much the nature of the data that is significant as the way in which it has been acquired. Obtaining data through official requests from one government to another, as in the US case, is one matter; paying money for data acquired illegally, quite possibly through hacking, is quite another, and sets a very uncomfortable precedent. Moreover, it provides a new and potentially large incentive for hackers to go after this kind of data. And if this kind of data, why not other data? Aside from the obvious problem of Germany's obligations as a signatory of the Cybercrime Convention, there is an awkward parallel with another recent event - the enormously publicised hacking of the Gmail accounts of Chinese dissident groups. The Chinese government, of course, vigorously denies any involvement in the hack - but if it were offered data on illegal groups acquired by hacking, how different would its buying that data be from the German government's purchase of this Swiss banking data?
From the perspective of the two governments, they are simply seeking to root out people involved in illegal activities: for the Germans, tax evaders; for the Chinese, people involved in subversive (and illegal) activities. And in both cases, the fact that money can be made from selling this kind of data cannot help but be an incentive to try to acquire it. People in the West may have far more sympathy for Chinese dissidents than for German tax evaders, but in some ways the principles are very much the same. Do we really want to set that kind of precedent?
Saturday, 6 March 2010
Welcome
Welcome to the Symbiotic Web blog... where I will post thoughts and stories relating to privacy, autonomy and the web, and in particular stories related to the idea of the symbiotic web. This will be an occasional blog, posting when stories arise rather than on a regular schedule. The contents will mainly be musings and suggestions, and will in general represent my opinions and views rather than academically rigorous research!
Subscribe to:
Posts (Atom)