
Beware the creeping algorithms softly invading your privacy

Hot on the heels of the EU Court of Justice ruling around the 'right to be forgotten', I attended the Auckland Writers' Festival debate with Iraqi/English scientist Jim Al-Khalili, Dutch historian Frank Dikotter, New Zealand privacy expert Bob Stevens, and British broadcaster, writer and comedian Sandi Toksvig.

Under scrutiny was the topic 'Is privacy an outdated concept?' and, while Jim and Bob made some interesting points around the nature and history of privacy, Sandi and Frank won the night (for me at least) with their arguments in favour of retaining and strengthening rights to privacy, both online and off.

Each keystroke or voice command we make drips a little more information into the data pool so others may trawl through our activities. Some say that by participating online we relinquish our privacy by default. I have to agree with Sandi's points concerning thresholds. She said that when invited to dinner by her New Zealand hosts, she did not automatically run to their bedrooms to rifle through their drawers, nor did she invade the kitchen and help herself unbidden to items in the pantry. We have, she said, as humans, an innate understanding of thresholds - the place or moment where we do not cross over but instead respect the rights, wishes and privacy of those with whom we connect.

The problems that I see ahead will come not so much from human intervention or manual data capture. More trouble is likely from automated algorithms programmed to collect and connect data but insufficiently capable of accurate semantic analysis. The result is that the data collected on individuals and organisations will lead to a 'two-plus-two-equals-nine' scenario, where misinformation is formed from the inaccurate parsing of data. My real identity will be blurred by the machine's interpretation of who I am and what I do.

Already, through simple things like targeted ads, we are picked out and compartmentalised through automated data collection. While it is laughable when those ads come up in your Gmail, inaccurate and way off the mark, it is a slightly more sinister moment when an algorithm throws up a result based on a misinterpretation of words.

Google currently has under development what is possibly the biggest artificial intelligence lab in the world. IBM's Watson - which famously won the TV quiz show Jeopardy - now has five companions, each of which is deployed busily giving answers to things. For healthcare professionals, one Watson provides fast answers to complex medical questions; in other realms it brings what IBM terms 'the cognitive experience' to finance, marketing and service industries.

It is truly amazing that machines can help in the ways they are being tasked, but the critical factor still missing is discernment. And it is this lack of discernment that I believe threatens our privacy even more than our own ability to overshare, overwrite and overdo things.

My smartphone probably knows me better than some of my friends by virtue of the information it has collected around my patterns of behaviour and my interaction with the technology. But I resent technology making assumptions based on interaction and those assumptions, if made and transmitted, breach my privacy. Put simply, machines do not understand or respect thresholds.

I suppose, in pondering this question since the ruling and the debate, my concern is not so much that we have a right to be forgotten as that we have a right to quietly exist, without assumption or intrusion. And that is the condition that is most under threat.
