Picturing privacy

Feb 23, 2012 14:44 · 748 words · 4 minute read privacy vome

Last night I spent a couple of hours talking in a group at a workshop organised by the VOME project - the broad topic was how online services might handle issues of privacy and data protection, and how that aspect of our online lives could be better managed.

That kicked off a train of thought that’s been latent for a while, and I ended up scribbling out an idea that I’ve wondered about on and off.

The basic concept is that privacy of information is not an absolute. It's contextual and situational - you might be comfortable discussing your most intimate medical details with your doctor, but you wouldn't have the same conversation with a casual acquaintance in a crowded bar.

The problem with many privacy policies and privacy controls is that they aren't granular enough to support that kind of contextual adjustment; and where granularity does exist, the impact of a given setting can be hard to ascertain. Tickbox lists of "allow this group of contacts access to this information" can be difficult to control, so there's a risk of over- or under-sharing.

That conversation got me thinking about how it might be possible to visualise the process and effect of managing information permissions. This is a very quick, rough, first attempt. It’s predicated on the idea that information about you exists on a continuum - from that which is most intimate and important to you as an individual, to the other extreme of not caring at all who knows a particular fact about you.

At the centre of the diagram is the information that you are least likely to want to share with others - an example of which might be intimate medical details. That's not to say that you won't share it in appropriate situations - just that those situations are unlikely to arise very often. The impact of sharing this type of information would also be very high - what impact would there be if the world at large knew your HIV status, for example?

As the diagram radiates out, the impact - and sensitivity - of the data decreases. Take financial data, for example - having your credit card details stolen is certainly not a good thing, but it's a recoverable situation. Cancel the card, file a fraud claim, and the damage is largely undone.

Location data (to me at least) feels less impactful than the previous two categories. It’s no secret that I live in the UK - my email address gives that away - and it’s not too hard to figure out that I live in Sheffield. Knowing that I’m in the fifth-largest city in the UK - is that a problem, per se? Probably not. I may want to control your access to my home address - but again, the impact of that knowledge being shared is relatively limited.

Then at the furthest reaches of the rings are contact details - email addresses and the like. These are effectively public, and in the case of services like Twitter, are public by default.
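To make the idea concrete, here's a minimal sketch - in Python, with entirely made-up ring names and example items - of the continuum as an ordered set of rings, from most to least sensitive. None of this comes from the diagram itself; it's just one way the structure could be represented.

```python
from enum import IntEnum

# Hypothetical ring ordering: lower value = closer to the centre = more sensitive.
class Ring(IntEnum):
    INTIMATE = 0   # e.g. medical details
    FINANCIAL = 1  # e.g. credit card numbers
    LOCATION = 2   # e.g. home address, city
    CONTACT = 3    # e.g. email address, Twitter handle

# Illustrative placement of information items onto the rings. The items and
# their positions are examples only - the whole point is that each person
# would arrange these differently.
my_placement = {
    "HIV status": Ring.INTIMATE,
    "credit card number": Ring.FINANCIAL,
    "home address": Ring.LOCATION,
    "city": Ring.LOCATION,
    "email address": Ring.CONTACT,
}
```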

Where the diagram starts to add perspective is when you plot individuals or organisations in relation to the levels of the information hierarchy. When you apply for life insurance, for example, you’re probably going to share some pretty intimate details of your medical history.

That's not necessarily avoidable - but it is going to make you look at the insurance company in a different light, and apply a different set of criteria when deciding whether you feel they're trustworthy or not. Whereas a random eBay seller knowing your home address - that's par for the course.
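Continuing the sketch above, plotting a party against the rings could be as simple as recording the innermost ring they reach - anything at that ring or further out is visible to them. The parties and their placements here are hypothetical, as is the helper function.

```python
# Hypothetical placement of parties: the innermost ring each one can reach.
party_reach = {
    "my doctor": Ring.INTIMATE,
    "life insurer": Ring.INTIMATE,
    "eBay seller": Ring.LOCATION,
    "Twitter public": Ring.CONTACT,
}

def parties_with_access(item: str) -> list[str]:
    """List the parties whose reach extends at least as far in as this item."""
    ring = my_placement[item]
    return [party for party, reach in party_reach.items() if reach <= ring]

print(parties_with_access("home address"))
# -> ['my doctor', 'life insurer', 'eBay seller']
```

Surfacing that list per item is closer to what the diagram is trying to do than a flat checkbox grid: it shows at a glance who has reached how far towards the centre.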

A key point is that the information elements that sit in each level will be specific to the individual. There are plenty of individuals who define themselves at least partially in their public personas through their HIV status, to go back to the medical example. But others won't, and would guard that information much more closely. Each is a perfectly acceptable approach - so the ability to tailor the placement of information within the levels is essential.

What I don’t think this approach does is create a one-size-fits-all means of plotting, once and for all, who should have access to what fragment of your data exhaust. That’s where I think the checkbox approach fails.

But it might help to visualise and surface the potential impact of sharing information, and provide a way of objectively assessing what information is most important to you as an individual.