Does privacy protect "the right to fail"? And the vexing problem of privacy harms.

An interesting post on Lawfare quoting David Hoffman on the right to privacy as the “right to fail”: 

In the past, I have discussed the European Commission’s “Right to be Forgotten” proposal, and the issues with trying to provide a comprehensive right to wipe a record clean. I have argued individuals need a sphere of privacy where they know they can make mistakes, without those errors following them for the rest of their lives. Individuals will shy away from risky or provocative ideas and efforts, if they fear organizations will use those activities to discriminate against them forever. These provocative ideas challenge the status quo and are often what is needed to break away from conformity and innovate. Technology companies are familiar with this need for space to allow employees to innovate, and many structure their performance review systems to create the ability for individuals to take risks.  I call the need for this space for innovation, “The Right to Fail”.

I appreciate new and thoughtful attempts at defining the value of privacy, and Hoffman’s idea has a ring of truth to it. 

This brings me to another topic: the vexing problem of privacy harms.  The greatest failure of privacy scholarship, in my opinion, is that "privacy advocates" have not articulated in simple terms (to the public or to any other audience) the value of privacy and the harm from undermining it.

I’m not suggesting there is an easy solution to this problem, but I have some thoughts about its sources.  There are several reasons privacy harms and benefits are difficult to articulate, including the following:

(1) In addition to being an individual right, privacy is (in the most important ways) a collective or system-based right, and the harm from violating privacy rights and the benefits from protecting them are apparent only in the aggregate.  That makes these harms and benefits more difficult to articulate in simple terms than harms suffered by an identifiable individual.

In this sense, privacy is like voting – it may be a relatively small societal harm to prevent one person from voting, but restricting the right to vote will, in the aggregate, fundamentally change the democratic nature of the system we live in.  In the same way, taking away a bit of privacy from one person might not be a huge deal, but curtailing privacy rights across the board may fundamentally change the type of society we live in – for example, by discouraging innovation, experimentation, or dissent.  

To be sure, the concept of privacy as a collective or systemic right is hardly new.  Julie Cohen’s book Configuring the Networked Self and Dan Solove’s recent book Nothing to Hide each cover some of the theory behind this understanding of privacy. 

(2) A second possible reason privacy harms and values are hard to articulate is the boiling frog problem.  Like the proverbial frog that doesn't notice the water heating until it's too late, we may not perceive the harm from undermining privacy until it's too late.  This is related to (1) above – we may not take notice of incremental encroachments on privacy rights, but we may find (hopefully not too late) that the delayed, aggregate harm to the system is very great indeed.

(3) A third reason privacy harms and values are difficult to articulate is that the technology just isn't there yet.  Believe it or not, we're still at the beginning of the road when it comes to effectively collecting and processing the mountains of personal and public data in the world.  Just as we may have to wait for technology to catch up before we see the full value of that collection and processing, we may also have to wait to see the full scope of the harms that could result.

These are just quick thoughts. 

