It turns out that Facebook is manipulating your news feed and OkCupid is playing with your heart.
Neither of these digital algorithmic head games came as a surprise to folks in the digital know, but that doesn't make them OK.
And they did come as a surprise to most of us, who don't quite understand how software works and who still make assumptions about personal agency, control, and validity based on analog experiences.

This is a world of "black box algorithms," where software code determines what information we see and don't see. It's a world where consent means "not really" - in which, as one wag puts it, the biggest lie on the internet is the two words "I Agree." A new book, Virtual Unreality, makes it clear just how prevalent this "manipulation by software" really is. Be clear, though: the phenomenon is not new - what's new is the broader understanding that it is happening all around us.
At a recent innovation conference hosted by Stanford PACS and the International Red Cross and Red Crescent Society I called this moment in time "The Age of Digital Assumption." By that I meant a couple of things:
- For nonprofits - assume your volunteers will use digital tools (probably their own, as in "Bring your own device"). Assume someone, somewhere will create a new digital tool or apply existing ones to whatever crisis you are handling (see #DetroitWater). Assume you will not be in control of the way digital information is used to respond to any particular event or disaster.
- For people - assume your digital information is being collected, and that you don't really know what is being done with it. Assume that your privacy does not rank as high on the list of concerns of the organization collecting your data as it does on your own list. Assume that if you are giving information (or permission to gather information from your phone) such as your name, email, or location data, the organization asking for it is, in fact, collecting it and doing something with it (maybe just storing it until later).
Moving the issue of ethics from the fringe to the center is also part of the "Age of Digital Assumption." This age comes after what I might call the "Age of Digital Exceptionalism," in which we were still agog at all the things our "shiny objects" could do. Now we get to the harder question, namely, "Just because we can, should we?"
We're putting ethics at the center of the conversation in an upcoming conference hosted by the Stanford PACS Digital Civil Society Lab, the Harvard Humanitarian Initiative, and the Brown Institute at the Columbia School of Journalism, along with DataKind, PopTech!, the Responsible Data Forum, and a growing list of other partners.
Here's the basic question: What are the ethical parameters for civil society organizations using digital information and infrastructure?
Nonprofits often talk about trust and integrity as being central to their work. If so, do we want (expect) nonprofits to act differently with our data than we expect Facebook to act? Do we want (expect) nonprofits to communicate more clearly to us about what they do with our data? Do we want (expect) to be able to access civil society organizations and their services without compromising our own data (or the digital whereabouts of all the folks whose addresses and phone numbers are stored in our phones' address books)?
Similarly, do we want (expect) nonprofits to access and use remote digital data in different ways than other enterprises? If wealth managers are using satellite imagery for portfolio management, does that mean nonprofits can or should use the same data to determine levels of poverty? Or to find ways to better deliver medicines to remote areas with bad roads? How should funders think about ethical issues, and what role do they play in shaping behavior?
What, if any, incentives should be provided to encourage civil society's use of digital data for good? What, if any, limits should be put on civil society uses of digital data?
More about the conference is online here. It's invitation-only because of space limitations, but we'll be sharing resources on the website and conversation on Twitter via #EoD14.
What, if any, incentives should be provided to encourage civil society's use of digital data for good?
Data for Good features data-driven projects for social good - hopefully an incentive to replicate these great projects in your own community.
Thanks for this insightful post. The remark toward the end about wealth management companies using satellite imagery to assess their clients was provocative. Your next comparison is about nonprofits using similar tools to evaluate poverty, but it makes me think about nonprofits potentially using similar tools to assess prospective donors! If we're talking about the ethics of data in philanthropy and civil society, should we also be having a discussion about ethical limits of "prospect research"? Or does the end (philanthropic support of social-benefit orgs) justify the means? I imagine the professional organizations have guidelines or codes of conduct in this area. It might be interesting to bring them into the conversation.
Absolutely - "What are the ethical limits of 'prospect research'?" is a great question. Given how much information can be gathered, and the sheer value of that information, every business - nonprofit or for-profit - is mining social networks, satellites, sensors - you name it - for its own purposes.
One good thing about your example: it reminds us that the same technology (satellite or sensor, e.g.) can be used to gather info on the rich as well as the poor, with the same lack of consent. A strange equalizer - at least for now (I'd expect wealthy folks to figure out, or be provided with, all kinds of services that allow them more privacy, at a price).