Sunday, October 21, 2018

First question...

I've been talking to a lot of nonprofit and foundation folks + software developers lately. The good news is that these two communities are starting to work together - from the beginning. But there is a long way to go. Just because you're working in or with a nonprofit, social sector, or civil society organization doesn't mean that unleashing the most sophisticated software and data analytic techniques is a good thing. In fact, using cutting-edge algorithmic or analytic techniques that haven't been tried before in an effort to help already vulnerable people is quite possibly a really bad idea.

I've come to believe that the first question these teams of well-meaning people should ask about whatever they're about to build is:
"How will this thing be used against its intended purpose?"
How will it be broken, hacked, manipulated, or used to derail the good intention it was designed for? If the software is designed to keep some people safe, how will those trying to do harm use it? If it's intended to protect privacy, how will it be used to expose people or direct attention to them in some other dangerous way?

Think about it this way - every vulnerable community is vulnerable because some other set of communities and structures is making it that way. Your software probably doesn't (and can't) address those oppressive or exploitative actors' motives or resources. So when you deploy it, it will be used in a continuing context of intentional or secondary harms.

If you can't figure out the ecosystem of safety belts and air bags, traffic rules, insurance companies, drivers' education, and regulatory systems that needs to exist to make sure that whatever you build does more good than harm, ask yourself - are we ready for this? Because things will go wrong. And the best tool in the wrong hands makes things worse, not better.
