IMPORTANT UPDATE: Please read this response and thread, and my apologies for first using the logo without permission. I very much appreciate having this broader perspective brought forth by @CassieRobinson, Stefaan Verhulst, and others.
Original post
Since we started the Digital Civil Society Lab I’ve been invited to countless conferences, workshops, and philanthropic or corporate launches of “some kind of tech” for “some kind of good.”
I always say no. I refer to it as the no "blank for good" rule. The framing is entirely wrong. And it's not just a little wrong; it's a lot wrong. Everything about it, from the grammar to the implications for humanity, is wrong. And industry's recent responses – a thousand conferences and initiatives to put ethics into tech – get the grammar right, but the solution is still wrong.
Here's a short list of reasons why I ask you to reconsider your tech/data/machine learning/cloud/AI or other technology/computational effort for social good, organized from snarky to serious (and back to snarky):
1) First of all, enough with the "good" language. If you're not even willing to do the work of providing a real definition of "good," or of understanding the specific challenges and strengths of civil society or the public sector, then stop right there. There is no universal definition of good. This phrasing is hollow marketing rhetoric for selling a product (perhaps at reduced rates) to civil society or public sector agencies.
2) Repurposing a technology built for commercial purposes (and that's what the blanks almost always are) for civil society or public sector work ignores that those domains have fundamentally different values and purposes – of equal importance to, but different from, commerce. Selling ads and providing justice are two entirely different domains, inside and outside of the algorithm. You can't simply transfer a tool or way of working designed for commercial purposes to these other systems and not cause harm.
3) Another level of #2 above: context matters. In addition to mattering IN AND OF itself, the context of civil society or public service shapes the values, incentives, purposes, motivations, participants, affected populations, and cost-benefit analyses of those sectors. Ignore it, and you get a public sphere full of lies and propaganda, "efficient" decision-making systems that amplify racism, procedural approaches that trade fairness for scale, "security" systems that make people less safe, and accelerators for injustice.
4) Commercial technologies prioritize scale and efficiency. Those values rank below justice, equity, truth, participation, a healthy environment, beauty, deliberation, and MANY other priorities in civil society and the public sector. What commercial applications call externalities, civil society and the public sector exist to address.
5) The lack of a universal definition of "good" applies to ethics as well. There is no universal ethical frame – no "ethics" you can pull down off a shelf and wash over your existing organizational (usually commercial) priorities. A commitment to ethics is a commitment to discuss, debate, decide, and enforce a set of values, purposes, conflicts, tradeoffs, and losses. It is a debate about values and implications and choices and bad outcomes and structural recourse for those who are harmed (groups, societies, communities, collectives – not only individuals). Consequences matter – internal and external. If you're not willing to go there, well then, it's marketing.
6) I think the focus on "ethics" in tech has largely become anti-regulatory rhetoric (especially when used within companies). Ethics matter – but they must be specified, debated, deeply integrated into the people, incentives, and structures of an enterprise, and learned and practiced throughout one's lifetime. They don't come on and off like clean room booties. And one officer or office of ethics inside billion-dollar companies? Not enough.
7) (A return to snark) The framing of "blank for good" is much more honest than its users realize, because it doesn't just imply, it basically declares, that every other use of the "blank" is for, shall we say, not good?
Simply from a grammatical standpoint, it's a subject-object problem – but of course, it's much more than grammar. It's not that civil society and democratic systems need tech; it's that tech (and all of the derivative disciplines noted above) needs to be purpose-built for humane values and a commitment to just use, and we need publicly determined redress and braking systems. There are some things we just should not do. We need to flip the script; change the subject and object of the phrase. The opportunity costs of not doing so are more hollow promises, more handwaving distractions, and more digital corporate capture and damage to our already deeply broken governing systems and communities.
As public awareness has grown about surveillance, bias, power asymmetries, lack of accountability or due process, and opacity in the digital systems we depend on, we've spent the last few years having conversations about ethics in tech. For all the reasons I outline above, this is not sufficient. We need to reverse the subject (tech) and the object (good), so that the work becomes teaching, designing, building, and releasing ways to put societal values and just procedures into tech, not putting tech into "good."
Here are some examples of what we need: Antiracism in AI. Justice in data. Equity in the cloud. Information symmetry in social media/search. Financial resources to support vulnerable people in designing, directing, and governing systems that serve their needs; where communities are recognized as powerful actors whose needs can stop tech that furthers inequities, impoverishes communities and the planet, sacrifices safety, prioritizes individual wealth accumulation over people's needs, or enables racism, misogyny, hate, and domination.
Please, invite me to the launch of those initiatives, conferences,
and philanthropic efforts. I’ll be there; I have a lot to learn.