It’s not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations may be buying software that applies algorithmic analysis and some AI to a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, the ethical implications, the effects on staff and board, and so on. Those are important questions, but hardly worth a conference panel (IMHO): they are software vendor considerations, and it is important for all organizations to understand how these things work, but they are not the “black magic” or “sector-transforming phenomenon” a conference organizer would want you to think.
The REAL issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that FUNDAMENTALLY shift how the people and communities served by nonprofits/philanthropy are being treated.
- Biased policing algorithms cause harms that nonprofits need to understand, advocate against, deal with, and mitigate.
- AI-driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and, at worst, remediate or, at best, improve upon.
- The use of AI-driven decision making in providing public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s *Automating Inequality*).
- Algorithmically optimized job placement practices mean that job training programs and economic development efforts need to understand how online applications are screened, just as much as they help people actually add skills to their applications.
The real question for nonprofits and foundations is not HOW they will use AI, but how AI is being used in the domains where they work, and how they must respond.
* I try to avoid any conversation structured as “_____ for (social) good,” where the blank is the name of a company or a specific type of technology.