Thursday, November 21, 2019

Artificially Intelligent Associations

My next big research project (I hope) will be coordinating a distributed team of researchers from many disciplines to better understand how our digital dependencies influence our associational opportunities and rights. For example:
  1. How do the personas that data-driven algorithms create for us align with, or not, how we see ourselves and how we associate? 
  2. How does platform control of information visibility bound our associational opportunities? 
There are many other questions, and I'm working on putting together both a working group of scholars and a more complete outline of the project (feel free to contact me if you're interested).

As with all of my research, I hope to do as much of this as possible "in public": gathering, sharing, thinking, and revising with interested parties. Here's video of one recent conversation I moderated on what I'm calling (because of the lovely alliteration) Artificially Intelligent Associations. It was recorded at Stanford University's HAI (Human-Centered Artificial Intelligence) Conference on Ethics, Policy and Governance, October 29, 2019.

Featured participants are Eileen Donahoe, Terah Lyons, Sean McDonald, and Jasmine McNealy.

Wednesday, October 16, 2019

Job boards for social sector, public sector, nonprofit, tech and policy jobs

Students often ask me for information on jobs. There's a noticeable (and welcome) uptick in interest in jobs in the public or social sectors at the intersections of policy, analysis, technology, and data. I asked Twitter for relevant job boards and here are the resources I received.*

Internet Freedom Festival Job Board

Tech Jobs for Good

Open Gov Jobs

Internet Law and Policy Jobs

Slack channel for jobs via SimplySecure

Giving Compass

Giving Tech Labs

Ben Green's List

New America Public Interest says it has resources

Code For America Public Interest Job Board

Newsletter from Justice Codes often has great job announcements.

I'll update as I can. Let me know what else I missed.

*(No verification, endorsement or even claim that there are shared definitions at work here) 

Monday, June 17, 2019

"Hey (random woman's name applied to a connected digital speaker), donate money to...."

I heard this story last week, from the mother of a toddler.

The kid is home, playing with Grandpa. Kid is just past the peek-a-boo stage, now experimenting with hide-and-go-seek. Like most kids of this age, hiding generally involves standing on the other side of a chair or putting a piece of paper on her head. Not really hidden. But Grandpa didn't get the message. When it's his turn to hide he goes into the other room. Toddler takes hands off eyes, looks around. Doesn't see Grandpa. Looks a little bit worried but doesn't move. Waits another minute. Shouts out, "Alexa, where's Grandpa?"

I'm going to let you sit with that.

It led us to all kinds of questions, including about advertising on these devices. Others in the group said it's small, but growing. This article says it's already here and that we (the people) like it. All of us agreed it seems inevitable.

Question for nonprofits: you ready to pay whatever it will cost to make sure you are the one (and only) response when someone starts asking, "Alexa (others), who should I donate money to?"

Question for the rest of us: You really want some engineered algorithm (no doubt based on who paid the most) telling you where to give your money?


Monday, April 08, 2019

Nonprofits and artificial intelligence

I’ve participated in a lot of conferences, panels, discussions, etc. about “nonprofits and AI,” “foundations and AI,” “AI for good,”* and so on. The vast majority of them miss the point altogether.

It’s not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations will simply buy software that applies algorithmic analysis and some AI to a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, the ethical implications, the effects on staff and board, and so on. Those are important software vendor considerations, and every organization should understand how these tools work, but they're hardly worth a conference panel (IMHO). They're not the “black magic” or “sector-transforming phenomenon” that a conference organizer would want you to think.

The REAL issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that FUNDAMENTALLY shift how the people and communities served by nonprofits/philanthropy are being treated.
  • Biased policing algorithms cause harm that nonprofits need to understand, advocate against, deal with, and mitigate. 
  • AI-driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and, at worst, remediate or, at best, improve upon. 
  • The use of AI-driven decision making to provide public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s Automating Inequality). 
  • Algorithmically-optimized job placement practices mean job training programs and economic development efforts need to understand how online applications are screened, as much as they help people actually add skills to their applications.
This essay on “The Automated Administrative State” is worth a read.

The real question for nonprofits and foundations is not HOW will they use AI, but how is AI being used within the domains within which they work and how must they respond?

* I try to avoid any conversation structured as “_[blank]_ for (social) good,” where the [blank] is the name of a company or a specific type of technology.