So it's no wonder that I have thought experiments on my mind. Yesterday, when I opened The New York Times (the actual paper edition), I read a range of stories. Sometimes when I read the paper my mind plays tricks on me - like, did the person who wrote the story on page A1 read the story on page B8? This time, several stories seemed to be speaking to each other. The first was on robots and jobs. The other was on artificial intelligence research.
Here's where my mind went (bear with me, this will be about philanthropy and civil society).
First, some background:
This year, while I was working on the Blueprint, I kept wanting to do more imaginative work on digital technologies. It's one reason I regularly read newsletters from Edge.org and the Singularity Institute, go to campus lectures on neural networks and the blockchain and other stuff I don't understand, and am a member of the Long Now Foundation. All efforts to try to learn about far-edge technology. But the far edge of technology is not the right topic for the Blueprint, since many people who read it are still coming to grips with hashtags and chat apps. So I keep a notebook of thoughts on edgy technology and civil society, trying to figure out what to do with them. This year, one of the pre-readers of the Blueprint said to me, "You know, this is all well and good, but what about the really interesting technological stuff - like neural networks and pattern recognition and AI and robots, and what not?"
So he and I went off and had coffee. And we're trying to figure out what to do with my notes, his observations, and the intersections and implications of cutting-edgier science and technology on philanthropy and civil society.*
Reading the two news stories noted above, I learned that many economists are no longer sure that technological advances (such as robotics) will create more jobs than they destroy. Then, turn a few pages, and I read about a 100-year study of the real effects of artificial intelligence. That is, the study begins now and will run for 100 years. Where will the study take place? Stanford University (with lots of academic partners). It's called AI100.
So how do you reconcile these two thoughts - "tech is changing the economy but not augmenting it" with "let's invest now in something to run for 100 years"? You make a gift to a nonprofit. Admittedly, a nonprofit with a large endowment, but who is to say that this university (or any institution) will withstand the very forces this study aims to examine? In other words, if you make a 100-year bet to understand a technology's impact, how do you know the technology won't destroy the place you made the bet before the century is up?**
Now, the thought experiment (When AI ate philanthropy)
Assume some folks begin to apply machine learning and artificial intelligence, data and data analytics, to the increasing amount of quantitative data being generated by and about social organizations. This is a completely plausible step from where we are now - what with the effective altruism movement, randomized control trials, and the emerging data infrastructure for philanthropy. One possibility is that such a trajectory leads to the hedge fund version of donors - "quant givers." Guided by algorithms and data (replacing program officers and philanthropy advisors), machines would match a donor's dollars with social causes. If we assume that the capacity to do this (at least in the short term) would take a lot of money to build out, we can also assume that the charitable dollars would come from wealthy people (or perhaps pools of money from donors, pushing further on the hedge fund metaphor), and so the gifts would also be large. As such, they'd be attractive to organizations, which would try to fit themselves into the models (just as organizations "optimize their search terms" to fit Google's search algorithms). Because the algorithms "learn," they get better and better at finding and matching, data crunching and report generating. They create their own powerful, self-sustaining feedback loop.
It's possible such algorithmic philanthropy could drive enough resources, and draw enough attention from fund-seeking nonprofits, that the feedback loop grows into a larger force that begins to change the kinds of measures and metrics that lots of nonprofits track and provide. This leads to behavior change among the vast swath of nonprofits, changing the "marketplace" of options available to smaller, more passion-oriented (or simply less data-driven) donors (i.e., most of us). What would it do to the measures and data that were collected and fed into the machines?
How much would it take before the quant-driven feedback loop effected change throughout the ecosystem of nonprofits? How would parallel movements of impact investing and performance measures, earned revenue and double bottom lines play into this story? How would the more personal, direct-involvement approach of most donors counteract this approach? What countervailing influence could intuition, expression, personal passions, minority voice, and donor choice have on this feedback loop? Just how far can we rationalize/algorithmically structure giving? And how far should we? What if we took all the humanity out of philanthropy?
One likely answer to that question is that the measures would focus ever more on quantifiable, short-term outputs that can be easily collected. From the perspective of the algorithm, it would be harder to justify longer-term investments in hard-to-measure activities (it would also probably skip over startups with no metrics at all). Long-term enterprises and startups might not even "make it into the data" being used by the algorithms.
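For readers who like to see a mechanism spelled out, the feedback loop above can be sketched as a toy simulation. Everything in it is invented for illustration - two hypothetical nonprofits, one with metric-friendly work and one doing hard-to-measure work, and an "algorithm" that simply allocates a fixed pool of gifts in proportion to reported metrics:

```python
# Toy sketch of the "quant giving" feedback loop, under invented assumptions:
# gifts flow in proportion to reported metrics, and funded groups can invest
# in producing more of whatever metric got them funded.
NONPROFITS = {
    "measurable": {"metric": 1.0, "funding": 0.0},
    "hard_to_measure": {"metric": 0.2, "funding": 0.0},  # real work, weak numbers
}
POOL = 100.0  # charitable dollars allocated each round

for year in range(10):
    total_metric = sum(n["metric"] for n in NONPROFITS.values())
    for n in NONPROFITS.values():
        # The algorithm rewards whatever is easiest to count.
        n["funding"] = POOL * n["metric"] / total_metric
        # Funded groups grow their reported metric; underfunded ones stagnate.
        n["metric"] *= 1.0 + 0.1 * (n["funding"] / POOL)

share = NONPROFITS["measurable"]["funding"] / POOL
print(f"After 10 rounds, the metric-friendly group takes {share:.0%} of the pool")
```

Even with these made-up numbers, the share flowing to the metric-friendly organization only grows round over round - the loop compounds on itself, which is the whole point of the thought experiment.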
What falls into this category of long-term, hard-to-measure charitable opportunities? Well, lots of things: advocacy, policy analysis, and basic research. Oh, and endowed universities.
So, are 100-year research efforts on technological change even possible? How do we factor in the effects of these digital technologies on the institutions, like nonprofit universities, that house the research? What happens when the thing being researched consumes the place that conducts the research?
*What do you think we should do? A new blog? An annual look at just tech and philanthropy? A "hype cycle" analysis? A two by two?
**This paradox, by the way, is one of the reasons I believe so strongly in focusing on the effects of digital innovation on civil society. Most studies of digital change look at its implications for business or government and assume that nonprofits will just keep on keeping on. I think that's nuts.