So it's no wonder that I have thought experiments on my mind. Yesterday, when I opened The New York Times (the actual paper edition), I read a range of stories. Sometimes when I read the paper my mind plays tricks on me - like, did the person who wrote the story on page A1 read the story on page B8? Yesterday, several stories in the paper seemed to be speaking to each other: the first was on robots and jobs, the other on artificial intelligence research.
Here's where my mind went (bear with me, this will be about philanthropy and civil society).
First, some background:
This year, while I was working on the Blueprint, I kept wanting to do more imaginative work on digital technologies. It's one reason I regularly read newsletters from Edge.org and the Singularity Institute, go to campus lectures on neural networks and the blockchain and other stuff I don't understand, and am a member of the Long Now Foundation - all efforts to try to learn about far-edge technology. But the far edge of technology is not the right topic for the Blueprint, since many people who read it are still coming to grips with hashtags and chat apps. So I keep a notebook of thoughts on edgy technology and civil society, trying to figure out what to do with them. This year, one of the pre-readers of the Blueprint said to me, "You know, this is all well and good, but what about the really interesting technological stuff - like neural networks and pattern recognition and AI and robots, and what not?"
So he and I went off and had coffee. And we're trying to figure out what to do with my notes, his observations, and the intersections and implications of cutting-edgier science and technology for philanthropy and civil society.*
Reading the two news stories noted above, I learn that many economists are no longer sure that technological advances (such as robotics) will create more jobs than they destroy. And then, turn a few pages, and I read about a 100-year study of the real effects of artificial intelligence. That is, the study begins now and will run for 100 years. Where will the study take place? Stanford University (with lots of academic partners). It's called AI100.
So how do you reconcile these two thoughts - "tech is changing the economy but not augmenting it" and "let's invest now in something that will run for 100 years"? You make a gift to a nonprofit. Admittedly, a nonprofit with a large endowment, but who is to say that this university (or any institution) will withstand the very forces this study aims to examine? In other words, if you make a 100-year bet to understand a technology's impact, how do you know the technology won't destroy the place where you made the bet before the century is up?**
Now, the thought experiment (When AI ate philanthropy)
Assume some folks begin to apply machine learning, artificial intelligence, and data analytics to the increasing amount of quantitative data being generated by and about social organizations. This is a completely plausible step from where we are now - what with the effective altruism movement, randomized control trials, and the emerging data infrastructure for philanthropy. One possibility is that such a trajectory leads to the hedge fund version of donors - "quant givers." Guided by algorithms and data (replacing program officers and philanthropy advisors), machines would match a donor's dollars with social causes. If we assume that the capacity to do this (at least in the short term) would take a lot of money to build out, we can also assume that the charitable dollars would be coming from wealthy people (or perhaps pools of money from donors, pushing further on the hedge fund metaphor), and so the gifts would also be large. As such, they'd be attractive to organizations, which would try to fit themselves into the models (just as organizations "optimize their search terms" to fit Google search algorithms). Because the algorithms "learn," they get better and better at finding and matching, data crunching and report generating. They create their own powerful, self-sustaining feedback loop.
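To make the mechanics of that feedback loop concrete, here is a toy sketch in Python. Everything in it is hypothetical - the metric names, the weights, the organizations, and the crude "learning" step are invented for illustration, not a description of any real system.

```python
# A toy model of "quant giving": an allocator scores nonprofits on the
# short-term metrics they report, gives to the top scorers, and then the
# nonprofits shift what they report toward whatever the model rewarded.
# Every metric, weight, organization, and number here is invented.

METRICS = ["meals_served", "test_scores", "policy_wins", "basic_research"]

# The allocator starts out favoring easily counted, short-term outputs.
weights = {"meals_served": 0.5, "test_scores": 0.4,
           "policy_wins": 0.08, "basic_research": 0.02}

# How strongly each (fictional) nonprofit reports on each metric (0 to 1).
nonprofits = {
    "Food Bank":        {"meals_served": 0.9, "test_scores": 0.0, "policy_wins": 0.0, "basic_research": 0.1},
    "Tutoring Network": {"meals_served": 0.0, "test_scores": 0.8, "policy_wins": 0.1, "basic_research": 0.1},
    "Advocacy Group":   {"meals_served": 0.0, "test_scores": 0.0, "policy_wins": 0.9, "basic_research": 0.1},
    "Research Lab":     {"meals_served": 0.0, "test_scores": 0.1, "policy_wins": 0.0, "basic_research": 0.9},
}

def score(report):
    """Score an organization by how well its reporting matches the weights."""
    return sum(weights[m] * report[m] for m in METRICS)

for year in range(5):
    budget = 1_000_000
    scores = {name: score(report) for name, report in nonprofits.items()}
    total = sum(scores.values())
    grants = {name: budget * s / total for name, s in scores.items()}

    # Feedback, part 1: the "learning" step nudges the allocator's weights
    # toward the metrics reported by the organizations it just funded.
    for name, report in nonprofits.items():
        for m in METRICS:
            weights[m] += 0.1 * report[m] * grants[name] / budget
    norm = sum(weights.values())
    weights = {m: w / norm for m, w in weights.items()}

    # Feedback, part 2: nonprofits tilt their reporting toward whatever
    # the model currently rewards ("optimizing" for the algorithm).
    top_metric = max(weights, key=weights.get)
    for report in nonprofits.values():
        report[top_metric] = min(1.0, report[top_metric] + 0.05)

    print(f"Year {year}: " + ", ".join(f"{n} ${g:,.0f}" for n, g in grants.items()))
```

Run it and the weights and the grants drift, year over year, toward whatever was already being counted - which is the self-sustaining loop described above.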
It's possible such algorithmic philanthropy could drive enough resources, and draw enough attention from fund-seeking nonprofits, that the feedback loop grows into a larger force - one that begins to change the kinds of measures and metrics that lots of nonprofits track and provide. This leads to behavior change across the vast swath of nonprofits, changing the "marketplace" of options available to smaller, more passion-oriented (or simply less data-driven) donors (i.e., most of us). What would it do to the measures and data being collected and fed into the machines?
How much would it take before the quant-driven feedback loop effected change throughout the ecosystem of nonprofits? How would parallel movements of impact investing and performance measures, earned revenue and double bottom lines play into this story? How would the more personal, direct-involvement approach of most donors counteract it? What countervailing influence could intuition, expression, personal passions, minority voice, and donor choice have on this feedback loop? Just how far can we rationalize/algorithmically structure giving? And how far should we? What if we took all the humanity out of philanthropy?
One likely answer to that question is that the measures would focus ever more on quantifiable, short-term outputs that can be easily collected. From the perspective of the algorithm, it would be harder to justify longer-term investments in hard-to-measure activities (and it would probably skip over startups with no metrics at all). Long-term enterprises and startups might not even "make it into the data" being used by the algorithms.
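As a small, invented illustration of that selection effect: a pipeline that only admits organizations with recent, quantifiable outputs never even sees the long-horizon work. All names and figures below are made up.

```python
# Invented example of the selection effect: a pipeline that only admits
# organizations with recent, quantifiable outputs silently drops the rest.

candidates = [
    {"name": "After-School Meals",   "last_quarter_outputs": 12000},
    {"name": "Policy Analysis Shop", "last_quarter_outputs": None},  # hard to measure
    {"name": "Brand-New Startup",    "last_quarter_outputs": None},  # no track record yet
    {"name": "Endowed University",   "last_quarter_outputs": None},  # 100-year horizon
]

# The filter looks neutral, but it encodes a preference for the measurable:
# anything without numbers never even "makes it into the data."
fundable = [c for c in candidates if c["last_quarter_outputs"] is not None]

print([c["name"] for c in fundable])  # -> ['After-School Meals']
```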
What falls into this category of long-term, hard-to-measure charitable opportunities? Well, lots of things: advocacy, policy analysis, and basic research. Oh, and endowed universities.
So, are 100-year research efforts on technological change even possible? How do we factor in the effects of these digital technologies on the institutions - like nonprofit universities - that house the research? What happens when the thing being researched consumes the place that conducts the research?
SIDEBAR
*What do you think we should do? A new blog? An annual look at just tech and philanthropy? A "hype cycle" analysis? A two by two?
**This paradox, by the way, is one of the reasons I believe so strongly in focusing on the effects of digital innovation on civil society. Most studies of digital change look at their implications for business or government and assume that nonprofits will just keep on keeping on. I think that's nuts.
Comments:
+1 for finding a home for the bleeding edge tech + philanthropy conversation (per your Sidebar comment).
I'd love to see the tech + philanthropy equivalent to this thread:
https://news.ycombinator.com/item?id=8308666. It's not interesting because of the leading blog post per se, but because of the rich discussion that takes place afterward.
If you're up for a little experimentation, may I suggest using Medium to write up your first blog post on the topic, sharing it on Hacker News as well as your usual outlets, then watching to see where people go to discuss.
If the readership is high but the discussion light, it might be a good indication of the need to concentrate the discussion via a "Hacker News for Digital Civil Society". That could be created in a weekend thanks to some great open source tools, e.g. http://www.telesc.pe/
Thanks for pushing the conversation!
Chad
Awesome! Thanks for the encouragement - this scratches yet another itch, which is "What can I use Medium for?"
I'm heading off for 2 weeks of reading and writing time, so will see what I can pull off. Would welcome your insights on draft or offline also if you're up for it.
Happy Holidays
Lucy
Absolutely Lucy. Medium's built-in tools for soliciting feedback on drafts are actually pretty good.
Happy writing!
Big fan of this concept - mostly because creating the equivalent of the SEC and NASDAQ for the third sector ought to have significant ramifications, mostly positive. What people do with it is open-ended, but reliable infrastructure is required. IRS review of 501(c)(3)s is nowhere near what is needed for compliance that gives quant-level comfort.
Interesting - I wasn't even thinking of the regulatory/oversight infrastructure needed on this, since I was so disheartened by the inhumanity of it.
Lucy
Chad,
you wrote: If the readership is high but the discussion light, it might be a good indication of the need to concentrate the discussion via a "Hacker News for Digital Civil Society". That could be created in a weekend thanks to some great open source tools, e.g. http://www.telesc.pe/
This thing sort of exists: DataLook. We've built it with Telescope. The focus is on data-driven projects for social good. If someone posts a project we invite the project owners to the discussion. The idea is to focus on projects so that people can get inspired and replicate them in their community.
Would love to hear your thoughts on our site (tobias[at]datalook.io).
Tobias
This is a very interesting thought exercise, Lucy. The short answer to your question--"what would happen if we took all the humanity out of philanthropy"--is that we would need to call it something else, since the origins of the term would no longer apply. But I honestly don't think technology, quantitative techniques, supercomputers and algorithms will ever succeed in completely automating social investment. This is partly because of the dialectical way in which technologies and counter-technologies develop, partly because of the capriciousness of human behavior, and partly because s@#* happens. In a way, philanthropy is already subject to a kind of primitive tunnel vision, based on over-reliance on personal networks and the confidence that philanthropists know all the information and people they need to know. Since it is difficult to imagine anybody launching a 100-year funding initiative, it might be interesting to try to answer your many questions by looking 100 or even 50 years back into the past. Venerable institutions like the one you mention, Stanford, have been around for even longer and weathered a lot of changes--airplanes, space travel, computers, plastics, atom bombs, electric guitars, et al.--that surely seemed game-changing in their time.
Somehow a William Gibson quote seems appropriate: “Hell of a world we live in, huh? (...) But it could be worse, huh?" "That's right," I said, "or even worse, it could be perfect.”
Thank you for your thought provoking comments, Lucy.
Paradoxically, quant optimization of philanthropy models is at cross-currents with digital civil society participants' desire to participate and leave their own mark. (As you have said, the decisions are highly emotional rather than numbers-driven.) However, there is certainly room for consolidating goals (and sharing best practices) within the fragmented world of philanthropic missions. Big data feedback loops could certainly be useful in drilling down on what donors feel - on balance - is most important. The common goals could be more effectively defined and grouped, then marketed and executed. With bigger scale being thrown at these goals, perhaps new paradigms for solving them could be created.
Tim