You’ve probably heard about OpenAI, a new, billion-dollar nonprofit focused on artificial intelligence research that is good for humanity. In their own words:
“OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.
The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.”
In an interview posted in the Singularity University newsletter, Andrej Karpathy, one of OpenAI’s founding researchers and a Stanford doctoral candidate who has interned at Google and DeepMind, says:
“A lot of it comes from OpenAI as a non-profit. … It’s not clear that you would want a big for-profit company to have a huge lead, or even a monopoly over the research. It is primarily an issue of incentives, and the fact that they are not necessarily aligned with what is good for humanity. We are baking that into our DNA from the start.
Also, there are some benefits of being a non-profit that I didn’t really appreciate until now. People are actually reaching out and saying “we want to help”; …
OpenAI… encourages us to publish, to engage the public and academia, to Tweet, to blog. …. If something like [CRISPR which has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society] happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.
In the end, if there is a small chance of something crazy happening in AI research, everything else being equal, do you want these advances to be made inside a commercial company, especially one that has monopoly on the research, or do you want this to happen within a non-profit?
We have this philosophy embedded in our DNA from the start that we are mindful of how AI develops, rather than just [a focus on] maximizing profit.
It’s a lot of responsibility. It’s a “lesser evil” argument; I think it’s still bad. But we’re not the only ones “controlling” the field — because of our open nature we welcome and encourage others to join in on the discussion. Also, what’s the alternative? In a way a non-profit, with sharing and safety in its DNA, is the best option for the field and the utility of the field.”
So here we have a case of knowledgeable people recognizing a threat and deciding that the way forward is to create a nonprofit organization. This moment has some historical precedent. In 1955, Albert Einstein, Bertrand Russell, and several other scientists issued a manifesto about the dangers of nuclear weapons. It launched decades of Pugwash conferences and the anti-nuclear movement.
The founders of OpenAI also issued a manifesto. Back in January 2015, Stephen Hawking and Elon Musk, one of OpenAI’s key funders, signed an “Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence.”
We’ve spent decades monitoring, signing treaties about, building, and dismantling nuclear weapons. The Russell-Einstein Manifesto and Pugwash came after the U.S. had detonated two atomic bombs and the world could agree on the horror they created. Nuclear weapons still play a major role in global politics, but efforts to prevent their use and spread have mostly succeeded. My questions are not about nuclear power or AI per se; they are about the viability of the nonprofit organization in today’s world of shifting public, private, and corporate roles.
- Will a set of voluntary manifestos and a few nonprofits be enough to keep AI from harming people? Unlike the nuclear weapons case, the likely builders of dangerous applications are not nation states but corporations. And unlike nuclear weapons (or energy), the component costs are dropping, not rising, and there is no single component that can be tightly controlled (as enriched uranium can).
- Is the strategy of “out-R&D’ing” the commercial competitors via a nonprofit that will share some (most) of what it learns realistic? In most cases where for-profits and nonprofits compete, the capital structure favors the commercial players (see, for example, car sharing, medical devices, and pharmaceuticals).
- Is the governance model of nonprofits — no shareholders, public purpose mission, excess holdings limitations, nondistribution clauses — enough to direct research with potentially harmful applications away from those harms?
There will need to be other strategies to “control” dangerous AI. But the application of the U.S. nonprofit corporate model to a global challenge such as this provides a great moment to ask ourselves:
- What can the nonprofit enterprise form accomplish, and what can it not?
- What kind of structure can best manage intellectual, digital, and algorithmic resources for long-term public benefit?
- How might open source governance models augment the nonprofit enterprise form and where, if anywhere, do they conflict?
Unlike the machine learning experts who are joining OpenAI, I see no reason to assume that the nonprofit structure is sufficient to accomplish their goals. If the challenges of AI are as great as these minds seem to suggest, is the “lesser evil” OK?
I commend them for directing their expertise toward beneficial uses of the technology. I also think it’s time to reconsider the institutional technology of the nonprofit corporation. We need institutions that can generate, direct, and hold digital resources for long-term public benefit; I’m not sure the current nonprofit firm is the answer.