Tuesday, February 13, 2024

The GOP threat to civil society

                                                    Photo by Richard Stovall on Unsplash

Democracy in the USA is not "naturally" withering; it is under attack. And the calls are coming from both inside the house and outside, domestic and foreign. One source of attack is the Republican Party. Threats can't be beaten if they aren't named. I strongly suggest foundations, their associations, and their media stop "both-sidesing" this and call out the threats to the sector that are coming from their own.

First and foremost, Donald Trump's campaign has declared it will be "taxing, fining, and suing excessively large private university endowments" to fund its own propaganda-driven alternative university. Now, big private universities don't usually inspire a lot of sympathy, I get that. I'm an alum of them and they don't make me all warm and fuzzy. But be clear, none of this has anything to do with antisemitism (which gets a quick shout-out in the document linked above). It's part of a sustained campaign against perceived liberal or left(ish) civil society. The presumed candidate of the Republican Party is promising/threatening to seize endowment assets from universities it doesn't like. I'll say it again, the GOP is running on a platform that involves taking funds away from nonprofits it doesn't like. If that doesn't make the philanthropy industry stand up and take notice (and, one might hope, action), I can't think of a bigger threat that the sector would be ignoring. And this from a candidate who's been repeatedly sued for the way he ran his nominal foundation.

All nonprofits and foundations, their professional and lobbying associations, and the media dedicated to them should decry a platform such as the one proposed in Agenda47. And, what's that I hear? Yup, crickets.

Or worse, Inside Philanthropy worked hard on this rundown of funding for democracy (behind their paywall; yell at them, not me). It's good reporting on a survey done by the Democracy Fund that focuses on giving to democracy efforts and causes related to it. But it counts funding on just one side of the equation. It counts funding by funders in the political center or on the left. It doesn't count the other side - there is no accounting of efforts to undermine democracy. The story mentions book bans, school board fights, and transgender bathroom hysteria as examples of undemocratic philanthropy. But it neither tallies the amount of philanthropic dollars spent on these issues nor names any of the funders. That's not helpful. Those are philanthropic dollars going to efforts that undermine democracy - and they're by no means all the ways such money is being spent (Supreme Court favors, anyone? Social media trolls, disinformation, and campaigns such as that run by Christopher Rufo with help from Congresswoman Stefanik to oust female college presidents of color? The list is long.)

Attacks on democracy are secretively well-funded even as they appear to be led by grassroots individuals. Counting the funding on the pro-side and not on the attack-side makes it seem as if the attacks are just part of the process of democracy. And that may be true. But if it's true, it's true in the sense that democracy will always have critics, and some of those will be doing their best to destroy democratic participation by those they don't like.

One of the two political parties in this country is running on a platform that includes seizing endowment assets. Yes, the campaign platform of the GOP is "vote for us and we'll put government in charge of higher education and destroy some of the nation's longest-lived independent institutions." For all the vitriol these universities attract, there's a helluva lot of rich people trying hard to get their kids admitted to them. You may not feel sorry for Harvard, but you'd be a fool for thinking this is just an attack on the Crimson. That's what the GOP wants you to think, but it's not (all) they want to do.

If foundations, philanthropy, and nonprofits don't stand up to defend civil society from Agenda47 before November, they'll deserve what happens, post-election.

Wednesday, December 06, 2023

What does open mean?

                                                    Photo by Enrique Macias on Unsplash

Open source technology has a long history of being a counterbalancing force to closed, proprietary systems. For decades it was open source versus corporations. Then Microsoft (closed, proprietary) bought GitHub (the most used repository of open source code). Today, in the AI battles, Facebook/Meta, IBM, and Oracle, along with universities and the National Science Foundation, announced the AI Alliance - dedicated to open AI models. This is part of the larger debate about building responsible/trustworthy/safe/ethical AI.

So some of the world's biggest tech companies, many of which have thrived on proprietary, patented, trademarked, and closed-source code, are now arguing that an open community of developers is the way forward to protect us from the harms of AI.

This is one more step in both the commercial battle for market dominance and the battle over definitions of words such as safety, ethical, trustworthy, and responsible (in the context of AI). For example, effective altruists and longtermers (or longtermists) focus on the word "safety." Their bogeyman is the potential for AI to destroy humanity. This group, the AI Alliance, uses the terms "open" and "responsible." Their bogeyman appears to be the other companies that have already launched proprietary models - like Google and Microsoft.

The mix of organizations and funding in these AI debates includes corporations, governments, and numerous nonprofits - not only universities, but also groups of developers and advocacy organizations. Philanthropic funding is very much in the mix. The direction of AI development is not simply an external force acting upon the nonprofit/philanthropic sector; it is being shaped by numerous actors within the sector. The meaning and purpose of "open" in this context is neither static, nor simple.

Thursday, November 30, 2023

Maybe nonprofit governance ain't what it needs to be?

                                                                            M.C. Escher, Relativity Stairs    

Imagine a large - no, bigger, much bigger - nonprofit hospital, university, housing developer, or after school program. Bigger by assets than any other. Right now, there are 13 universities in the U.S.A. with endowments of more than $10 billion (one of which is a "public" university), with the largest topping $50 billion. Bigger than that.

There is one. OpenAI. Though its size is not based on endowed assets but rather speculative stock value, the organization, which is still as of this writing a nonprofit, is valued at $86 billion. It's not clear that the organization will continue with its current structure - the events of the last few weeks resulted in a new board and promises to revisit the structure.

Others have written about what the past few weeks' events mean for the development of AI going forward, the effective altruism (paywall) movement, tech bros, and capitalism. I want to think about what they mean - if anything - for civil society.

First, it seems that no one in civil society or the U.S. nonprofit sector really sees the organization as anything other than a commercial firm (it has a capped profit structure, which limits the amount of profit to be returned to shareholders, but it only designates profits to be reinvested in the organization (as nonprofits do) after investors are paid out).

I can understand this view, sort of. The sector in the U.S. (as represented by its lobbying/advocacy/infrastructure groups) is still hung up on a certain kind of charitable corporation, designated as 501(c)(3) (OpenAI is such), and doesn't pay much attention to the dozens of other structures that identify as nonprofits. Heck, it's hard to get these groups to address the problematically porous nature of c3s and c4s; they're way behind the eight ball in understanding that they swim in a sea filled with informal associations, "Slack"-based "organizations" for mutual aid or volunteering, B corporations, and hybrids. So, perhaps it's way too much of an ask to expect them to recognize, among their own, the behemoth of technology development.

Second, the OpenAI events show that the nonprofit governance model is not "strong" enough to outweigh the interests of investors. Given the model's purpose in this situation, and the information that's public, the nonprofit board that fired the CEO was acting as it was intended. I guess no one thought they'd actually do what they were set up to do. 

Third, while the argument for data trusts has largely focused on the difference between digital assets and analog ones as the reason for a new organizational form, they're still rare and probably outnumbered by hybrids of profit/non-profit forms. The AI world - especially that which professes some commitment to "ethics," "safety," "responsibility," or "trustworthiness"* - is rife with hybrids, not trusts. But they're not limited to this field - they're plentiful in journalism, for example. I highlight this in the forthcoming Blueprint 24.

Fourth, it's not just the structure of the organization that matters, it's also the structure of the funding. Many supporters of the AI organizations we captured for our dataset (live link on December 15, 2023) are contributing via deductible donations and commercial investments. The more the donor class uses LLCs and family offices, the harder it is to determine what kind of funding they're putting where. While those who invested for a financial return in OpenAI may be happy with the result of the last few weeks, what about those who donated with an eye on the mission? 

Fifth, philanthropy is playing a not insignificant role in these developments. Individuals and organizations associated with effective altruism fund at least 10% of the 160+ AI organizations we track in Blueprint24. Their funding for AI policy fellowships and internships is particularly notable, as these individuals are now well-represented inside policy making bodies. In a very short time, philanthropy has had a significant impact on the development of a major industry, its regulatory overseers (at least in the U.S.A.), and the public discourse surrounding it. Had this happened in education, healthcare, or other domains where philanthropy is active, we'd see the industry press and professional associations paying close attention (and claiming all kinds of credit). Yet, as noted in the intro, voices in civil society and philanthropy have been awfully quiet about this "impact" on AI.

As someone who has been tracking and explicating the changing nature of organizations in civil society, I see OpenAI as a huge, well-publicized example of something that's been going on for a while. The nonprofit sector ain't what you think it is. And its codified boundaries - the legalities that distinguish nonprofit corporations from commercial ones - may not be up to the task of prioritizing mission over financial returns when the assets are digital, the potential for profit is so hyped, and the domain (AI development) is easy for insiders to make seem arcane and "too hard for you to understand."

*These are some of the phrases being used in the debates over AI development. It's critical to keep an eye on these terms - they don't all mean the same thing, they are used interchangeably though they shouldn't be, and some of them are being used to deliberately gaslight the public about our options when it comes to developing these technologies. Just as political propagandists excel at hijacking terms to denude them of power (see, for example, "fake news"), so, too, do commercial marketers and ideologues excel at making phrases like "safety" seem universally meaningful, thus providing cover for all kinds of definitions. See Timnit Gebru and Émile Torres on TESCREAL.

Monday, November 20, 2023

Ideology, identity, and philanthropy (PLUS! bonus Blueprint 24 Buzzword)

                                                                        Photo by Brett Jordan on Unsplash

Has a philanthropic strategy ever before become an identity? I'm confident that neither John D. Rockefeller nor Andrew Carnegie ever referred to themselves as scientific philanthropists - names which historians have applied to them. I've heard organizations tout their work as trust-based philanthropy, but I have yet to hear anyone refer to themselves that way. Same with strategic philanthropy. And even if you can find one or two people who call themselves "strategic" or "trust-based" philanthropists, I'm confident you can't find me thousands.

Effective altruism, on the other hand, is all three - ideology, identity, and philanthropic approach. 

Given the behavior of Sam Bankman-Fried and his pals at FTX, it's also a failed cover for fraud. But I digress. 

In the upcoming Blueprint24 (due out on December 15 - it will be free and downloadable here), I look at the role of Effective Altruism in the burgeoning universe of AI organizations. I had two hypotheses for doing so.

H1: There are 100s of new organizations focused on "trustworthy" or "safe" AI, but behind them is a small group of people with strong connections between them.

H2: These organizations over-represent "hybrids" - organizations with many different forms and names, connected via a common group of founders/funders/employees - for some reason.

The Blueprint provides my findings on H1 and H2 (yes, but bigger than I thought, and yes, and I give three possible reasons) and will also make public the database of organizations, founders, and funders that a student built for me. So the weekend drama over at OpenAI certainly caught my attention.

By now, you've probably read about some of the drama at OpenAI. As you follow that story, keep in mind that at least two of the four board members who voted to oust the CEO are self-identified effective altruists, as is the guy who was just named interim CEO. These are board members of the 501(c)(3) nonprofit OpenAI, Inc.

Effective Altruism's interests in AI run toward the potential for existential risk. This is the concern that AI will destroy humanity in some way. Effective altruists also bring a decidedly utilitarian philosophy to their work - to the point of calculating things like the value of a "life year" and a "disability-adjusted life year" and using these calculations to inform their giving.*

The focus on existential threats leads to a few things in the real world in real time. First, it distracts from actual harms being done to real people right now. Second, the spectre of faraway harms isn't as motivating to action as it should be - see humanity's track record on climate change, pandemic prevention, inequality, etc. Pointing to the faraway future is a sure way to weaken attention from regulators and ensure that the public doesn't prioritize protecting itself. Third, faraway predictions require being able to argue how we get from now to then - which bakes in a bunch of steps and processes (often called path dependencies). Those path dependencies then ensure that what's being done today comes to seem like the only thing we could possibly be doing.

Think of it like this: suppose I tell you we're going to get together on Thursday to give thanks and celebrate community. From this, we'd decide, OK, we need to buy the turkey now. Once we have a turkey, we're going to have to cook it. Then we're going to have to eat it. Come Thursday, we will have turkey, regardless of anything else. We've set our direction and there's only one path to Thursday.

But what if instead, I tell you we want to get together on Thursday to celebrate community and give thanks, but we also want to make sure that everyone we will invite has enough to eat from now until Thursday. We'd probably not buy a turkey at all. Instead, we'd spend our time checking in on each other's well-being and pantry situation, and if we found people without food we'd find them some. We can still get together on Thursday, comfortable in knowing that everyone has had their daily needs met. In other words, if we focus on the things going wrong now, we can fix those without setting ourselves down a path of no return. And we still get to enjoy ourselves and give thanks on Thursday.**

The focus on long term harms allows for the very people who are building the systems to keep building them. They then model themselves as "heroic" for raising concerns while they simultaneously shape (and benefit from) the things they're doing now. Once their tools are embedded in our lives, we will be headed toward the future they portend, and it will be much harder to rid ourselves of the tools. The moment of greatest choice is now, before we head much further down any paths. 

It's important to interrogate the values and aspirations of those who are designing AI systems, like the leadership of OpenAI. Not at a surface level, but more deeply. Dr. Timnit Gebru helps us do this through her work at DAIR, but also by doing some of the heavy lifting on what these folks believe. She provides us with an acronym, TESCREAL, to explain what she's found. TESCREAL (the bonus buzzword I promised) stands for "Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism." Listen here to hear Dr. Gebru and Émile Torres discuss where these terms come from. And don't skip over the part about race and eugenics.

Effective Altruism is much more than a way to think about giving away one's money. It's an ideology that has become an identity. A self-professed identity. That reveals a power, an attraction in the approach that is unmatched, as far as I can tell, in the history of modern philanthropy. At the moment, this identity and ideology also seems to have a role in the development of AI that is far greater than many have realized. It's critical that we understand what they believe and what they're building.


*As someone with a newly acquired disability, I'd be curious about their estimation of the difference between a "life year" and a "disability-adjusted life year" if I weren't already so repulsed by the very idea of assigning a value to either.

**Agreed, not the best metaphor. But maybe it works, a little bit?