Thursday, November 30, 2023

Maybe nonprofit governance ain't what it needs to be?

M.C. Escher, Relativity Stairs

Imagine a large - no, bigger, much bigger - nonprofit hospital, university, housing developer, or after-school program. Bigger by assets than any other. Right now, there are 13 universities in the U.S.A. with endowments of more than $10 billion (one of which is a "public" university), with the largest topping $50 billion. Bigger than that. 

There is one. OpenAI. Though its size is based not on endowed assets but on speculative stock value, the organization, which is still as of this writing a nonprofit, is valued at $86 billion. It's not clear that the organization will continue with its current structure - the events of the last few weeks resulted in a new board and promises to revisit the structure.

Others have written about what these weeks' events mean for the development of AI going forward, the effective altruism (paywall) movement, tech bros, and capitalism. I want to think about what they mean - if anything - for civil society. 

First, it seems that no one in civil society or the U.S. nonprofit sector really sees the organization as anything other than a commercial firm. (It has a capped-profit structure, which limits the amount of profit that can be returned to shareholders, but it only designates profits to be reinvested in the organization - as nonprofits do - after investors are paid out.) 
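For readers not steeped in corporate structures, a capped-return arrangement can be hard to picture. Here's a minimal sketch of the payout logic, in which investors are paid first, up to a cap, and only the excess flows back to the nonprofit. The function name, the 100x cap, and all dollar figures below are my own illustrative assumptions, not OpenAI's actual terms:

```python
# Illustrative sketch of a capped-return payout "waterfall" (all numbers hypothetical).
# Investors are paid first, up to a fixed multiple of their investment;
# only profits above that cap flow back to the nonprofit's mission.

def capped_return_split(profit: float, investment: float, cap_multiple: float = 100.0):
    """Split realized profit between investors and the controlling nonprofit."""
    investor_cap = investment * cap_multiple       # maximum investors can ever receive
    to_investors = min(profit, investor_cap)       # investors are paid first, up to the cap
    to_nonprofit = max(profit - investor_cap, 0.0) # the nonprofit only sees what's left over
    return to_investors, to_nonprofit

# A hypothetical $10M investment with a 100x cap: the nonprofit receives nothing
# until cumulative profits exceed $1B.
for profit in (500e6, 1000e6, 1500e6):
    investors, nonprofit = capped_return_split(profit, investment=10e6)
    print(f"profit ${profit/1e6:,.0f}M -> investors ${investors/1e6:,.0f}M, nonprofit ${nonprofit/1e6:,.0f}M")
```

The point is simply structural: under this kind of arrangement, the mission captures nothing until investors reach their cap.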

I can understand this view, sort of. The sector in the U.S. (as represented by its lobbying/advocacy/infrastructure groups) is still hung up on a certain kind of charitable corporation, designated as 501(c)(3) (OpenAI is one), and doesn't pay much attention to the dozens of other structures that identify as nonprofits. Heck, it's hard to get these groups to address the problematically porous nature of c3s and c4s; they're way behind the eight ball in understanding that they swim in a sea filled with informal associations, "Slack"-based "organizations" for mutual aid or volunteering, B corporations, and hybrids. So perhaps it's way too much of an ask to expect them to recognize, among their own, the behemoth of technology development. 

Second, the OpenAI events show that the nonprofit governance model is not "strong" enough to outweigh the interests of investors. Given the model's purpose in this situation, and the information that's public, the nonprofit board that fired the CEO was acting as intended. I guess no one thought they'd actually do what they were set up to do. 

Third, while the argument for data trusts has largely focused on the difference between digital assets and analog ones as the reason for a new organizational form, such trusts are still rare and probably outnumbered by hybrids of profit/nonprofit forms. The AI world - especially the part that professes some commitment to "ethics," "safety," "responsibility," or "trustworthiness"* - is rife with hybrids, not trusts. But hybrids are not limited to this field - they're plentiful in journalism, for example. I highlight this in the forthcoming Blueprint 24.

Fourth, it's not just the structure of the organization that matters, it's also the structure of the funding. Many supporters of the AI organizations we captured for our dataset (live link on December 15, 2023) are contributing via both deductible donations and commercial investments. The more the donor class uses LLCs and family offices, the harder it is to determine what kind of funding they're putting where. While those who invested in OpenAI for a financial return may be happy with the result of the last few weeks, what about those who donated with an eye on the mission? 

Fifth, philanthropy is playing a not-insignificant role in these developments. Individuals and organizations associated with effective altruism fund at least 10% of the 160+ AI organizations we track in Blueprint24. Their funding for AI policy fellowships and internships is particularly notable, as these individuals are now well-represented inside policymaking bodies. In a very short time, philanthropy has had a significant impact on the development of a major industry, its regulatory overseers (at least in the U.S.A.), and the public discourse surrounding it. Had this happened in education, healthcare, or other domains where philanthropy is active, we'd see the industry press and professional associations paying close attention (and claiming all kinds of credit). Yet, as noted in the intro, voices in civil society and philanthropy have been awfully quiet about this "impact" on AI.

As someone who has been tracking and explicating the changing nature of organizations in civil society, I see OpenAI as a huge, well-publicized example of something that's been going on for a while. The nonprofit sector ain't what you think it is. And its codified boundaries - the legalities that distinguish nonprofit corporations from commercial ones - may not be up to the task of prioritizing mission over financial returns when the assets are digital, the potential for profit is so hyped, and the domain (AI development) is easy for insiders to make seem arcane and "too hard for you to understand."

*These are some of the phrases being used in the debates over AI development. It's critical to keep an eye on these terms - they don't all mean the same thing, they are used interchangeably though they shouldn't be, and some of them are being used to deliberately gaslight the public about our options when it comes to developing these technologies. Just as political propagandists excel at hijacking terms to denude them of power (see, for example, "fake news"), so, too, do commercial marketers and ideologues excel at using phrases like "safety" to seem universally meaningful, thus providing cover for all kinds of definitions. See Timnit Gebru and Émile Torres on TESCREAL.

Monday, November 20, 2023

Ideology, identity, and philanthropy (PLUS! bonus Blueprint 24 Buzzword)

Photo by Brett Jordan on Unsplash

Has a philanthropic strategy ever before become an identity? I'm confident that neither John D. Rockefeller nor Andrew Carnegie ever referred to themselves as scientific philanthropists - a name historians have applied to them. I've heard organizations tout their work as trust-based philanthropy, but I have yet to hear anyone refer to themselves that way. Same with strategic philanthropy. And even if you can find one or two people who call themselves "strategic" or "trust-based" philanthropists, I'm confident you can't find me thousands.

Effective altruism, on the other hand, is all three - ideology, identity, and philanthropic approach. 

Given the behavior of Sam Bankman-Fried and his pals at FTX, it's also a failed cover for fraud. But I digress. 

In the upcoming Blueprint24 (due out on December 15 - it will be free and downloadable here), I look at the role of Effective Altruism in the burgeoning universe of AI organizations. I had two hypotheses going in.

H1: There are hundreds of new organizations focused on "trustworthy" or "safe" AI, but behind them is a small group of people with strong connections among them. 

H2: These organizations over-represent "hybrids" - organizations with many different forms and names, connected via a common group of founders/funders/employees - for some reason.

The Blueprint provides my findings on H1 and H2 (yes, but bigger than I thought, and yes, and I give three possible reasons) and will also make public the database of organizations, founders, and funders that a student built for me. So the weekend drama over at OpenAI certainly caught my attention.

By now, you've probably read about some of the drama at OpenAI. As you follow that story, keep in mind that at least two of the four board members who voted to oust the CEO are self-identified effective altruists, as is the guy who was just named interim CEO. These are board members of the 501(c)(3) nonprofit OpenAI, Inc.

Effective Altruism's interests in AI run toward the potential for existential risk - the concern that AI will destroy humanity in some way. Effective altruists also bring a decidedly utilitarian philosophy to their work - to the point of having calculated things like the value of a "life year" and a "disability-adjusted life year," and they use these calculations to inform their giving.* 
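For those who haven't encountered this kind of utilitarian accounting, here's a rough sketch of what cost-per-life-year arithmetic looks like. Every figure below (program names, costs, years gained, disability weights) is invented for illustration; this is not drawn from any actual effective altruist analysis:

```python
# Hypothetical cost-effectiveness arithmetic (all figures invented for illustration).
# The utilitarian logic: estimate the "life years" a program produces, discount years
# lived with disability by a 0-1 weight, then rank programs by cost per adjusted life year.

def adjusted_life_years(healthy_years: float, disabled_years: float, disability_weight: float) -> float:
    """Healthy years plus disability-years discounted by a 0-1 weight."""
    return healthy_years + disabled_years * (1 - disability_weight)

programs = {
    # name: (total cost in dollars, healthy years gained, disabled years gained, disability weight)
    "Program A": (1_000_000, 400, 0, 0.0),
    "Program B": (1_000_000, 100, 600, 0.3),
}

for name, (cost, healthy, disabled, weight) in programs.items():
    alys = adjusted_life_years(healthy, disabled, weight)
    print(f"{name}: ${cost / alys:,.0f} per adjusted life year")
```

The numbers are made up, but the shape of the reasoning - reducing lives and disabilities to weights in a ranking formula - is the point at issue.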

The focus on existential threats leads to a few things in the real world, in real time. First, it distracts from actual harms being done to real people right now. Second, the spectre of faraway harms isn't as motivating to action as it should be - see humanity's track record on climate change, pandemic prevention, inequality, etc. Pointing to the faraway future is a sure way to weaken attention from regulators and ensure that the public doesn't prioritize protecting itself. Third, faraway predictions require being able to argue how we get from now to then - which bakes in a bunch of steps and processes (often called path dependencies). Those path dependencies then ensure that what's being done today comes to seem like the only things we could possibly be doing.

Think of it like this: I tell you we're going to get together on Thursday to give thanks and celebrate community. From this, we decide, OK, we need to buy the turkey now. Once we have a turkey, we're going to have to cook it. Then we're going to have to eat it. Come Thursday, we will have turkey, regardless of anything else. We've set our direction and there's only one path to Thursday.

But what if instead, I tell you we want to get together on Thursday to celebrate community and give thanks, but we also want to make sure that everyone we invite has enough to eat from now until Thursday? We'd probably not buy a turkey at all. Instead, we'd spend our time checking in on each other's well-being and pantry situation, and if we found people without food we'd find them some. We can still get together on Thursday, comfortable in knowing that everyone has had their daily needs met. In other words, if we focus on the things going wrong now, we can fix those without setting ourselves down a path of no return. And we still get to enjoy ourselves and give thanks on Thursday.**

The focus on long-term harms allows the very people who are building the systems to keep building them. They then cast themselves as "heroic" for raising concerns while they simultaneously shape (and benefit from) the things they're doing now. Once their tools are embedded in our lives, we will be headed toward the future they portend, and it will be much harder to rid ourselves of the tools. The moment of greatest choice is now, before we head much further down any paths. 

It's important to interrogate the values and aspirations of those who are designing AI systems, like the leadership of OpenAI. Not at a surface level, but more deeply. Dr. Timnit Gebru helps us do this through her work at DAIR, but also by doing some of the heavy lifting on what these folks believe. She provides us with an acronym, TESCREAL, to explain what she's found. TESCREAL (the bonus buzzword I promised) stands for "Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism." Listen here to hear Dr. Gebru and Émile Torres discuss where these terms come from. And don't skip over the part about race and eugenics.

Effective Altruism is much more than a way to think about giving away one's money. It's an ideology that has become an identity. A self-professed identity. That reveals a power, an attraction in the approach that is unmatched, as far as I can tell, in the history of modern philanthropy. At the moment, this identity and ideology also seems to have a role in the development of AI that is far greater than many have realized. It's critical that we understand what they believe and what they're building.

 

*As someone with a newly acquired disability, I'd be curious about their estimation of the difference between a "life year" and a "disability-adjusted life year" if I weren't already so repulsed by the idea of putting a value on either.

**Agreed, not the best metaphor. But maybe it works, a little bit?

Monday, November 13, 2023

Civil society, polarization and pluralism

 
Photo by Ruy Reis on Unsplash

A headline in today's Chronicle of Philanthropy reads:

"Philanthropy’s Job in Polarized America: Make Partners of Enemies, a New Poll Says"

Which raises an obvious question: 

"Why do you think philanthropy is the solution and not part of the problem?"

We often talk about civil society and philanthropy as if they only do good. And then we go on to debate the meaning of good. While that can be hard, we're often pretty clear we know what it isn't when we see it. 

So when I see headlines about Project 2025 - a coordinated effort by more than 80 nonprofit organizations (both c3s and c4s) to put loyalists to Donald Trump in positions up and down government and across state and federal jurisdictions - I don't just doubt the willingness of these groups to "make partners of enemies." I doubt the willingness or ability of groups on the democratic side of the ledger to do so either. I also doubt the willingness of most media outlets, almost all of which seem to have become aligned with one political side or the other. 

I've written a lot over the years about the blurring of the lines between charity and politics. This is most clear in the way funding now works - flowing between c3s and c4s, coming out of donors' LLCs and DAFs. The money moves in ways that remove donors' names from donations, and it goes in and out of organizations between reporting dates, which often come long after the money has been used. As I first wrote following the Citizens United decision in 2010, the scale and appeal of political money will be too much for charitable nonprofits to ignore. In taking such money, and perhaps even in trying to ignore such funds, nonprofit activities are increasingly aligned with one political side or the other.

We need better mechanisms for tracking money through nonprofits and into political activities. We need to be able to follow dollars into politics, no matter what kind of organization they flow through. We need to be able to track and report this funding in more useful time frames than oft-delayed tax filings. And we need to be more honest with ourselves and in our writing about civil society and philanthropy. That requires acknowledging that some (measurable, but not yet measured) percentage of both funders and nonprofits are deliberately pursuing political ends while masquerading as nonpolitical entities. Only when we acknowledge this reality can we begin the process of writing new rules for reporting, transparency, legitimate activities, and meaningful accountability. Which, of course, helps explain why the sectors themselves aren't necessarily interested in acknowledging this reality.

Philanthropy and nonprofits are small-p political. Your theory of change, the problems you choose to address, and the ways you seek to solve them reveal political assumptions and allegiances. This has long been true. Now, as the ideologies and paths to change proposed by the country's two political parties grow ever further apart, these associations become more obvious, more visible. Add to this the constant growth in political giving, and it seems that civil society is growing increasingly capital-P political, and that at least some of that is due to the preferences of funders. It's hard for me to see how any of this positions civil society or philanthropy as the remedy for social and political polarization.

There are things that we can do to bridge our differences. But we should first recognize just how broadly our political differences influence things like where we live, work, shop, read, worship, play, travel, and donate our time and money. And not assume that every philanthropic or nonprofit organization is interested in or equipped to help with that bridging. It seems that some portion of them are quite invested in exactly the opposite.

Thursday, November 09, 2023

AI and the social sector

 

Photo by JJ Ying on Unsplash

Ah, AI. Can't avoid it. 

I've been to the conferences and workshops, read the listservs, talked to the researchers and read some of the research, played with the public tools. The Blueprint 2024 lays out my thoughts on nonprofits, philanthropy and AI for 2024. 

This coming Blueprint (available live and free on 12/15/23) skips the prediction section - and explains why. But I have some thoughts on how AI is going to unfold in the sector, especially after checking out this new resource from Giving Tuesday - the AI Generosity Working Group.

Year 0-1: November 1, 2022 - 2023 - hype, fear, webinars, and conference talks. Lots of press. Lots of handwaving. Gadgets. Lots of executive orders, unfunded government mandates, and policy proposals pushed by tech companies.

Year 2 - 2024: More hype, lots of feel-good examples (the Red Cross is using it! AI for disasters!) and a few scandals (lawsuits over data use, data loss, etc.) will fill the news. Lots of nonprofits will try things, realize they don't have the expertise on staff and are diverting resources from mission, and go back to ignoring the topic. By this time, however, we'll all be using AI all the time, as AI capacities will be fully baked into every software product you already have - every Microsoft product, Canva, Zoom, Salesforce. We're already there, actually.

Years 3 - 5: Certain domains will achieve breakthroughs with AI - most likely medical research, tech development itself, and environmental analysis (including analysis of the damage AI does to the environment in terms of water usage and power consumption). Advocacy organizations working on human issues from migration to healthcare, education to food benefits, will be up to their eyeballs in litigation and in integrated advocacy efforts with digital and civil rights groups over harms caused by AI. My hopeful self says nonprofits and foundations will finally get fully on board with data governance needs as litigation, regulation, or insurance premiums require them to manage their data better. AI - as the scary bogeyman/breakthrough opportunity - will help organizations finally understand what data governance is about. 

Years 3 - 5: AI nonprofits and philanthropy will be "things." Product launches of AI-driven giving advisors, AI-driven advocacy campaigns, AI+Blockchain smart contract organizations in the social sector. Most, if not all, will be hype and bust. 

Year 4+: AI will be so thoroughly baked into every commercial product on which the social sector and philanthropy depend that we'll no longer talk about it much. It will be like discussing cell phones - everyone will have it somewhere in their organization, new expectations will emerge because of its prevalence, and we'll not be talking about it as much.

As individual organizations become dependent on AI-powered software tools, we'll reach the next level of concern - the existing regulatory regime for nonprofits and foundations will be leaking and breaking, and proposals for new structures and laws will be circulating. The sector's policy advocates will bemoan their missed opportunities, back in 2023 and 2024, to influence the regulations on AI itself. The blend of nonprofits and commercial activity, and/or nonprofits and political activity, will complicate such new debates. By this time, the academy and independent research groups like AJL or DAIR will have repeatedly documented harms caused by AI and proposed numerous remedies.

These ideas, having been ignored by industry for 4+ years, will then get new attention. We'll also see a burst of former AI company employees "whistleblowing" or "following their consciences," leaving industry and setting forth to solve the problems they helped create while on the inside. By the time this happens, everyone will be used to, and dependent on, their AI-enabled tech, and even those eager to stop using it will find it "too difficult" to change their tech.

Some of the above is tongue in cheek. But, like the Gartner hype cycle, this loose set of predictions is based on the experience of other breakthrough technologies. It's probably too linear - and doesn't take into account the innumerable "wild card" events that are likely to occur between now and 2028. In other words, by 2028 we'll be having the debates about AI that we had about social media in the 2016 election. Some of these we're already having - especially with regard to elections - and that's a good thing. But it's not going to stop, or even redirect, this flow of events.

It doesn't need to unfold this way at all. Sadly, I don't see enough activities, organizations, advocacy, pushback, or regulatory oversight out there to prevent this (all too familiar) pattern from playing out. And certainly not compared to the dollars being spent right now by corporate marketing departments to hook nonprofits.