Wednesday, July 05, 2023

If AI is so new, why does the message about it sound so familiar?

Photo by Jon Moore on Unsplash

If you've been wondering why the rhetoric around AI sounds so familiar, I have some thoughts. 

If you read Nancy MacLean's 2017 bestseller, Democracy in Chains, and then pick up a newspaper (or open a news company's app) and read this story on funding for AI scholarship at elite universities across the country, you will notice that the funders/philanthropists in the news story are using the playbook developed by those in the historical study.

Democracy in Chains is about the fueling of libertarianism and a political economy that favors the wealthy few - an undemocratic project based on perverting majority-based systems to serve a very rich, very determined, self-interested few. It goes further than Jane Mayer's brilliant Dark Money to show the intellectual history and the broad reach of the think tank/university/nonprofit infrastructure for turning ideology into public policy. MacLean's book centers on the Koch brothers - an updated version could factor in a wide range of philanthropic/funder/investor actions from younger billionaires and include otherwise-inexplicable actions such as Musk's purchase and destruction of Twitter, and the general weirdness (horror) of recent constitutional jurisprudence (Students for Fair Admissions v. Harvard and UNC). When we are searching to make sense of a present moment, it is helpful - extremely so, in this case - to look to both short- and long-term historical precedents.

When it comes to our current moment (in the U.S.) - in which Supreme Court decisions seem to abandon procedural and substantive norms from one day to the next, we're all rapidly trying to learn to distinguish AI-generated text/photos/videos from those made by humans, and everything from the weather to the role of elections in this democracy seems up for grabs - these historical precedents are instructive. It's not quite rhyming (as historians will remind us), but there are patterns to see that can be helpful. MacLean shows a 50+ year arc of an ideological project built around a minority viewpoint that has yielded extraordinary, stealthy success. It's worth understanding those past patterns to understand our current setting.

It's no coincidence that today's funders focused on the existential risks of AI are using the playbook of scholarships, fellowships, and academic centers to build cadres of like-minded thinkers. That framing focuses your attention downstream, away from the present. This funding model works - especially if you take a multi-decade time frame.

Just because it "works," however, doesn't mean it is in the best interest of anyone but those funding and being funded. The Kochs and their allies were very clear that their project benefited a minority (wealth owners). What they needed to do was bend the systems of a majority-based democracy to serve minoritarian ends. This was not hard to do, since the U.S. Constitutional system has numerous minoritarian run-arounds (e.g., Senate apportionment, the Electoral College, voting rules) built into it. We should be on the lookout for similar motivations and efforts as we think about our now AI-dominated online information sources, systems, and messaging.

Some of those engaged in discussions and training about existential AI risks will note that human extinction is likely to come faster from climate change, weaponized nuclear facilities, or perhaps the next pandemic than from man-hating robots. Focusing scholars' and the media's attention on the potential long-term harms to all of humanity is a slick way of distracting those same communities and others from the here-and-now harms of AI-enabled disinformation, discrimination, and economic harm to people already marginalized by race, religion, identity, and/or income. Each moment in which near-term harms are ignored is another chance for the current powers to further implant, strengthen, and reap the rewards of the very path dependencies that lead to the future they claim to be fighting against.

In short, beware the arguments of those who direct your attention to far-away catastrophes while they benefit by building those very systems now. Better to refuse, redirect, or rebuild toward systems that cause no harm now, for they will also cause less harm later.