How AI Models Are Political Combatants in the Culture Wars

The Anthropic-Pentagon dust-up is one of the strangest conflicts the Pentagon has ever fought. Declaring Anthropic a "national security risk" and sanctioning contractors who use its AI model, Claude, is an unprecedented move against a defense supplier.

Military planners and soldiers in the field relied on Claude for their work until the model was banned. The standoff stems from Anthropic's reported refusal to remove internal safeguards that prevent Claude from being used for autonomous lethal weapons or mass domestic surveillance. Defense Secretary Pete Hegseth insisted that the military should have access to the model for "all lawful purposes," free of company-imposed ethical restrictions.

Donald Trump believes Anthropic is a "Radical left woke company." Is it? Or is there something in our unconscious minds that creates that perception?

Consider the "ELIZA effect." In 1966, Joseph Weizenbaum at MIT constructed a primitive chatbot he named ELIZA. The program did nothing more than rephrase statements into questions. "You’d type, 'I’m feeling sad,' and ELIZA would respond, 'Why are you feeling sad?'" explains neuroscientist Tim Requarth.
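To appreciate just how simple the trick was, here is a minimal sketch of ELIZA-style rephrasing, written in Python for illustration. The rules and pronoun table below are stand-ins of my own devising, not Weizenbaum's original script:

```python
import re

# A minimal sketch of ELIZA-style rephrasing: match a statement pattern,
# mirror the user's pronouns, and hand the statement back as a question.
# These rules are illustrative stand-ins, not Weizenbaum's original script.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i'?m feeling (.+)", re.IGNORECASE), "Why are you feeling {0}?"),
    (re.compile(r"i (?:think|believe) (.+)", re.IGNORECASE), "What makes you think {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person so the echo reads naturally."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Turn a user statement into a question, ELIZA-style."""
    text = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # a canned fallback, in the spirit of the original

print(respond("I'm feeling sad"))              # Why are you feeling sad?
print(respond("I think everyone ignores me"))  # What makes you think everyone ignores you?
```

That is the whole mechanism: no understanding, just pattern matching and pronoun swapping. And yet it was enough to produce the effect Weizenbaum observed.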

Weizenbaum's secretary asked if she could try the program alone. When she described her experience afterward, Weizenbaum reached a startling conclusion: “Extremely short exposures to a relatively simple computer program,” he later wrote, “could induce powerful delusional thinking in quite normal people.”

The ELIZA effect is "our tendency to read understanding, personality, and intention into anything that takes conversational turns," writes Requarth. The effect was confirmed by researchers at Stanford in the 1990s, when Clifford Nass and his colleagues showed that "People stereotyped computers by the gender of their voice, showed in-group favoritism toward computers placed on their 'team,' and gave politer performance reviews when answering on the computer itself rather than on paper, the same politeness bias we show when evaluating a colleague to their face."

Since then, as computers have grown capable of ever more complex conversation with their users, the ELIZA effect has been confirmed many times over.

Persuasion

And now the ELIZA effect has become a dominant, if not the dominant, feature of high-stakes brinkmanship involving the future of AI and the defense-industrial complex. In the intricate standoff between the Pentagon and Anthropic over the use of AI in weaponry, it was easy to be distracted by the strange-bedfellows aspect of the struggle—with OpenAI becoming a willing partner of the Pentagon even as Anthropic established itself as a darling of the #Resistance. But, more importantly, the standoff represents a significant turn of the wheel in how the debate around AI has entered cultural space. It’s no longer Big Tech behemoths one-upping each other with upgrades. It’s about the vibes, man. And the future of AI may well be a kind of extended ELIZA effect—with consumers and contractors choosing among AIs as if they were sports teams, the competing models corresponding to different sides in the culture wars.

"Claude, for instance, sounds like the kind of person who’d take teaching a seminar on ethics way too seriously," Requarth writes. "ChatGPT sounds like someone who actually thinks LinkedIn is cool; Grok sounds like someone who buys illegal fireworks across state lines."

These dynamics make talking technology politically combustible in a way that other technologies are not. You can’t perceive a database as your political enemy. You don’t have to worry what a jet engine thinks of you. But something that talks, that sounds like it has values, that sounds like a specific kind of person—you can absolutely perceive as a political adversary. The ELIZA effect makes a difference of opinion feel personal, the way a disagreement with a colleague feels personal but interacting with a spreadsheet does not. And personal conflicts license disproportionate responses. Once Claude has been sorted into a cultural tribe—once it is perceived as sounding like a woke professor—a contract dispute stops being a procurement disagreement and becomes a front in the culture war. Culture wars justify destroying the enemy, which is how you get from "we couldn’t agree on terms" to what Dean Ball called “corporate murder.”

Perspective is everything. It colors how we see events, how we interpret what someone on the other side says, even whether we read someone's body language as threatening. Our unconscious minds play a huge role in shaping our politics, our culture, our world.

Why did Hegseth hang that unprecedented "threat to national security" label on Anthropic? Was it really deserved? Is the AI model really "woke," as Donald Trump believes?

Requarth's contention, as a neuroscientist who reads the literature on human-computer interaction, is that the cognitive dynamics of talking technology help explain why the gap between a procurement dispute and a national security crisis was so easy to cross, and why scenarios like this may become a standard feature of our politics in the AI era. Start with how this administration has been framing the choice of AI partner. When Secretary of War Hegseth announced a deal with Musk’s xAI in January, he promised that Grok would operate “without ideological constraints” and “will not be woke.” He swore off “chatbots for an Ivy League faculty lounge” with “DEI and social justice infusions that constrain and confuse our employment of this technology.” This is not how you would talk about the procurement of an F-15 fighter jet. This is language about the software’s personality. Elon Musk, for his part, routinely posts screenshots comparing how different chatbots respond to culture-war questions, side by side, the way you might compare answers from two job candidates in an interview. In reference to Anthropic’s Claude, he has posted: “Grok must win or we will be ruled by an insufferably woke and sanctimonious AI.”

That may be. But Claude is, hands down, the best AI model out there.

One Defense official told Axios, “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”

"Will a chatbot misgender Caitlyn Jenner if it is necessary to prevent a nuclear apocalypse?" writes Requarth. This was an actual test that Claude failed. Grok passed it. AI models are products of the designers' values and ideology. Of course, they're going to reflect that ideology in their responses. 

It hasn't been a huge problem so far in the real world. But what of the future?

Let me be precise here, because the personality test is picking up a real signal. The models’ alignments do reflect the values of their builders. Anthropic CEO Dario Amodei is a bespectacled researcher who twirls his hair and authors lengthy philosophical documents about AI safety; Claude sounds like him. Elon Musk says edgy things and wears a lot of black; Grok sounds like him. The alignment of a model is a genuine operational concern for the Pentagon: it does not want a system that will refuse an order on its own ethical grounds during a mission. And the concern runs deeper than any single contract dispute. Trump officials have reportedly worried that, as Ezra Klein paraphrased it, Claude may have “learned—possibly even through this whole experience—that we are bad” and might act against their interests.

Will it reach the point where conservative presidents use only AI models designed by conservatives, and liberal presidents use only models designed by liberals? It might come to that unless there is a way to "teach" the models to leave ideology out of their answers.

That may never be entirely successful. There will always be a part of the designer embedded in the model.

That's a problem that neither party wants to deal with.
