
Stop Downplaying the Threat of AI to Our Jobs and Society


I'm not a tech expert. Not by a long shot. I write about tech's impact on society and, sometimes, whether it's good or bad for us as Americans.

Naturally, my writing depends a lot on what other people, usually tech experts, think about a product or new technology. Not fully understanding the science that undergirds most new technology doesn't stop me from writing about what people think its impact will be on those who use it, the economy, or society as a whole.

Nevertheless, when artificial intelligence experts — the people building, creating, and selling AI products and services — start issuing warnings about the potential damage AI can do to us, I have to take notice.

These experts are not "Nervous Nellies" or "Chicken Littles." They believe in AI as a force for good, a force that can change the world and make everyone's life better.

They are concerned that the small group of people steering the development of artificial intelligence has lost sight of the dangers posed by this powerful new technology, and that there is a real possibility it could slip beyond our control and become something we never intended.

AI models are becoming better and faster, and are even building new products on their own. This led Jason Calacanis, tech investor and co-host of the "All-In" podcast, to issue a warning.

Several AI researchers have resigned this week, citing "ethical concerns." An OpenAI employee, Hieu Pham, wrote on X, "I finally feel the existential threat that AI is posing," according to Axios.

Another tech writer, Zoe Hitzig, wrote, "OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the tidal forces pushing them to abuse it?"

Matt Shumer, CEO of HyperWrite, wrote an essay on X that was viewed 56 million times in less than 36 hours. "I think we're in the 'this seems overblown' phase of something much, much bigger than Covid," he wrote. Shumer is referring to the massive disruption the COVID-19 pandemic wrought in the workplace, as the office became an optional location for work and companies realized they could do without whole classifications of employees.

Shumer found, to his shock, that the just-released ChatGPT model would be a game-changer. "The experience that tech workers have had over the past year, of watching AI go from 'helpful tool' to 'does my job better than I do', is the experience everyone else is about to have."

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.

Even the free version of Gemini could write a decent article for PJ Media (of course, not with my savoir-faire, joie de vivre, or any other impressive-sounding foreign phrase describing my writing). No doubt, Salem Media, the parent company of PJ Media, is already planning for the day when it no longer has to use human beings to write for its numerous websites.

[Editor's note: PJ Media policy prohibits the use of AI to write our content. All PJ Media articles are written by real human beings.]

"I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry," writes Shumer. "The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others."

For such a world-altering technology, it's unsettling to know that so few people can shape the future.
