"My prediction is by the end of 2028, it's more likely than not that we have an AI system where you would be able to say to it: 'Make a better version of yourself.' And it just goes off and does that completely autonomously," Jack Clark, who heads The Anthropic Institute, told Axios.
Clark, co-founder of Anthropic, says his institute is seeing signs of "AI contributing to speeding up the research and development of AI itself," a process known as recursive self-improvement.
Clark adds, "It's always been the case that humans outside the technology need to come up with the ideas that they then put back into it. What happens if we have a technology that can generate ideas within itself for how to improve itself? That's a new concept."
Too fast, too soon. The speed at which AI systems are evolving far outstrips our ability to gauge their impact on people and society. Lots of good things can happen in medicine, biology, and other sciences where AI is already making a big impact. The speed and autonomy of artificial intelligence models promise an abundant future.
Or something totally unforeseen.
"What do you do with a tremendous amount of growth or a tremendous amount of abundance in many, many different fields of science all at once?" Clark asked. "Today's institutions have very, very narrow pipes through which you push new drug candidates. How do you massively broaden the size of those pipes in advance of this abundance?"
We don't even know how much we have to broaden those "pipes," nor can we foresee whether the act of broadening them might create other critical problems.
Too fast. Too soon.
It's like a set of Russian Matryoshka dolls, the nesting dolls that separate in the middle to reveal a smaller figure of the same sort inside, which in turn holds another, and so on. Each improvement in AI presents its own set of challenges, and you can't responsibly open the next doll until you fully understand the one in your hands.
"The motivation has always been: Tell the whole story," Clark told Axios. "Sometimes that means that we talk about risks that we're worried about. Sometimes that means that we're going to talk about amazing, hitherto uncontemplated amounts of abundance.… I'm just trying to get ahead of what I think of as the next big question and get Anthropic ahead of that."
Anthropic's research agenda aims to get ahead of that curve.
The five-page document warns of a possible "intelligence explosion" — long a theoretical term confined to AI safety circles. Now it's in writing, in an official Anthropic document.
Clark told Axios an intelligence explosion is when AI systems suddenly start improving at blinding speed. Lots of bad things can happen (cyber meltdowns and biological attacks), and lots of good, like the scientific abundance he described above.
"What do you do with a tremendous amount of growth or a tremendous amount of abundance in many, many different fields of science all at once?" he asked. "Today's institutions have very, very narrow pipes through which you push new drug candidates. How do you massively broaden the size of those pipes in advance of this abundance?"
The Anthropic Institute is part research arm, part early-warning system, with an agenda built alongside Anthropic's Long-Term Benefit Trust.
Clark certainly has an optimistic outlook on AI and its impact on humanity. He asks: if AI is building itself, will we even need AI companies in the future?
"We and the other companies are going to be taking this technology and trying to get it to do good in the world," Clark told Axios. "To help push forward things like biology or medicine or robotics… To steer that technology into domains where it's really, really, really hard to make progress, like cancer research."
What we're looking at today are earnest, altruistic efforts to protect society from the potential ravages of uncontrolled AI. Any system capable of improving itself and acting autonomously is a potential threat to humanity. It's good that Jack Clark recognizes that, but do the Chinese see it that way? The Russians? Other AI companies?
This is the reasoning behind my observation that AI development is already close to being out of control. Even well-meaning geniuses like Elon Musk, Sam Altman, and Jack Clark aren't able to gauge all the ramifications of their creations. How could they? The industry is moving at light speed. Some of it is healthy competition, the kind of innovation that America does best.
But in the rush to be best, are we sacrificing safety? The tech giants don't think so, but I'm not as confident. It's too easy for humans to confuse their own self-interest with what's best for all. History is replete with examples of people acting in their own selfish interests while believing they were acting for the greater good.
When dealing with AI, the road to disaster is paved with good intentions.