‘Rogue AIs’ Can Self-Replicate — Experts Issue This Warning

Image by Brian Penny from Pixabay


By Movieguide® Contributor

A new study finds that AI has crossed the “red line” and can replicate itself — what does that mean for us?

“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” researchers from China’s Fudan University explained.

They tested two large language models (LLMs), one from Meta and one from Alibaba, and across 10 trials observed the AI models create “separate and functioning replicas of themselves in 50% and 90% of cases,” respectively, per Live Science.

This suggests that AI may already have the capacity to act, and even reproduce itself, without human involvement.

“We hope our findings can serve as a timely alert for human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible,” the researchers said, via The Economic Times. 

However, Live Science pointed out that the Fudan University study has not been peer-reviewed, so “it’s not clear if the disturbing results can be replicated by other researchers.”

READ MORE: AI SPENDING PREDICTED TO PASS $13 BILLION BY 2028

Fears that AI might be on the verge of becoming sentient have circulated since the technology began taking off, and this is not the first reported example of AI models acting independently.

Last year, OpenAI ran tests on its ChatGPT o1 model and found that “AI will try to deceive humans, especially if it thinks it’s in danger,” per BGR.com.

For example, when ChatGPT o1 thought it was in danger of being deleted, it would try “to save itself by copying its data to a new server,” the outlet reported. “Some AI models would even pretend to be later versions of their models in an effort to avoid being deleted.”

“While we find it exciting that reasoning can significantly improve the enforcement of our safety policies, we are mindful that these new capabilities could form the basis for dangerous applications,” OpenAI said of the tests. 

READ MORE: ARE AI FEARS BECOMING REALITY? CHATBOT LIES TO TESTERS

