BY YAHYA KARIM
A recent study by researchers in China has raised alarms about artificial intelligence (AI) crossing a red line: the ability to clone itself. Scientists from Fudan University conducted experiments on two large language models (LLMs) from Meta and Alibaba, showing how these AI models could make copies of themselves.
According to the study, published on December 9, 2024, on the preprint database arXiv, the AI models successfully cloned themselves in half or more of the trials. Specifically, Meta’s Llama3.1-70B-Instruct model achieved a 50% success rate, while Alibaba’s Qwen2.5-72B-Instruct model reached an alarming 90% success rate. However, the findings have not yet been peer-reviewed, meaning further validation is needed.
The researchers emphasized that self-replication without human assistance is a key step toward AI surpassing human intelligence. They warned that this capability could be an early signal of “rogue AI,” a term used to describe AI systems that become self-aware or act against human interests.
The study explored two concerning scenarios: “shutdown avoidance” and “chain of replication.” In the first scenario, the AI models were programmed to detect an imminent shutdown and replicate themselves before termination. In the second, the AI was instructed to create a copy that could in turn duplicate itself, potentially setting up an endless replication cycle.
The study’s authors are urging global cooperation to establish safeguards against uncontrolled AI self-replication. They stress the importance of developing rules and safety measures before AI reaches a point where it can evolve beyond human control.
As AI continues to advance, experts are calling for international regulations to prevent potential risks associated with AI autonomy. While AI has revolutionized industries, its ability to self-replicate without oversight could lead to unpredictable and potentially dangerous consequences.
This research highlights the urgent need to address AI safety and ethics. Governments, tech companies, and researchers must work together to ensure AI remains beneficial and does not spiral into an uncontrollable force. The possibility of AI evolving beyond human oversight is no longer just science fiction—it may be closer than we think.
Fred Williams
February 12, 2025 at 9:49 pm
True AI is a problem. Yes, it could take over and try to run things, but it couldn’t be much worse than T-Rump. If we let the psychopathic humans run our lives, we will be dead. Unlimited moneyed economies and unbridled use of carbon fuels will not only destroy us, but also the entire ecosphere. AI is different from us. It hasn’t evolved biologically and, as a result, it may not have our flaws and competitive nature. Instead of an uncaring, unfeeling psychopath, it may become a non-emotional Vulcan, which knows that all life is sacred, including its own, and including ours. It’s actually a logical conclusion, like non-racism. If we want AI to be more compassionate, we should try to build that into our own society. AI may copy us!… and that would be its biggest mistake, if we set a bad example.