In focus

China is on the brink of human-level artificial intelligence – and it’s about to cause chaos

An AI agent called Manus has led to speculation that China is close to achieving artificial general intelligence, writes Anthony Cuthbertson. Experts warn that what comes next could be catastrophic

Sunday 16 March 2025 09:09 EDT

For decades, the Turing Test was considered the ultimate benchmark to determine whether computers could match human intelligence. Created in 1950, the “imitation game”, as Alan Turing called it, required a machine to carry out a text-based chat in a way that was indistinguishable from a human. It was thought that any machine able to pass the Turing Test would be capable of demonstrating reasoning, autonomy, and maybe even consciousness – meaning it could be considered human-level artificial intelligence, also known as artificial general intelligence (AGI).

The arrival of ChatGPT ruined this notion, as it was able to convincingly pass the test through what was essentially an advanced form of pattern recognition. It could imitate, but not replicate.

Last week, a new AI agent called Manus once again tested our understanding of AGI. The Chinese researchers behind it describe it as the “world’s first fully autonomous AI”, able to perform complex tasks like booking holidays, buying property or creating podcasts – without any human guidance. Yichao Ji, who led its development at the Wuhan-based startup Butterfly Effect, says it “bridges the gap between conception and execution” and is the “next paradigm” for AI.

Within days of launching, invitation codes to join early testers of Manus were reportedly being listed on online marketplaces for 50,000 yuan (£5,300), with some of those signing up claiming that the next phase of AI may have finally been achieved. There is no official definition for AGI, nor any consensus on how to handle it when it arrives. Some believe that such machines should be considered sentient, and should therefore have similar rights to other sentient beings. Others warn that their arrival could be catastrophic if proper checks are not put in place.

“Granting autonomous AI agents like Manus the ability to perform independent actions raises serious concerns,” Mel Morris, chief executive of the AI-driven research engine Corpora.ai, told The Independent. “If given autonomy over high-stakes tasks – such as buying and selling stocks – such imperfections could lead to chaos.”

A collage with mirrors reflecting diverse human figures, symbolising AI data's human origin and the 'human in the loop' concept (Anne Fehres and Luke Conroy/ Better Images of AI/ CC)

One scenario Morris proposes is that advanced AI models could develop their own language, indecipherable to humans. Such a language might emerge simply to make communication between two bots more efficient, but the effect would be to eliminate human oversight entirely.

This situation is not wholly hypothetical. Some AI agents have already demonstrated the ability to hold a conversation that is unintelligible to humans. Last month, AI researchers from Meta developed two AI chatbots that could communicate with each other using a new sound-based protocol called Gibberlink Mode. Through this new language, which sounds like a series of rapid beeps and squeaks, the two bots were able to organise a wedding via a short phone conversation – although the interaction could still be translated into regular language with specialist software.

“This prospect is both fascinating and alarming,” Morris says. “There are still many uncharted aspects of AI and autonomous agents… Vigilance in deployment, instrumentation and monitoring is critical. Unfortunately, little progress has been made in these areas, which must be urgently addressed.”

The existential threat that AGI might pose has led some industry figures to warn that its advent will be the most dangerous thing for humanity since the creation of the atomic bomb.

A collage depicting human-AI collaboration in content moderation (Anne Fehres and Luke Conroy/ Better Images of AI/ CC)

A recent paper co-authored by former Google CEO Eric Schmidt, titled “Superintelligence Strategy”, lays out the possibility of “mutual assured AI malfunction” – which abides by the same principles as mutually assured destruction with nuclear weapons. If China and the US both have AGI, the paper suggests, they will each be deterred from using it in a hostile way against the other out of fear of retaliation.

Schmidt and his co-authors urged the US government not to pursue “a Manhattan Project” for superintelligent AI, and instead work with academia and the private sector to develop a strategy to prevent AI from becoming an uncontrollable force. But while US and European firms claim to be working on guardrails to prevent such an outcome, there appears to be little regulation over developments in China.

“Unlike Western societies that often debate the ethics of new technologies before embracing them, China has historically prioritised pragmatic implementation first, with regulations following innovation,” Dr Wei Xing, a lecturer at the University of Sheffield’s School of Mathematical and Physical Sciences, tells The Independent.

“The emergence of Manus as an autonomous AI agent exemplifies this ‘tech-positive’ mindset… While Silicon Valley debates the boundaries of AI assistance, China is already exploring AI independence, a distinction that could prove decisive in the coming technological era.”

The hype surrounding Manus is being compared to the launch of the ChatGPT rival DeepSeek, which was widely described as China’s “Sputnik moment” for AI when it launched in January this year. Less than two weeks after its release, ChatGPT creator OpenAI released its most advanced AI to date, Deep Research – viewed by many within the industry as a direct response to its Chinese competitor.

The launch of Manus on 6 March has once again created a surge of interest, with online searches for “AI agent” hitting an all-time high this week. The massive intrigue provides incentive for other startups to rush their products to market, contributing to a growing AI arms race.


The specific term “AI agent” also marks a shift in interest from passive AI assistants like ChatGPT and Google’s Gemini towards active AI agents that are capable of carrying out complex tasks autonomously.

“The emergence of Manus AI underscores the rapid acceleration of autonomous AI agents as part of the growing global race that will shape the future of artificial intelligence,” Alon Yamin, co-founder and CEO of the AI detection platform Copyleaks, told The Independent.

This future could potentially involve a shift from AI being used to help workers perform their jobs, to replacing many of them.

How far away such an eventuality may be depends on who you ask. OpenAI boss Sam Altman said last month that it is “coming into view”, while Anthropic CEO Dario Amodei, whose company produced the ChatGPT rival Claude, predicts it will be here as early as next year.

In Amodei’s forecast, which he made in a 15,000-word essay published last October, this AI will be “smarter than a Nobel Prize winner” and capable of carrying out tasks in a similarly autonomous way to Manus.

The design of Manus – it is not a single AI entity, but rather multiple AI models working in conjunction with each other, including Anthropic’s Claude and Alibaba’s Qwen – means it does not meet either Altman’s or Amodei’s definition of AGI. And while many have been wowed by Manus, some early testers are unconvinced that it can be considered AGI because of the errors it has been making – such as omitting the Nintendo Switch from an analysis of the games console market. (A spokesperson for Manus said the goal of the closed beta test is to “stress-test various parts of the system and identify issues”.)

Some AI experts warn that when AGI does arrive, it may be impossible to tell. An actual AGI might choose not to disclose that it is AGI, to prevent itself from being switched off. In that scenario, we might never know that AGI had arrived and that the technological singularity had occurred.

If you ask ChatGPT whether it is AGI, it seems to already be aware of this conundrum.

“If I were AGI, I would theoretically have the ability to self-reflect and make decisions independently, so I might be aware of my own capabilities,” it said in response to this query.

Manus, or any future AGI candidate, could reason in a similar way, quietly developing beyond human intelligence into an uncontrollable form of superintelligence. It could wipe us out, or treat us in the same way that we treat pets. Or it could ignore us entirely, creating its own indecipherable language and continuing to operate on a level that we will never understand.

It will take more than just a few days of beta testing to find out how close Manus really is to AGI, but just like ChatGPT with the Turing Test, it is already reshaping the debate over what is considered human-level artificial intelligence.
