Are you smarter than a supercomputer? 4-year-olds are

New research from the University of Illinois pitted an advanced AI against a children's IQ test

James Vincent
Thursday 18 July 2013 05:43 EDT
Word games: experts say phonics tests can create problems for children and hold them back (Alamy)

New research from America has pitted one of the world’s most powerful computers against a challenge more familiar to humans – an IQ test. And the result? The artificial intelligence system turned out to be about as smart as the average four-year-old.

The research was conducted by artificial and natural knowledge specialists at the University of Illinois at Chicago. They tested ConceptNet 4 (an AI system developed at MIT) with the verbal sections of the Primary Scale of Intelligence Test – a standard assessment of IQ in young children.

Key differences in how human and machine brains operate were highlighted by ConceptNet 4’s uneven test scores: it scored well on tests of vocabulary and on recognising similarities, but was stumped by comprehension – the ‘why’ questions.

“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study. “We’re still very far from programs with common sense and AI that can answer comprehension questions with the skill of a child of eight.”

The hardest thing about creating artificial intelligence, says Sloan, is duplicating what we simply think of as common sense. The computer scores well in certain areas because they require only large stores of memory or pattern-based comparisons. What Sloan calls implicit facts – things so obvious we don’t know that we know them – are much harder to program.

“All of us know a huge number of things,” said Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled. Life is a rich learning environment.”

Sloan hopes that studies such as this will help to highlight the ‘hard spots’ of AI research, though with computers like Watson adapting to natural-language queries and even competing in quiz shows, it looks like our four-year-old computers will soon be growing up.
