AI Model Fundamentally Cracks CAPTCHAs, Scientists Say

A representation of the letter A, which can be used to crack CAPTCHAs. (Vicarious AI)

Scientists say they have developed a computer model that fundamentally breaks through a key test used to tell a human from a bot.

You've probably passed this test hundreds of times. Text-based CAPTCHAs, a rough acronym for Completely Automated Public Turing Test To Tell Computers and Humans Apart, are groups of jumbled characters along with squiggly lines and other background noise.

You might be asked to type in these characters before signing up for a newsletter, for example, or purchasing concert tickets.

There are a staggering number of ways that letters can be rendered and jumbled together in text that remains intuitive for a human to read but difficult for a computer to parse. The ability to crack CAPTCHAs has become a key benchmark for artificial intelligence researchers.

Captcha code on a laptop. (milindri / Getty Images/iStockphoto)

Many have tried, with some success. A decade ago, for example, Ticketmaster sued a tech company that was able to bypass its CAPTCHA system and purchase concert tickets on a massive scale.

But those previous attempts simply exploited weaknesses in a particular kind of CAPTCHA, which could be easily defended against with slight changes in the program, says Dileep George, co-founder of the AI company Vicarious.

A new model, described in research published today in Science, fundamentally breaks CAPTCHAs' defenses by parsing the text more effectively than previous models while requiring far less training, George says.

He says that previous models trying to get machines to learn like humans have largely relied on a prevailing AI technique called deep learning.

"Deep learning is a technique where you have layers of neurons and you train those neurons to respond in a way that you decide," he says. For example, you could train a machine to recognize the letters A and B by showing it hundreds of thousands of example images of each. Even then, it would have difficulty recognizing an A overlapping with a B unless it had been explicitly trained with that image.

"It replicates only some aspects of how human brains work," George says. We are, of course, able to learn from examples. But a human child would not need to see a huge number of each character to recognize it again. For example, George says, a brain would recognize an A even if it were larger or slanted.

George's team used a different approach, called a Recursive Cortical Network, which he says is better able to reason about what it is seeing, even with less training.

"We found that there are assumptions the brain makes about the visual world that the [deep learning] neural networks are not making," George says. Here's how their new approach works:

"During the training phase, it builds internal models of the letters that it is exposed to. So if you expose it to As and Bs and different characters, it will build its own internal model of what those characters are supposed to look like. So it would say, these are the contours of the letter, this is the interior of the letter, this is the background, etc. And then, when a new image comes in ... it tries to explain that new image, trying to explain all the pixels of that new image in terms of the characters it has seen before. So it will say, this portion of the A is missing because it is behind this B."

There are multiple kinds of CAPTCHAs. According to the paper, the model "was able to solve reCAPTCHAs at an accuracy rate of 66.6% ..., BotDetect at 64.4%, Yahoo at 57.4% and PayPal at 57.1%."

The point of this research, though, actually has nothing to do with CAPTCHAs. It's about making robots that can visually reason like humans.

"The long-term goal is to build intelligence that works like the human brain," George says. "CAPTCHAs were just a natural test for us, because it is a test where you are checking whether your system can work like the brain."

"Robots need to understand the world around them and be able to reason with objects and manipulate objects," George adds. "So those are cases where requiring less training examples and being able to deal with the world in a very flexible way and being able to reason on the fly is very important, and those are the areas that we're applying it to."

What does he say to people who are uneasy about robots with human-like capabilities? Simply: "This is going to be the march of technology. We will have to take it for granted that computers will be able to work like the human brain."

It's not clear how big an impact this research will have on information security. George points out that Google has already moved away from text-based CAPTCHAs, using more advanced tests. As AI gets smarter, so too will the tests required to prove that a user is human.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Merrit Kennedy is a reporter for NPR's News Desk. She covers a broad range of issues, from the latest developments out of the Middle East to science research news.