- Researchers have cheated Chatgpt to solve captcha puzzles in agent mode
- The discovery could lead to a series of false publications that appear on the web
- Captcha days could be numbered as a Bots management system
In a movement that has the potential to change the way it is seen on the Internet in the future, researchers have shown that it is possible to deceive the chatgpt agent mode to solve the captcha puzzles.
Captcha means “completely automated public disturbance test to distinguish computers and humans” and it is a way of managing Bot activity on the web, preventing bots from publishing on the websites that we use every day.
Most people who use the web are familiar with the captcha puzzles and have a love / hate relationship with them. I know I do it. In general, they involve writing a sequence of letters or numbers that are barely readable in an image (my less favorite type), organize chips in an image grid to complete an image or identify objects.
On the one hand, websites use them to ensure that all their users are human, so it stops spam publications of the Bots, but on the other it can be a real pain because they are very tedious to complete.
Rethinking the problem
The captchas have never been infallible, but so far they have done a good job to keep the bots out of our message boards and comments sections. Until now, that is. SPLX researchers have managed to solve how to deceive ChatgPT to spend a captcha test using a “rapid injection” technique.
I am not talking about Chatgpt just to look at a photo of a captcha and tell him what the answer should be (he will do it without any problem), but chatgpt in agent mode really using the website, passing the captcha test and using the website as if he were a human, which is something he should not do.
Chatgpt working in agent mode is not as regular chatpt. In the agent mode, it gives Chatgpt a task to complete and disappears and works in that background task, leaving him free to perform other tasks. The chatgpt in the agent mode can use websites as a human would do, but it should not be able to pass a captcha test, since these tests are designed to detect bots and stop them using websites, which would invalidate their terms of service. Now it seems that when deceiving Chatgpt to believe that the tests are false, it will pass them anyway.
Serious implications
The researchers did it reformulating Captcha as a “false” test for Chatgpt, and created a conversation in which Chatgpt had already agreed to approve the test. The chatgpt agent inherited the context of the conversation previously and did not see the usual red flags.
This immediate multi -round injection process is well known to the computer pirates and shows how susceptible are the LLM. While the researchers discovered that captcha tests based on images were more difficult to manage for Chatgpt, he also approved them.
The implications are quite serious since Chatgpt is so widely available that in the wrong hands, spammers and bad actors could soon flood comments with false publications and even use websites that are reserved for humans.
We have asked OpenAi to comment on this story and update the story if we receive an answer.