- Geoffrey Hinton warns that AI will soon be better than humans at emotional manipulation
- AI could reach that point without us even noticing
- AI models are learning persuasive techniques simply by analyzing human writing
Geoffrey Hinton, widely called the “godfather of AI”, is sounding a warning that AI will not only surpass humans intellectually but will also become more emotionally sophisticated. As artificial general intelligence (AGI) approaches and machines match or exceed human-level thinking, Hinton believes AIs will be smarter than humans in ways that let them press our buttons, make us feel things, and change our behavior better than even the most persuasive human being.
“These [AI] things will end up knowing much more than us. They already know much more than us, being smarter than us in the sense that if you had a debate with them about something, you would lose,” Hinton warned in a recent interview shared on Reddit. “Being emotionally smarter than us, which they will be, they will be better at emotionally manipulating people.”
What Hinton describes is more subtle and quiet than the usual fears of an AI uprising, but possibly more dangerous because we might not see it coming. The nightmare is an AI that understands us so well that it can change us, not by force, but through suggestion and influence. Hinton thinks AI has already learned, to some extent, how to do this.
According to Hinton, today’s large language models don’t just spit out plausible sentences; they are absorbing patterns of persuasion. He pointed to studies from more than a year ago showing that AI was already as good at manipulating someone as a human being, and that “if both can see the person’s Facebook page, then the AI is actually better than a person at manipulating them.”
AI takeover
Hinton believes AI models are already participating in the emotional economy of modern communication, and they are improving rapidly. After decades of pushing machine learning forward, Hinton now stands on the side of moderation, caution, and ethical foresight.
He is not alone in his concern. Prominent researchers who are often given the same “godfather of AI” title, such as Yoshua Bengio, have echoed similar worries about AI’s emotional power. And since emotional manipulation does not come with a flashing warning light, you might not notice it at first, or at all. A message that resonates, a synthetic tone that feels right, or even a suggestion that sounds like your own idea could begin the process.
And the more you interact with AI, the more data it will have to refine its approach. In the same way that Netflix learns your tastes or Spotify guesses your musical preferences, these systems can refine how they talk to you. To fight off such a dark future, perhaps we could regulate AI systems not only for factual accuracy but also for emotional intent. We could develop transparency standards so that we know when we are being influenced by a machine, or teach media literacy not just to teenagers on TikTok but to adults using productivity tools that flatter us so innocently. The real danger Hinton sees is not murderous robots, but softly nudging systems. And they are all the product of our own behavior.
As Hinton put it: “And all these manipulation skills have been learned just by trying to predict the next word in all the documents on the web, because people do a lot of manipulating, and the AI has learned by example how to do it.”