Dr. Geoffrey Hinton deserves credit for helping to build the foundation of practically all the neural-network-based generative AI we use today. You can also credit him with consistency in recent years: he still believes the rapid expansion of AI development and use will lead to some fairly serious outcomes.
Two years ago, in an interview with The New York Times, Dr. Hinton warned: “It is hard to see how you can prevent the bad actors from using it for bad things.”
Now, in a new sit-down, this time with CBS News, the Nobel Prize winner is raising the alarm again, admitting that when he figured out how to make a computer brain work more like a human brain, “I didn’t think we’d get here in 40 years,” adding that “10 years ago I didn’t believe we’d get here.”
Yet here we are, rushing toward an unknowable future, with the pace of AI model development easily outstripping the pace of Moore’s law (which holds that the number of transistors on a chip doubles approximately every 18 months). Some might argue that artificial intelligence doubles in capability every 12 months or so, and it certainly delivers significant leaps every quarter.
Naturally, Dr. Hinton’s reasons for concern are now numerous. Here is some of what he told CBS News.
1. There’s a 10% to 20% risk that AIs take over
That, according to CBS News, is Dr. Hinton’s current assessment of the risk to humanity. It’s not that Dr. Hinton doubts AI advances will pay dividends in medicine, education, and climate science; the question here is, at what point does AI become so intelligent that we no longer know what it’s thinking or, perhaps, plotting?
Dr. Hinton didn’t directly address artificial general intelligence (AGI) in the interview, but it must be on his mind. AGI, which remains a somewhat amorphous concept, would mean AI machines surpassing human intelligence, and if they do, at what point does AI begin, as humans do, to act in its own self-interest?
2. Is it a “cute tiger cub” that could one day kill you?
In trying to explain his concerns, Dr. Hinton compared today’s AI to owning a tiger cub. “It’s just such a cute tiger cub, unless you can be very sure that it’s not going to want to kill you when it’s grown up.”
The analogy makes sense when you consider how most people engage with AIs such as ChatGPT, Copilot, and Gemini, using them to generate fun images and videos and declaring, “Isn’t that adorable?” But behind all the fun and shared images sits an emotionless system interested only in delivering the best result, as its neural network and models understand it.
3. Hackers will be more effective: banks and more could be at risk
When it comes to current AI threats, Dr. Hinton is clearly taking them seriously. He believes AI will make hackers more effective at attacking targets such as banks, hospitals, and infrastructure.
AI, which can write code and help solve difficult problems, could outmatch your efforts to protect yourself. Dr. Hinton’s response? Mitigating the risk by spreading his money across three banks. That seems like good advice.
4. Authoritarians can put AI to bad use
Dr. Hinton is so concerned about the looming threat of AI that he told CBS News he’s glad he’s 77 years old, which I suppose means he hopes to be gone before the potential AI worst-case scenario plays out.
However, I’m not sure he’ll be out of here in time. We have a growing legion of authoritarians worldwide, some of whom are already using AI-generated imagery to fuel their propaganda.
5. Tech companies aren’t focusing enough on AI safety
Dr. Hinton argues that the big tech companies focused on AI, namely OpenAI, Microsoft, Meta, and Google (where Dr. Hinton formerly worked), are concentrating too much on short-term profits and not enough on AI safety. That’s difficult to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.
Dr. Hinton has taken notice when some do try to sound the alarm. He told CBS News he was proud of his former protégé and OpenAI’s former chief scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately departed.
As for what comes next, and what we should do about it, Dr. Hinton offers no answers. In fact, he seems almost as overwhelmed by it all as the rest of us, telling CBS News that, while he doesn’t despair, “we’re at this very special point in history where in a relatively short time everything might totally change, at a scale we’ve never seen before. It’s hard to absorb that emotionally.”
You can say that again, Dr. Hinton.