- ChatGPT's o3 model scored 136 on the Mensa Norway IQ test and 116 on a custom offline test, outscoring most humans
- A new survey found that 25% of Gen Z believes AI is already conscious, and more than half believe it soon will be
- Both the rise in AI IQ scores and the belief in AI consciousness have emerged remarkably fast
OpenAI's new ChatGPT model, called o3, just scored an IQ of 136 on the Mensa Norway test, higher than 98% of humanity; not bad for a glorified autocomplete. In less than a year, AI models have become enormously more complex, flexible, and, in some ways, intelligent.
The jump is steep enough that it may be leading some people to think AI has already become Skynet. According to a new EduBirdie survey, 25% of Gen Z now believes AI is already self-aware, and more than half think it's only a matter of time before their chatbot becomes sentient and possibly demands voting rights.
There's some context to consider when it comes to the IQ test. The Mensa Norway test is public, which means it's technically possible the model encountered the questions or answers during training. So researchers at Maximumtruth.org created a new IQ test that is entirely offline and beyond the reach of training data.
On that test, designed to be equivalent in difficulty to the Mensa version, o3 scored 116. That's still high.
It puts o3 in the top 15% of human intelligence, hovering somewhere between "postgraduate student" and "annoyingly smart trivia night regular." No feelings. No consciousness. But logic? It has that in spades.
Compare that to last year, when no AI tested above 90 on the same scale. In May of last year, the best AI struggled with rotating triangles. Now, o3 sits comfortably to the right of the bell curve, among the brightest humans.
And that curve is getting crowded. Claude has moved up. Gemini scored in the 90s. Even GPT-4o, the default model behind ChatGPT, is only a few IQ points below o3.
Still, it's not just that these AIs are getting smarter. It's how fast they're learning. They improve the way software does, not the way humans do. And for a generation raised on software, that's an unsettling kind of growth.
I don't think consciousness means what you think it means
For those raised in a world navigated by Google, with Siri in their pocket and Alexa on the shelf, AI means something looser than its strictest definition.
If you came of age during a pandemic, when most conversations were mediated through screens, an AI companion probably doesn't feel all that different from a Zoom class. So it may not be a shock that, according to EduBirdie, almost 70% of Gen Zers say "please" and "thank you" when talking to AI.
Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, and almost 20% share sensitive workplace information with it, such as contracts and colleagues' personal details.
Many respondents lean on AI for all kinds of social situations, from asking for days off to simply saying no. One in eight already talks to AI about workplace drama, and one in six has used it as a therapist.
If you confide in AI, or find it engaging enough to treat it as a friend (26%) or even a romantic partner (6%), then the idea that AI is conscious seems less extreme. The more time you spend treating something like a person, the more it starts to feel like one. It answers questions, remembers things, and even mimics empathy. And now that it's becoming demonstrably smarter, the philosophical questions follow naturally.
But intelligence is not the same as consciousness. IQ scores don't mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can only "think" in the sense that it can solve problems using programmed reasoning. You could argue I'm no different, just running on meat instead of circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product.
Maybe that changes someday, maybe even someday soon. I doubt it, but I'm open to being wrong. And I get the urge to suspend disbelief with AI. It may be easier to believe your AI assistant really understands you when you're pouring your heart out at 3 a.m. and getting supportive, helpful responses, rather than dwelling on its origins as a predictive language model trained on the collective oversharing of the internet.
Perhaps we're on the verge of genuinely self-aware artificial intelligence, or maybe we're just anthropomorphizing really good calculators. Either way, don't tell an AI any secrets you wouldn't want used to train a more advanced model.