“We are living in AI time,” is what I tell people now when they try to make sense of the rapid pace of AI and all its connected technological advances.
Although only two and a half years have passed since OpenAI unleashed ChatGPT on the world, I have intuitively known for months that, in the world of technology, we are no longer operating under Moore's Law, which holds that the number of transistors on a chip doubles roughly every two years. We are now under an AI Model Law, in which the generative capabilities of models double every three months.
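To make that comparison concrete, here is a rough back-of-the-envelope sketch (my own illustration, not a figure from Meeker's report) of what those two doubling rates would imply over the roughly 30 months since ChatGPT arrived:

```python
# Rough comparison of the two doubling rates (illustrative assumption: clean
# exponential compounding over roughly 30 months since ChatGPT's late-2022 launch).

def growth_factor(months_elapsed: float, months_per_doubling: float) -> float:
    """Multiplicative growth after months_elapsed, doubling once per months_per_doubling."""
    return 2 ** (months_elapsed / months_per_doubling)

months = 30

moores_law = growth_factor(months, 24)   # transistor counts: one doubling every ~2 years
ai_model_law = growth_factor(months, 3)  # the "AI Model Law": one doubling every 3 months

print(f"Moore's Law over {months} months:   ~{moores_law:.1f}x")    # ~2.4x
print(f"AI Model Law over {months} months: ~{ai_model_law:,.0f}x")  # ~1,024x
```

On those assumptions, Moore's Law compounds to roughly 2.4x while a three-month cadence compounds to roughly 1,024x, and that gap is really what the rest of this piece is about.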
Even if you do not believe that large language models (LLMs) are advancing at that pace, their unprecedented speed of adoption cannot be denied.
A new report (or rather a 340-page presentation) by Mary Meeker, a general partner at Bond Capital, paints the clearest picture yet of the transformative nature of AI and how it differs from any previous technological era.
“The pace and scope of change related to the evolution of artificial intelligence technology is unprecedented, as the data supports,” Meeker and her co-authors wrote.
Google, what?
One statistic in particular stood out to me: it took Google nine years to reach 365 billion annual searches. ChatGPT hit the same milestone in two years.
Meeker's presentation illustrates something I have been trying to articulate for some time: there has never been a moment like this.
I have lived through some major technological shifts: the rise of the personal computer, the move from analog to digital publishing tools, and the online revolution. Most of that change was gradual, even though, granted, it felt fast at the time.
I first saw digital publishing tools in the mid-1970s, and it was not until the mid-1980s that many of us made the switch, which is also when personal computers began to arrive, although they would not become ubiquitous for at least another decade.
When the public internet arrived in 1993, it would be years before most people had broadband. Knowledge workers were not transformed overnight. Instead, there was a slow, steady shift across the workforce.
I would say we had a solid decade of adjustment before the internet and its associated systems and platforms became an inexorable part of our lives.
I still remember how confused the average person was about being online. On the Today show in 1994, the hosts literally asked aloud, “What is the internet?” AI and platforms such as ChatGPT, Copilot, Claude AI, and others have not met the same level of confusion.
Meeker's report notes that ChatGPT users shot from zero at the chatbot's launch in late 2022 to 400 million by the end of 2024 and 800 million in 2025. A surprising 20 million people are paying subscribers. It took decades to convince people to pay for any internet content, but with AI, people are already lining up with their wallets open.
I suppose the emergence of the internet and ubiquitous mobile computing could have prepared us for the AI era. It is not as if artificial intelligence appeared out of nowhere. Then again, maybe it did.
Almost three decades ago, we were marveling at IBM's Deep Blue, the first AI to beat a chess grandmaster, Garry Kasparov. That was followed in 2005 by an autonomous car completing the DARPA Grand Challenge. A decade after that, we watched DeepMind's AlphaGo beat the best Go player in the world.
Some of these developments were startling, but they arrived at a relatively digestible pace. Even so, things began to pick up around 2016, and several groups started sounding warning bells about AI. No one was yet publicly using terms like “LLM” or “generative AI.” Still, the concern was such that IBM, Amazon, Facebook, Microsoft, and Google's DeepMind formed the non-profit Partnership on AI, intended to “address opportunities and challenges with AI technologies to benefit people and society.”
While that group still exists, I am not sure anyone is paying attention to its recommendations. AI time leaves little time, I think, for self-reflection.
A 2016 Stanford University study on AI in 2030 (no longer available online) noted that, “contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.”
Meeker's presentation, however, paints an accelerating picture that, I think, gives some cause for concern, with one caveat: the predictions come from ChatGPT (which is an even bigger cause for concern).
By 2030, for example, it predicts AI will be able to create feature-length movies and games. I would argue that Google's Veo 3 is proof we are well on our way.
It predicts AI will be able to operate humanoid robots. I would add that AI time has accelerated humanoid robot development in a way that, in my 25 years of covering robotics, I have never seen before.
It says AI will build and run autonomous businesses.
In 10 years, ChatGPT believes, AI will be able to simulate human-like minds.
If we remember that ChatGPT, like most LLMs, bases most of its knowledge on the known universe, I think we can assume these predictions are, if anything, a bit simplistic. Even AI does not know what we do not know.
There was some argument in the office that I had the wrong equation: there is no AI Model Law, only Huang's Law (named for Jensen Huang, founder and CEO of Nvidia), which predicts a doubling of GPU performance at least every two years. Without the power of those processors, the argument goes, AI stalls. Maybe, but I believe the power of these models has not yet caught up with the processing power Nvidia's GPUs provide.
Huang is simply building for a future in which every person and company wants GPU-based generative power. That means we need more processors, more data, and developmental leaps to prepare for the models to come. But model development, in real time, is not being held back by GPU development. These generative leaps are happening much faster than silicon advances.
If you accept that there is such a thing as AI time, and that the AI Model Law (heck, let's call it “Ulanoff's Law”) is real, then it is easy to accept ChatGPT's vision of our imminent reality.
We may not be ready for it, but it is coming anyway. I wonder what ChatGPT thinks about that.