Sam Altman on the Future of GPT Models: Moving Beyond Gigantic AI Models
Sam Altman, CEO of OpenAI, recently made a bold statement at an MIT event, announcing that the era of massive AI models like GPT-4 is drawing to a close. He believes that future advances in AI will come from innovative ideas rather than simply scaling up existing models. This is a noteworthy claim, considering the exponential growth of OpenAI's large language models (LLMs) in recent years.
The Growth of GPT Models
Over the years, GPT models have seen rapid expansion in terms of parameters:
➡️GPT-2 (2019): 1.5 billion parameters
➡️GPT-3 (2020): 175 billion parameters
➡️GPT-4 (2023): parameter count undisclosed – but likely in the trillions
Why Is the Era of Gigantic Models Ending?
Sam Altman suggests several reasons why the current trend of increasing model size is unsustainable:
➡️RETURNS: Larger models yield diminishing returns.
➡️PHYSICAL LIMITS: There are constraints on the construction and speed of data centers.
➡️COST: Developing ChatGPT alone cost more than 100 million dollars.
Data Access Challenges
Although Altman did not mention it explicitly, obtaining training data for AI models is also becoming increasingly difficult and expensive. The following factors contribute to this growing problem:
1. Copyright Issues: Various entities, including Getty Images and individual artists, are suing AI companies for unauthorized use of their content. Universal Music requested that Spotify and Apple Music restrict AI companies from accessing their songs for training purposes.
2. Privacy Concerns and Regulation: Italy temporarily banned ChatGPT over privacy concerns. Other countries, such as Germany, France, Ireland, Canada, and Spain, remain wary. Samsung has warned its employees against using AI tools like ChatGPT for security reasons.
3. Data Monetization: Twitter, Reddit, Stack Overflow, and other platforms are demanding payment from AI companies for training on their data. Grimes, however, allows her voice to be used for AI-generated songs in exchange for a 50% profit share.
4. Web3's Impact: If Web3 realizes its potential, users could store their data in personal vaults or cryptocurrency wallets, making it harder for LLM developers to access the data they need.
5. Geopolitics: Cross-border data exchange is increasingly complicated, as seen in the case of China and TikTok.
6. Data Contamination: Feeding content generated by AI chatbots back into the training data of LLMs could lead to data contamination and degraded model quality.
The Road Ahead
As the accessibility of data becomes a more significant challenge, leaders like Sam Altman are exploring ways to improve AI models without relying on increasing amounts of data. To learn more about this topic and other related discussions, check out the latest Radar podcast episode (link in the comments), featuring Steven Van Belleghem, Peter Hinssen, Pascal Coppens, and Julie Vens - De Vos.
In this episode, the panel also explores Twitter, TikTok, Walmart, Amazon, Schmidt Futures, the Never Normal Tour with Mediafin in New York, the human energy crisis, Apple's new high-yield savings account, the return of China, BYD, AI investment strategies, the power of proximity, the end of Buzzfeed news, and more.
Don't miss this insightful conversation on the future of GPT models and other fascinating developments in the world of technology and AI.


