19.04.2023

Era of Gigantic AI Models Comes to an End

Yuliia Zablotska
Author at ApiX-Drive
Reading time: ~1 min

OpenAI CEO Sam Altman spoke at the Massachusetts Institute of Technology during the «Imagination in Action» event devoted to AI-driven business. The head of the research lab suggested that today's gigantic artificial intelligence models are unlikely to grow much larger: their current size is most likely already close to the limit.

One of the main factors holding back progress on large language models (LLMs) is the extremely high and volatile cost of powerful graphics processors. For example, training the widely known AI chatbot ChatGPT reportedly required over 10,000 such GPUs, and even more are needed to keep it running around the clock. New Nvidia H100 GPUs, designed specifically for high-performance computing (HPC) and AI, can cost up to $30,600 per unit. According to Run AI co-founder and CTO Ronen Dar, training next-generation LLMs will require computing resources costing hundreds of millions of dollars.
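As a rough illustration (a minimal sketch using only the GPU count and unit price quoted above, and ignoring networking, storage, power, and operating costs), the hardware bill alone for a cluster of that size already lands in the hundreds of millions of dollars:

# Back-of-envelope estimate of GPU hardware cost for LLM training.
# Figures are the ones quoted in the article; real clusters also need
# networking, storage, power, and staff, so this is a lower bound.
gpu_count = 10_000          # GPUs reportedly used to train ChatGPT
price_per_gpu = 30_600      # top quoted price of an Nvidia H100, in USD
hardware_cost = gpu_count * price_per_gpu
print(f"GPU hardware alone: ${hardware_cost:,}")  # $306,000,000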

So how will large AI models develop from here? «We will improve them in other ways», Sam Altman noted in his speech. Because of rapidly rising costs, LLM developers will focus on improving model architecture, advancing algorithmic methods, and increasing data efficiency rather than on further scaling. In other words, they plan to shift the focus from quantity to quality, which will only benefit AI.