

It has been more than just hyperscaling. First of all, the invention of transformers would likely have been significantly delayed without the hype around CNNs in the deep learning wave around 2014. OpenAI wouldn't have been founded, and their early contributions (like Proximal Policy Optimization) could have taken longer to be explored.
While I agree that the transformer architecture itself hasn't advanced much since 2018 apart from scaling, its success has contributed significantly to the rise of self-learning policies.
RLHF, Direct Preference Optimization, and in particular DeepSeek's GRPO are huge milestones for reinforcement learning, which is arguably the most promising trajectory toward actual intelligence. They are a direct consequence of the money pumped into AI and the appeal the field has for many smart and talented people around the world.
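To make the GRPO point concrete, here is a minimal sketch of the group-relative advantage that gives the method its name, assuming only a scalar reward per sampled response; the full objective also uses a PPO-style clipped ratio and a KL penalty, which are omitted here.

    import numpy as np

    def grpo_advantages(rewards):
        # GRPO scores each sampled response against the mean and std of its
        # own group, so no separate learned value/critic network is needed.
        r = np.asarray(rewards, dtype=float)
        return (r - r.mean()) / (r.std() + 1e-8)

    # Example: four sampled answers to the same prompt, scored 0/1 by a verifier.
    print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # ~[ 1., -1.,  1., -1.]

The appeal is exactly that simplicity: replacing the critic with a per-group baseline makes large-scale RL on LLM outputs much cheaper to run.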
I believe that we are not yet in the end stage of AI. LLMs are certainly useful, but they cannot solve the most important problems of mankind.
More research is required to address, e.g., a) sustainable energy supply, b) the demographic imbalances of industrialized countries, and c) treatment of several diseases.
Like it or not, AI that can do research for us, or even just increase the efficiency of human researchers, is the most promising trajectory for accelerating progress on these important problems.
Right now, AI has not grown beyond that scope. Yes, AI can generate quite realistic fake videos, but propaganda was possible long before (look at China, Russia, or Nazi Germany; even TikTok without any AI is dangerous enough to severely threaten democracies).
As a researcher in the domain, let me tell you that no one with serious knowledge of video generation and related fields is afraid of the current state of AI.