Improvements in 'reasoning' AI models may slow down soon, analysis finds

Published May 13, 2025

An analysis (https://epoch.ai/gradient-updates/how-far-can-reasoning-models-scale) by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to eke massive performance gains out of reasoning AI models for much longer. Progress from reasoning models could slow down as soon as within a year, according to the report's findings.

Reasoning models such as OpenAI's o3 have driven substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. These models can apply more computing to problems, which can improve their performance; the downside is that they take longer than conventional models to complete tasks.

Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.

So far, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch.

That's changing. OpenAI has said it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning, devoting far more computing power to it than even to initial model training.

But there's still an upper bound to how much computing can be applied to reinforcement learning, per Epoch.

[Figure: According to an Epoch AI analysis, reasoning model training scaling may slow down. Image credits: Epoch AI]

Josh You, an analyst at Epoch and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. The progress of reasoning training will "probably converge with the overall frontier by 2026," he continues.

Epoch's analysis makes a number of assumptions and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove challenging for reasons beyond computing, including high overhead costs for research.

"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," You writes. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."

Any indication that reasoning models may reach some sort of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these models. Studies have already shown that reasoning models, which can be incredibly expensive to run, have serious flaws, such as a tendency to hallucinate more than certain conventional models.

Source: https://techcrunch.com/2025/05/12/improvements-in-reasoning-ai-models-may-slow-down-soon-analysis-finds/