{"id":93115,"date":"2025-04-07T03:27:03","date_gmt":"2025-04-07T03:27:03","guid":{"rendered":"https:\/\/neclink.com\/index.php\/2025\/04\/07\/metas-benchmarks-for-its-new-ai-models-are-a-bit-misleading\/"},"modified":"2025-04-07T03:27:03","modified_gmt":"2025-04-07T03:27:03","slug":"metas-benchmarks-for-its-new-ai-models-are-a-bit-misleading","status":"publish","type":"post","link":"https:\/\/neclink.com\/index.php\/2025\/04\/07\/metas-benchmarks-for-its-new-ai-models-are-a-bit-misleading\/","title":{"rendered":"Meta&#8217;s benchmarks for its new AI models are a bit misleading"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">One of the <a href=\"https:\/\/techcrunch.com\/2025\/04\/05\/meta-releases-llama-4-a-new-crop-of-flagship-ai-models\/\">new flagship AI models<\/a> Meta released on Saturday, Maverick, <a rel=\"nofollow\" href=\"https:\/\/lmarena.ai\/?leaderboard\">ranks second on LM Arena<\/a>, a test that has human raters compare the outputs of models and choose which they prefer. But it seems the version of Maverick that Meta deployed to LM Arena differs from the version that\u2019s widely available to developers. 
<\/p>\n<p class=\"wp-block-paragraph\">As <a rel=\"nofollow\" href=\"https:\/\/x.com\/natolambert\/status\/1908913635373842655\">several<\/a> <a rel=\"nofollow\" href=\"https:\/\/x.com\/suchenzang\/status\/1908938638869909724\">AI<\/a> <a rel=\"nofollow\" href=\"https:\/\/x.com\/ZainHasan6\/status\/1908943306936967597\">researchers<\/a> pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an \u201cexperimental chat version.\u201d A chart on the <a rel=\"nofollow\" href=\"http:\/\/llama.com\">official Llama website<\/a>, meanwhile, discloses that Meta\u2019s LM Arena testing was conducted using \u201cLlama 4 Maverick optimized for conversationality.\u201d<\/p>\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/techcrunch.com\/2024\/09\/05\/the-ai-industry-is-obsessed-with-chatbot-arena-but-it-might-not-be-the-best-benchmark\/\">As we\u2019ve written about before<\/a>, for various reasons, LM Arena has never been the most reliable measure of an AI model\u2019s performance. But AI companies generally haven\u2019t customized or otherwise fine-tuned their models to score better on LM Arena \u2014 or haven\u2019t admitted to doing so, at least. <\/p>\n<p class=\"wp-block-paragraph\">The problem with tailoring a model to a benchmark, withholding it, and then releasing a \u201cvanilla\u201d variant of that same model is that it makes it challenging for developers to predict exactly how well the model will perform in particular contexts. It\u2019s also misleading. 
Ideally, benchmarks \u2014 <a href=\"https:\/\/techcrunch.com\/2024\/03\/07\/heres-why-most-ai-benchmarks-tell-us-so-little\/\">woefully inadequate as they are<\/a> \u2014 provide a snapshot of a single model\u2019s strengths and weaknesses across a range of tasks.<\/p>\n<p class=\"wp-block-paragraph\">Indeed, researchers on X have <a rel=\"nofollow\" href=\"https:\/\/x.com\/TheXeophon\/status\/1908900306580074741\">observed stark<\/a> <a rel=\"nofollow\" href=\"https:\/\/x.com\/suchenzang\/status\/1908812055014195521\">differences in the behavior<\/a> of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emojis and to give incredibly long-winded answers.<\/p>\n<blockquote class=\"wp-block-quote twitter-tweet is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\">Okay Llama 4 is def a littled cooked lol, what is this yap city <a rel=\"nofollow\" href=\"https:\/\/t.co\/y3GvhbVz65\">pic.twitter.com\/y3GvhbVz65<\/a><\/p>\n<p class=\"wp-block-paragraph\">\u2014 Nathan Lambert (@natolambert) <a rel=\"nofollow\" href=\"https:\/\/twitter.com\/natolambert\/status\/1908893136518098958?ref_src=twsrc%5Etfw\">April 6, 2025<\/a><\/p>\n<\/blockquote>\n<blockquote class=\"wp-block-quote twitter-tweet is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\">for some reason, the Llama 4 model in Arena uses a lot more Emojis<\/p>\n<p class=\"wp-block-paragraph\">on together . 
ai, it seems better: <a rel=\"nofollow\" href=\"https:\/\/t.co\/f74ODX4zTt\">pic.twitter.com\/f74ODX4zTt<\/a><\/p>\n<p class=\"wp-block-paragraph\">\u2014 Tech Dev Notes (@techdevnotes) <a rel=\"nofollow\" href=\"https:\/\/twitter.com\/techdevnotes\/status\/1908851730386657431?ref_src=twsrc%5Etfw\">April 6, 2025<\/a><\/p>\n<\/blockquote>\n<p class=\"wp-block-paragraph\">We\u2019ve reached out to Meta and Chatbot Arena, the organization that maintains LM Arena, for comment.<\/p>\n<\/div>\n<p><script async src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><a href=\"https:\/\/techcrunch.com\/2025\/04\/06\/metas-benchmarks-for-its-new-ai-models-are-a-bit-misleading\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a test that has human raters compare the<\/p>\n","protected":false},"author":1,"featured_media":93116,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[178],"tags":[],"class_list":["post-93115","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/93115","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/comments?post=93115"}],"version-history":[{"count":0,"href":"https:\/
\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/93115\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media\/93116"}],"wp:attachment":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media?parent=93115"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/categories?post=93115"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/tags?post=93115"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}