{"id":94091,"date":"2025-05-01T03:56:17","date_gmt":"2025-05-01T03:56:17","guid":{"rendered":"https:\/\/neclink.com\/index.php\/2025\/05\/01\/microsofts-most-capable-new-phi-4-ai-model-rivals-the-performance-of-far-larger-systems\/"},"modified":"2025-05-01T03:56:17","modified_gmt":"2025-05-01T03:56:17","slug":"microsofts-most-capable-new-phi-4-ai-model-rivals-the-performance-of-far-larger-systems","status":"publish","type":"post","link":"https:\/\/neclink.com\/index.php\/2025\/05\/01\/microsofts-most-capable-new-phi-4-ai-model-rivals-the-performance-of-far-larger-systems\/","title":{"rendered":"Microsoft&#8217;s most capable new Phi 4 AI model rivals the performance of far larger systems"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Microsoft <a rel=\"nofollow\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\">launched several new \u201copen\u201d AI models<\/a> on Wednesday, the most capable of which is competitive with OpenAI\u2019s o3-mini on at least one benchmark.<\/p>\n<p class=\"wp-block-paragraph\">All of the new permissively licensed models \u2014 Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus \u2014 are \u201creasoning\u201d models, meaning they\u2019re able to spend more time fact-checking solutions to complex problems. They expand Microsoft\u2019s Phi \u201csmall model\u201d family, which the company launched a year ago to offer a foundation for AI developers building apps at the edge.<\/p>\n<p class=\"wp-block-paragraph\">Phi 4 mini reasoning\u00a0was trained on roughly 1 million synthetic math problems generated by Chinese AI startup DeepSeek\u2019s R1 reasoning model. Around 3.8 billion parameters in size, Phi 4 mini reasoning is designed for educational applications, Microsoft says, like \u201cembedded tutoring\u201d on lightweight devices. 
<\/p>\n<p class=\"wp-block-paragraph\">Parameters roughly correspond to a model\u2019s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.<\/p>\n<p class=\"wp-block-paragraph\">Phi 4 reasoning, a 14-billion-parameter model, was trained using \u201chigh-quality\u201d web data as well as \u201ccurated demonstrations\u201d from OpenAI\u2019s aforementioned o3-mini. It\u2019s best for math, science, and coding applications, according to Microsoft.<\/p>\n<p class=\"wp-block-paragraph\">As for Phi 4 reasoning plus, it\u2019s Microsoft\u2019s previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. Microsoft claims that Phi 4 reasoning plus approaches the performance levels of R1, a model with significantly more parameters (671 billion). The company\u2019s internal benchmarking also has Phi 4 reasoning plus matching o3-mini on OmniMath, a math skills test.<\/p>\n<p class=\"wp-block-paragraph\">Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus are available on the <a rel=\"nofollow\" href=\"https:\/\/huggingface.co\/microsoft\">AI dev platform Hugging Face<\/a> accompanied by detailed technical reports.<\/p>\n<p class=\"wp-block-paragraph\">\u201cUsing distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance,\u201d wrote Microsoft in a <a rel=\"nofollow\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\">blog post<\/a>. \u201cThey are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.\u201d<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2025\/04\/30\/microsofts-most-capable-new-phi-4-ai-model-rivals-the-performance-of-far-larger-systems\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft launched several new \u201copen\u201d AI models on Wednesday, the most capable of which is competitive with OpenAI\u2019s o3-mini on at least one benchmark. 
All<\/p>\n","protected":false},"author":1,"featured_media":94092,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[178],"tags":[],"class_list":["post-94091","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/94091","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/comments?post=94091"}],"version-history":[{"count":0,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/94091\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media\/94092"}],"wp:attachment":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media?parent=94091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/categories?post=94091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/tags?post=94091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}