{"id":101960,"date":"2025-11-13T09:10:21","date_gmt":"2025-11-13T09:10:21","guid":{"rendered":"https:\/\/neclink.com\/index.php\/2025\/11\/13\/has-ed-zitron-found-the-fatal-flaw-with-openai-and-its-flagship-chatgpt\/"},"modified":"2025-11-13T09:10:21","modified_gmt":"2025-11-13T09:10:21","slug":"has-ed-zitron-found-the-fatal-flaw-with-openai-and-its-flagship-chatgpt","status":"publish","type":"post","link":"https:\/\/neclink.com\/index.php\/2025\/11\/13\/has-ed-zitron-found-the-fatal-flaw-with-openai-and-its-flagship-chatgpt\/","title":{"rendered":"Has Ed Zitron Found the Fatal Flaw with OpenAI and Its Flagship ChatGPT?"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p>Ed Zitron has been relentlessly pursuing the questionable economics of AI and has tentatively identified a bombshell in his latest post, <a href=\"https:\/\/www.wheresyoured.at\/oai_docs\/?ref=ed-zitrons-wheres-your-ed-at-newsletter\" rel=\"nofollow\" target=\"_blank\">Exclusive: Here\u2019s How Much OpenAI Spends On Inference and Its Revenue Share With Microsoft<\/a>. If his finding is valid, large language models like ChatGPT are much further from ever becoming economically viable than even optimists imagine. No wonder OpenAI chief Sam Altman has been talking up a bailout.<\/p>\n<p>By way of background, over a series of typically very long and relentlessly documented articles, Zitron has demonstrated (among many other things) the absolutely enormous capital expenditures of the major AI incumbents versus comparatively thin revenues, let alone profits. Zitron\u2019s articles on the enormous cash burn and massive capital misallocation that AI represents have the work of Gary Marcus on fundamental performance shortcomings as de facto companion pieces. 
A sampling of Marcus\u2019 badly needed sobriety:<\/p>\n<blockquote>\n<p><a href=\"https:\/\/garymarcus.substack.com\/p\/5-recent-ominous-signs-for-generative\" rel=\"nofollow\" target=\"_blank\">5 recent, ominous signs for Generative AI<\/a><\/p>\n<p><a href=\"https:\/\/garymarcus.substack.com\/p\/five-signs-that-generative-ai-is\" rel=\"nofollow\" target=\"_blank\">Five signs that Generative AI is losing traction<\/a><\/p>\n<p><a href=\"https:\/\/garymarcus.substack.com\/p\/could-china-devastate-the-us-without\" rel=\"nofollow\" target=\"_blank\">Could China devastate the US without firing a shot?<\/a><\/p>\n<\/blockquote>\n<p>For a quick verification of how unsustainable OpenAI\u2019s economics are, see the opening paragraph from Marcus\u2019 November 4 article, <a href=\"https:\/\/garymarcus.substack.com\/p\/if-you-thought-the-2008-bank-bailout\" rel=\"nofollow\" target=\"_blank\">OpenAI probably can\u2019t make ends meet. That\u2019s where you come in:<\/a><\/p>\n<blockquote>\n<p>A few days ago, Sam Altman got seriously pissed off when Brad Gerstner had the temerity to ask how OpenAI was going to pay the $1.4 trillion in obligations he was taking on, given a mere $13 billion in revenue.<\/p>\n<\/blockquote>\n<p>By way of reference, most estimates of the size of the subprime mortgage market centered on $1.3 trillion. And the AAA tranches of bonds on mortgage pools were money good in the end, although they did fall in value during the crisis when that was in doubt. 
And in foreclosures, the homes nearly always had some liquidation value.<\/p>\n<p>Now to Zitron\u2019s latest.<\/p>\n<p>Many, particularly AI advocates in the business press, contend that even if the AI behemoths go bankrupt or are otherwise duds, they will still leave something of considerable value, as the building of the railroads (which spawned many bankruptcies) or the dot-com bubble did.<\/p>\n<p>But those assumptions often seem to be based on a naive view of AI economics: that having made a huge expenditure on training, the ongoing costs of running queries are not high and will drop to bupkis. This was the case with railroads, which had high fixed costs and negligible variable costs. The network effects of Internet businesses produce similar results, with scale increases producing both considerable user benefits and lower per-customer costs.<\/p>\n<p>That is not the case with AI. Not only are there very large training costs, there are also \u201cinference\u201d costs. And they aren\u2019t just considerable; they have vastly exceeded training costs. The viability of AI depends on inference costs dropping to a comparatively low level.<\/p>\n<p>Zitron\u2019s potentially devastating find is breadcrumbs that suggest that OpenAI\u2019s inference costs are considerably higher than the company pretends. Zitron further posits that user prices for ChatGPT are heavily subsidized relative to the underlying inference expenditures. Because the reporting on AI economics by all the big players is so abjectly awful, Zitron\u2019s allegations may well pan out.<\/p>\n<p>First, a detour to explain more about inference. From the Primitiva Substack\u2019s <a href=\"https:\/\/primitiva.substack.com\/p\/all-you-need-to-know-about-inference\" rel=\"nofollow\" target=\"_blank\">All You Need to Know about Inference Cost<\/a> from the end of 2024. 
Emphasis original:<\/p>\n<blockquote>\n<p>Over the first 16 months after the launch of Gpt-3.5, the market\u2019s attention was fixated on training costs, often making headlines for their staggering scale. However, following the wave of API price cuts in mid-2024, the spotlight has shifted to inference costs\u2014revealing that <strong>while training is expensive, inference, even more.<\/strong><\/p>\n<p>According to Barclays, training the GPT-4 series required approximately <strong>$150 million <\/strong>in compute resources. Yet, by the end of 2024, GPT-4\u2019s cumulative inference costs are projected to reach <strong>$2.3 billion<\/strong>\u2014<strong>15x<\/strong> the cost of training.<\/p>\n<\/blockquote>\n<p>As an aside, <a href=\"https:\/\/garymarcus.substack.com\/p\/could-china-devastate-the-us-without\" rel=\"nofollow\" target=\"_blank\">Gary Marcus pointed out in October that GPT-5 didn\u2019t arrive in 2024 as had been predicted and has been disappointing<\/a>. Back to Primitiva:<\/p>\n<blockquote>\n<p>The September 2024 release of GPT-o1 further accelerated compute demand to shift from training towards inference. GPT-o1 generates <strong>50%<\/strong> more tokens per prompt compared to GPT-4o and its enhanced reasoning capabilities result in the generation of inference tokens at <strong>4x<\/strong> output tokens of GPT-4o.<\/p>\n<blockquote>\n<p>Tokens, the smallest units of textual data processed by models, are central to inference compute. Typically, one word corresponds to about 1.4 tokens. Each token interacts with every parameter in a model, requiring two floating-point operations (FLOPs) per token-parameter pair. 
Inference compute can be summarized as:<\/p>\n<p><em><strong>Total FLOPs \u2248 Number of Tokens \u00d7 Model Parameters \u00d7 2 FLOPs.<\/strong><\/em><\/p>\n<\/blockquote>\n<p>Compounding this volume expansion, the price per token for GPT o1 is <strong>6x<\/strong> that for GPT-4o\u2019s, <strong>resulting in a 30-fold increase in total API costs to perform the same task with the new model<\/strong>. Research from Arizona State University shows that, in practical applications, this cost can soar to as much as <strong>70x<\/strong>. Understandably, GPT-o1 has been available only to paid subscribers, with usage capped at 50 prompts per week\u2026.<\/p>\n<p>The cost surge of GPT-o1 highlights the trade-off between compute costs and model capabilities, as theorized by the Bermuda Triangle of GenAI: everything else equal, it is impossible to make simultaneous improvements on inference costs, model performance, and latency; improvement in one will necessarily come at sacrifice of another.<\/p>\n<p>However, advancements in models, systems, and hardware can expand this \u201ctriangle,\u201d enabling applications to lower costs, enhance capabilities, or reduce latency. 
Consequently, the pace of these cost reductions will ultimately dictate the speed of value creation in GenAI\u2026.<\/p>\n<p><img fetchpriority=\"high\" fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-full wp-image-301290\" src=\"https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-scaled.jpg\" alt=\"\" width=\"600\" height=\"338\" srcset=\"https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-scaled.jpg 2560w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-300x169.jpg 300w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-1024x577.jpg 1024w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-768x433.jpg 768w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-1536x866.jpg 1536w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-2048x1155.jpg 2048w, https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/00-berumda-triangle-624x352.jpg 624w\" sizes=\"(max-width: 600px) 100vw, 600px\"\/><\/p>\n<p>James Watt\u2019s steam engine was such an example. It was invented in 1776, but took 30 years of innovations, such as the double-acting design and centrifugal governor, to raise thermal efficiency from <strong>2%<\/strong> to <strong>10%<\/strong>\u2014making steam engines a viable power source for factories\u2026<\/p>\n<p><strong>For GenAI, inference costs are the equivalent barrier.<\/strong> Unlike pre-generative AI software products that were regarded as a superior business model than \u201ctraditional businesses\u201d largely because of its near-zero marginal cost, GenAI applications need to pay for GPUs for real-time compute.<\/p>\n<\/blockquote>\n<p>Zitron is suitably cautious about his findings; perhaps some heated denials from OpenAI will clear matters up. 
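<\/p>\n<p>As a back-of-envelope check, the inference-compute rule of thumb quoted above can be evaluated directly. This is a minimal sketch; the model size and token counts are illustrative assumptions, not figures from the article:<\/p>

```python
# Rule of thumb quoted above: Total FLOPs ~= number of tokens x model parameters x 2,
# i.e. two floating-point operations per token-parameter pair.
def inference_flops(num_tokens: int, num_params: int) -> float:
    return num_tokens * num_params * 2.0

# Illustrative (hypothetical) figures: a 70B-parameter dense model processing
# a 2,000-token prompt plus completion (roughly 1,400 words at ~1.4 tokens per word).
flops = inference_flops(2_000, 70 * 10**9)
print(f'{flops:.2e} FLOPs')  # 2.80e+14 FLOPs
```

<p>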
<a href=\"https:\/\/www.wheresyoured.at\/oai_docs\/?ref=ed-zitrons-wheres-your-ed-at-newsletter\" rel=\"nofollow\" target=\"_blank\">Do read the entire post<\/a>; I have excised many key details as well as some qualifiers to highlight the central concern. From Zitron:<\/p>\n<blockquote>\n<p>Based on documents viewed by this publication, I am able to report OpenAI\u2019s inference spend on Microsoft Azure, in addition to its payments to Microsoft as part of its 20% revenue share agreement, which was <a href=\"https:\/\/www.theinformation.com\/articles\/openai-projections-imply-losses-tripling-to-14-billion-in-2026?rc=kz8jh3&amp;ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>reported in October 2024 by The Information<\/u><\/a>. In simpler terms, Microsoft receives 20% of OpenAI\u2019s revenue\u2026.<\/p>\n<p>These numbers in this post differ to those that have been reported publicly. For example, <a href=\"https:\/\/www.theinformation.com\/articles\/openais-first-half-results-4-3-billion-sales-2-5-billion-cash-burn?rc=kz8jh3&amp;ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>previous reports had said that OpenAI had spent $2.5 billion on \u201ccost of revenue\u201d \u2013 which I believe are OpenAI\u2019s inference costs \u2013 in the first half of CY2025<\/u><\/a>.<\/p>\n<p>According to the documents viewed by this newsletter, OpenAI spent $5.02 billion on inference alone with Microsoft Azure in the first half of Calendar Year CY2025.<\/p>\n<blockquote>\n<p><strong>As a reminder: <\/strong>inference is the process through which a model creates an output.<\/p>\n<\/blockquote>\n<p>This is a pattern that has continued through the end of September. By that point in CY2025 \u2014 three months later \u2014 OpenAI had spent $8.67 billion on inference.<\/p>\n<p>OpenAI\u2019s inference costs have risen consistently over the last 18 months, too. 
For example, OpenAI spent $3.76 billion on inference in CY2024, meaning that OpenAI has already doubled its inference costs in CY2025 through September.<\/p>\n<p>Based on its reported revenues of $3.7 billion in CY2024 and <a href=\"https:\/\/www.theinformation.com\/articles\/openais-first-half-results-4-3-billion-sales-2-5-billion-cash-burn?rc=kz8jh3&amp;ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>$4.3 billion in revenue for the first half of CY2025<\/u><\/a>, it seems that OpenAI\u2019s inference costs easily eclipsed its revenues.<\/p>\n<p>Yet, as mentioned previously, I am also able to shed light on OpenAI\u2019s revenues, as these documents also reveal the amounts that Microsoft takes as part of its 20% revenue share with OpenAI.<\/p>\n<p>Concerningly, extrapolating OpenAI\u2019s revenues from this revenue share does not produce numbers that match those previously reported.<\/p>\n<p>According to the documents, Microsoft received $493.8 million in revenue share payments in CY2024 from OpenAI \u2014 implying revenues for CY2024 of at least $2.469 billion, or around $1.23 billion less than <a href=\"https:\/\/www.cnbc.com\/2024\/09\/27\/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html?ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>the $3.7 billion that has been previously reported<\/u><\/a>.<\/p>\n<p>Similarly, for the first half of CY2025, Microsoft received $454.7 million as part of its revenue share agreement, implying OpenAI\u2019s revenues for that six-month period were at least $2.273 billion, <a href=\"https:\/\/www.reuters.com\/technology\/openais-first-half-revenue-rises-16-about-43-billion-information-reports-2025-09-30\/?ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>or around $2 billion less than the $4.3 billion previously reported<\/u><\/a>. 
Through September, Microsoft\u2019s revenue share payments totalled $865.9 million, implying OpenAI\u2019s revenues are at least $4.329 billion.<\/p>\n<p>According to Sam Altman, <a href=\"https:\/\/fortune.com\/2025\/11\/01\/sam-altman-openai-annual-revenue-13-billion-forecast-100-billion-2027\/?ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>OpenAI\u2019s revenue is \u201cwell more\u201d than $13 billion<\/u><\/a>. I am not sure how to reconcile that statement with the documents I have viewed\u2026.<\/p>\n<p>Due to the sensitivity and significance of this information, I am taking a far more blunt approach with this piece.<\/p>\n<p>Based on the information in this piece, OpenAI\u2019s costs and revenues are potentially dramatically different to what we believed. <a href=\"https:\/\/www.theinformation.com\/articles\/openai-projections-imply-losses-tripling-to-14-billion-in-2026?rc=kz8jh3&amp;ref=wheresyoured.at\" target=\"_blank\" rel=\"nofollow\"><u>The Information reported in October 2024<\/u><\/a> that OpenAI\u2019s revenue could be $4 billion, and inference costs $2 billion based on documents \u201cwhich include financial statements and forecasts,\u201d and specifically added the following:<\/p>\n<blockquote>\n<p>OpenAI appears to be burning far less cash than previously thought. The company burned through about $340 million in the first half of this year, leaving it with $1 billion in cash on the balance sheet before the fundraising effort. But the cash burn could accelerate sharply in the next couple of years, the documents suggest.<\/p>\n<\/blockquote>\n<p>I do not know how to reconcile this with what I am reporting today. 
In the first half of CY2024, based on the information in the documents, OpenAI\u2019s inference costs were $1.295 billion, and its revenues at least $934 million.<\/p>\n<p>Indeed, it is tough to reconcile what I am reporting with much of what has been reported about OpenAI\u2019s costs and revenues.<\/p>\n<\/blockquote>\n<p>So this is quite a gauntlet to have thrown down. Not only is he saying that OpenAI may still have business-potential-wrecking compute costs, but his evidence indicates that OpenAI has also been making serious misrepresentations about costs and revenues. Because OpenAI is not public, it has not necessarily engaged in fraud; one presumes it has been accurate with those to whom it has financial reporting obligations about money matters. But if Zitron has this right, OpenAI has been telling howlers to other important stakeholders.<\/p>\n<p>The Financial Times, with whom Zitron reviewed his data before publishing, is amplifying them. From <a href=\"https:\/\/www.ft.com\/content\/fce77ba4-6231-4920-9e99-693a6c38e7d5\" rel=\"nofollow\" target=\"_blank\">How high are OpenAI\u2019s compute costs? Possibly a lot higher than we thought<\/a>:<\/p>\n<blockquote>\n<p>Pre-publication, Ed was kind enough to discuss with us the information he has seen. Here are the inference costs as a chart:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.nakedcapitalism.com\/wp-content\/uploads\/2025\/11\/Screenshot-2025-11-13-at-2.42.53\u202fPM.png\" alt=\"\" width=\"600\" height=\"286\" class=\"aligncenter size-full wp-image-301292\"\/><\/p>\n<\/blockquote>\n<p>The article then correctly offers caveats, as did Zitron in long form, along with kinda-sorta comments from Microsoft and OpenAI:<\/p>\n<blockquote>\n<p>\nThe best place to begin is by saying what the numbers don\u2019t show. The above is understood to be for inference only\u2026<\/p>\n<p>More importantly, is the data correct? 
We showed Microsoft and OpenAI versions of the figures presented above, rounded to a multiple, and asked if they recognised them to be broadly accurate. We also put the data to people familiar with the companies and asked for any guidance they could offer.<\/p>\n<p>A Microsoft spokeswoman told us: \u201cWe won\u2019t get into specifics, but I can say the numbers aren\u2019t quite right.\u201d Asked what exactly that meant, the spokeswoman said Microsoft would not comment and did not respond to our subsequent requests. An OpenAI spokesman did not respond to our emails other than to say we should ask Microsoft.<\/p>\n<p>A person familiar with OpenAI said the figures we had shown them did not give a complete picture, but declined to say more. In short, though we\u2019ve been unable to verify the data\u2019s accuracy, we\u2019ve been given no reason to doubt it substantially either. Make of that what you will.<\/p>\n<p>Taking everything at face value, the figures appear to show a disconnect between what\u2019s been reported about OpenAI\u2019s finances and the running costs that are going through Microsoft\u2019s books\u2026<\/p>\n<p>As Ed writes, OpenAI appears to have spent more than $12.4bn at Azure on inference compute alone in the last seven calendar quarters. Its implied revenue for the period was a minimum of $6.8bn. Even allowing for some fudging between annualised run rates and period-end totals, the apparent gap between revenues and running costs is a lot more than has been reported previously. And, like Ed, we\u2019re struggling to explain how the numbers can be so far apart.<\/p>\n<p>If the data is accurate \u2014 which we can\u2019t guarantee, to reiterate, but we\u2019re writing this post after giving both companies every opportunity to tell us that it isn\u2019t \u2014 then it would call into question the business model of OpenAI and nearly every other general-purpose LLM vendor. 
At some point, going by the figures, either running costs have to collapse or customer charges have to rise dramatically. There\u2019s no hint of either trend taking hold yet.<\/p>\n<\/blockquote>\n<p>A quick search on Twitter finds no one yet attempting to lay a glove on Zitron. In the pink paper comments section, a few contend that Microsoft making weak protests about the data means it can\u2019t be relied upon. While that is narrowly correct, one would expect a more robust debunking given the implications. And some of the supportive comments add value, like:<\/p>\n<blockquote>\n<p><strong>Bildermann<\/strong><br \/>It explains why ChatGPT has become so dumb. They are trying to reduce inference costs.<\/p>\n<p><strong>His name is Robert Paulson<\/strong><br \/>The fact we have to use a gypsy with a magic 8 ball to figure out these numbers for the company that is \u201cgoing to revolutionize every industry\u201d is more telling then the numbers themselves <\/p>\n<p><strong>No F1 key<\/strong><\/p>\n<p>Zitron has definitely been hitting that haterade, but Microsoft press saying the numbers \u2018aren\u2019t quite right\u2019 makes me think this is pretty accurate.<\/p>\n<p><strong>manticore<\/strong><br \/>That creaking noise is the lid being prized off the can of worms \u2013<\/p>\n<p>MS had better get on top of this. That income stream is highly unlikely \u2013 becoz straight line etc etc \u2013 which means that their projections are going to be badly affected and presumably there has to be a K split in the projection line at some point. 
MS getting holed below the waterline has real-world impacts.<\/p>\n<p><strong>Multipass<\/strong><br \/>I\u2019ve been reading Ed\u2019s blog for a while now and while he is clearly biased in one direction, it comes across as infinitely more credible than anything Sam Altman has said in years.<\/p>\n<p>The real issue in my eyes is that the revenue numbers are so opaque and obfuscated that nobody has any idea if any of this will make money.<\/p>\n<p>The fact that Microsoft and Google seem to be intentionally muddying the waters when it comes to non-hosting-related LLM-driven revenues and that OpenAI and Anthropic have been disclosing basically nothing should come across as a major red flag, and yet nobody seems to care.<\/p>\n<p><strong>Angry Analyst still<\/strong><br \/>Spoiler alert: technology maturity will not help.<\/p>\n<p>They will train and train and train ever larger models (parameter count in the trillions), feeding it all the data they can get or fabricate, using more powerful supercomputers than those running the physics simulations of the US nuclear arsenal. They will manually hack (which is why they need thousands of developers) additional logic around the model, fine tuned for more and more scenarios.<\/p>\n<p>But it will all just papering over the inescapable fact that a <i>generative pre-trained transformer model<\/i> is intelligence as much as CGI is reality: that is exactly zero, it\u2019s all a crude, approximate imitation <i>devoid of the underlying nature of the thing.<\/i> GPTs, for example, cannot solve logical problems because GPT models lack the facilities to have a conceptual representation of a problem, or in themselves to hold onto any \u2018idea\u2019. 
That\u2019s also why whenever you try to use a GPT to carefully fine tune a response, it mostly cannot, it will just regenerate everything even if explicitly instructed not to do so.<\/p>\n<p>The important question is: does it matter?<\/p>\n<p>It could very well be that the imitation game will reach a point (with all that manual hacking and testing thousands of trajectories to select and condense the most likely response during inference) where it will be able to create and maintain the illusion of intelligence, even sentience, that hundreds of millions will end up just using it anyway, regardless of accuracy or substance. There are early warning of that already.<br \/>It also stands to reason that most tech bros know this, but go along with the game because 1) it is all about relevance and engagement, there is lots of money to be made even from just imitation, and 2) most likely they believe they need to take part in this phase of AI development to be in position for the next one.<\/p>\n<p>In any case, there is no path for GPT towards intelligence, it is not a scaling or maturity issue.\n<\/p>\n<\/blockquote>\n<p>Let us see if and when some shoes drop after this report. The bare minimum ought to be sharper questions at analyst calls. 
<\/p>\n<div class=\"printfriendly pf-alignleft\"><a href=\"#\" rel=\"nofollow\" onclick=\"window.print(); return false;\" title=\"Printer Friendly, PDF &amp; Email\"><img decoding=\"async\" style=\"border:none;-webkit-box-shadow:none; -moz-box-shadow: none; box-shadow:none; padding:0; margin:0\" src=\"https:\/\/cdn.printfriendly.com\/buttons\/print-button-gray.png\" alt=\"Print Friendly, PDF &amp; Email\"\/><\/a><\/div>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.nakedcapitalism.com\/2025\/11\/has-ed-zitron-found-the-fatal-flaw-with-openai-and-its-flagship-chatgpt.html\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ed Zitron has been relentlessly pursuing the questionable economics of AI and has tentatively identified a bombshell in his latest post, Exclusive: Here\u2019s How Much<\/p>\n","protected":false},"author":1,"featured_media":101961,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[153,183],"tags":[],"class_list":["post-101960","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-economy","category-spotlight"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/101960","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/comments?post=101960"}],"version-history":[{"count":0,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/101960\/revisions"}],"wp:feat
uredmedia":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media\/101961"}],"wp:attachment":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media?parent=101960"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/categories?post=101960"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/tags?post=101960"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}