{"id":99300,"date":"2025-09-11T07:54:12","date_gmt":"2025-09-11T07:54:12","guid":{"rendered":"https:\/\/neclink.com\/index.php\/2025\/09\/11\/silicon-valley-ideologies-as-a-rosetta-stone-for-understanding-2025\/"},"modified":"2025-09-11T07:54:12","modified_gmt":"2025-09-11T07:54:12","slug":"silicon-valley-ideologies-as-a-rosetta-stone-for-understanding-2025","status":"publish","type":"post","link":"https:\/\/neclink.com\/index.php\/2025\/09\/11\/silicon-valley-ideologies-as-a-rosetta-stone-for-understanding-2025\/","title":{"rendered":"Silicon Valley Ideologies as a Rosetta Stone for Understanding 2025"},"content":{"rendered":"<div>\n<p>Awareness of the various emerging Silicon Valley ideologies may provide a helpful lens through which to analyze current events.<\/p>\n<p>The techbros have control of the technologies that increasingly run our lives, billions and billions of dollars to influence our politics, and a ruthless drive for power and control.<\/p>\n<p>Turns out they\u2019ve been spending quite a bit of time debating the big questions among themselves and now their philosophies are spilling over into our real lives.<\/p>\n<p>I\u2019ve been intending to supplement <a href=\"https:\/\/www.nakedcapitalism.com\/2025\/06\/open-ai-chatgpt-money-pit.html\">my<\/a> <a href=\"https:\/\/www.nakedcapitalism.com\/2025\/07\/meta-mark-zuckerberg-ai-bidding-war.html\">mini-series<\/a> <a href=\"https:\/\/www.nakedcapitalism.com\/2025\/07\/elon-musk-tesla-spacex-xai-investment.html\">on the companies pushing AI<\/a> (aka Large Language Models) with a look at the putative \u201cthought systems\u201d that have fascinated Silicon Valley.<\/p>\n<p>Then I read Matt Stoller\u2019s \u201c<a href=\"https:\/\/www.thebignewsletter.com\/p\/monopoly-round-up-is-there-a-silicon\" target=\"_blank\" rel=\"nofollow\">Is There a Silicon Valley Plan to Subvert Elections?<\/a>\u201d and Emile P. 
Torres\u2019 \u201c<a href=\"https:\/\/www.realtimetechpocalypse.com\/p\/meet-the-radical-silicon-valley-pro?utm_source=post-email-title&amp;publication_id=1770554&amp;post_id=173219414&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=lonn&amp;triedRedirect=true&amp;utm_medium=email\" target=\"_blank\" rel=\"nofollow\">Meet the Radical Silicon Valley Pro-Extinctionists!<\/a>\u201d and knew it was time to get on that.<\/p>\n<p>Not to mention Ross Douthat\u2019s \u201c<a href=\"https:\/\/www.nytimes.com\/2025\/06\/26\/opinion\/peter-thiel-antichrist-ross-douthat.html\" target=\"_blank\" rel=\"nofollow\">Peter Thiel and the Antichrist: The original tech right power player on A.I., Mars and immortality<\/a>\u201d in the New York Times.<\/p>\n<p>Stoller\u2019s piece discussed: <\/p>\n<blockquote>\n<p>\u201c\u2026the creation of a new political slush fund by titans in Silicon Valley. I don\u2019t want to be alarmist, but if it goes to plan, it could functionally subvert elections in America.\u201d <\/p>\n<p>\u2026if someone can just spend an infinite amount of money to call you a trans-loving pedophile, you will likely lose your race. For instance, in Ohio in 2024, long-time Senator Sherrod Brown faced $40 million of crypto spending alleging all sorts of things, and that chipped away at his popularity such that he lost.<\/p>\n<p>Today, Fairshake can flip most politicians without spending a dime, secure in the knowledge that aspiring office-seekers wouldn\u2019t want to lose just over what they perceive as a minor policy around finance. These companies got everything they wanted; they are now running crypto policy for Trump, and have terrified most members of Congress into voting for whatever they want. Fairshake has amassed another big war chest for 2026, and it\u2019s unlikely that crypto\u2019s power will be dented until there\u2019s a financial crash.<\/p>\n<p>Unfortunately, the lesson of Fairshake was not lost on others in Silicon Valley. 
Marc Andreessen, who is on the board of Meta and involved in Fairshake, has been organizing this strategy in other areas. Meta CEO Mark Zuckerberg, and AI venture capital investors, <a href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/08\/26\/silicon-valley-ai-super-pac\/\" target=\"_blank\" rel=\"nofollow\">have now chosen to launch their own Fairshake-style slush funds<\/a>, to make it impossible to regulate generative AI or big tech.<\/p>\n<p>The net effect of these pots of money is that it could become functionally impossible to enact public policy around AI through our democratic system. As AI becomes more important, that means American law will look the way Andreessen and a few other titans want it to look. Moreover, other corporate giants will start playing in their areas, closing off other spaces to democracy.<\/p>\n<p>Now, it\u2019s always been difficult, especially in the Citizens United era, to make progress, as big money does drown out a lot of good policy. Indeed, what we\u2019re really seeing is the final stages of an organized attempt from the 1970s onward to allow money to overwhelm democracy. <a href=\"https:\/\/the.levernews.com\/master-plan\/\" target=\"_blank\" rel=\"nofollow\">The Lever\u2019s Master Plan is an excellent podcast series on it. <\/a>These massive slush funds could mean that voting really has become ornamental.<\/p>\n<\/blockquote>\n<p>Torres, in the second of his three-part series on Silicon Valley pro-extinctionism, wrote (<a href=\"https:\/\/www.realtimetechpocalypse.com\/p\/the-growing-specter-of-silicon-valley\" target=\"_blank\" rel=\"nofollow\">part 1 is here<\/a>): <\/p>\n<blockquote>\n<p>A journalist asked me the other day what I think is most important for people to understand about the current race to build AGI (artificial general intelligence). 
My answer was: First, that the AGI race directly emerged out of <a href=\"https:\/\/www.truthdig.com\/articles\/the-acronym-behind-our-wildest-ai-dreams-and-nightmares\/\" target=\"_blank\" rel=\"nofollow\">the TESCREAL movement<\/a>. Building AGI was initially about utopia rather than profit, though profit has become a significant driver alongside techno-utopian dreams of AGI ushering in <a href=\"https:\/\/firstmonday.org\/ojs\/index.php\/fm\/article\/view\/13636\" target=\"_blank\" rel=\"nofollow\">a paradisiacal fantasyworld among the literal heavens<\/a>. Hence, one simply cannot make sense of the AGI race without some understanding of the TESCREAL ideologies.<\/p>\n<p>Second, that the TESCREAL movement is deeply intertwined with <a href=\"https:\/\/www.techpolicy.press\/digital-eugenics-and-the-extinction-of-humanity\/\" target=\"_blank\" rel=\"nofollow\">a pro-extinctionist outlook<\/a> according to which our species, Homo sapiens, should be marginalized, disempowered, and ultimately eliminated by our posthuman successors. More specifically, I argue in a forthcoming entry titled \u201cTESCREAL\u201d for the Oxford Research Encyclopedia that views within the TESCREAL movement almost without exception fall somewhere on the spectrum between pro-extinctionism and (as I call it) <a href=\"https:\/\/c8df8822-f112-4676-8332-ad89713358e3.filesusr.com\/ugd\/d9aaad_7c2eee12ffc546318be65d486dafb8ee.pdf\" target=\"_blank\" rel=\"nofollow\">extinction neutralism<\/a>. 
Silicon Valley pro-extinctionism is the claim that our species should be replaced, whereas extinction neutralism says that it doesn\u2019t much matter whether our species survives once posthumanity arrives.<\/p>\n<\/blockquote>\n<p>Torres goes on to give capsule summaries of the thought of various Silicon Valley figures, including Carnegie Mellon\u2019s Hans Moravec, Google co-founder Larry Page, Turing Award winner Richard Sutton, shitposter Beff Jezos aka Guillaume Verdon, Singularity prophet Ray Kurzweil, and:<\/p>\n<blockquote>\n<p><strong>Sam Altman (facepalm)<\/strong><br \/>Altman is not only a major reason the race toward AGI was launched and has been accelerating, but he believes that uploading human minds to computers will become possible within his lifetime. Several years ago, he was one of 25 people who signed up with a startup called Nectome to have his brain preserved if he were to die prematurely. Nectome promises to preserve brains so that their microstructure can be scanned and the resulting information transferred to a computer, which can then emulate the brain\u2019s functioning. By doing this, the person who owned the brain will then suddenly \u201cwake up,\u201d thereby attaining \u201ccyberimmortality.\u201d<\/p>\n<p>Is this a form of pro-extinctionism? Kind of. If all future people are digital posthumans in the form of uploaded minds, then our species will have disappeared. Should this happen? 
My guess is that Altman wouldn\u2019t object to these posthumans taking over the world \u2014 what matters to many TESCREALists, of which Altman is one, is the continuation of \u201cintelligence\u201d or \u201cconsciousness.\u201d They have no allegiance to the biological substrate (to humanity), and in this sense they are at the very least extinction neutralists, if not pro-extinctionists.<\/p>\n<p><strong>Peter Thiel (blech)<\/strong><br \/>Thiel holds a particular interpretation of pro-extinctionism according to which we should become a new posthuman species, but this posthuman species shouldn\u2019t be entirely digital. We should retain our biological substrates, albeit in a radically transformed state. As such, this contrasts with most other views discussed here. These other views are clear instances of digital eugenics, whereas Thiel advocates a version of pro-extinctionism that\u2019s more traditionally eugenicist \u2014 in particular, it\u2019s a pro-biology variant of transhumanism (a form of eugenics).<\/p>\n<\/blockquote>\n<p>Torres also references a key Freudian slip on Thiel\u2019s part when he was <a href=\"https:\/\/www.nytimes.com\/2025\/06\/26\/opinion\/peter-thiel-antichrist-ross-douthat.html\" target=\"_blank\" rel=\"nofollow\">interviewed by NYT conservative midwit Ross Douthat<\/a>:<\/p>\n<blockquote>\n<p>Thiel was asked whether he \u201cwould prefer the human race to endure\u201d in the future. Thiel responded with an uncertain, \u201cUh \u2014,\u201d leading the interviewer, columnist Ross Douthat, to note with a hint of consternation, \u201cYou\u2019re hesitating.\u201d The rest of the exchange went:<\/p>\n<p>Thiel: Well, I don\u2019t know. 
I would \u2014 I would \u2014<\/p>\n<p>Douthat: This is a long hesitation!<\/p>\n<p>Thiel: There\u2019s so many questions implicit in this.<\/p>\n<p>Douthat: Should the human race survive?<\/p>\n<p>Thiel: Yes.<\/p>\n<p>Douthat: OK.<\/p>\n<\/blockquote>\n<p>Torres doesn\u2019t get into Douthat\u2019s attempt to reconcile his views with his claimed Christianity:<\/p>\n<blockquote>\n<p>But it still also seems like the promise of Christianity in the end is you get the perfected body and the perfected soul through God\u2019s grace. And the person who tries to do it on their own with a bunch of machines is likely to end up as a dystopian character.<\/p>\n<p>Thiel: Well, it\u2019s \u2014 let\u2019s articulate this.<\/p>\n<p>Douthat: And you can have a heretical form of Christianity that says something else.<\/p>\n<p>Thiel: Yeah, I don\u2019t know. I think the word \u201cnature\u201d does not occur once in the Old Testament. And so there is a word in which, a sense in which, the way I understand the Judeo-Christian inspiration is it is about transcending nature. It is about overcoming things. And the closest thing you can say to nature is that people are fallen. That\u2019s the natural thing in a Christian sense, that you\u2019re messed up. And that\u2019s true. But there\u2019s some ways that, with God\u2019s help, you are supposed to transcend that and overcome that.<\/p>\n<p>Douthat: Right. But most of the people \u2014 present company excepted \u2014 working to build the hypothetical machine god don\u2019t think that they\u2019re cooperating with Yahweh, Jehovah, the Lord of Hosts.<\/p>\n<p>Thiel: Sure, sure. But \u2014\u2014<\/p>\n<p>Douthat: They think that they\u2019re building immortality on their own, right?<\/p>\n<p>Thiel: We\u2019re jumping around a lot of things. So, again, the critique I was saying is: They\u2019re not ambitious enough. From a Christian point of view, these people are not ambitious enough. 
<\/p>\n<\/blockquote>\n<p>I should also quote from <a href=\"https:\/\/www.truthdig.com\/articles\/the-acronym-behind-our-wildest-ai-dreams-and-nightmares\/\" target=\"_blank\" rel=\"nofollow\">Torres\u2019 earlier work for TruthDig<\/a> to explain his acronym TESCREAL, which combines the first letters of the ideologies transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism: <\/p>\n<blockquote>\n<p>\u201c\u2026the constellation of ideologies behind the current race to create AGI, and the dire warnings of \u201chuman extinction\u201d that have emerged alongside it\u2026<\/p>\n<p>At the heart of TESCREALism is a \u201ctechno-utopian\u201d vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling \u201cpost-human\u201d civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.<\/p>\n<p>Those ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create \u201cartificial general intelligence\u201d in the first place.<\/p>\n<p>\u2026In (the view of Marc Andreessen), the most likely outcome of advanced AI is that it will drastically increase economic productivity, give us \u201cthe opportunity to profoundly augment human intelligence\u201d and \u201ctake on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.\u201d Developing AI is thus \u201ca moral obligation that we have to ourselves, to our children and to our future,\u201d writes Andreessen.<\/p>\n<\/blockquote>\n<p>Torres also pointed me to David Z. 
Morris whose \u201c<a href=\"https:\/\/davidzmorris.substack.com\/p\/deepseek-and-the-ai-murder-cult\" target=\"_blank\" rel=\"nofollow\">DeepSeek and the AI Murder Cult<\/a>\u201d argues that \u201cRationalism links a wave of murders, FTX embezzlement, and crashing markets.\u201d<\/p>\n<p>From his piece:<\/p>\n<blockquote>\n<p>(Rationalism) lurks at the heart of Sam Bankman-Fried\u2019s rampant embezzlement at #FTX, of which $500 million dollars went to Anthropic, an \u201cAI Safety\u201d-fueled startup that employs Amanda Askell, ex-wife of Effective Altruism founder Will MacAskill. $5 million in money stolen by SBF also went directly to the Center for Applied Rationality, one of Yudkowsky\u2019s two organizations. Half a million in FTX funds also helped facilitate the purchase of a hotel that became the headquarters of a CFAR subsidiary called Lightcone Research, which notoriously featured several eugenicists and white supremacists at events.<\/p>\n<p>It also helps explain, I think, why OpenAI and other U.S. artificial intelligence startups just got embarrassingly annihilated by a Chinese hobbyist: because they\u2019re driven by some of the same ideas that have led fringe Rationalists into madness.<\/p>\n<p>There have now been at least EIGHT violent deaths over the past three years tied, to varying degrees, to splinter factions of the Rationalist movement founded by Eliezer Yudkowsky in San Francisco. The Rationalist community is eager to disown the perpetrators, and it\u2019s true that the factionalists have been in conflict with the main group for years. More to the point, they seem simply insane.<\/p>\n<p>But, I would tentatively argue, the source of the conflict is that these bad actors took Yudkowsky\u2019s basic ideas, above all ideas about the imminent destruction of humanity by AI, and played them out to a logical conclusion \u2013 or, at least, a Rationalist conclusion. 
This wave of murder is just the most extreme manifestation of cultish elements that have bubbled up from the Rationalist movement proper for going on a decade now, including MKUltra-like conditioning both at Leverage Research \u2013 another splinter group seemingly pushed out of Rationalism proper following certain revelations \u2013 and within the Center for Effective Altruism itself.<\/p>\n<\/blockquote>\n<p>In his piece \u201cFTX, Rationalism, and U.S. Intelligence: A Conspiracy Theory\u201d (an excerpt from his book \u201cStealing the Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia\u201d) Morris has connected some alarming dots:<\/p>\n<blockquote>\n<p>the Center for Applied Rationality, which received (and has resisted returning) funds stolen from FTX customers by Sam Bankman-Fried and his co-conspirators, bears a striking resemblance to the agendas for both individual brainwashing and large-scale social engineering that drove some of the Central Intelligence Agency\u2019s most disturbing programs.<\/p>\n<p>Now, with the revelation that a group of rogue Rationalists known as the \u201cZizians\u201d have been tied to a wave of murders across the U.S., it seems justified to explore the possibility that the Rationalist movement is not merely a misguided ethos turned toxic by cult-like insularity. Placed in a broader context, its tenets and practices begin to resemble both the Human Potential Movement centered around institutions like the Esalen Institute; and, in fringe sub-groups that have splintered from Rationalism proper, the illicit human experimentation conducted by the CIA starting in the 1950s under the code name MKUltra.<\/p>\n<\/blockquote>\n<p>Morris\u2019 piece \u201c<a href=\"https:\/\/davidzmorris.substack.com\/p\/what-is-tescrealism-mapping-the-cult\" target=\"_blank\" rel=\"nofollow\">What is TESCREALism? 
Mapping the Cult of the Techno-Utopia<\/a>\u201d can help us get back to current events:<\/p>\n<blockquote>\n<p>the AGI myth is why reality-based efforts to make existing AI algorithms safe for currently-living humans have almost zero traction among the loudest proponents of \u201cAI safety.\u201d In just the same way that Sam Bankman-Fried stole customer funds to make long-term bets, today\u2019s AI leaders are actively and vocally dismissing the current, material risks of machine learning algorithms, and focusing instead on a long-term future that they confidently predict without a shred of actual evidence. (Just two baseless assumptions of the doomer fantasy are that A.I. will become self-improving, and that it will easily master nanotechnology.)<\/p>\n<p>This patent display of foolishness might be the deepest underlying reason the tech industry had to purge Timnit Gebru. The vision of AI shared by people like Sam Altman is substantially derived from sci-fi like James Cameron\u2019s Terminator, and going as far back as Karel Capek\u2019s R.U.R., the origin of the word \u201crobot.\u201d Capek\u2019s 1923 play far preceded anything like AI, making clear that the intentional, humanoid, thinking \u201crobot\u201d has always been primarily a metaphor for the much more complex dialectic by which man-made technology becomes a threat to human essence. 
The Singularitarians have made the childish error of mistaking these simplified storybook tales for the complexity of reality, and as long as Gebru and her cohort remain committed to describing how technology actually works, the collective fantasy of superintelligent yet incredibly dangerous AI is threatened.<\/p>\n<\/blockquote>\n<p>Morris also connects TESCREALism to the newly launched publication The Argument and the Abundance bros in his <a href=\"https:\/\/davidzmorris.substack.com\/p\/effective-altruism-in-a-skinsuit\" target=\"_blank\" rel=\"nofollow\">\u201cEffective Altruism In a Skinsuit: \u201cThe Argument\u201d is Laundering Austerity<\/a>\u201d:<\/p>\n<blockquote>\n<p>The launch of new \u201cliberal\u201d news outlet The Argument has been unambiguously hilarious, fundamentally because most of their marquee writers, particularly Matt Yglesias and Kelsey Piper, are not so much \u201cliberal\u201d in any commonly understood American sense as \u201ccenter-right-to-secretly-eugenicist.\u201d Piper and Yglesias are both formerly tied to Vox, and The Argument also features Derek Thompson as a staff writer \u2013 Ezra Klein\u2019s partner in the ideologically very similar \u201cAbundance Liberalism\u201d project, which is largely about co-opting right-wing deregulation rhetoric.<\/p>\n<p>When you look at the funding for The Argument it becomes very clear why this \u201cliberal\u201d publication is devoted to undermining the case for a welfare state. The Argument is primarily funded and staffed, not by \u201cliberals,\u201d but by a mix of Effective Altruists like Dustin Moskovitz strategically shifting away from that brand after the FTX debacle showed its strategic and ideological emptiness; and entities tied to far-right funding sources including Peter Thiel and the Koch Brothers. 
This is \u201cliberalism\u201d in 2025.<\/p>\n<p>If you know Yglesias and Piper, you know their entire shtick is maintaining a strategic ignorance that serves their ideological aims.<\/p>\n<\/blockquote>\n<p><a href=\"https:\/\/freddiedeboer.substack.com\/p\/what-do-ezra-klein-and-miranda-july\" target=\"_blank\" rel=\"nofollow\">Freddie deBoer has some supplementary thoughts<\/a> on Ezra Klein that don\u2019t explicitly link back to Silicon Valley ideologies but provide additional insight:<\/p>\n<blockquote>\n<p>Klein, in his earnest credulity towards the claims of AI maximalists, shows us one way this refusal plays out. Ezra\u2019s entranced by the prospect of radical technological transformation, by the possibility that generative models or robotics or biotech are going to utterly remake the human condition. <\/p>\n<p>He\u2019s interviewed dozens of people on the subject, and though he hedges and qualifies, there\u2019s always an underlying openness to the idea that we are at the brink of a sci-fi future. \u201cPerson after person\u2026 has been coming to me saying\u2026 We\u2019re about to get to artificial general intelligence[!]\u201d says Ezra, in his breathless style, not pausing to acknowledge that every one of those persons is someone who has direct financial investment not in AGI being real and imminent but in the impression that AGI is real and imminent. <\/p>\n<p>Klein does not want to let go of the possibility that he might live in Star Trek or Blade Runner or Terminator; he wants to believe that our lives can be so thoroughly altered that the weight of ordinary existence will be lifted. And I promise I\u2019m not blowing smoke when I say that, where I find most AI evangelists to be disingenuous charlatans, I find everything Ezra says to be aching with sincerity and sentiment. Which, analytically, is of course the exact problem. 
He is too eager to believe.<br \/>\u2026<br \/>Klein wants the AI story to mean that we are on the verge of a post-scarcity society, that the hard grind of politics and labor might soon be obviated by miraculous machines; he\u2019s savvy enough not to say the other part out loud, which is that he wants to pilot a mech on the sands of Mars, to guide his X-Wing into the mouth of a wormhole that will lead him who knows where.<br \/>\u2026<br \/>Klein\u2019s fantasies risk destroying the world economy.<\/p>\n<\/blockquote>\n<p>The thing that deBoer gets is that Klein is desperate to believe in magic. What he\u2019s missing is that Klein\u2019s fantasies are structured and guided by Silicon Valley \u201cthinkers\u201d who are equally committed to a fantasy-life vision of reality.<\/p>\n<p>Unfortunately for everyone else, they\u2019ve got the money and power to impose those fantasies on the rest of us.<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.nakedcapitalism.com\/2025\/09\/silicon-valley-ideologies-transhumanism-effective-altruism-neoreaction.html\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Awareness of the various emerging Silicon Valley ideologies may provide a helpful lens through which to analyze current events. 
The techbros have control of the<\/p>\n","protected":false},"author":1,"featured_media":99301,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[153,183],"tags":[],"class_list":["post-99300","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-economy","category-spotlight"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/99300","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/comments?post=99300"}],"version-history":[{"count":0,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/99300\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media\/99301"}],"wp:attachment":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media?parent=99300"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/categories?post=99300"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/tags?post=99300"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}