{"id":105798,"date":"2026-02-11T10:59:53","date_gmt":"2026-02-11T10:59:53","guid":{"rendered":"https:\/\/neclink.com\/index.php\/2026\/02\/11\/reuters-finds-ai-using-surgery-devices-harmed-patients-nature-magazine-and-reuters-find-medical-chatbots-not-beating-patients-own-internet-sleuthing\/"},"modified":"2026-02-11T10:59:53","modified_gmt":"2026-02-11T10:59:53","slug":"reuters-finds-ai-using-surgery-devices-harmed-patients-nature-magazine-and-reuters-find-medical-chatbots-not-beating-patients-own-internet-sleuthing","status":"publish","type":"post","link":"https:\/\/neclink.com\/index.php\/2026\/02\/11\/reuters-finds-ai-using-surgery-devices-harmed-patients-nature-magazine-and-reuters-find-medical-chatbots-not-beating-patients-own-internet-sleuthing\/","title":{"rendered":"Reuters Finds AI-Using Surgery Devices Harmed Patients; Nature Magazine and Reuters Find Medical Chatbots Not Beating Patients&#8217; Own Internet Sleuthing"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p>We have been alarmed at how the  profit-driven US medical system has been eagerly embracing AI despite ample evidence of uneven performance, such as too-frequent, serious errors in what ought to be simple tasks like transcriptions of MD treatment notes. Two new Reuters investigations plus an article in Nature give further cause for pause. <\/p>\n<p>The most troubling one is the newer of the two new Reuters investigations, which includes examples of harm done by AI-enhanced surgical tools. Worse, it specifically finds that recent AI updates have increased the incidence of serious errors and resulting harm to patients. 
<a href=\"https:\/\/www.reuters.com\/investigations\/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09\/\" rel=\"nofollow\" target=\"_blank\">The article opens with a major AI backfire by Johnson &amp; Johnson<\/a>:<\/p>\n<blockquote>\n<p>In 2021, a unit of healthcare giant Johnson &amp; Johnson announced \u201ca leap forward\u201d: It had added artificial intelligence to a medical device used\u2026..to assist ear, nose and throat specialists in surgeries.<\/p>\n<p>The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.<\/p>\n<p>At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients\u2019 heads during operations.<\/p>\n<p>Cerebrospinal fluid reportedly leaked from one patient\u2019s nose. In another reported case, a surgeon mistakenly punctured the base of a patient\u2019s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.<\/p>\n<\/blockquote>\n<p>Let\u2019s stop here. First, some of these injuries were severe. Second, they call into question one of the premises of medicine practiced by AI, which is that it can do an adequate job of what in humans would be visualization and resulting decision-making. A comment on Twitter challenged this premise independent of these accounts, based on whether enough good training data could be obtained. 
<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">AI and robotics might be advancing at an amazing pace, but this is just BS, you still need a huge amount of training data to develop stand-alone surgical robots and this is not simply not available. Would be surprised if even 1% of surgeons is recording their procedures at the\u2026 <a href=\"https:\/\/t.co\/3gC40zwGoE\" target=\"_blank\" rel=\"nofollow\">https:\/\/t.co\/3gC40zwGoE<\/a><\/p>\n<p>\u2014 Dries Develtere (@DriesDeveltere) <a href=\"https:\/\/twitter.com\/DriesDeveltere\/status\/2021470333518119245?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow\">February 11, 2026<\/a><\/p>\n<\/blockquote>\n<p>Admittedly the use case here is full AI surgery as opposed to surgical assistance. But the fact that patients are suffering major injuries in surgery via the tool mis-locating its own position is a huge red flag. <\/p>\n<p>The training set concern also raises doubts as to whether AI can ever adequately substitute for visual and manual examination by a doctor. KLG recently gave an example of the hazards of merely over-relying on a chart history versus physical presentation.<sup>1<\/sup><\/p>\n<p>Reuters later describes one of the strokes attributed to TruDi. Note it resulted after what should have been a minor procedure:<\/p>\n<blockquote>\n<p>In June 2022, a surgeon inserted a small balloon into Erin Ralph\u2019s sinus cavity\u2026Dr. Marc Dean was employing the TruDi Navigation System, which uses AI, to confirm the position of his instruments inside her head.<\/p>\n<p>The procedure, known as a sinuplasty, is a minimally invasive technique to treat chronic sinusitis. 
A balloon is inflated to enlarge the sinus cavity opening, to allow better drainage and relieve inflammation.<\/p>\n<p>But the TruDi system \u201cmisled and misdirected\u201d Dean, according to the lawsuit Ralph filed\u2026 A carotid artery \u2013 which supplies blood to the brain, face and neck \u2013 allegedly was injured, leading to a blood clot\u2026.Ralph\u2019s lawyer told a judge that Dean\u2019s own records showed he \u201chad no idea he was anywhere near the carotid artery.\u201d\u2026<\/p>\n<p>After Ralph left the hospital, it became apparent that she had suffered a stroke\u2026A section of her skull was removed \u201cto allow her brain room to swell,\u201d the GoFundMe appeal stated.<\/p>\n<p>\u201cI am still working in therapy,\u201d Ralph said in an interview more than a year later in a blog about stroke victims. \u201cIt is hard to walk without a brace and to get my left arm back working, again.\u201d<\/p>\n<\/blockquote>\n<p>The story reports a later horrorshow with the same doctor:<\/p>\n<blockquote>\n<p>In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough\u2019s carotid artery allegedly \u201cblew.\u201d Blood \u201cwas spraying all over\u201d \u2013 even landing on a representative of Acclarent [which distributes TruDi] who was observing the surgery\u2026<\/p>\n<\/blockquote>\n<p>And we soon learn that Dean had a major conflict of interest:<\/p>\n<blockquote>\n<p>Dean began consulting for Acclarent in 2014 and received more than $550,000 in consultant\u2019s fees from the company through 2024, according to Open Payments, a federal database that tracks financial ties between companies and physicians. 
At least $135,000 of those fees related to the TruDi system.<\/p>\n<\/blockquote>\n<p>While the focus on Dean might lead readers to conclude he\u2019s just a not-very-good doctor somehow made worse by TruDi, the Reuters story describes how the AI-induced performance deterioration of Johnson &amp; Johnson\u2019s TruDi is not an isolated case:<\/p>\n<blockquote>\n<p>At least 1,357 medical devices using AI are now authorized by the FDA \u2013 double the number it had allowed through 2022. The TruDi system isn\u2019t the only one to come under question: The FDA has received reports involving dozens of other AI-enhanced devices, including a heart monitor said to have overlooked abnormal heartbeats and an ultrasound device that allegedly misidentified fetal body parts.<\/p>\n<p>Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August. Their review showed that 43% of the recalls occurred less than a year after the devices were greenlighted. That\u2019s about twice the recall rate of all devices authorized under similar FDA rules\u2026.<\/p>\n<p>Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn\u2019t comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.<\/p>\n<p>One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images\u2026.<\/p>\n<p>At least 16 reports claimed that AI-assisted heart monitors made by medical-device giant Medtronic failed to recognize abnormal rhythms or pauses. None of the reports mentioned injuries. 
Medtronic told the FDA that some of the incidents were caused by \u201cuser confusion.\u201d<\/p>\n<\/blockquote>\n<p>Even as these incidents are rising, so too is AI deployment in devices:<\/p>\n<p><iframe id=\"datawrapper-chart-2oL7l\" title=\"Growth in AI medical devices\" src=\"https:\/\/www.reuters.com\/graphics\/AI-MEDICAL\/egvbbewgbvq\/media-embed.html\" height=\"508\" frameborder=\"0\" scrolling=\"no\" aria-label=\"Reuters interactive chart\"><\/iframe><\/p>\n<p>Reuters takes pains to point out that software and computer assistance, including what formerly might have been called algos, is not new, and these advances have often been beneficial, such as pattern-matching to enhance images in cancer exams. It also laments at considerable length that the FDA had established a large team to evaluate AI in devices, but that the Trump Administration has been gutting it. <\/p>\n<p>I wish the Reuters account had mentioned two other issues. First is that FDA regulation of devices is much more permissive than that of drugs. Second is that any introduction of software into a medical device, which is on the rise, is problematic. Like buyers of smart homes, patients are at risk of a vendor going out of business or (as with the AI) updates making matters worse. <\/p>\n<p>We\u2019ll give a shorter recap of the Reuters post from the start of the week, but urge you to read it in full. <a href=\"https:\/\/www.reuters.com\/investigations\/ai-powered-apps-bots-are-barging-into-medicine-doctors-have-questions-2026-02-09\/\" rel=\"nofollow\" target=\"_blank\">AI-powered apps and bots are barging into medicine. Doctors have questions<\/a> gives numerous examples of patients seeking medical advice from AI and, in too many instances, getting alarmingly bad readings, like telling a particular patient twice within months that he was set to die soon of cancer. But the vendors insist that the AI isn\u2019t giving \u201cadvice\u201d. 
Help me:<\/p>\n<blockquote>\n<p>A growing number of mobile apps available on the Apple and Google app stores claim to use AI to assist patients with their medical complaints \u2013 even though they\u2019re not supposed to offer diagnoses.<\/p>\n<p>Under U.S. Food and Drug Administration guidelines, AI-based medical apps don\u2019t require approval if they \u201care intended generally for patient education, and are not intended for use in the diagnosis of disease or other conditions.\u201d Many apps have disclaimers that they aren\u2019t a diagnostic tool and shouldn\u2019t be used as a substitute for a physician. Some developers seem to be stretching the limits.<\/p>\n<p>An app called \u201cEureka Health: AI Doctor\u201d touted itself as \u201cYour all-in-one personal health companion.\u201d It stated on Apple\u2019s App Store that it was \u201cFOR INFORMATIONAL PURPOSES ONLY\u201d and \u201cdoes not diagnose or treat disease.\u201d<\/p>\n<p>But its developer, Sam Dot Co, also promoted the app on a website, where it stated in big letters: \u201cBecome your own doctor.\u201d<\/p>\n<p>\u201cAsk, diagnose, treat,\u201d the site stated. \u201cOur AI doesn\u2019t just diagnose \u2013 it connects you to prescriptions, lab orders, and real-world care.\u201d<\/p>\n<\/blockquote>\n<p>Apple removed the Eureka Health app only after Reuters made inquiries, confirming lax oversight. <\/p>\n<p>And some of these apps are piss poor:<\/p>\n<blockquote>\n<p>\u201cAI Dermatologist: Skin Scanner\u201d says on its website that it has more than 940,000 users and \u201chas the same accuracy as a professional dermatologist.\u201d Users can upload photos of moles and other skin conditions, and AI provides an \u201cinstant\u201d risk assessment. 
\u201cAI Dermatologist can save your life,\u201d the site claims\u2026.<\/p>\n<p>The app claims \u201cover 97% accuracy.\u201d But it has drawn hundreds of one-star reviews on app stores, and many users complain it\u2019s inaccurate.<\/p>\n<\/blockquote>\n<p>Finally, to the paper in Nature Medicine. Troublingly, it compares AI chatbot performance in diagnosis and treatment to patients winging it. I am not making that up. The study used as its control for diagnosis patients NOT seeing a doctor. While one can see this as a look at how a very carefully designed chatbot works (it was subjected to multiple levels of design, review, and testing), it has the effect of normalizing the idea of having AI play at being doctors. <\/p>\n<p>A Twitter overview:<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">A paper in Nature Medicine suggests that large language models may not help members of the public make better decisions about their health in everyday medical situations. <a href=\"https:\/\/t.co\/k086BJL2Qn\" target=\"_blank\" rel=\"nofollow\">https:\/\/t.co\/k086BJL2Qn<\/a> <a href=\"https:\/\/t.co\/ND6kWq9cZ4\" target=\"_blank\" rel=\"nofollow\">pic.twitter.com\/ND6kWq9cZ4<\/a><\/p>\n<p>\u2014 Nature Portfolio (@NaturePortfolio) <a href=\"https:\/\/twitter.com\/NaturePortfolio\/status\/2021313222712635737?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"nofollow\">February 10, 2026<\/a><\/p>\n<\/blockquote>\n<p>Note that the summary does not convey that \u201ceveryday situations\u201d includes whether or not to go to the ER. <a href=\"https:\/\/www.nature.com\/articles\/s41591-025-04074-y\" rel=\"nofollow\" target=\"_blank\">From the paper<\/a>:<\/p>\n<blockquote>\n<p>We tested whether LLMs can assist members of the public in identifying underlying conditions and choosing a course of action (disposition) in ten medical scenarios in a controlled study with 1,298 participants. 
Participants were randomly assigned to receive assistance from an LLM (GPT-4o, Llama 3, Command R+) or a source of their choice (control). <\/p>\n<\/blockquote>\n<p>The LLMs had performed well\u2026except with actual patient-type humans:<\/p>\n<blockquote>\n<p>Tested alone, LLMs complete the scenarios accurately, correctly identifying conditions in 94.9% of cases and disposition in 56.3% on average. However, participants using the same LLMs identified relevant conditions in fewer than 34.5% of cases and disposition in fewer than 44.2%, both no better than the control group.<\/p>\n<\/blockquote>\n<p>The researchers did not constrain what the control group did, so they may have used the Internet, asked someone who had had similar symptoms or was otherwise deemed to be knowledgeable, or maybe even just relied on earlier doctor warnings. The point is the AI performed no better than patients stumbling around on their own.  <\/p>\n<p>So there is indeed a lot not to like about our Brave New World of more AI, less care by medical practitioners. But it\u2019s going to be foisted even more on all of us. Enjoy being a guinea pig. <\/p>\n<p>_____<\/p>\n<p><sup>1<\/sup> From <a href=\"https:\/\/www.nakedcapitalism.com\/2026\/02\/coffee-break-science-and-medicine-bad-and-good.html\" rel=\"nofollow\">Coffee Break: Science and Medicine, Bad and Good<\/a>:<\/p>\n<blockquote>\n<p>Dr. Will Lyon is a geriatrician from Wauwatosa, Wisconsin (pop. ~48,000).\u00a0 From <em>Front Porch Republic<\/em> he writes of the practice of modern medicine in <a href=\"https:\/\/www.frontporchrepublic.com\/2026\/01\/doctoring-and-the-device-paradigm\/\" target=\"_blank\" rel=\"nofollow\">Doctoring and the Device Paradigm<\/a>:<\/p>\n<blockquote>\n<p>Before doctors see a patient, they perform a procedure called \u201cchart review.\u201d This involves reviewing the patient\u2019s history, medications, lab or imaging data, and notes from any recent specialist visits or hospital stays. 
There is variation in how much chart review one prefers to perform before meeting a patient, but in general it is good and necessary to be sufficiently informed and prepared before the visit. But chart review can be a double-edged sword: it can save time and help put the history you obtain and the physical exam you perform into context, but it can also box you in to a false understanding of who the patient is. In the age of ubiquitous electronic health records, which promise an ostensibly more efficient method of chart review but also contain vast amounts of information, chart review can become daunting.<\/p>\n<p>\u2026<\/p>\n<p>Several years ago, when my wife\u2019s grandfather \u2013 \u201cOpa\u201d \u2013 presented to the ER with shortness of breath while I was on service in the hospital, I learned the value of meeting the real patient first.<\/p>\n<p>I only learned of his arrival because I was notified by my family, and I could not access his medical record. Instead, I went straight to meet him in his emergency department room. On my elevator ride down, I thought about his shortness of breath. I knew that he had had a myocardial infarction earlier in the year, treated with the placement of a coronary stent.<\/p>\n<p>When I walked in the room, he looked almost as pale as the bedsheets. When I shook his hands, I noticed that they were cool. He described feeling lightheaded whenever he stood up at home and was so short of breath that he wasn\u2019t able to walk across his living room \u2013 a drastic change in his functional status. All of these signs suggested a common cause \u2013 anemia. However, the iPatient\u2019s story suggested at a different suspected cause: new or recurrent heart problems. Or so I learned when the ER doctor stopped by.<\/p>\n<\/blockquote>\n<p>Turns out it was anemia and not a heart attack, and according to Dr. 
Lyon, \u201cI still think of Opa\u2019s case when I get lost in the weeds of chart review and need to remember that <strong>sometimes, the most valuable information is gathered from the patient by using our eyes, ears, and hands<\/strong>.\u201d\u00a0 This is the lesson we try to teach our medical students from their first few weeks of medical school, even as they are consumed by biochemistry, genetics, and cell biology.\u00a0 From Dr. Lyon:<\/p>\n<blockquote>\n<p>I do not intend to minimize the importance of reviewing the patient\u2019s chart. Oftentimes, a thorough review provides critical information that guides your clinical approach (in the case I described, the fact that Opa was on blood thinners would increase the likelihood of blood loss as the cause of his anemia). Failing to identify key history on chart review can have devastating consequences, especially in the case of complex medical patients. The error comes when we mistake the iPatient for the flesh-and-blood human being in the exam room or hospital bed.\n<\/p>\n<\/blockquote>\n<\/blockquote>\n<\/div>\n<p><script async src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><a href=\"https:\/\/www.nakedcapitalism.com\/2026\/02\/reuters-finds-ai-using-surgery-devices-harmed-patients-nature-magazine-and-reuters-find-medical-chatbots-not-beating-patients-own-internet-sleuthing.html\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We have been alarmed at how the profit-driven US medical system has been eagerly embracing AI 
despite ample evidence of uneven performance, such as too-frequent,<\/p>\n","protected":false},"author":1,"featured_media":105799,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[153,183],"tags":[],"class_list":["post-105798","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-economy","category-spotlight"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/105798","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/comments?post=105798"}],"version-history":[{"count":0,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/posts\/105798\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media\/105799"}],"wp:attachment":[{"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/media?parent=105798"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/categories?post=105798"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neclink.com\/index.php\/wp-json\/wp\/v2\/tags?post=105798"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}