cross-posted from: https://lemmy.world/post/30459966
WASHINGTON (AP) — The White House will fix errors in a much-anticipated federal government report spearheaded by U.S. Health and Human Services Secretary Robert F. Kennedy Jr., which decried America’s food supply, pesticides and prescription drugs.
Kennedy’s wide-ranging “Make America Healthy Again” report, released last week, cited hundreds of studies, but a closer look by the news organization NOTUS found that some of those studies did not actually exist.
Asked about the report’s problems, White House press secretary Karoline Leavitt said the report will be updated.
And no mention anywhere in the article that these kinds of errors are symptomatic of LLM generated text.
When is the media going to stop dancing around shit in order to appear unbiased and start doing their job as the Fourth Estate by calling out obvious horseshit?
ETA: Same goes for the original article that the above linked article references. It lists errors so ridiculous that they're either AI/LLM generated, or somebody intentionally inserted them to make it look like the report was LLM generated.
And yet, even that article dances around the obvious issue, apparently without the simple courage to just say, "These kinds of errors are highly symptomatic of AI/LLM generated text."
Yeah… That horse has left the barn, turned around, struck a match, tossed it inside, and shut the door.
The so-called “Fourth Estate” no longer exists; it has been cowed into subservience and/or just willingly spouts pure propaganda.
They bear a preponderance of the blame for why we are where we are in the US today. 🤡 🖕
And this shitty fucking title. “Acknowledges problems”. JFC, they can’t even say what the problems were.
I would say making statements that the mistakes are suggestive of AI/LLM authorship would be speculative without hard evidence. That said, they should call it what it is, a half-assed report that’s a result of either carelessness or a willful desire to misinform.
I disagree. Saying that the mistakes are LLM generated without evidence to support that claim would certainly be inappropriate.
However, simply stating something like "these kinds of errors are highly symptomatic of AI/LLM generated text" is completely factual and highly relevant.