Pakistani newspaper mistakenly prints AI prompt with the article

(twitter.com)

223 points | by wg0 4 hours ago

67 comments

  • ineedasername 2 hours ago

    When reached for comment on how this occurred, the journalist in question replied:

    “This is the perfect question that gets to the heart of this issue. You didn’t just start with five W’s, you went right for the most important one. Let’s examine why that question works so well in this instance…”

    • sph 2 hours ago

      Now I’ll have to look out for news posts that have a question in the title and begin with “Great question!”

      • hagbarth an hour ago

        "You're absolutely right!"

  • chrismorgan 3 hours ago

    The current title (“Pakistani newspaper mistakenly prints AI prompt with the article”) isn’t correct: it wasn’t the prompt that was printed, but trailing chatbot fluff:

    > If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout—perfect for maximum reader impact. Do you want me to do that next?

    The article in question is titled “Auto sales rev up in October” and is an exceedingly dry slab of statistic-laden prose, of the sort that LLMs love to err in (though there’s no indication of whether they have or not), and for which alternative (non-prose) presentations can be drastically better. Honestly, if the entire thing came from “here’s tabular data, select insights and churn out prose”… I can understand not wanting to do such drudgework.

    • abdullahkhalids 17 minutes ago

      The newspaper in question is Pakistan's English-language "newspaper of record", and it has a wide readership.

      For some reason they rarely, if ever, add any graphs or tables to financial articles, which I have never understood. Their readership is all college-educated. One time I read an op-ed where the author wrote something like: if you go to this government webpage, take the data, put it in Excel, and plot this thing against that thing, you will see X trend.

      Why would they not just take the Excel graph, clean it up, and put it in their article?

      • IAmBroom 5 minutes ago

        Because it was BS opinion, dressed in scientific-sounding clothing?

    • layer8 2 hours ago

      The AI is prompting the human here, so the title isn't strictly wrong. ;)

      • dwringer an hour ago

        Gemini has been doing this to me at the end of basically every single response for the past few weeks now, and it often seems to result in the subsequent responses getting off track and lower in quality as all these extra tangents start polluting the context. Not to mention how distracting it is: it throws off the reply I was already halfway through composing by the time I read it.

    • michaelbuckbee 2 hours ago

      For years, both the financial and sports sides of the news have generated increasingly templated "articles"; this just feels like the latest iteration.

      • dredmorbius an hour ago

        This dates back to at least the late 1990s for financial reports. A friend demoed such a system to me at that time.

        Much statistically-based news (finance, business reports, weather, sport, disasters, astronomical events) is heavily formulaic and can, at least in large part or for the initial report, be automated, which speeds information dissemination.
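
        For illustration, such a "robot reporter" can be little more than string formatting over a data feed. A minimal sketch in Python (the template, field names, and figures below are all hypothetical):

            # Toy template-fill reporter: no model involved, so nothing to hallucinate.
            MONTHLY_SALES_TEMPLATE = (
                "Auto sales {direction} {pct:.1f}% in {month}, with {units:,} units "
                "sold against {prev:,} a year earlier, industry data showed."
            )

            def render_report(month, units, prev):
                """Fill the template straight from the raw figures."""
                pct = (units - prev) / prev * 100
                return MONTHLY_SALES_TEMPLATE.format(
                    direction="rose" if pct >= 0 else "fell",
                    pct=abs(pct),
                    month=month,
                    units=units,
                    prev=prev,
                )

            print(render_report("October", 18_000, 15_000))  # figures are made up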

        Of course, it's also possible to distribute raw data tables, charts, or maps, which ... mainstream news organisations seem phenomenally averse to doing. Even "better" business-heavy publications (FT, Economist, Bloomberg, WSJ) do so quite sparingly.

        A few days ago I was looking at a Reuters report on a strategic chokepoint north of the Philippines, which the Philippines and the US are looking to use to help contain possible Chinese naval operations. Lots of pictures of various equipment, landscapes, and people. Zero maps. Am disappoint.

        • RobotToaster 15 minutes ago

          Obviously the solution is to use AI to extract the raw data from their AI generated fluff.

          It's like the opposite of compression.

        • jrjeksjd8d 41 minutes ago

          At least in the case of Bloomberg they would like you to pay for that raw data. That's their bread and butter.

          • dredmorbius 25 minutes ago

            True.

            But there's the approach the Economist takes. For many decades, it's relied on a three-legged revenue model: subscriptions, advertising, and bespoke consulting and research through the Economist Intelligence Unit (EIU). My understanding is that revenues are split roughly evenly amongst these, and that they tend to even out cash-flow throughout economic cycles (advertising is famously pro-cyclical, subscriptions and analysis somewhat less so).

            To that extent, the graphs and maps the Economist actually does include in its articles (as well as many of its "special reports") are both teasers and loss-leader marketing for EIU services. I believe that many of the special reports arise out of EIU research.

            <https://www.eiu.com/n/>

            <https://en.wikipedia.org/wiki/Economist_Intelligence_Unit>

      • jerf 24 minutes ago

        A non-"AI" template is probably getting filled in with numbers straight from some relevant source. AI may produce something more conversational today, but, as someone else observed, this is a high-hallucination point for these models. Even if they get one statistic right, they're pretty inclined to start making up statistics that weren't provided to them at all, as long as they sound good.

      • cantor_S_drug an hour ago

        Not just that: we know from heavy Reddit posters that they have branching-universe templates for all eventualities, so that they are "ready" whatever the outcome.

      • reaperducer an hour ago

        Legitimate news organizations announce their use of A.I.

        I believe the New York Times weather page is automated, but that started before the current "A.I." hype wave.

        And I think the A.P. uses LLMs for some of its sports coverage.

    • kleene_op 2 hours ago

      I guess in the end the journalist didn't feel it necessary to impact his readers with punchy one-line stats and bold, infographic-ready layouts, considering he opted for the first draft.

    • wg0 3 hours ago

      Thank you, yes, that's accurate. I am also not sure whether the article itself is accurate; I doubt it would have no incorrect stats.

      By "AI prompt" I mean "prompted by AI".

      Edit: added the note about the prompt's nature.

      • nashashmi 2 hours ago

        It might be better to mention “Dawn newspaper” instead of “Pakistani newspaper”.

        • nomdep an hour ago

          Only Pakistanis would know where the Dawn newspaper is from, so the current title is more informative.

    • righthand 2 hours ago

      I think "AI prompt" is synonymous with the chat that comes before an LLM prints the intended garbage.

  • bschne 3 hours ago

    The same thing happened to the German magazine Spiegel recently; see the correction note at the end of this article:

    https://www.spiegel.de/wirtschaft/unternehmen/deutsche-bahn-...

    • dredmorbius an hour ago
      • roflmaostc 34 minutes ago

        I still think someone should have done this as a pun to get their paper trending everywhere.

    • kavith 2 hours ago

      Fair play to them for owning up to their mistake, and not just pretending like it didn't happen!

      • CGamesPlay 2 hours ago

        Maybe, although I'm a bit doubtful that they were 100% honest.

        > Entgegen unseren Standards ("contrary to our standards")

      • yard2010 an hour ago

        You're absolutely right! But they can shove this euphemism. Just say that ChatGPT wrote the article and no one read it before publishing; no need for all the fluff.

      • bonesss 2 hours ago

        As programmers, I think we can extend some professional empathy and understanding: copy-and-pasting all day is a lot harder than you’d think.

        • tonyhart7 2 hours ago

          Compared to writing it yourself? Absolutely not.

          • IAmBroom 3 minutes ago

            It was sarcastic.

      • reaperducer an hour ago

        > Fair play to them for owning up to their mistake, and not just pretending like it didn't happen!

        That's what the legitimate media has done for the last couple of hundred years. Every issue of the New York Times has a Corrections section. I think the Washington Post's is called Corrections and Amplifications.

        Bloggers just change the article and hope it didn't get cached in the Wayback Machine.

    • IAmBroom 3 minutes ago

      "We regret to admit that our editors don't actually take the time to read these articles before hitting the PUBLISH button..."

  • FatalLogic 3 hours ago

    The online edition was edited later.

    "This newspaper report was originally edited using AI, which is in violation of Dawn’s current AI policy. The policy is also available on our website. The report also carried some junk, which has now been edited out. The matter is being investigated. The violation of AI policy is regretted. — Editor"

    https://www.dawn.com/news/1954574

    Edit: link to the text of the printed edition. The OCR might not be perfect, but I don't think they changed anything except deleting the AI comment at the end! https://pastebin.com/NYarkbwm

    • nicbou 3 hours ago

      > The violation of AI policy is regretted.

      That's a good example of when you shouldn't use passive voice.

      • dredmorbius 31 minutes ago

        This is a convention for journalistic corrections, e.g., "The Times regrets the error", used to note corrections for at least a century:

        <https://www.nytimes.com/2016/05/13/insider/the-times-regrets...>

        • strix_varius 21 minutes ago

          Your example is not passive voice.

          • IAmBroom 2 minutes ago

            Yes, they are pointing out how it should have been written.

      • vintermann 31 minutes ago

        On the other hand, this way you know they probably didn't use the chatbot to write the apology.

      • pixelpoet an hour ago

        > This door is alarmed

      • throwaway638637 2 hours ago

        That's just a manner of speaking in former British colonies, or at least on the subcontinent. Much formal speech reads like a bureaucrat wrote it because, well, the civil service ran India and that's who everyone emulated.

        • hbarka an hour ago

          It’s still passive voice, the kind used when trying to avoid blame or responsibility. So it pretty much fits in bureaucratic places.

          That’s just…mistakes were made.

        • thoroughburro an hour ago

          > That's just a manner of speaking in former British colonies, or at least the subcontinent.

          Which is still a good example of when you shouldn't use passive voice.

          Clarifying where “optimising language to evade a responsibility” evolved does nothing to justify it, which you imply with “that’s just”.

      • benterix 3 hours ago

        OTOH, kudos to them for regretting AI slop (even if they don't want to point out who precisely is doing the regretting). I know some who'd vehemently deny it in spite of the evidence.

        • serial_dev 2 hours ago

          They don't regret serving you AI slop, they regret that the "writer" didn't even read their own article and that they got caught because of it.

          • IAmBroom a few seconds ago

            "We regrets that mistakes were noticed."

      • steve_taylor 2 hours ago

        It's a good example of when you should use AI.

    • elwebmaster 2 hours ago

      Of course, since we already live in 1984, everything is edited as is convenient. For all that technology has given, nobody talks about what it has taken away.

  • guytv an hour ago

    Which raises the question: if everything is generated, why bother reading it at all? Just ask the LLM what you want to know—why treat headlines like bookmarks?

  • Barbing 2 hours ago
  • robofanatic 2 hours ago

    Soon the whole world will be fluent in impeccable American English, but only on paper.

    • kevin_thibedeau an hour ago

      Pretty easy to condition a prompt with regional idioms and spellyngs.

      • jerf 21 minutes ago

        As much as the default LLM-isms annoy me, this is also a honeymoon period, where you can at least suspect that something is AI-generated based on those defaults. Word about how to fix the tone has been getting around in academia for a while amongst students trying to pass detection filters; once they're out in the world, we can expect even more AI-generated content masked behind individualized, unique style prompts that aren't immediately recognizable as the default LLM voice.
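
        As a purely hypothetical sketch of what such an individualized style prompt could look like (the wording and the helper below are made up, not any particular tool's API):

            import json

            # Illustrative style-conditioning prompt, prepended as a system message.
            STYLE_PROMPT = (
                "Write in informal British English with regional idioms and spellings. "
                "Vary sentence length, avoid em dashes and bullet lists, and never end "
                "with follow-up offers such as 'Do you want me to...'."
            )

            def build_messages(draft: str) -> list[dict]:
                """Wrap a draft in generic chat-style messages for a chat-completion API."""
                return [
                    {"role": "system", "content": STYLE_PROMPT},
                    {"role": "user", "content": draft},
                ]

            print(json.dumps(build_messages("Summarise October auto sales."), indent=2))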

  • forinti 2 hours ago

    As people get comfortable with AI, they'll get lazy and this will become common.

    A solution is to put an extra person into the workflow to check the final result. This way AI will actually create more jobs. Ha!

    • stillworks 26 minutes ago

      I think it's better to put that extra person further up the pipeline: someone who knows how to prompt the LLM correctly so that it doesn't generate the fluff to begin with.

      Or get software engineers to produce domain-specific tooling, rather than the domain relying on generic tooling, which leads to mistakes like this (although this is speculation; it still seems to me that the author of that article was using the vanilla ChatGPT client).

      /s I am now thinking of setting up an "AI Consultancy" which will be able to provide both of these resources to those seeking such services. I mean, why have only one of them when both are available?

    • serial_dev 2 hours ago

      Or they will set up one more AI automation:

      "This article will be posted on our prestigious news site. Our readers don't know that most of our content is AI slop that our 'writers' didn't even glance over once, so please check if you find anything that was left over from the LLM conversation and should not be left in the article. If you find anything that shouldn't stay in the article, please remove it. Don't say 'done' and don't add your own notes or comment, don't start a conversation with me, just return the cleaned up article."

      And someone will put "Prompt Engineer" in their resume.

    • sph 2 hours ago

      Welcome to a post-scarcity world, as if we needed to put cheaper ways of creating low-quality digital content in the hands of anyone, for free.

      Not long after we invent a replicator machine, the entire Earth is gonna be turned into paperclips.

  • chii 3 hours ago

    This is the new "[placeholder here]" misprint/typo of the LLM era.

  • grugagag 9 minutes ago

    That's not a newspaper but an outlet for AI slop.

  • mikkupikku 3 hours ago

    Finally, some truth in media.

  • nashashmi 2 hours ago

    One of the great advantages of AI for non-native English speakers is the tool's ability to write in better English than the writer can. With so many young journalists graduating from school using AI instead of fully learning the language, this use will only become more frequent.

    At my workplace, non-native speakers used to send me documents for grammatical corrections. They don't do that anymore! Hoorah!

    • robofanatic an hour ago

      Better English only on paper.

  • robofanatic an hour ago

    Who needs editors when AI can do the best editing?

  • analog8374 34 minutes ago

    It's like that story "Pontypool" except for bullshit. The bullshit has congealed into living forms, breeding and evolving.

    (Ya, bullshit is the precise term here. Zero consciousness of truth or falsehood. Just contextually fitting)

  • incomingpain 2 hours ago

    In 2022, my opinion of journalism was already low: decades of headlines that were objectively false, with no retraction, just doubling down on their state propaganda.

    There were some papers that I still trusted. Then AI hit journalism with a silly stick and utterly wrecked them all.

    Mind you, I love AI. However, I can admit that AI seems to have wrecked what was left of journalism.

  • blibble an hour ago

    You can even identify the slop in printed newspapers by looking for em-dashes!

  • zkmon 2 hours ago

    Actually, at some point it makes sense to be honest about the usage of AI and not feel the need to hide it, just as food products are expected to list their ingredients.

    One should not feel ashamed to declare the usage of AI, just as you are not ashamed to use a calculator.

    • c0wb0yc0d3r an hour ago

      I feel like there is a difference here. A calculator has no bias; LLMs obviously do, and news is not the place for bias. Unless the LLM that was used hallucinated the operator’s intentions, the operator was using the LLM to doctor the article to capture readers, not to report the news.