Writing with LLM is not a shame

(reflexions.florianernotte.be)

64 points | by flornt 13 hours ago

102 comments

  • nicbou 13 hours ago

    I think it's fair to use AI as an editor, to get feedback about how your ideas are packaged.

    It's also fair to use it as a clever dictionary, to find the right expressions, or to use correct grammar and spelling. (This post could really use a round of corrections.)

    But in the end, the message and the reasoning should be yours, and any facts that come from the LLM should be verified. Expecting people to read unverified machine output is rude.

    • amiga386 12 hours ago

      > Expecting people to read unverified machine output is rude.

      Quite. It's the attention economy: you've demanded people's attention, and then you shove crap that even you didn't spend time reading in their face.

      Even if you're using it as an editor... you know that editors vary in quality, right? You wouldn't accept a random editor just because they're cheap or free. Prose has a lot in it, not just syntax, spelling and semantics, but style, tone, depth... and you'd want competent feedback on all of that. Ideally insightful feedback. Unless you yourself don't care about your craft.

      But perhaps you don't care about your craft. And if that's the case... why should anyone else care or waste their time on it?

      • krisoft 4 hours ago

        > You wouldn't accept a random editor just because they're cheap or free.

        If the alternative is no editor, then yeah, I would. Most of what I write receives no checks by anyone other than me. A very small percentage of my output gets a second set of eyes, and it is usually a coworker or a friend (depending on the context of what is being written). Their qualification is usually that they were available and amenable.

        > Unless you yourself don't care about your craft.

        This is a tad elitist. I care about my craft and would love it if a competent and insightful editor went over every piece of writing I put out for others to read. But it would cost too much and would be too hard to arrange; I simply can’t afford it. On the other hand, I can afford to send my writing through an LLM and improve it here and there occasionally. Not because I don’t care about my craft, but precisely because I do.

      • jasonb05 an hour ago

        > ...why should anyone else care or waste their time on it?

        Sometimes we (I) might follow ideas over authority/authorship. E.g., I'll happily read AI-generated stuff all day long on topics I'm super into.

        Do I have to be the instigator? Can someone else prompt/filter/etc. for me? I think so. They'll do it differently and perhaps better than me.

      • dr_dshiv 12 hours ago

        > It's the attention economy: you've demanded people's attention, and then you shove crap that even you didn't spend time reading in their face.

        That’s the rudeness. But this takes care of itself — we just adjust trust accordingly.

        • bluefirebrand 2 hours ago

          > But this takes care of itself — we just adjust trust accordingly

          This should be viewed as an absolutely unacceptable outcome.

          I want society to become higher trust, not even lower trust :(

          • exe34 an hour ago

            first, tell the tide to stop coming in.

    • gs17 5 hours ago

      This approach is how I prefer to use it too. I write, it gives feedback, I revise based on which parts I thought it was right about. If I don't want to read raw LLM output, why would I make anyone else do it?

    • ekianjo 12 hours ago

      > message and the reasoning should be yours,

      I think we haven't realized yet that most of us don't really have original thoughts. Even in creative industries, the amount of plagiarism (or so-called inspiration) is at an all-time high (and that was before LLMs were available).

      • aeonik 12 hours ago

        Even novel thoughts are rarely original.

        Every time I come up with an algorithm idea, or a system idea, I'm always checking who has done it before, and I always find significant prior art.

        Even for really niche things.

        I think my name Aeonik Chaos might be one of the only original, never before done things. And even that was just an extension of established linguistic rules.

        • treetalker 8 hours ago

          My great-great grandfather was named Aeonik Chaos!

          • aeonik 3 hours ago

            I knew it! My google searches failed me.

      • saalweachter 12 hours ago

        Sure, but also, curation is a service.

        An author that does nothing but "plagiarize" and regurgitate the ideas of others is incredibly valuable... if they exercise their human judgement and only regurgitate the most interesting and useful ideas, saving the rest of us the trouble of sifting through their sources.

      • lewdwig 12 hours ago

        With code, I’m much more interested in it being correct and good rather than creative or novel. I see it as my job to be the arbiter of taste, because the models are equally happy to create code I’d consider excellent and terrible on command.

      • mediumsmart 4 hours ago

        Very few people do anything creative after the age of thirty-five. The reason is that very few people do anything creative before the age of thirty-five.

  • CuriouslyC 3 hours ago

    AI prose is mediocre right now. Too verbose, indirect constructions, passive, etc. That being said, it's actually a great editor and can pick out all those issues consistently.

    My workflow right now is to use AI for the rough-draft and developmental-editing stages, then switch the AI from changing files to leaving comments suggesting changes. That's slower than letting it line edit and copyedit itself, but models derp up too much, so letting them handle edits at this stage tends to be two steps forward, two steps back.

    • NicuCalcea 2 hours ago

      That's my main criticism as well. Even before we get to the ethical implications of AIs communicating on your behalf without a disclaimer, LLM writing is just poor, and making me read through it is disrespectful of my time.

      I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me. Another colleague organised a quiz where the answers were hallucinated by Grok. In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation. I use LLMs almost daily, but this is all incredibly depressing. The only time I want to interact with an LLM is when I choose to, not when it's forced on me without my consent or at least a disclaimer.

    • pton_xd 3 hours ago

      AI prose has been mediocre since the release of ChatGPT. My layman's interpretation is that there's just no strong creativity/humor/etc. signal to train on, compared to, say, math or coding. Current models are "smarter", so when asked to produce e.g. a joke they think harder, but the end result always misses the mark just the same.

      • CuriouslyC 2 hours ago

        There's a difference between AI being bad at prose and bad at storycraft. Good prose is totally achievable; it just hasn't really been a priority for the tech shops, and I think they also often don't understand what makes really good prose, so they're not good at optimizing for it anyhow. Given people's aversion to slop, though, I expect the big labs will start to push hard on it soon and get their act together.

  • everdrive an hour ago

    Writing with LLM is also not writing. In some abstract sense, it may be plagiarism. In another sense, you're robbing yourself of one of the most crucial parts of writing: improved cognition. Anyone who edits voice transcripts knows just how much a normal person wanders, pauses, misspeaks, etc when talking. The act of writing forces you to refine thoughts you would not have otherwise had.

  • macmar 2 hours ago

    This part of the text caught my attention the most.

    "There are a lot of tools out there (Gramarly, Antidote for naming the most famous) and I did not see someone mentioning he used this or that."

    I was criticized in another thread because I used a translation assistant to improve my text, a tool that, long before the current AI hype, everyone used to write more effectively.

    People need to stop believing that the watchdogs of reason are the all-seeing eye (1989). Many people, in general, seek to be ethical and use tools to enhance their ideas (such as a text in a non-native language), and that's okay.

  • mentalgear 13 hours ago

    It's good for what all other LLMs are good for: semantic search, where the output can be generated text that helps you. But never get wrapped up in the illusion that there is actual causal thinking. The thinking is still your responsibility; LLMs are just newer discovery/browsing engines.

    • lewdwig 12 hours ago

      There are nascent signs of emergent world models in current LLMs; the problem is that they decohere very quickly because the models lack any kind of hierarchical long-term memory.

      A lot of what the model knows about the structurally important parts of your code gets lost whenever the context is compressed.

      Solving this problem will mark the next big leap in agentic coding, I think.

  • matt123456789 3 hours ago

    When I put real time and thought into an email—and the response I get back is obviously AI-generated—and it comes with no disclaimer—it infuriates me. Maybe the model happened to spit out exactly what the sender meant—just dressed up and grammatically polished. Doesn’t matter—I’d rather someone talk to me directly than funnel a thought through a word-grinder and hit send. Downvote me—call me anti-progress—I don’t care. I cannot stand undisclosed AI in conversation.

    • multjoy an hour ago

      I don't understand what people get from using a chatbot to write correspondence. It saves no time and just ends up being long-winded nonsense.

      My stance is that if you're about to ask co-pilot, or whatever, to respond to me, then just send me the prompt you're about to enter as that will probably answer the question!

    • antonymoose 2 hours ago

      I recently had my first AI recruiter experience. To be clear, the person behind the account was a real person with a real business, except everything was uncanny-valley levels of bad correspondence. I promptly disconnected and blocked this jerk.

  • recursive 3 hours ago

    Only you can decide if you feel shame for it. Just like only I can decide if I judge you for it.

  • dep_b 12 hours ago

    Just got a few recommendations from my colleagues on LinkedIn that were clearly written by an LLM; the long emdash was even present. But then again, the messages were tuned to specific things I did. Also, they were from Eastern Europe, so I imagine they just fixed their input.

    If you call yourself a writer, having tell-tale LLM signs is bad. But for people whose work doesn't involve having a personal voice in written language, it might help them express things better than before.

    • SweetSoftPillow 12 hours ago

      I've been using em dashes since long before LLMs existed, and I won't stop. Some people might think it's a sign of an LLM, but I know it's just a sign of their own short-sightedness.

      • AlecSchueler 11 hours ago

        It's really frustrating to have to adjust my writing style to seem more human despite being entirely human. Many of us have been using em dashes for a long time; who else do people think the LLMs learnt it from?

        • viccis 2 hours ago

          Em dashes are fine in throwaway casual writing like internet comments or tweets or whatever. However, I think that, in any writing that is significant enough that LLM usage is scrutinized, they often just come across as a crutch to avoid more planned out sentence flow. I think it's actually a good thing that people are feeling like they should cut down on them.

          • zahlman an hour ago

            This issue, as I understand it, is about the actual choice to use an emdash character (—) rather than a hyphen (-), and about the effort involved in doing so. It's not about sentence structure.

            I don't really understand how AI developed a bias towards doing it correctly rather than doing it the lazy way. But hearing so much about emdashes qua LLM detection mechanism eventually just got me to decide that typing an ordinary hyphen really is just lazy. And then I ended up configuring my system to make it reasonably easy to type them.

        • d4rkp4ttern 10 hours ago

          Exactly. I think the whole emdash thing is a nonsense meme propagated by Xfluencers or LinkFluencers.

      • fluidcruft 2 hours ago

        Yeah, we smart people were using en and em dashes appropriately long before LLMs mimics appeared.

        Latex power users unite against the markdown monkey keyboard mashers!

        So... sorry (not sorry!) that LLMs try to be like us and not the heathens.

    • Gigachad 12 hours ago

      The craziest thing I saw at work was someone using AI-generated text in a farewell card. It's so obvious, and it's so much more offensive to send someone an AI-generated message than to just not send anything at all.

      • singpolyma3 11 hours ago

        What made it obvious?

        • Gigachad an hour ago

          A non-native English speaker suddenly using very elaborate language, a particularly long message without any specific details, just fluffy phrases. And em dashes.

    • exe34 an hour ago

      you'll have to pry my en/em dashes from my cold, dead fingers.

    • amiga386 12 hours ago

      > it might help them express things better than before.

      You know what people did before the AI fad? They read other people's books. They found and talked to interesting people. They found themselves in, or put themselves in, interesting situations. They spent a lot of time cogitating and ruminating before they decided they ought to write their ideas down. They put in a lot of effort.

      Now the AI salesmen come and insist you don't need a wealth of experience and talent, you just need their thingy, price £29.99 from all good websites. Now you can be like a Replicant, with your factory-implanted memories instead of true experience.

      • bilvar 12 hours ago

        Did people really use to do all that work when someone asked them to write a recommendation on LinkedIn?

        • amiga386 7 hours ago

          No, but people who called themselves writers did, or should have.

    • latexr 12 hours ago

      > clearly written by an LLM, the long emdash was even present.

      Can we please stop propagating this accusation? Alright, sure, maybe LLMs overuse the em-dash, but it is a valid typographical mark which was in use long before LLMs and is even auto-inserted by default by popular software on popular operating systems—it is never sufficient on its own to identify LLM use (and yes, I just used it—multiple times—on purpose in 100% human-written text).

      Sincerely,

      Someone who enjoys and would like to be able to continue to use correct punctuation, but doesn’t judge those who don’t.

      • ginko 11 hours ago

        So do you always put in the ALT+<code> incantation to get an emdash or copy&paste?

        I feel the emdash is a tell because you have to go out of your way to use it on a computer keyboard. Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.

        Things are different for typeset books.

        • latexr 11 hours ago

          > So do you always put in the ALT+<code> incantation to get an emdash or copy&paste?

          There’s no incantation. On macOS it’s either ⌥- (option+hyphen) or ⇧⌥- (shift+option+hyphen) depending on keyboard layout. It’s no more effort than using ⇧ for an uppercase letter. On iOS I long-press the hyphen key. I do the same for the correct apostrophe (’). These are so ingrained in my muscle memory I can’t even tell you the exact keys I press without looking at the keyboard. For quotes I have an Alfred snippet which replaces "" with “” and places the cursor between them.

          But here’s the thing: you don’t even have to do that, because Apple operating systems do it for you by default. Type -- and it converts to —; type ' in the middle of a word and it replaces it with ’; for quotes, it likewise inserts the correct opening and closing marks depending on where you type them.

          The reason I type these myself instead of using the native system methods is that those work a bit too well. Sometimes I need to type code in non-code apps (such as in a textarea in a browser) and don’t want the replacements to happen.

          > I feel the emdash is a tell because you have to go out of your way to use it on a computer keyboard.

          You do not. Again, on Apple operating systems these are trivial and on by default.

          > Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.

          Even if that were true—which, as per above, it’s not, you don't have to be that dedicated to type two hyphens in a row—it makes no sense to conflate those who care enough about their writing to use correct punctuation and those who don’t even care enough to type the words themselves. They stand at opposite ends of the spectrum.

          Again, using em-dashes as one signal is fine; using it as the principal or sole signal is not.

        • zahlman an hour ago

          On Linux, I configured my Caps Lock key to function as a compose key, and then use my ~/.XCompose file to make it easier.

          I also set things up such that hitting Caps Lock twice in a row sends an Escape character, which makes using Vim a tiny bit nicer.
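
          As a sketch, a minimal ~/.XCompose looks something like this (the em/en dash sequences shown are in the locale defaults that the include line pulls in; a custom file mainly lets you add or shorten sequences of your own):

```
# Pull in the locale's default compose sequences first.
include "%L"

# Compose, -, -, -  ->  em dash
<Multi_key> <minus> <minus> <minus>  : "—"  U2014

# Compose, -, -, .  ->  en dash
<Multi_key> <minus> <minus> <period> : "–"  U2013
```

          (Mapping Caps Lock to the compose key is typically done with something like `setxkbmap -option compose:caps`, though the exact setup depends on your desktop environment.)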

        • criddell 4 hours ago

          On Windows, I use autohotkey and have a bunch of keyboard shortcuts for producing characters that I use fairly often but are difficult to type.

          My keyboard has no keypad so I’m not sure there’s another way.

        • exe34 an hour ago

          no, I use -- and ---. not all of us use Microsoft Word for serious writing.

        • acheron 9 hours ago

          You type -- and it gets auto-converted.

      • jascha_eng 12 hours ago

        Fact is, I maybe saw it in 10% of blogs and news articles before ChatGPT. And now it pops up in emails, Slack messages, HN/Reddit comments, and probably more than half of blog posts?

        Yes, it's not a guarantee, but it is at least a very good signal that something was at least partially LLM-written. It is also a very practical signal; there are a few other signs, but none of them are this obvious.

        • latexr 12 hours ago

          > Fact is, I maybe saw it in 10% of blogs and news articles before ChatGPT.

          I believe you. But also be aware of the Frequency Illusion. The fact that someone mentions that as an LLM signal also makes you see it more.

          https://en.wikipedia.org/wiki/Frequency_illusion

          > Yes it's not a guarantee but it is at least a very good signal that something was at least partially LLM written.

          Which is perfectly congruent with what I said with emphasis:

          > it is never sufficient on its own to identify LLM use

          I have no quarrel with using it as one signal. My beef is when it’s used as the principal or sole signal.

        • yoz-y 7 hours ago

          Dubious. The only signal this gives is that, in aggregate, people use AI. On an individual basis, the presence of em dashes means nothing.

        • CRConrad 12 hours ago

          > And now it pops up in emails, slack messages, HN/reddit comments and probably more than half of blog posts?

          Yeah, maybe that's the one thing people who didn't know how to do it before have learnt from "AI" output.

    • CRConrad 12 hours ago

      > the long emdash [...] tell tale LLM signs

      I so wish people would stop spouting this bogus "sign" — but I know I'm going to be disappointed.

    • singpolyma3 11 hours ago

      ... you know all serious writers use the em dash, right? This is not some magic LLM watermark.

  • dsq 12 hours ago

    I would rewrite the title as "There's no shame in writing with LLMs", or, "Writing with LLMs is nothing to be ashamed of".

    • dang 6 hours ago

      You're right, of course, but the original title manages to still be grammatical and the altered meaning has its charm.

  • pessimizer 3 hours ago

    The problem with LLMs is that they write badly. If you want to use an LLM to write, prompt it with what you wrote and ask it to summarize concisely. If it doesn't understand what you meant, you should fix that part and resubmit (to a fresh context).

    The main reason, however, that one shouldn't "write" with LLMs is because it's a waste of everyone's time. If they wanted to know what GPT-5 thinks, they can ask it themselves.

    edit:

    > The problem is not the use of AI but the people how think they can, arbitrarily, criticize the work from someone else because he used or not AI in the name of “ethics”.

    Ah, I didn't realize that the real problem is that people complain about it. If we can figure out a way to make those people shut up, then using LLMs to write for you would be perfectly fine.

  • jillesvangurp 12 hours ago

    Of course, there’s no shame in using tools that are available to us. We’re a tool-using species. We’re just a bunch of stupid monkeys without tools. A lot of what we do is about using tools to free up time to do more interesting things than doing things the tools already do better than us.

    Like it or not, people are using LLMs a lot. The output isn’t universally good. It depends on what you ask for and how you criticize what comes back. But the simple reality is that the tools are pretty good these days. And not using them is a bit of a mistake.

    You can use LLMs to fix simple grammar and style issues, to fact-check argumentation, and to criticize and identify weaknesses. You can also task LLMs with doing background research, double-checking sources, and more.

    I’m not a fan of letting LLMs rewrite my text into something completely different. But when I'm in a hurry or in a business context, I sometimes let LLMs do the heavy lifting for my writing anyway.

    Ironically, a good example is this article which makes a few nice points. But it’s also full of grammar and style issues that are easily remedied with LLMs without really affecting the tone or line of argumentation (though IMHO that needs work as well). Clearly, this is not a native speaker. But that’s no excuse these days to publish poorly written text. It's sloppy and doesn't look good. And we have tools that can fix it now.

    And yes, LLMs were used to refine this comment. But I wrote the comment.

    • Refreeze5224 5 hours ago

      If the tool does the task for you, then you didn't do the task. I don't keep my food cold; my refrigerator does. I just turned it on. This doesn't matter unless I am for some reason pretending that I myself am somehow keeping my food cold, at which point it becomes a lie.

      When a tool blurs the line between who performed the task, and you take full credit despite being assisted, that is deceitful.

      Spell checking helps us all pretend we're better spellers than we are, but we've decided as a society that correct spelling is more important than proving one's knowledge of spelling.

      But if you're purportedly a writer, and you're using a tool that writes for you, then I will absolutely discount your writing ability. Maybe one day we will decide that the output is more important than the connection to the person who generated it, but to me, that day has not arrived.

      • QRY 4 hours ago

        When does a woodworker cease to be one? When he uses a handsaw? A circular saw? A sawmill?

        > When a tool blurs the line between who performed the task

        Who saws the wood? He who operates the tool, or the tool performing its function? What is the value of agency in a business that, supposedly, sells product? Code authorship isn't like writing, is it? Should it be?

        Or is the distinction not in the product, but in the practice? Is the difference in woodworking vs lumber processing?

        Or is it about expectation? e.g. when we no longer expect a product to be made by hand due to strong automation in the industry, we prepend terms such as "hand-made" or "artisanal". Are we currently still in the expectation phase of "software is written by hand"?

        I have no dog in this race, really. I like writing software, and I like exploring technology. But I'm very confused and have a lot of questions that I have trouble answering. Your comment resonated though, and I'm still curious about how to interpret it all.

        • fragmede 2 hours ago

          There's also the perception of time. How long did it take you to write that email/comment/code? Did you laboriously pore over every word, every line, for hours before you hit send, regardless of whether you used an LLM or not? Or did you spend barely five minutes and just paste whatever ChatGPT shit out?

          That's the real question that people are trying to suss out.

      • monkaiju 3 hours ago

        I like the distinction between syntactic tools, like spellcheck, and semantic tools, like AI. The former clearly doesn't impugn the author, the latter does. They seem clearly and fundamentally different to me.

        • handoflixue 39 minutes ago

          Where do you put the line? What do you do with the ambiguous categories?

          Clearly a trucker does not "deliver goods" and a Taxi Driver is not in the business of ferrying passengers - the vehicle does all of that, right?

          Writers these days rarely bother with the actual act of writing now that we have typing.

          I've rarely heard a musician, but I've heard lots of CDs and they're really quite good - much cheaper than musicians, too.

          Is my camera an artist, or is it just plagiarizing the landscape and architecture?

          • monkaiju 18 minutes ago

            I'm not sure it makes sense to assume the creative act of a person writing to other people, which is fundamentally about a consciousness communicating to others, is anything like delivering goods.

            The distinction I pointed out, applied to people producing writing intended for other people to read, seems to give a really clear "line": with syntactic tools you're still fully producing the writing; with semantic tools you're not. You can find some small amount of blurriness if you really want (does using a thesaurus count as semantic?), but it seems disingenuous to pretend that has anywhere near the same impact on the authorship of the piece as using AI.

  • klabb3 12 hours ago

    It's very similar to the Stack Overflow debate of the previous decade: bad developers would copy-paste without understanding. It's the same here. Without understanding, you just can't build very sophisticated things or debug hard issues. And even if AI got better at this, anyone else could do it too, so you'd be a dime-a-dozen engineer.

    Those who don't compromise on understanding will benefit from an extra tool under their belt. Those who actively leverage the tool to improve their understanding will do even better.

    Those who want shortcuts and don't bother understanding are like cheaters in school – not in a morally wrong way, but in a they-missed-the-entire-point way.

  • phoenixhaber an hour ago

    There are several issues at play here that I think need to be disentangled, seeing as I'm someone who cares deeply about writing and books.

    First, there is the question of the mythology of the author. Would Shakespeare be himself if he had an AI ghost write his books? Would we care as much?

    Setting that aside, there's nothing to say that an AI will come up with something wholly novel that's not a pastiche of what's come before. Would it be able to come up with the next Dracula? Or the next meme genre of your particular favorite? What about writing style? It could mimic Clarice Lispector, but it couldn't create a new her. If it did, we wouldn't recognize it as something human that we would be forced to care about in some way. If an AI came up with a Lispector and we hadn't seen her type before, perhaps we would think that the machine was hallucinating.

    More than that, though, why should I buy a book that an AI wrote? I can just ask an AI to tell me a story. Or I can read all of the books that were written pre-2000; there are more than enough to satisfy my curiosity and desire for enlightenment from before machines were used to print money for those with access to them. For me that's the most galling part: it shows that the people who have the money and the means to make a machine do the thinking for them are unable to come up with an original idea, except insofar as they push a button or give a prompt. In a few years, when AI achieves consciousness (which I believe it will), we'll be able to have machines that write their own novels, if we wish and they want to do so. Then we can judge those on their own merits. In the meantime, if the person writing the book doesn't have anything interesting to say, isn't an intelligent person, and wants to send me a dead tree with machine-written information inside, what's the value added, other than me taking a picture of the blurb on the back, feeding it into an AI, and having that AI recreate the book? The paper it's printed on?

    EDIT - Where AI (not AGI) is important is in doing the sort of hard combinatorial analysis that is so difficult in diffuse systems, like traffic control and industrial control of city services, or in combining chemical and biological synthesis for drug research, such as protein folding. AI as a tool for art is one thing, but having an AI create your doctoral dissertation or come up with a book is another. If you can ask an AI to find a cure for a disease or a novel drug and it tells you how, step by step, then by all means do it, because it would be absurd not to. It doesn't prove how intelligent you are in that field, however; there probably should be altered qualifications for how we rank how useful people are in society given AI prompts, and there will be over time, unless society just devolves into a "whoever has the most compute wins" dystopia. In which case I'm going back to Plato and Jules Verne.

  • jaredcwhite 7 hours ago

    Consumers have a right to know the source of the content they are ingesting into their minds, and specifically whether that content originated in an actual human mind or is slop generated by a synthetic text extruder.

    It's really a pretty straightforward proposition to understand, and disclosure is absolutely the key so that consumers, if they choose as I do to boycott such output, can make informed decisions.

  • godelski 3 hours ago

    I'm an AI critic, but I use AI every day. In fact, I am an AI researcher and work on making models more capable and powerful (probably where a lot of my criticism stems from).

    My main problem with AI usage is that people use it and turn their brains off. This isn't a new problem, but it is a new scale. People mindlessly punch numbers into a formula, run software they don't understand, or read a summary of a complex topic and declare mastery. The problem is sloppiness and our human tendency to be lazy: lazy by focusing on the least amount of energy in the moment, not the least amount of energy over time. That's the critical distinction. Slop is momentary laziness, while thoughtfulness is amortized laziness.

    The problem is in a way not the AI but us and the cultures we have created. At the end of the day no one cares if you wrote AI code (or docs or whatever), they care about how well it was done. You want to do things fast, but speed is nothing if the quality suffers.

    I really like how Mitchell put it in this Ghostty PR[0,1]. The disclosure is to help people know what to pay more attention to. It is a declaration of where you were lazy, or didn't have expertise, or took some shortcut. It tells us what the actual problem is: slop isn't always obvious.

    A little slop generally doesn't do too much harm (unless it grows and compounds), but a lot of slop does. If you are concerned about slop and the rate of slop is increasing, then you must treat everything as potential slop. Because slop isn't easily recognized, it makes effort increase exponentially. So by producing AI slop (or any kind of slop) you aren't decreasing the workload, you're outsourcing it to someone else. Often, that outsourcing produces additional costs. It only creates the illusion of productivity.

    It's not about the AI, it is about shoving your work onto others. It doesn't matter if you use a shovel or a bulldozer. But people are sure going to be louder (or cross that threshold where they'll actually speak up) if you start using a bulldozer to offload your work onto others. The problem is that it forces others into System 2 thinking all the time. It is absolutely exhausting.

    [0] https://github.com/ghostty-org/ghostty/pull/8289

    [1] https://news.ycombinator.com/item?id=44976568

  • mythrwy 3 hours ago ago

    For code, no: no shame, as long as you understand it and agree with it. For internet comments, blog posts, or emails: yeah, shame. In my opinion.

    But I'm a native English speaker and (I think) a decent writer. But if I had to write something in another language I was only marginally fluent in I'd probably reach for an LLM pretty quickly.

  • echelon_musk 13 hours ago ago

    Writing with LLMs is not a shame

    Or

    Writing with an LLM is not a shame

    • latexr 12 hours ago ago

      I’d suggest “not shameful” instead of “not a shame”.

      • akkad33 4 hours ago ago

        That's a shame

    • riz_ 13 hours ago ago

      Should have written with an LLM.

      • squid_ca 12 hours ago ago

        You’re absolutely right!

        • ares623 12 hours ago ago

          Not just right - genius!

      • CRConrad 11 hours ago ago

        What says they didn't?

    • ekianjo 12 hours ago ago

      > Writing with an LLM is not a shame

      Should be "Writing with a LLM is not a shame", no reason to put a "an" here.

      • catapart 12 hours ago ago

        "el" begins with a vowel sound, so "an" is the appropriate article.

        It's not about the letter, it's about practical pronunciation: you say "an r" but "a u", and "an m" or "an f".

        • akkad33 4 hours ago ago

          It depends on how you read it. You could say "a large language model".. I think both are right

      • CRConrad 11 hours ago ago

        Nope, that's not how a / an works in English.

  • tjpnz 12 hours ago ago

    If you don't have time to write it I'm not going to make time to read it.

    • redwall_hp 5 hours ago ago

      Exactly. LLM garbage is a straight up insult.

      1. They deliberately chose to not take a few minutes to communicate with you, but expect something of you.

      2. The hard part of writing is organizing thoughts into something coherent, not typing something out. If you don't understand something enough to write it in the first place, the LLM can't magically read your mind and understand what you want to say for you.

  • kosolam 5 hours ago ago

    A relevant satirical post I stumbled on today much about the same subject: https://medium.com/@Justwritet/stop-competing-with-the-machi...

    • nuancebydefault 3 hours ago ago

      Interesting! How could you tell that it's satirical, though?

  • lewdwig 12 hours ago ago

    I use Claude Code almost daily now, and I think I’d rather cut off my own arm than go without it. But I don’t delude myself: current gen tools have significant limitations, and it is my job to manage those limitations.

    So just like any other tool really.

    I have discovered this week that Claude is really good at redteaming code (and specs, and ADRs, and test plans), much better than most human devs who don’t like doing it because it’s thankless work and don’t want to be “mean” to colleagues by being overly critical.

    • torium 12 hours ago ago

      Would you share with us what kind of job you do?

      I keep seeing people saying how amazing it is to code with these things, and I keep failing at it. I suspect that they're better at some kinds of codebases than others.

      • girvo 12 hours ago ago

        > I suspect that they're better at some kinds of codebases than others.

        Probably. My work's custom dev agent poops the bed on our front-end monorepo unless you're very careful about context, but then being careful about context is sort of the name of the game anyway...

        I'm using them, mainly for scaffolding out test boilerplate (but not actual tests, most of its output there is useless) and so on, or component code structure based on how our codebase works. Basically a way more powerful templating tool I guess.

      • lewdwig 12 hours ago ago

        Devops/SRE/Platform Engineering

        Downside: lots of Python, and Python indentation causes havoc with a lot of agentic coding tools. RooCode in particular seems to mangle diffs all the time, irrespective of model.

  • satisfice 3 hours ago ago

    The author of this piece commits a common mistake: analyzing AI use as if communication were nothing more than an isolated transaction. Instead, communication is usually a process of creating and maintaining a relationship of some kind with other people.

    Here’s a thought experiment: Imagine if I handed you a $100 bill and asked you to examine it carefully. Is it real money? Perhaps you immediately suspect it is counterfeit, and subject it to stringent tests. Let’s say all the tests pass. Okay, given that it is indistinguishable from a legit $100 bill, is it therefore correct and ethical for me to spend this money?

    You know the answer: “not necessarily.”

    This is because spending money is about more than a series of steps in a transaction. It is based on certain premises that, if false, represent a hazard to the social contract by which we all live in peace and security.

    It seems to me that many AI fanboys are arguing that as long as their money passes your scrutiny, it doesn’t matter if it was stolen or counterfeit. In some narrow sense, it really doesn’t matter. But narrow senses are not the only ones that matter.

    When I read writing that you give me and present it as your work, I am getting to know you. I am learning how I can trust you. I am building a simulation of you in my mind that I use to anticipate your ideas and deeds. All that is disrupted and tainted by AI.

    It’s not comparable to a grammar checker, because grammar is like clothing. When an editor modifies my grammar, this does not change my message or prevent me from getting across my ideas. But AI is capable of completely altering your ideas. How do you know it didn’t?

    You can only know through careful proofreading. Did you proofread carefully? Whether you did or not: I don’t believe that people who want AI to write for them are the kind of people who carefully proofread what comes out of it. And of course, if you ask AI to come up with ideas by itself, for all we know that is plagiarism: stolen words.

    Therefore: if you use AI in your writing, you had better hide that from me. And if I find out you are using it, I will never trust you again.

    • handoflixue 33 minutes ago ago

      Every day cashiers accept $100 bills on the basis that they pass the counterfeit tests, and every day society has failed to collapse from what you posit is a "hazard to the social contract".

  • ath3nd 12 hours ago ago

    Riding a bike with training wheels is also not a shame. If you need the training wheels, by all means feel free to use them.

    But LLMs are training wheels being forced on everyone, including experienced developers, and we are being gaslit into believing that if we don't use them, we are falling behind. In reality, however, the only study to date shows a 19% decline in productivity for experienced devs using LLMs.

    I don't mind folks using crutches if they help them. The cognitive decline and loss of reasoning skills among people using LLMs is not yet well studied, but preliminary results suggest it's a thing. I gotta ask: why are you guys doing that to yourselves?

    • godelski 3 hours ago ago

      This is a lot about how I feel (I wrote a longer comment too). Training wheels are fine, but at what point do the training wheels come off? Maybe there's a more apt metaphor, though, since people who have been riding bikes for a while don't use training wheels.

      It's also fine to use tire chains when you're driving on icy roads, but you have to drive much slower and should take them off when it isn't icy. It's about knowing the environment and conditions. Maybe some people don't need chains in that environment because they have winter tires (experience, in our metaphor?). Sure, you can drive faster with chains on an icy road than you can without, but you still have to drive slow and be far more alert than you would on a summer road. It is all about context.

    • monkaiju 3 hours ago ago

      Some combination of cargo-cult hype and FOMO, mixed with laziness

  • latexr 12 hours ago ago

    > One argument to not disclaim it: people do not disclaim if they Photoshop a picture after publishing it and we are surrounded by a lot of edited pictures.

    That is both a false equivalence and a form of whataboutism.

    https://en.wikipedia.org/wiki/False_equivalence

    https://en.wikipedia.org/wiki/Whataboutism

    It is a poor argument in general, and a sure-fire way to increase shittiness in the world: “Well, everyone else is doing this wrong thing, so I can too”. No. Whenever you mention the status quo as an excuse to justify your own behaviour, you should look inward and reflect on your actions. Do you really believe what you’re doing is the right thing? If it is, fine; but if it is not, either don’t mention it or (ideally) do something about it.

    > why don’t we see people mentioning they used specific tools to proofread before AI apparition?

    Whenever I see this argument, I have a hard time believing it is made in good faith. Can you truly not see the difference between using a tool to fix mistakes in your work and using one to do the work for you?

    > It feels like an obligation we have to respect in a way.

    This was obvious from the beginning of the post. Throughout, I never got the feeling you were struggling with the question intrinsically, for yourself, but always in the sense of how others would judge your actions. You quote opinion after opinion, and it felt like you were in search of absolution—not truth—for something you had already decided you did not want to do.

    • flornt 12 hours ago ago

      Thanks. Really appreciate your comments. It opens some perspectives I haven't considered and gives more things to think about regarding this. I'll digest it and update the content based on your observations!

  • wfhrto 12 hours ago ago

    At this point, it would be shameful to not write with LLMs. I don't want to spend time reading plain human text when improved AI text is an option.

    • latexr 12 hours ago ago

      > improved AI text

      It is certainly your prerogative to believe that, but know your opinion is far from universal. It is a widespread view that AI-written text is worse.

    • lomase 12 hours ago ago

      > improved AI text

      Why are you on hackernews and not talking to an LLM?

    • satisfice 3 hours ago ago

      I assume that you wrote that with AI, then. If so, I assume it’s not really your opinion. You provided some prompt, which is hidden from us.

      I don’t know you, don’t trust you, and if you write with AI nobody else will get to know you or trust you, either, unless they fall for your false AI mask.

    • gred 4 hours ago ago

      That's a great point!

      Large Language Models (LLMs), like GPT-4, offer numerous benefits for writing tasks across various domains. Here’s a breakdown of the key advantages:

      1. Enhanced Productivity

      Faster Drafting: Quickly generate drafts for essays, reports, emails, blog posts, and more.

      24/7 Availability: Instant support with no downtime or fatigue.

      Reduced Writer’s Block: Provides starting points and creative prompts to overcome mental blocks.

      2. Improved Writing Quality

      Grammar and Style: Corrects grammar, punctuation, and stylistic issues.

      Tone Adjustment: Adapts tone to suit professional, casual, persuasive, or empathetic contexts.

      Clarity and Conciseness: Helps simplify complex ideas and remove redundant language.

      3. Creativity and Ideation

      Brainstorming: Assists in generating titles, outlines, metaphors, and analogies.

      Storytelling: Offers plot ideas, character development, and dialogue suggestions for creative writing.

      Variations: Produces multiple versions of the same message (e.g., for A/B testing).

      4. Language Versatility

      Multilingual Support: Translates and writes in many languages.

      Localization: Tailors content for different cultural contexts or regions.

      5. Research Assistance

      Summarization: Condenses large documents or articles into key points.

      Information Retrieval: Provides background context on topics quickly (though should be fact-checked for critical work).

      Citation Help: Assists in generating citations in formats like APA, MLA, or Chicago.

      6. Editing and Rewriting

      Paraphrasing: Rewrites text to avoid plagiarism or improve readability.

      Consistency Checks: Maintains tone, terminology, and formatting across long documents.

      Content Expansion: Adds detail to thin content or elaborates on underdeveloped points.

      7. Customization and Integration

      Prompt Engineering: Tailors responses for specific industries (e.g., legal, medical, technical).

      API Integration: Can be embedded into writing tools, content platforms, or CMS systems.

      8. Cost Efficiency

      Reduces Need for Human Writers: Especially for repetitive or low-complexity tasks.

      Scales Effortlessly: One model can serve multiple users or projects simultaneously.

      Would you like a breakdown of how these benefits apply to a specific type of writing (e.g., academic, marketing, business)?