I'd rather read the prompt

(claytonwramsey.com)

1332 points | by claytonwramsey a day ago

130 comments

  • sn9 a day ago

    > I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think; a language model produces the former, not the latter.

    It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this, especially in education and academia.

    They see work as the mere production of output, without ever thinking about how that work builds knowledge and skills and experience.

    Students who know the least, and who don't understand the purpose of writing or problem solving or the limitations of LLMs, are currently wasting years of their lives letting LLMs pull them along as they cheat themselves out of an education. Some spend hundreds of thousands of dollars to let their brains atrophy, only to get a piece of paper and face a real world where problems get massively more open-ended and LLMs massively decline in meeting the required quality of problem solving.

    Anyone who actually struggles to solve problems and learns for themselves is going to have massive advantages in the long term.

  • necovek a day ago

    I've already asked a number of colleagues at work producing insane amounts of gibberish with LLMs to just pass me the prompt instead: if an LLM can produce verbose text from limited input, I just need that concise input too (the rest is simply made-up crap).

  • bost-ty a day ago

    I like the author's take: it isn't a value judgement on the individual using ChatGPT (or Gemini or whichever LLM you like this week), it's that the thought that went into making the prompt is, inevitably, more interesting/original/human than the output the LLM generates afterwards.

    In my experiments with LLMs for writing code, I find that the code is objectively garbage if my prompt is garbage. If I don't know what I want, if I don't have any ideas, and I don't have a structure or plan, that's the sort of code I get out.

    I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done, as I haven't tried using any models for anything beyond helping me punch through boilerplate/scaffolding on personal programming projects.

  • EigenLord a day ago

    I think the answer to the professor's dismay is quite simple. Many people are in university to survive a brutal social Darwinist economic system, not to learn and cultivate their minds. Only a very small handful of them were ever there to study Euler angles earnestly. The rest view it as a hoop they have to jump through to hopefully get a job that might as well be automated away by AI anyway. Also, viewed from a conditional reinforcement perspective, all the professor has to do is start docking grade points from students who are obviously cheating. Theory predicts they will either stop doing it, or get so good at it that it becomes undetectable, possibly an in-demand skill for the future.

  • laurentlb a day ago

    There are many ways to use LLMs.

    The issue, IMO, is that some people throw in a one-shot, short prompt, and get a generic, boring output. "Garbage in, generic out."

    Here's how I actually use LLMs:

    - To dump my thoughts and get help organizing them.

    - To get feedback on phrasing and transitions (I'm not a native speaker).

    - To improve tone, style (while trying to keep it personal!), or just to simplify messy sentences.

    - To identify issues, missing information, etc. in my text.

    It’s usually an iterative process, and the combined prompt length ends up longer than the final result. And I incorporate the feedback manually.

    So sure, if someone types "write a blog post about X" and hits go, the prompt is more interesting than the output. But when there are five rounds of edits and context, would you really rather read all the prompts and drafts instead of the final version?

    (if you do: https://chatgpt.com/share/6817dd19-4604-800b-95ee-f2dd05add4...)

  • kouru225 a day ago

    Ever since AI came out I’ve been talking about the prompt-to-output ratio. We naturally assume that the prompt will be smaller than the output just because of the particulars of the systems we use, but as you get more and more particular about what you want, the prompt grows while the output stays the same size. This is logical. If instead of writing an essay, I just describe what I want the essay to say, the description is necessarily going to be a larger amount of text than the essay itself. It’s more text to describe what’s said than to just say it. The fact that we expect to put in less effort and get back more indicates exactly what we’re getting here: a bunch of filler.

    In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.

    In general, I’d say that AI is way more useful for compressing complex ideas into simple ones than for expanding simplistic ideas into complex ones.

  • Animats a day ago

    That's because the instructor is asking questions that merely require the student to regurgitate the instructor's text.

    To actually teach this, you do something like this:

    "Here's a little dummy robot arm made out of Tinkertoys. There are three angular joints, a rotating base, a shoulder, and an elbow. Each one has a protractor so you can see the angle.

    1. Figure out where the end of the arm will be based on those three angles. Those are Euler angles in action. This isn't too hard.

    2. Figure out what the angles should be to touch a specific point on the table. For this robot geometry, there's a simple solution, for which look up "two link kinematics". You don't have to derive it, just be able to work out how to get the arm where you want it. Is the solution unambiguous? (Hint: there may be more than one solution, but not a large number.)

    3. Extra credit. Add another link to the robot, a wrist. Now figure out what the angles should be to touch a specific point on the table. Three joints are a lot harder than two joints. There are infinitely many solutions. Look up "N-link kinematics". Come up with a simple solution that works, but don't try too hard to make it optimal. That's for the optimal controls course."

    This will give some real understanding of the problems of doing this.
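
    For step 2, the standard planar two-link solution looks like this minimal Python sketch (the link lengths, angle conventions, and function names here are my own assumptions, not part of the assignment):

      import math

      def forward(l1, l2, t1, t2):
          # Forward kinematics (step 1): t1 is the shoulder angle from the
          # x-axis, t2 the elbow angle relative to the first link.
          x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
          y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
          return x, y

      def inverse(l1, l2, x, y):
          # Inverse kinematics (step 2): returns both solutions, elbow-up
          # and elbow-down -- the ambiguity the hint alludes to.
          c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
          if abs(c2) > 1:
              raise ValueError("target out of reach")
          solutions = []
          for s2 in (math.sqrt(1 - c2 * c2), -math.sqrt(1 - c2 * c2)):
              t2 = math.atan2(s2, c2)
              t1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
              solutions.append((t1, t2))
          return solutions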

  • Ancalagon a day ago

    I fully support the author’s point, but it’s hard to argue with the economics and hurdles around obtaining degrees. Most people do view obtaining a degree as just a hurdle to getting a decent job; that’s just the economics of it. And unfortunately, employers these days are encouraging this kind of copy/paste work: look at how Meta and Google claim that the majority of new code written there is AI-generated.

    The world will be consumed by AI.

  • andy99 a day ago

    I used to teach, years before LLMs, and got lots of copy-pasted crap submitted. I always marked it zero, never mentioning plagiarism (which would require some university administration) and just commenting that I asked for X and instead got some pasted together nonsense.

    As long as LLM output is what it is, there is little threat of it actually being competitive on assignments. If students are attentive enough to paraphrase it into their own voice I'd call it a win; if they just submit the crap that some data labeling outsourcer has RLHF'd into an LLM, I'd just mark it zero.

  • jjani 19 hours ago

    > I’ll now cover the opposite case: my peers who see generative models as superior to their own output. I see this most often in professional communication, typically to produce fluff or fix the tone of their original prompts. Every single time, the model obscures the original meaning and adds layers of superfluous nonsense to even the simplest of ideas.

    I'm going to call out what I see as the elephant in the room.

    This is brand new technology and 99% of people are still pretty clueless at properly using it. This is completely normal and expected. It's like the early days of the personal computer. Or Geocities and <blink> tags and under construction images.

    Even in those days, incredible things were already possible by those who knew how to achieve them. The end result didn't have to be blinking text and auto-playing music. But for 99% it was.

    Similarly, with current LLMs, it's already more than possible to use them in effective ways, without obscuring meaning or adding superfluous nonsense. In ways to which none of the author's criticisms apply. People just don't know how to do it yet. Many never will, just like many never learnt how to actually use a PC past Word and Excel. But many others will learn.

  • blintz a day ago

    Call it the iron law of LLMs:

    "No worthy use of an LLM involves other human beings reading its output."

    If you use a model to generate code, let it be code nobody has to read: one-off scripts, demos, etc. If you want an LLM to prove a theorem, have it generate some Coq and then verify the proof mechanically. If you ask a model to write you a poem, enjoy the poem, and then graciously erase it.

  • Krisando 14 hours ago

    > I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt.

    I've used LLMs before to document command-line tools and APIs I've made; they aren't the final product, since I also tweaked the writing and fixed misunderstandings from the LLM. I don't think the author would appreciate the original prompts, where I essentially just dump a lot of code and give instructions in bullet-point form on what to output.

    This generated documentation is immensely useful, and I use it all the time for myself. I prefer the documentation to reading the code, because finding what I need at a glance is not trivial, nor is remembering all the conditions, prerequisites, etc.

    That being said, the article seems to focus on a use case for which LLMs are ill-suited. They are not suited for writing papers so you can pretend you wrote a paper.

    > I say this because I believe that your original thoughts are far more interesting

    Looking at the example posted, I'm not convinced that most people's original thoughts on gimbal lock will be more interesting than a succinct summary by an LLM.

  • ineptech a day ago

    Relatedly, there was a major controversy at work recently over the propriety of adding something like this to a lengthy email discussion:

    > Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...

    Someone called them out for it, and several others defended it. It was brought up in one team's retro, and opinions were divided and very contentious, ranging from "the summary helped make sure everyone had the same understanding and the person who did it was being conscientious" to "the summary was a pointless distraction and including it was an embarrassing admission of incompetence."

    Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.

  • Workaccount2 a day ago

    I just want to point out that AI generated material is naturally a confirmation bias machine. When the output is obviously AI, you confirm that you can easily spot AI output. When the output is human-level, you just pass through it without a second thought. There is almost no regular scenario where you are retroactively made aware something is AI.

  • pasquinelli 14 hours ago

    > Why do we write, anyway?

    > I believe that the main reason a human should write is to communicate original thoughts.

    in fairness to the students, how does the above apply to school work?

    why does a student write, anyway? to pass an assignment, which has nothing to do with communicating original thoughts-- and whose fault is that, really?

    education is a lot of paperwork to get certified in the hopes you'll get a job. it's as bereft of intellectual life as the civil service examinations in imperial china. original thought doesn't enter the frame.

  • derefr a day ago

    > They are invariably verbose, interminably waffly, and insipidly fixated on the bullet-points-with-bold style.

    No, this is just the de-facto "house style" of ChatGPT / GPT models, in much the same way that that particular Thomas Kinkade-like style is the de-facto "house style" of Stable Diffusion models.

    You can very easily tell an LLM in your prompt to respond using a different style. (Or you can set it up to do so by telling it that it "is" or "is roleplaying" a specific type-of-person — e.g. an OP-ED writer for the New York Times, a textbook author, etc.)

    People just don't ever bother to do this.
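
    For instance, a minimal sketch with the OpenAI Python SDK (the model name, persona, and prompt text are illustrative placeholders of mine, not something from the comment):

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat model works
          messages=[
              # Override the default "house style" by telling the model
              # who it "is", as described above.
              {"role": "system",
               "content": "You are a New York Times op-ed writer. "
                          "Plain, terse prose: no bullet points, no bold."},
              {"role": "user",
               "content": "Explain gimbal lock in two short paragraphs."},
          ],
      )
      print(response.choices[0].message.content)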

  • rocqua a day ago

    Is the author forgetting a baby in the bathwater he is throwing out? Especially on coding. He points out that vibe coding is bad, and then concludes that any program written with the help of an AI is bad.

    For example if you already have a theory of your code, and you want to make some stuff that is verbose but trivial. It is just more efficient to explain the theory to an LLM and extract the code. I do like the idea of storing the underlying prompt in a comment.

    Same for writing. If you truly copy paste output, it's obviously bad. But if you workshop a paragraph 5 or 6 times that can really get you unstuck.

    Even the Euler angles example: that output would be a good starting point for an investigation.

  • YmiYugy a day ago

    Hate the game, not the player. For the moment we continue to live in a world where the form and tone of communication matter and where forgoing the use of AI tools can put you at a disadvantage. There are countless homework assignments where teachers will give better grades to LLM outputs. An LLM can quickly generate targeted cover letters, dramatically increasing efficiency while job hunting. Getting a paper accepted requires you to adhere to an academic writing style; LLMs can get you there. Maybe society just needs a few more years to adjust and shift expectations. In the meantime you should probably continue to use AI.

  • internet_points a day ago

    > I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think; a language model produces the former, not the latter.

    This so much. A writing exercise sharpens your mind, it forces you to think clearly through problems, gives you practice in both letting your thoughts flow onto paper, and in post-editing those thoughts into a coherent structure that communicates better. You can throw it away afterwards, you'll still be a better writer and thinker than before the exercise.

  • oncallthrow a day ago

    LLM cheating detection is an interesting case of the toupee fallacy.

    The most obvious ChatGPT cheating, like that mentioned in this article, is pretty easy to detect.

    However, a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect.

    I think if I was in the teaching profession I'd just leave, to be honest. The joy of reviewing student work will inevitably be ruined by this: there is 0 way of telling if the work is real or not, at which point why bother?

  • mightyham 17 hours ago

    While I agree with the thrust of the article being that students are cheating themselves by relying on LLMs, it's important to reflect on ways in which educators have encouraged this behavior. Anyone who has been to college in the age of the internet knows that many professors, particularly in the humanities, lazily pad out their class work with short menial writing assignments, often in the form of a "discussion board", that are rarely even graded on content. For students already swamped with work, or having to complete these assignments for general ed courses unrelated to their major/actual interests, it is totally understandable why they would outsource this work to a machine. This is a totally fixable issue: in-person discussions and longer writing assignments with well structured progress reports/check-ins and rounds of peer review are a couple ways that I can think of off the top of my head. Professors need to be held accountable for creating course loads that are actually intellectually interesting and are at least somewhat challenging to use LLMs to complete. When professors are constantly handing out an excess of low-effort assignments, using shortcuts becomes a learned behavior of students.

  • neogodless 10 hours ago

    > Lately I’ve seen more people in their cars thwarting stoplight boredom—­that is, unable to sit unmediated for even the few moments that it takes a red light to turn green, they reach for their smartphones.

    I wish it were only at stoplights. But just a few days ago, I witnessed a totally unnecessary accident. The left-turn lane got a green, and someone in the straight lane noticed the movement but, without looking up, drove right into the car in front of them...

  • llsf 9 hours ago

    The most brain-intensive activity I ever did at school happened in the very last two years (so ~20 years of school led up to it), when I pitted my brain against Lambda Prolog.

    I almost had headaches after intensely thinking through problems and ways to solve them in Lambda Prolog. It was fascinating and satisfying to physically feel the effect of high focus combined with applying what was, to me, a new logic.

    Computer science at the university taught me how to learn and explore new ideas. I might sound like my grandpa, who told me when I was 8 years old that using calculators would lead to people unable to count... and here I am saying that LLMs might lead to people who do not know how to write.

    Actually, I am a bit concerned that we might produce more text in the short term, because it is becoming cheap to write tons of documentation with LLMs. But that feels like death by Terms and Conditions, i.e. text that no one reads. So not only would we lose our ability to write, we could also seriously damage our ability to read. Sure, LLMs can summarize as well, but then we lose the nuances.

    Nature is lazy, but should we be lazy and delegate our ability to think (read/write) to software? Think about it :)

  • Terr_ a day ago

    > I'd rather read the prompt

    Yeah, to recycle a comment [0] from a few months back:

    > Yeah, one of their most "effective" uses is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight." [...]we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."

    In other words, when the presentation means nothing, why bother?

    [0] https://news.ycombinator.com/item?id=41675602

  • Arch-TK 17 hours ago

    I will often write a bunch of stuff and then use an LLM to pre-process it a little bit and suggest some improvements. I will then work through the differences and consider them individually and either accept them, or use them to write my own improvements. This is kind of like having an okay editor working for you. No substitute for a real editor, but it means that a: what I intended to say is preserved, b: there's no additional waffle (the prompt includes instructions not to expand on any topic, but only ever to summarise where possible), c: everything still goes by me in the end, and if it doesn't feel like something I would actually write then it doesn't get used.

    I believe that it has improved my writing productivity somewhat, especially when I'm tired and not completely on the ball. Although I don't usually reach for this most of the time (e.g. not for this comment).

  • robertlagrant a day ago

    Time to go back to writing essays in exams, live, on paper.

  • jjaksic 11 hours ago

    I think AI can be an amazing tool that can help us learn even better when used correctly and when not used as a substitute for learning and understanding.

    It can be used as a personal tutor. How awesome is it to have a tutor always available to answer almost any question from any angle to really help you understand? Yes, AI won't get everything right 100%, but for students who are still learning basics, it's fair to assume that having an AI tutor can yield far better results than having no tutor at all.

    It can also be used as a tool for doing mundane work, so you can focus more on the interesting and creative work. Kind of like a calculator or a spreadsheet. Would math majors become better mathematicians if they had to do all calculations by hand?

    I think instead of banning AI, education needs to reform. Teaching staff should focus less time on giving lectures and grading papers (those things can be recorded and automated) and more time on ORAL EXAMS, where they really probe students' knowledge and there's no possibility of cheating.

    Students can and should use AI to help them prepare. E.g. don't ask AI to write an essay for you, write it yourself and ask it to critique it. Don't ask it to give you answers for a test, ask it to ask you questions on the topic and find gaps in your knowledge. Etc.

  • sizzzzlerz 14 hours ago

    This is equivalent to students using AI to complete computer programming assignments. They misconstrue the purpose of an assignment as just one of generating output, instead of something to teach the principles and techniques they'll require later if they want a job in the profession. While they may believe they're fooling the teacher, all they're really doing is fooling, and cheating, themselves.

    Whether it be writing or computer programming, or exercising, for that matter, if you aren't willing to put in the work to achieve your goals, why bother?

  • rralian a day ago

    I’ve used ChatGPT as an editor and had very good results. I’ll write the whole thing myself and then feed it into ChatGPT for editing. And then review its output to manually decide which pieces I want to incorporate. The thoughts are my own, but sometimes ChatGPT is capable of finding more succinct ways of making the points.

  • baalimago 21 hours ago

    The problem isn't LLMs, it's how universities are designed. With short terms and high pressure, students develop 'knowledge bulimia' (for lack of a better term). They have to study highly complex fields in short amounts of time, then move on to often unrelated fields quickly thereafter, with no emphasis on persistent learning: the knowledge learned for the previous exam can be mostly discarded. They may need to 're-learn' it for another exam, but that's fine; they are very good at learning new things which can later be discarded.

    Using LLMs to achieve this is just another step in the evolution of a broken education system. The fix? IMO, delay each course's exam by one semester, so that during the exam study period the students have to 'catch up' on the lectures they had a few months ago.

  • zahlman a day ago

    > [Not a student’s real answer, but my handmade synthesis of the style and content of many answers]

    > You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT. They are invariably

    This is validating. Your imitation completely fooled me (I thought it really was ChatGPT and expected to be told as much in an entirely unsurprising "reveal") and the subsequent description of the style is very much in agreement with how I'd characterize it.

    In previous discussions here, people have tried to convince me that I can't actually notice these obvious signs, or that I'm not justified in detecting LLM output this way. Well, it may be the case that all these quirks derive from the definitely-human training data in some way, but that really doesn't make them Turing-test-passing. I can remember a few times that other people showed me LLM prose they thought was very impressive and I was... very much not impressed.

    > When someone comments under a Reddit post with a computer-generated summary of the original text, I honestly believe that everyone in the world would be better off had they not done so. Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless. In essence, writing such a comment wastes everyone’s time.

    I think you've overlooked some meta-level value here. By supplying such a comment, one signals that the article is vapid to other readers who might otherwise have to waste time reading a considerable part of the article to come to that conclusion. But while it isn't as direct as saying "this article is utterly vapid", it's more socially acceptable, and also more credible than a bald assertion.

  • b0a04gl 12 hours ago

    >I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into

    Can't agree more, but let's take a step back and ask why somebody uses an LLM to generate content in the first place. IMO, people feel it's grunt work that doesn't deserve their focus; they write it off as low-hanging fruit and let it be automated. It's a very thin line between automating everything mundane and losing actual creativity.

    So it comes down to the author's ranking of whether writing the content from original thought has more value (a research report) or the LLM brings more value (e.g. very basic but heavy content like admission letters of recommendation or essays).

    I feel it's subjective, depending on the situation being handled at the moment.

  • cranium 19 hours ago

    I've found it really saddening to see students submit written-by-ChatGPT arguments to the department council when their university spot was on the line (for failing grades). This was their ultimate chance to prove their worth, and they left it to ChatGPT.

    At first, I thought they didn't care. However, it was so pervasive that it couldn't be the only explanation. I was forced to conclude they trusted ChatGPT more than themselves to argue their case... (Some students did not care, obviously.)

  • theturtletalks a day ago

    AI has changed how we learn by making the process of improving work much easier. Normally, learning involves writing a draft, finding mistakes, and fixing them over time. This helps build critical thinking. AI, trained on tons of refined data, can create polished work right away. While this seems helpful, it can skip the important step of learning through trial and error.

    The question is: Should we limit AI to keep the old way of learning, or use AI to make the process better? Instead of fixing small errors like grammar, students can focus on bigger ideas like making arguments clearer or connecting with readers. We need to teach students to use AI for deeper thinking by asking better questions.

    Asking the right questions is key. By teaching students to question well, we can help them use AI to improve their work in smarter ways. The goal isn’t to go back to the old methods of iterating but to change how we iterate altogether.

  • nitwit005 a day ago

    None of the people writing these sorts of posts seem willing to acknowledge how prevalent not doing your own work was before AI was around.

    The hardest hit industry by AI has been essay writing services.

    If anything, it seems they're noticing because the AI is doing a worse job.

  • austin-cheney 14 hours ago

    > Don’t let a computer write for you!

    To play devil's advocate: original code alienates you from many programming jobs. This was true before LLMs, and remains true now. Many developers abhor original code. They need frameworks or packages from Maven, NPM, pip, or whatever. They need to be told exactly what to do in the code, but copy/paste is better, and a package that already does it for you is better still. In these jobs, yes, absolutely let a computer write it for you (or at least anybody who is an untrusted outside stranger). Writing the code yourself will often alienate you from your peers and violate some internal process.

  • Noumenon72 a day ago

    An exception to test the rule with: people are generating lifelike video based on the pixel graphics from old video games. I have no interest in seeing a prompt that says "Show me a creature from Heroes of Might and Magic 3, with influences from such and so", but it's incredible to see the monsters I've spent so much time with coming to life. https://www.youtube.com/watch?v=EcITgZgN8nw&lc=UgxrBrdz4BdEE...

    Maybe the problem is that the professor doesn't want to read the student work anyway, since it's all stuff he already knows. If they managed to use their prompts to generate interesting things, he'd stop wanting to see the prompts.

  • bobdosherman 15 hours ago

    Writing sharpens your thoughts. Working math problems sharpens your ability to do math. In the context of the education system though, where grades are a signal of future ability, there's a strong incentive to engage in rent-seeking by either searching for a solutions manual or "refining" an LLM's output. The little I've done when I've taught math-based econ is to make it clear that in-class tests have very high weight on your final grade, and out-of-class problem sets have very low weight. I can only mouth words as to why it's vital for students to struggle independently on the problem sets as a tool for learning.

  • FinnLobsien a day ago

    > A typical belief among students is that classes are a series of hurdles to be overcome; at the end of this obstacle course, they shall receive a degree as testament to their completion of these assignments.

    I agree with the broader point of the article in principle. We should be writing to edify ourselves and take education seriously because of how deep interaction with the subject matter will transform us.

    But in reality, the mindset the author cites is more common. Most accounting majors probably don't have a deep passion for GAAP, but they believe accounting degrees get good jobs.

    And when your degree is utilitarian like that, it just becomes a problem of minimizing time spent to obtain the reward.

  • skerit 21 hours ago

    > You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model.

    I don't understand this either. I use it a lot, but I never just use what an LLM says verbatim. It's so incredibly obvious it's not written by a human. Most of the time I write an initial draft, ask Claude to check it and improve it, and then I might touch up a few sentences here and there.

    > Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless.

    Maybe I still don't know what vibe coding is, but for the few times when I _can_ use an LLM to write code for me, I write a pretty elaborate instruction on what I want, how it should be written, ... Most of the time I use it for writing things I know it can do and seem tedious to me.

  • ruuda a day ago

    By now I consider LLM text a double insult. It says “I couldn’t be bothered to spend time writing this myself,” yet it makes _me_ waste time reading through the generated fluff! I agree with the article, I'd rather read the prompt.

    https://ruudvanasseldonk.com/2025/llm-interactions

  • zhyder 13 hours ago

    Strongly agree with the author that the original prompt is much more substantive, but I think they're mistaken that "skin in the game" is a small motivating factor for human written text. It's the entire motivation: we all want to look like we've done more (get better grades at school, or get better compensation at work) while minimizing effort. We're not incentivized to produce just the substance, coz effort scales O(substance).

  • qwertox 13 hours ago

    I write my emails to people like the HOA myself, but I always feed them into an LLM to make sure the point comes across. So many corrections get suggested that, if I accepted them all, I'd end up with a precise email, but one which I'd never have written that way. It's just not me in that mail. So my task is to find a middle ground: what do I need to remove from mine for it to be as easily understandable as the one the LLM suggests?

    Am I alone with this?

  • psychoslave a day ago

    >I believe that the main reason a human should write is to communicate original thoughts

    More than communicate, I would say to induce thoughts.

    I write poetry here and there (on paper, just for me). I like how exploration through lexical and syntactic spaces can be intertwined with semantic and pragmatic matters. More importantly, I appreciate how careful thoughts play with attention and other uncharted thoughts. The invisible side effects on mental structures that happen in the creation of expression can largely outweigh the importance of what is left as a publicly visible artefact.

    For a far more trivial example, we can think about how notes in the margin of a book can radically change the way we engage with the reading. Even a careful highlight of a spare word can make a world of difference in how we engage with the topic. It's the very opposite of "reading" a few pages before realizing that not a single thought percolated into consciousness, because it was wandering on something else.

  • LeroyRaz 5 hours ago

    Isn't the writer themselves using the "insipid bullet points with bold style"?

  • tabbott a day ago

    I have a very similar experience. Some students who want to get involved in contributing to open source will try to contribute to Zulip by taking whatever they wanted to say and asking ChatGPT to write it better for them, and posting the result.

    Even when no errors are introduced in the process, the outcome is always bad: 3 full paragraphs of text with bullets and everything where the actual information is just the original 1-2 sentences that the model was prompted with.

    I never am happy reading one of those; it's just a waste of time. A lot of the folks doing it are not native English speakers. But for their use case, older tools like Grammarly that help improve the English writing are effective without the problematic decompression downsides of this class of LLM use.

    Regardless of how much LLMs can be an impactful tool for someone who knows how to use one well, definitely one of the impacts of LLMs on society today is that a lot of people think that they can improve their work by having an LLM edit it, and are very wrong.

    (Sometimes, just telling the LLM to be concise can improve the output considerably. But clearly many people using LLMs think the overly verbose style it produces is good.)

  • neilv a day ago

    > [...] but not so distinctive to be worth passing along to an honor council. Even if I did, I’m not sure the marginal gains in the integrity of the class would be worth the hours spent litigating the issue.

    The school should be drilling into students, at orientation, what some school-wide hard rules are regarding AI.

    One of the hard rules is probably that you have to write your own text and code, never copy&paste. (And on occasions when copy&paste is appropriate, like in a quote, or to reuse an off-the-shelf function, it's always cited/credited clearly and unambiguously.)

    And no instructors should be contradicting those hard rules.

    (That one instructor who tells the class on the first day, "I don't care if you copy&paste from AI for your assignments, as if it's your own work; that just means you went through the learning exercise of interacting with AI, which is what I care about"... is confusing the students, for all their other classes.)

    Much of society is telling students that everything is BS, and that their job is to churn BS to get what they want. Popular early practices of "AI" usage so far look to be accelerating that. Schools should be dropping a brick wall in front of that. Well, a padded wall, for the students who can still be saved.

  • TeMPOraL a day ago

    Is bringing up Naur's paper and arguing that the theory of the program is all that matters, and that LLMs cannot build one, just a 2025 version of calling LLMs stochastic parrots and claiming they don't model or work in terms of concepts? Feels like it.

    EDIT: Not a jab at the author per se, more that it's a third or fourth time I see this particular argument in the last few weeks, and I don't recall seeing it even once before.

  • hi_hi 19 hours ago

    So it seems like the future is people writing in a command-prompt style for LLMs to better parse and repeat back our information. God, I hope that isn't the future of the internet.

    How about an emoji-like library designed exclusively for LLMs, so we can quickly condense context and mood without having to write a bunch of paragraphs? Or the next iteration of "txt" speech for LLMs? What does the next step of users optimising for LLMs look like?

    I miss the 80's/90's :-(

  • wseqyrku 15 hours ago

    I think the prompt should be treated just like source code, as in the actual "craft" that you're paid to produce. Source code, if computer-generated, feels more like an artifact (as a binary is).

    If you use AI, all that is important is your ability to specify the problem. Of course, it always has been; you can just iterate faster.

  • ctkhn a day ago

    > The model produces better work. Some of my peers believe that large language models produce strictly better writing than they could produce on their own. Anecdotally, this phenomenon seems more common among English-as-a-second-language speakers. I also see it a lot with first-time programmers, for whom programming is a set of mysterious incantations to be memorized and recited.

    AI usage is a lot higher in my work experience among people who no longer code and are now in business/management roles or engineers who are very new and didn't study engineering. My manager and skip level both use it for all sorts of things that seem pointless and the bootcamp/nontraditional engineers use it heavily. Our college hires we have who went through a CS program don't use it because they are better and faster than it for most tasks. I haven't found it to be useful without an enormous prompt at which point I'd rather just implement the feature myself.

  • cryptoegorophy a day ago

    As someone who is an immigrant who had to go to high school in an English-speaking country, who struggled a lot and couldn’t do anything to improve my essay writing no matter what I did, I say all these English teachers deserve this. I wish ChatGPT had existed during my school years; I would’ve at least had someone(thing) explain to me how to write better.

  • YmiYugy a day ago

    I mostly use LLMs as a more convenient Google and to automate annoying code transformations with the convenience of a natural language interface. Sometimes, I use them to "improve" my writing style.

    I have to admit I was a bit surprised how bad LLMs are at the continue-this-essay task. When I read it in the blog I suspected this might have been a problem with the prompt or with using one of the smaller variants of Gemini. So I tried it with Gemini 2.5 Pro and iterated quite a bit, providing generic feedback without offering solutions. I could not get the model to form a coherent, well-reasoned argument. Maybe I need to recalibrate my expectations of what LLMs are capable of, but I also suspect that current models have heavy guardrails, use a low temperature, and have been specifically tuned to solve problems and avoid hallucinations as much as possible.

  • junto a day ago

    Back in the 90’s I remember similar sorts of pushback from traditional media about the internet, and from academia about students no longer using the library on campuses.

    Libraries are still in every campus, often with internet access.

    Traditional media have transitioned to become online content media farms. The NYT Crossword puzzle is now online. Millions of people do Wordle every day online.

    This is just pushback. Every paradigm shift needs pushback in order to let the dust settle and for society to readjust and find equilibrium again.

  • colbyn 20 hours ago

    > I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think; a language model produces the former, not the latter.

    Personally, I’ve been enjoying using ChatGPT to explore different themes of writing. It’s fun. In my case the goal is specifically to produce artifacts of text that are different from what I’d normally produce.

  • zjp a day ago

    I occasionally pair-write with LLMs, but I give them my piece and then say, "I don't want your edits, just your feedback," and ask them some simple questions about the content and different angles on it. When the LLM says what I want it to say, I consider the piece good enough. That is to say, if a machine understands what you're saying and a human doesn't, that human's criticism might be beneath engaging with.

  • robwwilliams a day ago

    I ask Claude to respond like Hemingway would. It works.

  • tptacek a day ago

    I have a lot of sympathy for the author's position but I may have missed the point in the article where he explained why clarity of writing and genuineness of human expression was so vital to a robotics class. It's one thing for an instructor to appreciate those things; another for them to confound their own didactic purpose with them. This point seems obvious enough that I feel like I must have missed something.

    As always, I reject wholeheartedly what this skeptical article has to say about LLMs and programming. It takes the (common) perspective of "vibe coders", people who literally don't care what code says as long as something that runs comes out the other side. But smart, professional programmers use LLMs in different ways; in particular, they review and demand alterations to the output, the same way you would doing code review on a team.

  • myth2018 a day ago

    > Anecdotally, this phenomenon seems more common among English-as-a-second-language speakers

    That part caught my attention. As an English-as-a-second-language speaker myself, I find it so difficult to develop any form of "taste" in English the way I have in my mother tongue. A badly written sentence in my mother tongue feels painful in an almost physical way, while bad English usually sounds OK to me, especially when asserted in the confident tone LLMs are trained in. I wish I could find a way to develop such a sense for the foreign languages I currently use.

  • Tteriffic a day ago

    If you ask "just show me the prompts," you will invariably just get LLM-generated sets of prompts.

  • lgiordano_notte 18 hours ago

    If you outsource writing to a model, you often end up with words but shallow or no understanding. Writing forces you to clarify your ideas. LLMs substitute surface-level prose for genuine thinking; it might sound alright, but there's often no depth behind it.

  • nathants a day ago

    the solution is obvious. stop grading the result, and start grading the process.

    if you can one-shot an answer to some problem, the problem is not interesting.

    the result is necessary, but not sufficient. how did you get there? how did you iterate? what were the twists and turns? what was the pacing? what was the vibe?

    no matter if with encyclopedia, google, or ai, the medium is the message. the medium is you interacting with the tools at your disposal.

    record that as a video with obs, and submit it along with the result.

    for high stakes environments, add facecam and other information sources.

    reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.

  • A_Stefan 21 hours ago

    The output is so convenient that these students seem like they don't even change bits of it to make it their own.

    Since there is no "interdiction" against using an LLM, perhaps it should be mandatory to include the prompt as well when one is used. Feels like that could be the seed that sparks curiosity...

  • markusde a day ago

    Preach about the bullet points. I was grading some assignments a while ago, and by some mysterious coincidence about a third of the answers were written in this strange bullet-point format listing the same 3 ideas.

    The punchline? Bullet point 3 was wrong (it was a PL assignment, and I'm 99% sure the AI was picking up on the word macro and regurgitating facts about LISP). 0 points all around, better luck next time.

  • xixixao 14 hours ago

    I have copilot turned off for markdown files. Cursor has this built in now. I’d never want AI to help write docs (except for narrow cases, repetitive references).

  • spatchcock 14 hours ago

    I’m learning C programming at the moment. Originally I was doing it to understand security vulnerabilities more deeply, but I’ve found that I really enjoy the mental exercise of it (and the benefits of that exercise in my career, life, etc.). Hopefully the ideas in this article will eventually get to a lot of people; otherwise I feel that people are going to dig themselves into a hole by using LLMs and not thinking for themselves.

  • jez a day ago

    Where I especially hold this viewpoint is for end-of-year peer performance reviews.

    People say “I saved so much time on perf this year with the aid of ChatGPT,” but ChatGPT doesn’t know anything about your working relationship with your coworker… everything interesting is contained in the prompt. If you’re brain dumping bullet points into an LLM prompt, just make those bullets your feedback and be done with it? Then it’ll be clear what the kernel of feedback is and what’s useless fluff.

  • jonniebullie 16 hours ago

    I am a student, and the main message I have taken from this article is that I should love to write and be comfortable with my thoughts no matter the situation. Thanks for this amazing piece of writing.

  • Gud 21 hours ago

    Although I agree with the OP that copy-pasting verbatim from the LLM is a meaningless exercise, I'd just like to say that LLMs can be a fantastic study tool. I am using ChatGPT to help me learn German, and it has had a profound impact on my learning.

  • lqr a day ago

    For math and writing, we still have in-class exams as an LLM-free evaluation tool.

    I wish there was some way to do the same for programming. Imagine a classroom full of machines with no internet connection, just a compiler and some offline HTML/PDF documentation of languages and libraries.

  • afavour a day ago

    ChatGPT English is set to be a ubiquitous, remarkably inefficient data transmission format that sits on top of email.

    I wish to communicate four points of information to you. I’ll ask ChatGPT to fluff those up into multiple paragraphs of text for me to email.

    You will receive that email, recognize its length and immediately copy and paste it into ChatGPT, asking it to summarize the points provided.

    Somewhere off in the distance a lake evaporates.

  • polpenn a day ago

    Here’s something I sometimes do to avoid boring content when using LLMs: I type out what it gives me and tweak it as I go instead of copy/pasting directly.

    It helps me spot the bits that feel flat or don’t add much, so I can cut or rework them—while still getting the benefit of the LLM’s idea generation.

  • Zebfross a day ago

    I am personally proud of my use of AI because for anything non-trivial it is generally a conversation where each recommendation needs to be altered by an imaginative suggestion. So ultimately it’s the entire conversation that needs to be considered, not just the “final” prompt.

  • bertil a day ago

    I found it ironic that the author said bullet points with the key topic in bold were a sign of LLM use, then used that format immediately.

  • jama211 a day ago

    This is a lot of words of complaining instead of just changing the assignment specification to ask students to provide a prompt that they’d use, instead of asking them to regurgitate information. You get what you ask for mate, I don’t understand what’s so hard about this.

    If your assignment can be easily performed by an LLM, it’s a bad assignment. Teachers are just now finding out the hard way that these assignments always sucked and were always about regurgitating information pointlessly and weren’t helpful tools for learning lol. I did heaps of these assignments before the existence of LLMs, and I can assure you that the busywork was mostly a waste of time back then too.

    People using LLMs is just proof they don’t respect your assignment - and you know what, if one person doesn’t respect your assignment, they’re probably wrong. But if 90% of people don’t respect your assignment? Maybe you should consider whether the assignment is the problem. It’s not rocket science.

  • psygn89 a day ago

    As someone who doesn't have to grade student assignments, I'd rather read the bullet points. I've always liked using bullet points even in grade school, and also used words like dwelve, so I'd definitely fail the AI sniff test if I ever went back to school.

  • GuB-42 a day ago

    Just paste the full text and ask a LLM to summarize it for you.

    It feels like we are getting to this weird situation where we just use LLMs as proxies, and the long, boring text is just for LLMs to talk to each other.

    For example:

    Person A to LLM A: Give me my money.

    LLM A to LLM B: Long formal letter.

    LLM B to Person B : Give me my money.

    Hopefully, nothing is lost in translation.

  • palata a day ago

    This article really resonates with me.

    The very first time I enjoyed talking to someone in another language, I was 21. An exchange student at the time, I had a pleasant and interesting discussion with someone in that foreign language. The next day, I realised that I wouldn't have been able to do that without that foreign language. I felt totally stupid: I had been getting very good grades in languages for years at school without ever caring about actually learning the language. Now it was obvious, but all that time was lost; I couldn't go back and do it better.

    A few years earlier, I had this great history teacher in high school. Instead of making us learn facts and dates by heart, she wanted us to actually get a general understanding of a historical event. To actually internalise, absorb the information in such a way that we could think and talk about it, and eventually develop our critical thinking. It was confusing at first, because when we asked "what will the exam be about", she wouldn't say "the material in those pages". She'd be like, "well, we've been talking about X for 2 months, it will be about that".

    Her exams were weird at first: she would give us articles from newspapers and essentially ask what we could say about them. Stuff like "Who said what, and why? And why does this other article disagree with the first one? And who is right?". At first I was confused, but eventually it clicked and I started getting really good at this. Many students got there as well, of course. Some students never understood and hated her: their way was to learn the material by heart and prove it to get a good grade. And I eventually realised this: those students who were not good at this were actually less interesting when they talked about history. They lacked this critical thinking; they couldn't form their own opinion or actually internalise the material. So whatever they said on the topic was uninteresting: I had been following the same course, and I knew which events happened and in which order. With the other students, where it "clicked" as well, I could have interesting discussions: "Why do you think this guy did this? Was it in good faith or not? Did he know about that when he did it?" etc.

    She was one of my best teachers. Not only did she get me interested in history (which had never been my thing), but she got me to understand how to think critically, and how important it is to internalise information in order to do that. I forgot a lot of what we studied in her class. I never lost the critical thinking. LLMs cannot replace that.

  • pmarreck a day ago

    > I’m not much of a generative-model user myself

    Perhaps that's good, perhaps that's bad, but it certainly doesn't really allow him to see much of the appeal... yet

  • agentbrown a day ago

    Some thoughts:

    1. “When copying another person’s words, one doesn’t communicate their own original thoughts, but at least they are communicating a human’s thoughts. A language model, by construction, has no original thoughts of its own; publishing its output is a pointless exercise.”

    LLMs, having been trained on the corpus of the web, communicate other humans’ thoughts particularly well, I would argue. Only in exercising an avoidance of plagiarism are the thoughts of other humans evolved into something closer to “original thought” for the would-be plagiarizer. But yes, at least a straight copy/paste retains the same rhetoric as the original human.

    2. I’ve seen a few advertisements recently leverage “the prompt” as a means of visual appeal.

    E.g. a new fast-food delivery service starting their ad with some upbeat music and a visual presentation of somebody typing into an LLM interface, “Where’s the best sushi around me?” And then cue the advertisement for the product they offer.

  • eunos a day ago

    There's also a strong inferiority complex at play. When you read the output, your motivation to at least paraphrase it instantly dives, because it looks so good and proper, whereas your original writing looks so dumb in comparison.

  • boredatoms a day ago

    For tests, just require everything to be written in person, by hand or on a mechanical typewriter.

  • sieve a day ago

    Depends on the situation.

    I like reading and writing stories. Last month, I compared the ability of various LLMs to rewrite Saki's "The Open Window" from a given prompt.[1] The prompt follows the 13-odd attempts. I am pretty sure in this case that you'd rather read the story than the prompt.

    I find the disdain that some people have for LLMs and diffusion models to be rather bizarre. They are tools that are democratizing some trades.

    Very few people (basically, those who can afford it) write to "communicate original thoughts." They write because they want to get paid. People who can afford to concentrate on the "art" of writing/painting are pretty rare. Most people are doing these things as a profession, with deadlines to meet. Unless you are GRRM, you cannot spend decades on a single book waiting for inspiration to strike. You need to work on it. Also, authors writing crap/gold at a per-page rate is hardly something new.

    LLMs are probably the most interesting thing I have encountered since the computer itself. These puritans should get off their high horse (or come down from their ivory tower) and join the plebes.

    [1] Variations on a Theme of Saki (https://gist.github.com/s-i-e-v-e/b4d696bfb08488aeb893cce3a4...)

  • QuadmasterXLII a day ago

    Crucially: if you just send me the prompt, and for some reason I would rather have read the model output, I can just paste the prompt into the model. However, there's no way to go the other way.

  • programjames a day ago

    You can train an LLM to maximize the information content bitrate. I just think most companies want to maximize "customer satisfaction" or w/e, which is why we get the verbose, bold, bullet points.

  • dakiol a day ago

    All it takes is to provide a slightly better prompt (“write the answer in a natural prose style, no bullet points, no boring style, perhaps introduce a small error”). It’s not that difficult.

  • xmorse a day ago ago

    I am thinking about creating a proof-of-writing signature. Basically an editor with an "anti-cheat": you can't paste text into it, and it signs your text with a private key so anyone can verify it against your public key.
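
    The signing half is straightforward; a minimal sketch, assuming Python's third-party cryptography package (the paste-blocking editor is the hard part, and is omitted):

      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      private_key = Ed25519PrivateKey.generate()  # stays with the author
      public_key = private_key.public_key()       # published for verification

      text = b"An essay typed keystroke by keystroke."
      signature = private_key.sign(text)

      # Raises InvalidSignature if the text was altered after signing.
      public_key.verify(signature, text)

    Of course, the signature only proves that the key holder signed the text, not that a human typed it; that trust would have to come from the editor's anti-cheat.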

  • r0b05 a day ago ago

    Ironically, the biggest benefit I get from using LLMs is precisely that they help me learn.

  • me3meme 16 hours ago ago

    If people are going to read 630 comments about this post, I don't think they are willing to read the prompt; they prefer to expand it endlessly, like an LLM.

  • halfadot a day ago ago

    > Don’t let a computer write for you! I say this not for reasons of intellectual honesty, or for the spirit of fairness. I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.

    Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.

  • sillysaurusx a day ago ago

    I think people who don’t like writing shouldn’t be forced to write, just like people who don’t like music shouldn’t be forced to play music. Ditto for math.

    Forcing people to do these things supposedly results in a better, more competitive society. But does it really? Would you rather have someone on your team who did math because it let them solve problems efficiently, or did math because it’s the trick to get the right answer?

    Writing is in a similar boat as math now. We’ll have to decide whether we want to force future generations to write against their will.

    I was forced to study history against my will. The tests were awful trivia. I hated history for nearly a decade before rediscovering that I love it.

    History doesn’t have much economic value. Math does. Writing does. But is forcing students to do these things the best way to extract that value? Or is it just a tradition we inherited and replicate because our parents did?

  • neilwilson a day ago ago

    “I would have written a shorter letter but I didn’t have the time”

    Pithy and succinct takes time.

  • quest88 a day ago ago

    Hah, I've been including the prompts of my patches in pull requests. Glad others like that.

  • ijidak a day ago ago

    The level of cheating in college, pre-AI, is often overlooked in these articles.

    Pre-AI, homework was often copied and then individuals just crammed for the tests.

    AI is not the problem for these students, it's that many students are only in it for the diploma.

    If it wasn't AI it would just be copying the assignment from a classmate or previous grad.

    And I imagine the students who really want to learn are still learning because they didn't cheat then, and they aren't letting AI do the thinking for them now.

  • barbazoo a day ago ago

    Same with all these AI businesses wrapping a business around a prompt. Just tell me the prompt.

  • unreal37 a day ago ago

    Looks like a "GPT text output condenser" might be a good project to work on.
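
    A minimal version might be little more than a wrapper prompt; a sketch assuming the OpenAI Python client (the model name is illustrative):

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def condense(text: str) -> str:
          # Ask the model to strip filler and keep only the claims.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative choice
              messages=[
                  {"role": "system",
                   "content": "Rewrite the user's text as the shortest plain "
                              "prose that preserves every factual claim. "
                              "No bullet points, no preamble."},
                  {"role": "user", "content": text},
              ],
          )
          return response.choices[0].message.content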

  • cryptozeus a day ago ago

    Yes, writing in many of its forms is thinking; we are losing the ability to think.

  • firefoxd a day ago ago

    Personally, I've used an LLM to help me better structure my blog posts after I write them. Meaning I've already written the post, and then the model enhances it. Most of the time, I'm happy with the results at the time of editing. But when I come back a week or two later to re-read it, it looks just like the example the author shared.

    The goal is to make something legible, but the reality is we are producing slop. I'm back to writing before my brain becomes lazy.

  • j2d3 a day ago ago

    All this was an excuse to use the word brobdingnagian.

  • palata a day ago ago

    > A typical belief among students is that classes are a series of hurdles to be overcome; at the end of this obstacle course, they shall receive a degree

    Yes, totally. Unfortunately, it takes time and maturity to understand how this is completely wrong, but I feel like most students go through that belief.

    Not sure how relevant it is, but it makes me think of two movies with Robin Williams: Dead Poets Society and Good Will Hunting. In the former, Robin's character manages to get students interested in the material instead of "just passing the exams". In the latter, I will just quote this part:

    > Personally, I don’t give a shit about all that, because you know what? I can’t learn anything from you I can’t read in some fuckin’ book. Unless you wanna talk about you, who you are. And I’m fascinated. I’m in.

    I don't give a shit about whether a student can learn the book by heart or not. I want the student to be able to think on their own; I want to be able to have an interesting discussion with them. I want them to think critically. LLMs fundamentally cannot solve that.

  • kookamamie a day ago ago

    We offloaded our memory to Google and then our writing to LLMs.

    There's too much information in the world for any one piece of it to matter - that, I think, is the underlying reason.

    As an example, most enterprise communication nears the level of pure noise in its content.

    So, why not let a machine generate this noise, instead?

  • unraveller 15 hours ago ago

    You mean to tell me even anti-AI people are glazing my unseen prompts now! The solve for slop is easy for teachers and communicators alike: stop asking sorry questions and you stop getting sorry responses. Or stay the easy course, and you will cede the game to discreet cheaters just to make honest people jump through antiquated hoops.

  • sussmannbaka a day ago ago

    People defending this are wrong in an additional, more pathetic way: Even if you insist on “cheating” and using an LLM to communicate, you are using it badly. You manage to be obviously incompetent at using the tool you are evangelizing.

  • ArthurStacks a day ago ago

    These people are about to become extinct.

  • cadamsdotcom a day ago ago

    LLMs and AI use create new dichotomies we don’t have language for.

    Exploring a concept-space with LLM as tutor is a brilliant way to educate yourself. Whereas pasting the output verbatim, passing it as one’s own work, is tragic: skipping the only part that matters.

    Vibe coding is fun right up to the point it isn’t. (Better models get you further.) But there’s still no substitute for guiding an LLM as it codes for you, incrementally working and layering code, committing to version control along the way, then putting the result through both AI and human peer code reviews.

    Yet these all qualify as “using AI”.

    We cannot get new language for discussing emerging distinctions soon enough. Without them we only have platitudes like “AI is a powerful tool with both appropriate and inappropriate uses and determining which is which depends on context”.

  • shaimagz 18 hours ago ago

    If you’ve ever written a good prompt, you’ve probably thought: “Someone else could benefit from this.”

    We agree. Mixus makes that easy across teams, classes, and communities.

  • Retr0id a day ago ago

    Write a witty comment in the style of a Hacker News user who just read an article titled "I'd rather read the prompt"

  • harha_ 19 hours ago ago

    I'm so tired of generative AI. I can't take anyone who uses them seriously anymore.

  • satisfice a day ago ago

    I blogged about this just yesterday. The problem of disguised authorship ruins your reputation as a thinker, worker, and writer.

    https://www.satisfice.com/blog/archives/487881

  • Pxtl 16 hours ago ago

    I'm not hardcore anti-AIgen, but it feels like most of the usage of text-AIgen is for creating pointless filler.

  • samyar 16 hours ago ago

    I agree with this. I thought LLMs were great for coding, but they suck: they produce more work in debugging.

    I think AI is good in two ways. One is when you use it as a small helper (basic questions, autocompletion...).

    Two is for getting started on something that you have no idea about (not to teach you, but to give you an idea of what it is and resources to learn more).

  • TZubiri a day ago ago

    Prompts are source

  • CivBase 17 hours ago ago

    > A typical belief among students is that classes are a series of hurdles to be overcome; at the end of this obstacle course, they shall receive a degree as testament to their completion of these assignments.

    IMO the core problem is that in many cases this typical belief holds true.

    I went to university to get a degree for a particular field of jobs. I'd generously estimate that about half of my classes actually applied to that field. The other half were required to make me a more "well-rounded student" or something like that. But of course they were just fluff to maximize my tuition fees.

    There was no university that offered a more affordable program without the fluff. After all, the fluff is a core part of the business model. But there isn't much economic opportunity without a diploma so students optimize around the fluff.

  • quijoteuniv a day ago ago

    Even more interesting is why the students think that is the reply the teacher is expecting.

  • SebFender 18 hours ago ago

    Maybe this is ridiculous on my part, but I think if they at least had to hand-write these, it could transfer a bit more knowledge back to the brain...

    Simply blaming models is an easy way out and creates little value. Maybe changing the medium and the exercise it transfers to could be a thing?

    It's time to get creative.

  • 6510 a day ago ago

    Teachers lose much if not all of their time teaching, while people applying what they've learned spend all of their time applying and advancing the practical side of it. The latter don't even know how to use LLMs.

  • xkcd1963 a day ago ago

    As long as there are trick questions, there is legitimacy in using LLMs.

  • tkgally a day ago ago

    I teach a university class in which I ask the students to submit writing each week, and I have also seen obviously LLM-produced writing. Yes, it’s boring and doesn’t show the students’ thinking, and the students are not getting any wiser by doing assignments that way. Just last week, I told my students that, while they can use LLMs any way they like for the class, their writing will be more interesting if they write it themselves and use LLMs only sparingly, such as for fixing grammatical mistakes (most of the students are not native speakers of English). It helps, I think, that in this class the students’ writing is shared among the students, and during class I often refer to interesting comments from student writing. The students themselves, I hope, will come to understand the value of reading human-written writing.

    That said, I myself am increasingly reading long texts written by LLMs and learning from them. I have been comparing the output of the Deep Research products from various companies, often prompting for topics that I want to understand more deeply for projects I am working on. I have found those reports very helpful for deepening my knowledge and understanding and for enabling me to make better decisions about how to move forward with my projects.

    I tested Gemini and ChatGPT on “utilizing Euler angles for rotation representation,” the example topic used by the author in the linked article. I first ran the following metaprompt through Claude:

      Please prepare a prompt that I can give to a reasoning LLM that has web search and “deep research” capability. The prompt should be to ask for a report of the type mentioned by the sample “student paper” given at the beginning of the following blog post: https://claytonwramsey.com/blog/prompt/ Your prompt should ask for a tightly written and incisive report with complete and accurate references. When preparing the prompt, also refer to the following discussion about the above blog post on Hacker News: https://news.ycombinator.com/item?id=43888803
    
    I put the full prompt written by Claude at the end of the Gemini report, which has some LaTeX display issues that I couldn’t get it to fix:

    https://docs.google.com/document/d/1sqpeLY4TWD8L4jDSloeH45AI...

    Here is the ChatGPT report:

    https://chatgpt.com/share/681816ff-2048-8011-8e0f-d8cbad2520...

    I know nothing about this topic, so I cannot evaluate the accuracy or appropriateness of the above reports. But when I have had these two Deep Research models produce similar reports on topics I understand better, they have indeed deepened my understanding and, I hope, made me a bit wiser.

    The challenge for higher education is trying to decide when to stick to the traditional methods of teaching—in this case, having the students learn through the process of writing on their own—and when to use these powerful new AI tools to promote learning in other ways.

  • casey2 a day ago ago

    Teachers say they would rather read the prompt, but the truth is plain that they wouldn't.

    It's the old joke of the teacher who wants students to try their best and says that failure doesn't matter. But when a student follows the process to the best of their ability and fails, they are punished, while the student who mostly follows the process and then fudges their answer to the correct one is rewarded.

  • zombiwoof a day ago ago

    My 10-year-old, who’s amazing at drawing, said AI is bad: it won’t allow creativity or imagination, it will just produce copy-paste artists.

  • RicoElectrico a day ago ago

    > Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless.

    There's so much bad writing of valuable information out there. The major sins: burying the lede, no or poor sectioning, and general verbosity.

    In some cases, like EULAs and patents, that's intentional.

  • revskill a day ago ago

    It is designed that way intentionally, because providers charge money per token.

  • tomjen3 a day ago ago

    I wish the author had stated outright that they were not using LLMs much, since in that case their opinion on them and their output has no value (it's a new technology, and different enough that you do have to spend some time with it in order to find out what value it has for your particular work[0]).

    This is especially the case when you are about to complain about style, since that can easily be adjusted by simply telling the model what you want.

    But I think there is a final point that the author is also wrong about, and it is far more interesting: why we write. Personally, I write for three reasons: to remember, to share, and to structure my thoughts.

    If an LLM is better than me at writing (and it is), then there is no reason for me to write to communicate - it is not only slower, it is counterproductive.

    If the AI is better at wrangling my ideas into some coherent thread, then there is no reason for me to do it. This one I am least convinced about.

    AI is already much better than me at strictly remembering, but computers have been since forever; the issue is mostly convenient input/output. AI makes this easier thanks to speech-to-text input.

    [0]: See eg. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the....

  • scarface_74 a day ago ago

    I don’t know anything about the subject area, so I don’t know if this captures enough to get a good grade. But I’m curious whether anyone could tell that the last answer was AI generated if I copied and pasted it. These are the iterations I go through when writing long requirement documents/assessments/statements of work (consulting).

    Yes I know the subject area for which I write assessments and know if what is generated is factually correct. If I’m not sure, I ask for web references using the web search tool.

    https://chatgpt.com/share/6817c46d-0728-8010-a83d-609fe547c1...

  • time4tea 19 hours ago ago

    Love that final thought.

    (AI slop). If it's not worth writing, it's not worth reading.

    Perfect.

  • hoppp a day ago ago

    If LLMs existed back in the 90s and 00s I would have generated all my homework too.

    The kids these days got everything...

  • perching_aix a day ago ago

    > I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think

    Back in HS literature class, I had to produce countless essays on a number of authors and their works. It never once occurred to me that it was anything BUT an exercise in producing a reasonably well written piece of text, recounting rote-memorized talking points.

    Through-and-through, it was an exercise in memorization. You had to recall the fanciful phrases, the countless asinine professional interpretations, brief bios of the people involved, a bit of the historical and cultural context, and even insert a few verses and quotes here and there. You had to make the word count, and structure your writing properly. There was never any platform for sharing our own thoughts per se, which was sometimes acknowledged explicitly, and this was most likely because the writing was on the wall: nobody cared about these authors or their works, much less enjoyed or took interest in anything about them.

    I cannot recount a single thought I memorized for these assignments back then. I usually passed them with flying colors, but even for me, this was just pure and utter misery. Even in hindsight, the sheer notion that this was supposed to make me think about the subject matter at hand borders on laughable. It took astronomical effort just to retain all the information required - where would I have found the power in me to go above and beyond, and meaningfully evaluate what was being "taught" to me on top of all this? How would it have mattered (specifically in the context of the class)? Me actually understanding these topics and pondering them deeply is completely unobservable through essay writing, which was the sole method of grading. If anything, it biased me against doing so, as it takes potentially infinite extra time and effort. And since there was approximately no way for our teacher to make me interested in literature either, he had no chance of achieving such lofty goals with me, if he ever actually aimed for them.

    On the other side of the desk, he also had literal checklists. Pretty sure that you do too. Is that any environment for an honest exchange of thoughts? Really?

    If you want to read people's original thoughts, maybe you should begin with not trying to coerce them into producing some for you on demand. But that runs contrary to the overarching goal here, so really, maybe it's the type of assignment that needs changing. Or the framework around it. But then academia is set in its ways, so really, there's likely nothing you can specifically do. You don't deserve to have to sift through copious amounts of LLM generated submissions; but the task of essay writing does, and you're now the one forced to carry this novel burden.

    LLMs caught incumbent pedagogical practices with their pants down, and it's horrifying to see people still being in denial of it, desperately trying to reason and bargain their ways out of it, spurred on by the institutionally ingrained mutual-hostage scenario that is academia. *

    * Naturally, I have absolutely zero formal relation to the field of pedagogy (just like the everyday practice of it in academia to my knowledge). This of course doesn't stop me from having an unreasonably self-confident idea on how to achieve what you think essay writing is supposed to achieve though, so if you want a terrible idea or two, do let me know.

  • alganet a day ago ago

    The suggestion that an artificial intelligence follows a specific kind of writing style is a trap.

    Relying on that to automatically detect their use makes no sense.

    From a teaching perspective, if there is any expectation that artificial intelligence is going to stick, we need better teachers. Ones that can come up with exercises that an artificial intelligence can't solve, but are easy for humans.

    But I don't expect that to happen. I expect instead text to become more irrelevant. It already has lost a lot of its relevancy.

    Can handwriting save us? Partially. It won't prevent anyone from copying artificial intelligence output, but it will make anyone who does so think about what is being written. Maybe think: "do I need to be so verbose?"

  • cortesoft a day ago ago

    > I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.

    Really? The example used was for a school test. Is there really much original thought in the answer? Do you really want to read the student’s original thoughts?

    I think the answer is no in this case. The point of the test is to assess whether the student has learned the topic or not. It isn’t meant to share actual creative thoughts.

    Of course, using AI to write the answer is contrary to the actual purpose, too, but it isn’t because you want to hear the student’s creativity; it’s because it fails to serve its purpose as a demonstration of knowledge.

  • qustrolabe a day ago ago

    It actually doesn't matter. I hated the hassle of writing various texts while studying so much. Does it really matter whether a student generates this text or just googles and copy-pastes some paragraphs from somewhere? And don't even hope for them to genuinely write all that stuff themselves, because it's a huge waste of time even for those who actually care about and are interested in the subject.