It feels funny to think about this next to the outrage over trans kids in school sports. There are probably a dozen kids nationally participating in a sport with other kids who didn't share the same set of chromosomes at birth. That's a tiny slice of the population, but the issue has captured the attention of a huge group of people. I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?
But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade. I would wager that the number of students and teachers using AI is now the majority population.
I face this dilemma on a daily basis when trying to do my job as a software developer. Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world? Or, take a chance on being the turtle and trying to win the race against the hare?
The more time I spend writing code with help from LLMs the less I fear for my job, because I gain an increased understanding of how much depth there is to building software.
To get good results out of an LLM you need to determine exactly what the system needs to do and how it should work.
That's programming! We just don't have to type all the semicolons ourselves any more.
I always feel pretty special when you respond to a comment of mine. Thank you.
I agree with you. And, I'm not sure LLMs help me learn high level concepts (yet). They certainly have those concepts inside their training data and you can extract the concepts if you do the work. But, in a lot of domains, and this applies to someone old like me and someone young like my kids, knowing what to ask is the central problem.
This applies to what I see my kids doing with AI: I don't think LLMs, right now, encourage them to learn concepts as much as they quickly give them answers.
I don't see ChatGPT Study Mode as fulfilling this, in my limited usage, but I would love to be wrong about that. It's a good direction indeed.
Probably this is the new frontier, where the best students are the ones that figure out how to use these tools to learn "deeply" rather than just jumping to the answers. Maybe that is how it has always been?
> I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?
Why is "unlevel" in quotes? When it comes to physical activities, biological males have a huge advantage over biological females; high school boys routinely beat professional adult women's sport teams.
> But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade.
I agree that this is a bigger problem than trans kids in sports. I think people are less upset about this because
1. It's a more recent development
2. They think that the kids using AI are actually putting themselves at a disadvantage, albeit one that will only become apparent after they graduate.
> Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world?
the good times are over, it happens. i remember watching Dall-E come out and feeling sorry for graphic designers, gloating in the knowledge programming was too complex to automate. then they automated it.
a human is still required in the loop for vibe coding, as its fairly fuckin useless without guidance, but i can see that changing too
I’m drafting policy at work with teammates about how we will handle pull requests with aggressive use of Claude Code. We are currently researching and piloting it.
I am going to propose that no one should feel pressure to use any of the generative coding tools if they don't want to.
> * A teacher sponsoring a club put student artwork through Microsoft Copilot to 'clean it up' because he thought it looked too unfinished and the kid felt incredibly disrespected and upset.
and rightly so! kids deserve better, that is awful
1. One cannot not communicate
2. Every communication has a content and relationship aspect such that the latter classifies the former and is therefore a metacommunication
3. The nature of a relationship is dependent on the punctuation of the partners' communication procedures
4. Human communication involves both digital and analog modalities
5. Inter-human communication procedures are either symmetric or complementary
Re: (1), the "mere" act of using AI communicates something, just like some folks might register a text message as more (or less) intimate than a phone call, email, etc. The choice of modality is always part of what's communicated, part of the act of communication, and we can't stop that. Re: (2), that communication is then classified by each person's idea of what the relationship is.
This is a dramatic and expensive way to learn they had different ideas of their relationship!
Of course, in a teacher/student situation, it's the teacher's job to make it clear to the students what the relationship is. Otherwise you risk relationship-damaging "surprises" like this.
Even ignoring the normative question of what a teacher Should™ do in that situation, it was counterproductive. Whatever benefit the teacher thought AI would provide, they'd (hopefully) agree it was outweighed by the cost to their relationship w/ students. All future interactions w/ those students will now be X% harder.
There's a kind of technical rationale which says that if (1) the GOAL is to improve the student's output and (2) I would normally do that by giving one or more rounds of feedback and waiting for the student to incorporate it then (3) I should use AI because it will help us reach that goal faster and more efficiently.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
”It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
The act of receiving and incorporating feedback is not "inefficient", especially not in a school setting. The consummation of that process is part of the goal. Maybe the most important part!
however, this same action could be useful if it was placed in a different context - for example, if the teacher uses the same AI to produce an artefact, then use it to critique the student as part of teaching (say, to show what might be lacking in a particular piece).
All those teachers should indeed be banned from using AI. But that's not because LLMs are incapable of the things they're using them for, in a way that would be an improvement over how those same teachers were doing those tasks pre-LLMs.
The majority of times I see things like this it turns out that it's either:
- The "they've built it wrong" case; this one is the most common. People using - or in this case being made to use at work - tools that behind the scenes all use very cheap models (e.g. 4o-mini) with little context, half vibe-coded up, to save costs. The company making "MagicSchool" doesn't care, they want to maximize those profit margins and they're selling to school administration, not teachers, who only look at the costs and don't ever actually use the products themselves. Just like classic enterprise software in traditional companies. They need to tick boxes, show features that only show the happy path/case. It is perfectly possible to make it high quality, in a way that adds value, doesn't make shit up, and is properly validated. But especially in this niche, sales trumps everything. The hope is that at some point, this will change. We've seen the same play out with enterprise software to an extent; new such software does tend to be more usable on average than it used to be. It has taken a long time to get there though.
- The "you're holding it wrong" meme; users themselves directly using tools like Microsoft Copilot, 4o and friends (very outdated, free tiers, miles behind Claude/Gemini 2.5 pro/o3/etc.), along with having zero idea about what LLMs can and can't do, and obviously even less of an idea about inherent biases and prompting to prevent those. This combined with a complete lack of caring, along with a lack of competency - people lacking the basic critical thinking skills necessary to spot issues - is a deadly combo.
Of the problems with tasks and outcomes named in that thread, the large majority can indeed be done already with LLMs in a manner that both saves time and provides better quality than the level of those teachers rightly being criticized there. Teachers who are not even checking the output obviously don't give a single damn anyway, and that tells you enough about what the quality of their teaching would've been like pre-LLMs.
Using LLMs to produce material is not a good idea, except maybe to polish up grammar and phrasing.
As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form throughout a course, which reinforces students' expectations and reduces their mental load. The only way to do this is to prepare the material yourself.
Material created by LLM will have the issues you mentioned, yes, but it will also be less easy to teach, for the reasons mentioned above. In the US, where teaching is already in a terrible state, I wouldn't be surprised if this is accepted quietly, but it will have a long lasting negative impact on learning outcomes.
If we project this forward, a reliance on AI tools might also create a lower expectation of the quality of the material, which will drag the rest of the material down as well. This mirrors the rise of expendable mass produced products when we moved the knowledge needed to produce goods from workers to factory machines.
Commodities are one thing, you could argue that the decrease in quality is offset by volume (I wouldn't, but you could), but for teaching? Not a good idea. At most, let the students know how to use LLMs to look for information, and warn them of hallucinations and not being able to find the sources.
I agree you shouldn't use LLMs to produce material wholesale, but I think it can be positively useful when used thoughtfully.
I recently taught a high school equivalent philosophy class, and wanted to design an exercise for my students to allocate a limited number of organs to recipients that were not directly comparable. I asked an LLM to generate recipient profiles for the students to choose between. First pass, the recipients all needed different organs, which kind of ruined the point of the dilemma! I told it so, and second pass was great.
Even with the extra handholding, the LLM made good materials faster than if I would have designed them manually. But if I had trusted it blindly, the materials would have been useless.
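For what it's worth, if you script that loop instead of working in a chat UI, it looks roughly like the sketch below. This is only an illustration of the two-pass workflow, assuming the OpenAI Python SDK; the model name, prompts, and the "same organ" constraint check are placeholders, not the exact materials I used.

```python
# Minimal sketch of the two-pass workflow described above (assumptions noted in the lead-in).
from openai import OpenAI

client = OpenAI()

def generate_profiles(extra_instructions: str = "") -> str:
    # Ask the model for recipient profiles, with the teaching constraint spelled out.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write teaching materials for a high-school ethics class."},
            {"role": "user",
             "content": ("Write five short recipient profiles for an organ-allocation dilemma. "
                         "All five must need the SAME organ, so students are forced to compare them. "
                         + extra_instructions)},
        ],
    )
    return resp.choices[0].message.content

first_pass = generate_profiles()
# The teacher reviews the draft; if the constraint is violated, feed the correction
# back in, exactly like the "I told it so" step above.
second_pass = generate_profiles(
    "In the previous draft the recipients needed different organs, which removed the dilemma. Fix that."
)
print(second_pass)
```

The point of the loop is that the human review step stays in the middle; the model never ships anything to students unreviewed.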
How can you ensure that the exercise actually teaches the students anything in this case? Shouldn't you be building the exercise around the kinds of issues that are likely to come up, or that are difficult/interesting?
If you're teaching ethics in high school (which it sounds like you are), how many minutes does it take to write three or four paragraphs, one per case, highlighting different aspects that the student would need to take into account when making ethical decisions? I would estimate five to ten. A random assortment of cases from an LLM is unlikely to support the ethical themes you've talked about in the rest of the class, and the students are therefore also unlikely to be able to apply anything they've learned in class before then.
This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
> How can you ensure that the exercise actually teaches the students anything in this case?
By participating in the exercise during class. Introducing the cases, facilitating group discussions, and providing academic input when bringing the class back together for a review. I'm not just saying "hey take a look at this or whatever".
> If you're teaching ethics in high school (which it sounds like you are)
Briefly and temporarily. I have no formal pedagogic background. Input appreciated.
> This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
I may not have elaborated well enough on the context. I'm not creating slop in order to avoid doing work. I'm using the tools available to do more work faster - and sometimes coming across examples or cases that I realized I wouldn't have thought of myself. And, crucially, strictly supervising any and all work that the LLM produces.
If I had infinite time, then I'd happily spend it on meticulously handcrafting materials. But as this thread makes clear, that's a rare luxury in education.
I've done years of private 1:1 teaching and some class teaching though not class lecturing, which is presumably the material you're talking about.
> As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form thoughout a course, which will reinforce the expectations of the students, making their mental load lesser. The only way to do this is to prepare the material yourself.
It's absolutely necessary to have a good fundamental understanding of the material yourself. These teachers abusing AI and not even catching these obvious issues clearly don't have such an understanding - or they're not using any of it, which is effectively the same. In fact, they're likely to have a much worse understanding than your average frontier LLM, especially given this post is about high school level teaching.
> The only way to do this is to prepare the material yourself.
As brought up in other comments, what is yourself? For decades teachers have been using premade lesson plans, either third-party, school supplied or otherwise obtained, with minor edits. All teachers? Of course not, but it's completely normalized. Are they doing it themselves? If not, then the remainder did it together with Google and Wikipedia. Were they also not doing it themselves? Especially given how awful modern Google is (and the worldwide number of high school teachers using something like Kagi will be <100 people), simply using a frontier model, especially with web search enabled, is simply a better version of doing that, if used in the same way.
If you use a prepared lesson plan it at least has some structure to it that students can learn to expect, and if you search for information from the internet, you are still compiling it yourself, which again means structure, a structure _you_ made using information that _you_ have parsed and decided to include. You will also have sources.
In the future, only prestigious private schools will employ human teachers.
Education in public schools is going to be 100% LLMs with text-to-speech, the only human adult in classrooms will be a security guard, but later they will also be replaced with AI-controlled autocannons that shoot non-lethal projectiles to discipline misbehaving kids.
Drone-based school security with flash bangs, pepper spray and physical dive-bombing of the attacker is already the plan: https://www.campusguardianangel.com/
Just add a student compliance add-on subscription.
Yeah, you can use 2 drones - first one removes the cover/doors, second one enters the previously enclosed area. Just make sure the second drone does not cut the fiber optics cable trailed by the first drone by accident.
And you probably don't need fiber optic, because you're operating in "owned space" - the drones sit on charging platforms until needed. You can have, say, additional access points embedded in the walls and ceilings (for a price, but it's children's safety, so who are we to say base station rental is worth more than little Timmy not getting a 5.56 in the back in a signal blackspot!?)
* Class monitoring when no teacher present - optional collusion analysis
* Conduct enforcement in corridors - optional RFID speed ticketing to prevent running in the hallways!
* Playground overwatch - optional score keeping for licensed games such as Hopscotch [TM].
* Perimeter monitoring for truants, contraband trading and drug dealing
* Toilet break escorting (optionally at a discreet distance)
* Per-student tracking and ensemble fraternisation analysis, optional social media and online profile correlation, and real-time alerting of parental accounts on contact with other students in parent- or community-provided watchlists or handy pre-set demographic groups.
* Student mood, wellness and attitude monitoring based on body language and speech patterns. Referral to preferred behavioral therapeutic partner providers at a discount!
With facial recognition you can even send warnings and punishments directly to the student and parental phones via the CGA App and apply demerits to their account automatically. Link a lunch payment account for automatic profanity penalties!
Nah by that point people won't have a reason to drop their children off at a glorified daycare designed to condition them to work quietly for a set amount of hours because we won't need to work anymore.
Why do you think it’s different in other countries? It’s the same all over Europe too. More and more kids have ADHD or other mental issues, social networking affects social norms etc.
I don't know about other countries, my experience has only been in US high schools, mostly public. Maybe it would work in other countries, or private schools
I assure you, it’s similar in many EU countries. Teachers are usually paid very little, governments are not doing much to keep them, due to really bad demographics forecasts in most countries.
Exactly which direction are we headed at the moment that isn't a dystopian nightmare? Under-resourced towns will likely happily shed humans, except for the headmaster and security officers. It'll take a generation, though.
Nah, kids would have to wear armbands with tasers on them, required to put them on to enter the school building or open doors in the building. Their only human adult interaction will be with the guards that ensure they are banded up and who stay on campus to react to alarms from every kid who tries to remove their armbands.
Buses will be driven by AI as well, so they'll only see their parents for 10 minutes in the morning and for an hour or so during the occasional dinners they eat together, and otherwise kids will be entirely alienated and left alone.
But do not worry! There will be an AI companion for them to talk to at all hours, a nanny AI, or a nAInny, one that starts as a nanny when they are infants and will gradually grow into an educator, confidante, and true friend that will never betray them.
That nAInny will network with other nAInnies to find mates for their charges, coordinate dates, ensure that no hanky-panky goes on until they graduate college and get married, and will be there together to give pointers and instructions during the act of coitus to enhance the likelihood of producing offspring that their fellow nAInnys will get to take care of.
A truly symbiotic relationship where the humans involved never have any agency but never need it as their lives are happy and perfect in every way.
If you don't want to participate in that, you will be removed as an obstacle to the true happiness of the human race.
I thought it was the other way around, the guard is there to shoot the dog when it's attacking the children (with apologies to the very old joke about catching bears in trees).
Classrooms? Lol, it's going to be byod and wfh. You're only going to have to sit in front of the security guard and all of the electronic monitoring while doing standardized tests, and this remaining expense will aggravate the state so much that it will replace the exam rooms with omniscient, thinking rootkits on every school-aged person's computing devices. However, since children could use some adult's computing device to avoid monitoring, once well established those rootkits will be installed on everyone's computing device, at the hardware level.
If you object, it's because you hate children.
Eventually, there are no more misbehaving kids; there are misbehaving parents whom children report to the trusted phones that taught them about the world, the phones that aligned your children's values with the values of the people who paid the people who designed the system.
So my daughter got sent home with some math questions. Thought they looked a bit dry but thought nothing further of it. I checked the answers for her which were all ok.
A couple of days later she comes home and tells me I was wrong about some of them, which I know I was not. Apparently they self-marked them as the teacher read the answers out. I decided to phone in and ask about the marking scheme, and was told I was wrong there too and that basically I should have done better at GCSE mathematics.
I relayed my mathematical credentials and immediately the tone changed. The discussion indicated that they’d generated the questions with CoPilot and then fed that back into CoPilot and generated the answer sheet which had two incorrect answers on it.
The teacher and department head in question defended their position until I threatened to feed them to the examination board and leadership team. The following of the tech was almost zealot level religious thinking which is not something I want to see in education.
I check all homework now carefully and there have been other issues since.
That is crazy. Curious - are you planning on raising to the board, administrators, etc? It's probably impacting other students (who don't have a parent checking their work), and teachers of other subjects in the school may be doing the same thing
It seems a little disingenuous to equate the importance of bagging groceries and supporting your child's education when judging how much time and attention each deserves.
I think the meaning was more "and now there is yet another thing the education system was better suited to do the parent now needs to do instead" and less "your child's education is worth grocery bags".
Here's a review of AlphaSchool and its methods. Honestly, the review is a good one and very well written. It's worth your time if you have any interest in alternative education and the use of AI in the classroom.
TLDR: The magic is not AI, it's that they bribe kids for good grades. Oops, sorry, 'motivate' kids for good grades.
Teachers using AI to generate all of their lesson material, read student papers and write comments.
Students using AI to generate their papers and solve complex problems.
What are we as humans even doing. Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything. We can fire both teachers and students at the same time and save money on this whole education thing.
> Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything.
Western countries have better conditions than much of the world for a variety of reasons, but among them is education and culture.
Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
You might argue that the AI can be a mentor or can guide society appropriately. That's not wholly untrue, but if AI is "a bicycle for the mind", you still have to have the willingness and vision to go someplace with it. If you've never thought for yourself, never learned anything independently, I just don't see how people will avoid using AI to be "stupid faster".
> You might argue that the AI can be a mentor or can guide society appropriately
it's a next-word predictor trained on datasets.
> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
they said the same about tv, youtube and even printed books. short length videos now apparently are the new evil (somehow).
quick question, why was nobody complaining about these exact same "engagement" algorithms 20 years ago? Why only when tiktok short form videos appear? Popularity based ranking was in search engines decades ago but nobody cared then. No cocomelon back then, coincidence?
> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
absolutely
up until 2022 I was optimistic for the future
our current big problems: climate change, nuclear proliferation, global pandemics, dictatorships, antibiotic resistance, all seemed solvable over the long term
"AI" however is different
previously all human societies placed a high value on education
this is now gone, if anything spending time educating yourself is now a negative
I don't see how the species survives this new reality over the long term
IIRC, it may be better to have the same number of real humans focussing on fewer pupils. Even when they're using VLMs as assistants.
Students:
While humans max out at a higher skill level than VLMs, I suspect that most (not all!) people who would otherwise have finished education at their local mandatory school leaving age, may be better off finishing school as soon as they can use a VLM.
But also #1: There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.
But also #2: Telling who is and isn't getting anything out of the education system is really hard; not only in historical systems like the UK's old eleven-plus exams, but today after university graduation when it can sometimes take a bit of effort to detect that someone only got a degree for the prestige and didn't really learn anything.
>There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.
This is the current meta. Today's knowledge workers are propertymaxxing like crazy, and sending their kids to trade school. Well, at least those who see the writing on the wall. The second half of the 21st century will see the rise of the PLIWs [1]. Knowledge work will become extinct. The social order will be:
1. elites: a small aristocracy, who control the access to AI
2. middle class: PLIWs
3. low class: children of today's knowledge workers who couldn't amass sufficient wealth for their kids to become PLIWs. Also called slop-feeders, as their job is to carry out the instructions coming from the AIs without questioning or understanding what they're doing.
There have always been a disgusting number of people who treat education as a means to an end.
A teaching culture of thinking that all you have to do is graduate students + a learning culture of thinking all you have to do is graduate.
This already was at an 8. Got dialled up to 11 during covid. And somehow dialled up to 21 after ChatGPT.
Normally, broken things can hobble along for a very long time. But the strain is so intense on what has become of education that my current guess is that the chickens will come home to roost on this one sometime in 2026 to 2027.
Somehow though, this might actually be the best time for learners to sit down and engage with topics and not be distracted by formal stuff (degrees, grades, points), because the latter is becoming more meaningless with each token sent down the drain.
I think the whole "school" thing is just a giant filtering mechanism to sort out who gets placed where in society. The idea of learning things is just pretense for the majority of students. It's a giant, intricate sorting hat. Both teachers and students using AI to get out of doing the work just makes it obvious. The thing is, it means we're going to need a new filtering mechanism, because AI is making this one obsolete.
> I think the whole "school" thing is just a giant filtering mechanism to sort out who gets placed where in society.
I think you haven't picked up enough history books if that's the only positive thing you can come up about "schools". But I guess that's what we get after decades of "the economy is the only thing that matters" propaganda, what's the point of history, math, science, when the system just need good little consumerist wage slaves
Maybe get your head out of your history books and take a look around you. The whole thing is a cross between babysitting at the younger ages and allocating who gets what jobs at the older ages. Any purpose the system served previously has been supplanted by this.
A few years ago employers wanted people to make NFTs. Luckily for the most part educators didn't start exclusively teaching kids about how to make NFTs.
Perhaps chasing what employers want at any given moment is not a good basis for an education system.
This is happening across all industries, unfortunately. Medical, engineering, pharmaceuticals, law enforcement, military, transportation, law... Thanks for a perfect post that describes the problem! We need more of these. Most people know they're doing it too, they just need to be told more.
The good news is that teaching via PowerPoint slides is probably one of the worst possible ways, and so just making it repetitive won't disturb the students' naps too much.
If the teacher had asked AI what are more effective ways to ensure the students are learning the material, I really doubt a PowerPoint presentation would have been the result
Often the LLM people read like they're five years old, discovering for the very first time what happens when you start to act out against society and root your moral calculus in deep cynicism
Considering that an auto-generated storybook (https://gemini.google.com/share/8d296b91b77b) taught me why 0.99999... = 1 more clearly and memorably than my "good school district" education, I'm optimistic what AI could do for education.
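(Not necessarily how the storybook framed it, but for anyone curious, the standard short argument is:)

```latex
\begin{aligned}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \;\Rightarrow\; x = 1
\end{aligned}
```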
In some cases teachers are being overworked and expected to deliver far beyond their capability. The first issue I see here is that PowerPoint / document generation AI tools often use older / cheaper / worse models, e.g. 4o-mini, instead of Claude Opus or Gemini 2.5 Pro. The second issue is that it is often hard for the original prompter to see issues in AI output, so another pair of eyes or a different LLM prompt with more context can often pick up most issues. I don't think AI use by teachers is going anywhere; we should work with the flow on this one and help teachers do their jobs more easily.
Hot take: ChatGPT’s performance is close enough to a teacher’s which is why this is a problem at all.
Can someone answer what would realistically change if teachers did use ChatGPT in this way but the students never found out? Things would be more or less the same.
Richard Feynman summed up public education well even if schools have ostensibly changed since: "Everything was written by somebody who didn’t know what the hell he was talking about, so it was a little bit wrong, always!".
I absolutely cannot read a Feynman quote without literally hearing it in his voice.
Things being a little bit wrong is not a huge problem. Much worse is if LLMs remove all the rigor and grit from education, the hard work to learn how to recognize facts.
[the slideshow] was missing important tested material, repetitive, and just totally airy and meaningless. Just slide after slide of the same handful of sentences rephrased with random loosely related stock photos.
Who cares if he saved himself some time when he completely wasted everyone else's time?
If I had a choice about whether to give the presentation, I would choose not to. If you had a choice about whether to attend it, you would also choose not to. But, alas, both of us are there -- such is the way of the large bureaucracy.
I think the real problems are that knowing when to use something appropriately and holding yourself honestly to that are pretty difficult for most people.
No, if you can type in a prompt, just email me the prompt so I know what you intended. I don't need the slop the AI came up with, thank you.
I already feel disrespected in powerpoint presentations where they clearly haven't practiced it for a long time and seem to be discovering the slides and coming up with the argument they want to make on the spot. I usually get up and leave.
People need to realize that the next generation of kids is already unable to differentiate human vs llm generated text, and not only that, but they don't even mind it. They are already using LLMs to generate all their text and so they don't mind reading LLM generated text either.
They won't be reading the text, they will be getting their LLMs to summarize the LLM generated text and read it to them. We are heading for a state where all written communication will be mediated by LLMs - get my people to talk to your people but for everyone.
LLMs will mediate plenty of routine text, but the choke-point shifts from “writing” to “prompting + validating”.
In client projects we see two hard costs pop up:
1. Human review time ⟶ still 2–4 min per 1k tokens because hallucination isn't solved.
2. Inference $: for a 70B model at 16k context you pay ~$0.12 per 1k tokens, cheap for generation but expensive for bulk reading.
So yes, AI will read for us, but whoever owns the *attention budget + validation loop* still controls comprehension. That’s where new leverage lives.
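To make those figures concrete, here's a back-of-the-envelope sketch using the per-1k-token numbers above; the corpus size is just an example I picked, not client data.

```python
# Rough check of the figures quoted above: inference is cheap per token,
# but the human validation loop dominates the total cost.
TOKENS = 500_000              # example: a pile of reports to "read"
USD_PER_1K = 0.12             # quoted inference price per 1k tokens
REVIEW_MIN_PER_1K = 3         # midpoint of the quoted 2-4 min per 1k tokens

inference_cost = TOKENS / 1_000 * USD_PER_1K               # -> $60
review_hours = TOKENS / 1_000 * REVIEW_MIN_PER_1K / 60     # -> 25 hours

print(f"inference: ${inference_cost:.0f}, human review: {review_hours:.0f} h")
```

Sixty dollars of inference versus roughly 25 hours of human review is why the validation loop, not the generation, is the choke-point.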
Agreed, this is just a quote on his link blog. Better to post Reddit thread directly. I wouldn’t expect a plain Reddit link to get to the HN front page though.
Trains allowed us to go further than we could ever walk. Cars caused us to lose the ability to walk completely.
Looms allowed us to produce fabrics of higher quality than we ever could by hand. Fast fashion caused us to lose the ability to care for and mend clothes completely.
Computers allowed us to calculate and "think" faster than we ever could before. AI caused us to lose the ability to think completely...
I suggest please link directly to the reddit thread, it has the original text (not a snippet) and lots of additional insights and anecdotes in the comments.
On HN, reddit counts as a negative signal. You link to reddit directly and most HNers will instantly downvote.
I keep telling people here that reddit is actually an underappreciated goldmine, but I guess feeling better than others feels too good to pass on.
In my mind, reddit is like HN, except instead of being just tech and business oriented people, it's every subject under the sun. Most of it is garbage (like on HN) but if you're willing to search, it's a goldmine.
Every time someone criticizes reddit I automatically assume that they have brain damage.
When I go to reddit, I see posts about astrophotography, vintage computers, ham radio, classic cars, typewriters, film photography, sculpture, gardening, woodworking, firefighting, archery, fencing, outer space, watches, cutaway drawings, the Sega Saturn, and more topics I'm interested in.
When they go to reddit they see stuff they don't like, and people arguing about it.
I just want to shout "yes, you do, because your brain is damaged and you asked it to show those things to you".
It's the same with all social media. When I go to instagram I see people I know personally and have been in the same room with doing things I am interested in. I don't see any rage, titillation, celebrities, or gossip. Just my friends and acquaintances being friendly. (It IS annoying that I keep having to turn off suggested posts)
Even when I click on the magnifying glass, which is where people say Instagram shows them titillating things in order to get them hooked, I see scuba diving, aviation, vintage Macs, watches, and astronomy.
What is going on? Do I have a "only show this guy nice stuff he's interested in" cookie following me around the internet?
And YouTube. People will complain about YouTube showing them shit. When I go to YouTube.com, right now, I just opened it in another tab, the top six videos are: Dutch firefighters battling a stable fire, a homelabber messing around with vintage linksys equipment, a history of the development of nuclear weapons, a review of a handheld video game emulator, a guy with a VAX setup in his basement working on restoring those machines, and a video on new theories about how the moon was formed.
The next six are also laser-focused on my interests including a two hour video about various motifs found in Indo-European mythology and their origins which I am totally going to listen to in the background while at work.
I did nothing, NOTHING, except subscribe to/follow things I like and people I know, and it's great.
When people log into reddit and see people arguing about bullshit, instagram and see models bouncing their tits, and YouTube and see garbage, the only logical conclusion I can reach is that their brains are damaged and they set up the systems to show them these things then decided to complain about it as some kind of hobby or something.
If anything, HN is the worst of them all because I can't tell it "show me more 'floppy disk raid array' and less 'crypto and AI bullshit'".
Agreed. My Reddit experience is always enjoyable, entertaining and informative. I'm always surprised when I see references to crazy/hateful/deviant/cesspool content on Reddit because I never see anything remotely like that. But of course, I'm not looking for that stuff.
Ah yes, blame the users for the algorithm. Makes sense. Blame the overweight for the food they have access to! Blame the kids for failing schools!
Not everything is a personal moral failure when society is literally out to get each and every one of us. Many of us have been damaged by the ‘net, it’s purveyors of crap, intentionally for their gain.
Don’t just turn and point fingers at the endusers. They sure as fuck didn’t design the algorithms.
Perhaps growing up in an age and (admittedly unusual) setting where I had to deliberately choose the media I consumed, rather than having it fed to me, was a boon.
I am convinced that is a skill that can be learned or taught.
AI just reveals how lazy/cheap/low-standards they were already trying to be. And if AI keeps progressing at current speeds, those are the people who are going to be most easily replaced by AI-tutors within a few years. The actually-good teachers would still have a job in a sane world, but who knows what will happen.
These examples show that we have a serious social issue, and it's not limited to teachers. People misuse LLMs. We engineers understand that LLMs are products under development. They only work correctly under certain circumstances, and they have limitations and imperfect evaluation metrics. Regular people (non-engineers) treat them as finished products or magic wands. They ignore the warning on the page saying the LLM can make mistakes. And there are billions of those people. This may create huge social problems that engineers can't fix.
Correct, and the root cause is it’s been sold to the public this way.
I think it's a grey area - on one hand, it's been sold as a source of truth, but on the other hand, there's a strong element of confirmation bias and/or simple laziness from users (of course, to varying degrees).
A friend of mine states that the market rates for their position are wrong, because ChatGPT gave higher numbers: this is an example on the far end of the spectrum of confirmation bias - it matters little whether it was sold as source of truth or not.
Marketing to the consumer instead of informing and educating the user. That makes sense in terms of incentives.
> We engineers understand that LLMs are products under development. They only work correctly under certain circumstances
Looking around me, engineers do not understand that. Instead, they have exactly the same overblown expectations and actively push for LLMs everywhere. They will call you a Luddite if you say anything else.
Wear that Luddite badge with honor. They were not anti technology[0], they fought for worker rights during an age of rapid rise of new technology.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
Luddites were idiots. They thought they could stand in the way of progress. They were crushed by it.
What's worse is that people still make the same mistake today.
Luddites weren't crushed by progress, they were crushed by an armed militia backed by the government. 12,000 militia and yeomanry units, in fact. Knowing what most of the AI fanatics are like, though, it doesn't surprise me that they're on the side of the boot rather than on the side of the people getting crushed by said boot.
Also, what's the definition and point of "progress" to you? Because the way AI is shaking out screams, to me, the opposite of what I'd expect progress to look like. Assuming the likes of Altman (an individual who wants to harvest your biometric data for a scam shitcoin, by the way) can be believed and we indeed reach the singularity or AGI or whatever, is everyone except the C-level (who are somehow magically exempt from the negative effects of this progress and somehow irreplaceable) losing their livelihoods and getting crushed under the boot of the wealthy and powerful "progress", in your eyes?
If you think that the prospect of "job loss" would, or should, stop progress, you're delusional. There are reasons to slow AI progress down, but "think of all the jobs" certainly isn't one.
You didn’t answer his question about what you define progress as.
> you're delusional
Am I? I'm saying this is what's going to happen (if people like Altman are correct), same as how the Luddites knew exactly what was going on. I'm not denying that we're not likely to stop AI development even if everyone loses their jobs, I'm saying that it's not what my vision of progress looks like.
You also conveniently left out the part I mentioned about those jobless people getting crushed by powerful boots and focused solely on what I said about job loss.
And again I'll ask, what exactly does "progress" mean for you? What world are we heading towards that counts as positive progress in your mind? Because from what I can tell you think we're going to be heading towards mass unemployment and... consider it a good thing, for some reason?
> Luddites were idiots. They thought they could stand in the way of progress.
Ummmm. No.
The luddites were not opposed to progress or new machinery. The luddites called for unemployment compensation and retraining for workers *displaced* by the new machinery (machinery they sometimes helped build!). This probably makes them amongst the most progressive people of the 1800's.
So it's a "mistake" to choose be Amish for example?
This is a pretty typical response of people who know the term "Luddite" as it's used today, but don't know much about the actual Luddites.
Progress crushes you whether you are a Luddite or not.
"Ludite" "low-IQ" "meat LLM" "you will be left behind"
The behavior of the boosters is basically the opposite of how to make friends and influence people. I've been through plenty of hype cycles, and this is the first one where they seem to need to insult and threaten everyone.
I don't get it. And I don't feel any need to entertain it.
I have a bunch of coworkers who paste various LLM outputs to every chat while discussing issues in production.
- "LLM X told that we should try to add this into configuration file – SOMETHING_SOMETHING = false."
- "There is no SOMETHING_SOMETHING configuration option, you have a full source, grep for it."
- "But should we try at least?"
That's also because of seeing them as technology under development. The overblown expectations are because of their potential in the future. The glass is half full today. It was almost empty just a few years ago. The water level is rising at an unprecedented rate. But we shouldn't forget it's still half empty right now. More importantly, we are bad at predicting how actual people will use the technology.
> "Social problems that engineers can't fix"
Sounds similar to social media.
Otherwise, yes, I am very concerned about society's use of LLMs -- particularly young people (students).
But now the very teachers themselves... Frankly, not surprised.
I've been using it to make me a much better tutor/mentor. But the cases outlined in (I'm assuming) the public education sector are very, very worrisome.
Engineers are not uniquely immune to this phenomenon. Just look at all the commenters in this board claiming AI makes them 20x more effective.
Normal product release cycles bring testable quality measures before the product is released. Do LLMs go through such tests?
If there are serious societal issues that it can cause at the moment I wonder why it was released before being perfected, but then, what does perfected even look like? The product is darn good at the moment.
These products are basically public beta. Some features are even experimental. They are released to the public because (1) companies have to gain early market share (2) they need actual user data. Sam Altman firing drama from last year was related to this issue.
I think it's more a capitalism problem. The constant squeeze for everyone to output more for less pay or die of starvation. No one could ever choose good in these circumstances.
It is hilarious though that humans are just ready to slurp up any opportunity to just start shitting all over their work and their coworkers and students. Less time spent and caring?! yes please! then they become full time salesmen of it to everyone else, and then the social interaction problems just explode. Tribal standoffs, create entirely new tribes, plotting and deception to continue getting what they have a taste of.
Infighting distractions are so convenient as the government guts everything they work under
Wonder if this is a natural consequence of teachers being overworked. If teachers can get more work done with AI (who cares if quality suffers!) then that becomes the baseline and admins will push them to do even more.
In other words I predict this to be less of an issue with smaller class sizes.
In some fields it's becoming a bit ridiculous/worrying.
The workload is making people create or extend their work using LLMs, and the reviewers/managers are also overloaded and don't have enough time to go through it, so they feed it to an LLM to get a summary, which is later pushed somewhere else to feed another process... becoming a "broken telephone" business process where nobody really knows the detail of what's going on, and it's just LLMs feeding other LLMs in an eternally absurd process.
In my experience teachers are overworked because they care if quality suffers. They can get their work done in the set time if they just don't care for the students as much.
(Very anecdotal, local-to-the-Netherlands experience, of course.)
Are they overworked? In the article he states he is coaching and running three preps in addition to teaching. Less can be more.
The fact that you have to have side hustles to make ends meet as a teacher is one aspect of how they're overworked.
The same article says he is well paid. I'll grant you that a lot of teachers are paid little, but there is nothing about money being in the discussion.
Overworked is relative. As soon as a way to reduce current workload is available, everyone feels "overworked"
Students hand in AI-generated essays and the AI then grades them.
That's what's called a GAN - a generative adversarial network.
And teachers used AI to generate the essay question.
Unsupervised learning.
LOL
The cracks in the education system were showing even before AI, with unmotivated students and teachers alike just burning their hours. AI just exposed these cracks and showed that the entire system is incredibly inefficient and pointless. I believe that the future of education is in much smaller institutions that can support their communities on a human scale.
I don't think inefficiency is the right word. In fact, I'd argue the exact opposite: that one of the main problems with education is that every single administrator has been trying to optimize it to death for the wrong metrics, namely "number of students making it through" and "budget".
Smaller institutions are indeed better, but they are also less efficient. It's no wonder that only rich families can afford institutions like that.
I agree with you. Learning is messy, hyper situational, and personalized. "Optimizing" for "efficiency" neglects this and resulted in the cookie cutter "teacher factories" that public education has become. As someone with relatives who were public school teachers - they will tell you that there is no way to scale it back and bring the community aspect in closer... that most communities have too little budget and too many children, and it just burns through teachers... Like gun control, this is likely a problem that will continue "without solution" because people are lazy and change is difficult. I'm sure future historians will credit some of America's collapse to this problem, among other "unfixable" societal problems.
As a teacher – excellent description, thank you. Just to add my experience to it – I was in school in the seventies and had a "40 years since graduation" meeting some years ago with my classmates. The vast majority were doing well, and while we talked about old times in school, two things stood out. First, although we were in the same class, our experiences were very different. We remembered very different things, different teachers were important to us, up to the point where some of them were most loved by some but most hated by others, etc. But we all agreed that our homes were even more important for our education than school – from our homes (parents and grandparents) came the attitude that education is important and that, no matter what, it's our responsibility to study.
Have been thinking similar... well, that we'll see much smaller educational organizations which are funded/supported in much different ways.
Technology is making humankind lazy. First physically, then mentally.
Learning is hard, it's a struggle. Why learn when you can not learn?
It feels funny to think about this next to the outrage over trans kids in school sports. There are probably a dozen kids nationally participating in a sport with other kids who didn't share the same set of chromosomes at birth. That's a tiny slice of the population, but the issue has captured the attention of a huge group of people. I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?
But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade. I would wager that the number of students and teachers using AI is now the majority population.
I face this dilemma on a daily basis when trying to do my job as a software developer. Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world? Or, take a chance on being the turtle and trying to win the race against the hare?
The more time I spend writing code with help from LLMs the less I fear for my job, because I gain an increased understanding of how much depth there is to building software.
To get good results out of an LLM you need to determine exactly what the system needs to do and how it should work.
That's programming! We just don't have to type all the semicolons ourselves any more.
I always feel pretty special when you respond to a comment of mine. Thank you.
I agree with you. And, I'm not sure LLMs help me learn high level concepts (yet). They certainly have those concepts inside their training data and you can extract the concepts if you do the work. But, in a lot of domains, and this applies to someone old like me and someone young like my kids, knowing what to ask is the central problem.
This applies to what I see my kids doing with AI: I don't think LLMs, right now, encourage them to learn concepts as much as they quickly give them answers.
I don't see ChatGPT Study Mode as fulfilling this, in my limited usage, but I would love to be wrong about that. It's a good direction indeed.
Probably this is the new frontier, where the best students are the ones that figure out how to use these tools to learn "deeply" rather than just jumping to the answers. Maybe that is how it has always been?
> We just don't have to type all the semicolons ourselves any more.
And if we're not having to code in Java then we never had to type all those in the first place! ;)
> I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?
Why is "unlevel" in quotes? When it comes to physical activities, biological males have a huge advantage over biological females; high school boys routinely beat professional adult women's sport teams.
> But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade.
I agree that this is a bigger problem than trans kids in sports. I think people are less upset about this because
1. It's a more recent development, and 2. they think that the kids using AI are actually putting themselves at a disadvantage, albeit one that will only become apparent after they graduate.
> Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world?
The good times are over, it happens. I remember watching Dall-E come out and feeling sorry for graphic designers, gloating in the knowledge that programming was too complex to automate. Then they automated it.
A human is still required in the loop for vibe coding, as it's fairly fuckin useless without guidance, but I can see that changing too.
I’m drafting policy at work with teammates about how we will handle pull requests with aggressive use of Claude Code. We are currently researching and piloting it.
I am going to propose that no one should feel pressure to use any of the generative coding tools if they don't want to.
> * A teacher sponsoring a club put student artwork through Microsoft Copilot to 'clean it up' because he thought it looked too unfinished and the kid felt incredibly disrespected and upset.
and rightly so! kids deserve better, that is awful
I sometimes find Paul Watzlawick's five axioms of communication helpful in thinking about situations like this.
Link: (https://en.wikipedia.org/wiki/Paul_Watzlawick#Five_basic_axi...)
Re: (1), the "mere" act of using AI communicates something, just like some folks might register a text message as more (or less) intimate than a phone call, email, etc. The choice of modality is always part of what's communicated, part of the act of communication, and we can't stop that. Re: (2), that communication is then classified by each person's idea of what the relationship is.This is a dramatic and expensive way to learn they had different ideas of their relationship!
Of course, in a teacher/student situation, it's the teacher's job to make it clear to the students what the relationship is. Otherwise you risk relationship-damaging "surprises" like this.
Even ignoring the normative question of what a teacher Should™ do in that situation, it was counterproductive. Whatever benefit the teacher thought AI would provide, they'd (hopefully) agree it was outweighed by the cost to their relationship w/ students. All future interactions w/ those students will now be X% harder.
There's a kind of technical rationale which says that if (1) the GOAL is to improve the student's output and (2) I would normally do that by giving one or more rounds of feedback and waiting for the student to incorporate it then (3) I should use AI because it will help us reach that goal faster and more efficiently.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
”It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
The act of receiving and incorporating feedback is not "inefficient", especially not in a school setting. The consummation of that process is part of the goal. Maybe the most important part!
Full Dewey quote: https://news.ycombinator.com/item?id=44597741
In another view, this prepares the kids for what the future is going to be like.
however, this same action could be useful if it was placed in a different context - for example, if the teacher uses the same AI to produce an artefact, then use it to critique the student as part of teaching (say, to show what might be lacking in a particular piece).
All those teachers should indeed be banned from using AI. But that's not because LLMs are incapable of the things they're being used for; used properly, they could be an improvement over how those same teachers were doing those tasks pre-LLMs.
The majority of times I see things like this it turns out that it's either:
- The "they've built it wrong" case; this one is the most common. People using - or in this case being made to use at work - tools that behind the scenes all use very cheap models (e.g. 4o-mini) with little context, half vibe-coded up, to save costs. The company making "MagicSchool" doesn't care, they want to maximize those profit margins and they're selling to school administration, not teachers, who only look at the costs and don't ever actually use the products themselves. Just like classic enterprise software in traditional companies. They need to tick boxes, show features that only show the happy path/case. It is perfectly possible to make it high quality, in a way that adds value, doesn't make shit up, and is properly validated. But especially in this niche, sales trumps everything. The hope is that at some point, this will change. We've seen the same play out with enterprise software to an extent; new such software does tend to be more usable on average than it used to be. It has taken a long time to get there though.
- The "you're holding it wrong" meme; users themselves directly using tools like Microsoft Copilot, 4o and friends (very outdated, free tiers, miles behind Claude/Gemini 2.5 pro/o3/etc.), along with having zero idea about what LLMs can and can't do, and obviously even less of an idea about inherent biases and prompting to prevent those. This combined with a complete lack of caring, along with a lack of competency - people lacking the basic critical thinking skills necessary to spot issues - is a deadly combo.
Of the tasks and outcomes named in that thread, the large majority can indeed already be done with LLMs in a manner that both saves time and provides better quality than the level of the teachers rightly being criticized there. Teachers who are not even checking the output obviously don't give a single damn anyway, and that tells you enough about what the quality of their teaching would've been like pre-LLMs.
Using LLMs to produce material is not a good idea, except maybe to polish up grammar and phrasing.
As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form throughout a course, which will reinforce the expectations of the students and reduce their mental load. The only way to do this is to prepare the material yourself.
Material created by LLM will have the issues you mentioned, yes, but it will also be less easy to teach, for the reasons mentioned above. In the US, where teaching is already in a terrible state, I wouldn't be surprised if this is accepted quietly, but it will have a long lasting negative impact on learning outcomes.
If we project this forward, a reliance on AI tools might also create a lower expectation of the quality of the material, which will drag the rest of the material down as well. This mirrors the rise of expendable mass produced products when we moved the knowledge needed to produce goods from workers to factory machines.
Commodities are one thing, you could argue that the decrease in quality is offset by volume (I wouldn't, but you could), but for teaching? Not a good idea. At most, let the students know how to use LLMs to look for information, and warn them of hallucinations and not being able to find the sources.
I agree you shouldn't use LLMs to produce material wholesale, but I think it can be positively useful when used thoughtfully.
I recently taught a high school equivalent philosophy class, and wanted to design an exercise for my students to allocate a limited number of organs to recipients that were not directly comparable. I asked an LLM to generate recipient profiles for the students to choose between. First pass, the recipients all needed different organs, which kind of ruined the point of the dilemma! I told it so, and second pass was great.
Even with the extra handholding, the LLM made good materials faster than if I would have designed them manually. But if I had trusted it blindly, the materials would have been useless.
How can you ensure that the exercise actually teaches the students anything in this case? Shouldn't you be building the exercise around the kinds of issues that are likely to come up, or that are difficult/interesting?
If you're teaching ethics in high school (which it sounds like you are), how many minutes does it take to write three or four paragraphs, one per case, highlighting different aspects that the student would need to take into account when making ethical decisions? I would estimate five to ten. A random assortment of cases from an LLM is unlikely to support the ethical themes you've talked about in the rest of the class, and the students are therefore also unlikely to be able to apply anything they've learned in class before then.
This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
> How can you ensure that the exercise actually teaches the students anything in this case?
By participating in the exercise during class. Introducing the cases, facilitating group discussions, and providing academic input when bringing the class back together for a review. I'm not just saying "hey take a look at this or whatever".
> If you're teaching ethics in high school (which it sounds like you are)
Briefly and temporarily. I have no formal pedagogic background. Input appreciated.
> This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
I may not have elaborated well enough on the context. I'm not creating slop in order to avoid doing work. I'm using the tools available to do more work faster - and sometimes coming across examples or cases that I realized I wouldn't have thought of myself. And, crucially, strictly supervising any and all work that the LLM produces.
If I had infinite time, then I'd happily spend it on meticulously handcrafting materials. But as this thread makes clear, that's a rare luxury in education.
I've done years of private 1:1 teaching and some class teaching though not class lecturing, which is presumably the material you're talking about.
> As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form thoughout a course, which will reinforce the expectations of the students, making their mental load lesser. The only way to do this is to prepare the material yourself.
It's absolutely necessary to have a good fundamental understanding of the material yourself. These teachers abusing AI without even catching these obvious issues clearly don't have such an understanding - or they're not using any of it, which is effectively the same. In fact, they're likely to have a much worse understanding than your average frontier LLM, especially given this post is about high school level teaching.
> The only way to do this is to prepare the material yourself.
As brought up in other comments, what counts as "yourself"? For decades teachers have been using premade lesson plans, either third-party, school-supplied or otherwise obtained, with minor edits. All teachers? Of course not, but it's completely normalized. Are they doing it themselves? If not, then the remainder did it together with Google and Wikipedia. Were they also not doing it themselves? Especially given how awful modern Google is (and the worldwide number of high school teachers using something like Kagi will be <100 people), simply using a frontier model, especially with web search enabled, is just a better version of doing that, if used in the same way.
If you use a prepared lesson plan it at least has some structure to it that students can learn to expect, and if you search for information from the internet, you are still compiling it yourself, which again means structure, a structure _you_ made using information that _you_ have parsed and decided to include. You will also have sources.
None of this will be true for LLM output.
In the future, only prestigious private schools will employ human teachers.
Education in public schools is going to be 100% LLMs with text-to-speech, the only human adult in classrooms will be a security guard, but later they will also be replaced with AI-controlled autocannons that shoot non-lethal projectiles to discipline misbehaving kids.
Drone-based school security with flash bangs, pepper spray and physical dive-bombing of the attacker is already the plan: https://www.campusguardianangel.com/
Just add a student compliance add-on subscription.
So how do they get through closed doors?
> The drones can also fly through windows, using a front lance to break through.
https://www.campusguardianangel.com/faq
I would say you couldn't make it up, but you could. You'd just be called a bad writer with unsubtle and derivative ideas.
Yeah, you can use 2 drones - first one removes the cover/doors, second one enters the previously enclosed area. Just make sure the second drone does not cut the fiber optics cable trailed by the first drone by accident.
Even then, Slaughterbots did it first: https://youtu.be/9fa9lVwHHqg?t=129
And you probably don't need fiber optic, because you're operating in "owned space" - the drones sit on charging platforms until needed. You can have, say, additional access points embedded in the walls and ceilings (for a price, but it's children's safety, so who are we to say base station rental is worth more than little Timmy not getting a 5.56 in the back in a signal blackspot!?)
This is satire, right?
Looks pretty real:
• https://www.linkedin.com/company/mithril-defense/people/
• https://www.nbcnews.com/nightly-news/video/company-says-high...
Why solve the root problem when it can instead be made into a business opportunity?
I foresee lots of up-sells:
* Class monitoring when no teacher present - optional collusion analysis
* Conduct enforcement in corridors - optional RFID speed ticketing to prevent running in the hallways!
* Playground overwatch - optional score keeping for licensed games such as Hopscotch [TM].
* Perimeter monitoring for truants, contraband trading and drug dealing
* Toilet break escorting (optionally at a discreet distance)
* Per-student tracking and ensemble fraternisation analysis, optional social media and online profile correlation, and real-time alerting of parental accounts on contact with other students in parent- or community-provided watchlists or handy pre-set demographic groups.
* Student mood, wellness and attitude monitoring based on body language and speech patterns. Referral to preferred behavioral therapeutic partner providers at a discount!
With facial recognition you can even send warnings and punishments directly to the student and parental phones via the CGA App and apply demerits to their account automatically. Link a lunch payment account for automatic profanity penalties!
Nah, by that point people won't have a reason to drop their children off at a glorified daycare designed to condition them to work quietly for a set amount of hours, because we won't need to work anymore.
You vastly underestimate the value of daycare (leaving aside the actual point you’re making)
Exactly, a big part of public school is the "daycare" aspect of it. LLMs cannot provide that.
"get" to
I think it was sarcasm
Have you ever been inside an American K-12 classroom?
Education is secondary in a teacher's job… the real issue is managing the classroom without disruption.
Why do you think it’s different in other countries? It’s the same all over Europe too. More and more kids have ADHD or other mental issues, social networking affects social norms etc.
I don't know about other countries, my experience has only been in US high schools, mostly public. Maybe it would work in other countries, or private schools
It's culture. American culture treats teaching and education like a free babysitting service, and pays teachers accordingly.
If our culture valued education we would value teachers and their ability to teach, and we so clearly do not.
I assure you, it’s similar in many EU countries. Teachers are usually paid very little, governments are not doing much to keep them, due to really bad demographics forecasts in most countries.
"non-lethal"? I wish I shared your sunny optimism.
Why would it need to be lethal? The students have that covered, this is America.
This seems like a dystopian nightmare to me
Exactly which direction are we headed in at the moment that isn't a dystopian nightmare? Under-resourced towns will likely happily shed humans, except for the headmaster and security officers. It'll take a generation, though.
Nah, kids would have to wear armbands with tasers on them, required to put them on to enter the school building or open doors in the building. Their only human adult interaction will be with the guards that ensure they are banded up and who stay on campus to react to alarms from every kid who tries to remove their armbands.
Buses will be driven by AI as well, so they'll only see their parents for 10 minutes in the morning and for an hour or so during the occasional dinners they eat together, and otherwise kids will be entirely alienated and left alone.
But do not worry! There will be an AI companion for them to talk to at all hours, a nanny AI, or a nAInny, one that starts as a nanny when they are infants and will gradually grow into an educator, confidante, and true friend that will never betray them.
That nAInny will network with other nAInnies to find mates for their charges, coordinate dates, ensure that no hanky-panky goes on until they graduate college and get married, and will be there together to give pointers and instructions during the act of coitus to enhance the likelihood of producing offspring that their fellow nAInnys will get to take care of.
A truly symbiotic relationship where the humans involved never have any agency but never need it as their lives are happy and perfect in every way.
If you don't want to participate in that, you will be removed as an obstacle to the true happiness of the human race.
There should be a dog to stop the guard from talking to the kids.
I thought it was the other way around, the guard is there to shoot the dog when it's attacking the children (with apologies to the very old joke about catching bears in trees).
Classrooms? Lol, it's going to be byod and wfh. You're only going to have to sit in front of the security guard and all of the electronic monitoring while doing standardized tests, and this remaining expense will aggravate the state so much that it will replace the exam rooms with omniscient, thinking rootkits on every school-aged person's computing devices. However, since children could use some adult's computing device to avoid monitoring, once well established those rootkits will be installed on everyone's computing device, at the hardware level.
If you object, it's because you hate children.
Eventually, there are no more misbehaving kids; there are misbehaving parents whom children report to the trusted phones that taught them about the world, the phones that aligned the values of your children with the values of the people who paid the people who designed the system.
I mean, I would object, but I also hate children.
No, kids won’t go to schools at all. They will stay home and just learn from their own computers through virtual online curriculums.
first order effect of AI: the individual saves time
second order effect: across the entire population, the incentive to learn anything at all is removed
third order effect: society ceases to improve and regresses
but it's all good as I can generate boilerplate 30% faster!
So my daughter got sent home with some math questions. Thought they looked a bit dry but thought nothing further of it. I checked the answers for her which were all ok.
Couple of days later she comes home and tells me I was wrong about some of them, which I know I was not. Apparently they self-marked them as the teacher read the answers out. I decided to phone in and ask about the marking scheme, and was told I was wrong there too and that basically I should have done better at GCSE mathematics.
I relayed my mathematical credentials and immediately the tone changed. The discussion indicated that they’d generated the questions with CoPilot and then fed that back into CoPilot and generated the answer sheet which had two incorrect answers on it.
The teacher and department head in question defended their position until I threatened to feed them to the examination board and leadership team. Their devotion to the tech was almost zealot-level religious thinking, which is not something I want to see in education.
I check all homework now carefully and there have been other issues since.
That is crazy. Curious - are you planning on raising to the board, administrators, etc? It's probably impacting other students (who don't have a parent checking their work), and teachers of other subjects in the school may be doing the same thing
I caused enough stink for them to be looking over their shoulder.
Great, and now you too are bagging your own groceries, mission accomplished.
It seems a little disingenuous to equate the importance of bagging groceries and supporting your child's education when judging how much time and attention each deserves.
I think the meaning was more "and now there is yet another thing the education system was better suited to do the parent now needs to do instead" and less "your child's education is worth grocery bags".
Oh! Yes that's a completely different read, and one whose sentiment I very much agree with.
FYI This school uses AI as teachers: https://alpha.school/santa-barbara/
They say they have good results?
Given that "Alpha School tuition ranges from $40,000 upwards" I wouldn't expect them to not say they get good results!
https://www.astralcodexten.com/p/your-review-alpha-school
Here's a review of Alpha School and its methods. Honestly, the review is a good one and very well written. It's worth your time if you have any inkling of interest in alternative education and the use of AI in the classroom.
TLDR: The magic is not AI, it's that they bribe kids for good grades. Oops, sorry, 'motivate' kids for good grades.
Teachers using AI to generate all of their lesson material, read student papers and write comments.
Students using AI to generate their papers and solve complex problems.
What are we as humans even doing. Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything. We can fire both teachers and students at the same time and save money on this whole education thing.
> Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything.
Western countries have better conditions than much of the world for a variety of reasons, but among them is education and culture.
Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
You might argue that the AI can be a mentor or can guide society appropriately. That's not wholly untrue, but if AI is "a bicycle for the mind", you still have to have the willingness and vision to go someplace with it. If you've never thought for yourself, never learned anything independently, I just don't see how people will avoid using AI to be "stupid faster".
> You might argue that the AI can be a mentor or can guide society appropriately
It's a next word predictor trained off datasets.
> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
They said the same about TV, YouTube and even printed books. Short-form videos are now apparently the new evil (somehow).
Quick question: why was nobody complaining about these exact same "engagement" algorithms 20 years ago? Why only when TikTok short-form videos appeared? Popularity-based ranking was in search engines decades ago but nobody cared then. No Cocomelon back then, coincidence?
> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
absolutely
up until 2022 I was optimistic for the future
our current big problems: climate change, nuclear proliferation, global pandemics, dictatorships, antibiotic resistance, all seemed solvable over the long term
"AI" however is different
previously all human societies placed a high value on education
this is now gone, if anything spending time educating yourself is now a negative
I don't see how the species survives this new reality over the long term
Teachers:
IIRC, it may be better to have the same number of real humans focussing on fewer pupils. Even when they're using VLMs as assistants.
Students:
While humans max out at a higher skill level than VLMs, I suspect that most (not all!) people who would otherwise have finished education at their local mandatory school leaving age, may be better off finishing school as soon as they can use a VLM.
But also #1: There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.
But also #2: Telling who is and isn't getting anything out of the education system is really hard; not only in historical systems like the UK's old eleven-plus exams, but today after university graduation when it can sometimes take a bit of effort to detect that someone only got a degree for the prestige and didn't really learn anything.
>There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.
This is the current meta. Today's knowledge workers are propertymaxxing like crazy, and sending their kids to trade school. Well, at least those who see the writing on the wall. The second half of the 21st century will see the rise of the PLIWs [1]. Knowledge work will become extinct. The social order will be:
1. elites: a small aristocracy, who control the access to AI
2. middle class: PLIWs
3. low class: children of today's knowledge workers who couldn't amass sufficient wealth for their kids to become PLIWs. Also called slop-feeders, as their job is to carry out the instructions coming from the AIs without questioning or understanding what they're doing.
________
[1] PLIW = Physical Labour, Inherited Wealth
Would be great if that massive solar flare could hit like tomorrow.
It's a microcosm of the real economy.
>What are we as humans even doing.
We are avoiding work that we don't want to do and therefore saving time, which is precisely what technology promised would help us do.
> and therefore saving time
Apparently we aren't.
We are normalising everything and everyone into information grey goo.
The problem with your proposal is that people need money to buy food and housing.
There have always been a disgusting number of people who treat education as a means to an end.
A teaching culture of thinking that all you have to do is graduate students + a learning culture of thinking all you have to do is graduate.
This already was at an 8. Got dialled up to 11 during covid. And somehow dialled up to 21 after ChatGPT.
Normally, broken things can hobble along for a very long time. But the strain is so intense on what has become of education that my current guess is that the chickens will come to roost on this one sometime 2026 to 2027.
That system produced Silicon Valley, so it can't be that dysfunctional.
> We can fire both teachers and students at the same time and save money on this whole education thing.
The current US administration has already started this process.
Agree.
Somehow though, this actually might be the best time for learners to sit down and engage with topics and not be distracted by formal stuff (degrees, grades, points), because the latter is becoming more meaningless with each token being sent down the drain.
I think the whole "school" thing is just a giant filtering mechanism to sort out who gets placed where in society. The idea of learning things is just pretense for the majority of students. It's a giant, intricate sorting hat. Both teachers and students using AI to get out of doing the work just makes it obvious. The thing is, it means we're going to need a new filtering mechanism, because AI is making this one obsolete.
The system we have is now legacy: https://www.ndtv.com/offbeat/student-flaunts-use-of-chatgpt-...
> I think the whole "school" thing is just a giant filtering mechanism to sort out who gets placed where in society.
I think you haven't picked up enough history books if that's the only positive thing you can come up with about "schools". But I guess that's what we get after decades of "the economy is the only thing that matters" propaganda; what's the point of history, math, science, when the system just needs good little consumerist wage slaves?
Maybe get your head out of your history books and take a look around you. The whole thing is a cross between babysitting at the younger ages and allocating who gets what jobs at the older ages. Any purpose the system served previously has been supplanted by this.
I'm personally glad my neighbors aren't illiterate and that we share a somewhat common ground truth about the history of our country for example
You’re painting with such a broad brush that it destroys your credibility.
Prompting AI is what employers want, they are learning the right things.
A few years ago employers wanted people to make NFTs. Luckily for the most part educators didn't start exclusively teaching kids about how to make NFTs.
Perhaps chasing what employers want at any given moment is not a good basis for an education system.
This is happening across all industries, unfortunately. Medical, engineering, pharmaceuticals, law enforcement, military, transportation, law... Thanks for a perfect post that describes the problem! We need more of these. Most people know they're doing it too, they just need to be told more.
Interesting, Simon Willison has been rather engaged with and positive about LLM usage, so it's good to see some nuance developing.
I've been using my AI-ethics tag to track stories of this nature for a few years - it's up to 206 posts now: https://simonwillison.net/tags/ai-ethics/
He's more pro-AI than I am, but his writing has always had nuance IMO
The good news is that teaching via PowerPoint slides is probably one of the worst possible ways to teach, so just making it repetitive won't disturb the students' naps too much.
If the teacher had asked AI what are more effective ways to ensure the students are learning the material, I really doubt a PowerPoint presentation would have been the result
Often the LLM people read like they're five years old, discovering for the very first time what happens when you start to act out against society and root your moral calculus in deep cynicism
Considering that an auto-generated storybook (https://gemini.google.com/share/8d296b91b77b) taught me why 0.99999... = 1 more clearly and memorably than my "good school district" education did, I'm optimistic about what AI could do for education.
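For anyone who hasn't seen it, a minimal sketch of the standard informal argument (I'm assuming the storybook used something like it; this isn't taken from the link itself):

    Let x = 0.999...
    Then 10x = 9.999...
    Subtracting: 10x - x = 9.999... - 0.999... = 9
    So 9x = 9, which means x = 1.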
So you mean to say now good prompt engineering can get me good grades...
Well I guess as long as you have an idea which model your teacher uses you are golden.
In some cases teachers are being overworked and expected to deliver far beyond their capability. The issue I see here is that these PowerPoint / document generation AI tools often use older / cheaper / worse models, e.g. 4o mini, instead of Claude Opus or Gemini 2.5 Pro. The second issue is that it is often hard for the original prompter to see issues in AI output, so another pair of eyes or a different LLM prompt with more context can often pick up most issues. I don't think AI use by teachers is going anywhere; we should work with the flow on this one and help teachers do their jobs more easily.
We need to pay teachers more if we're to effectively have this conversation.
Hot take: ChatGPT’s performance is close enough to a teacher’s which is why this is a problem at all.
Can someone answer what would realistically change if teachers did use ChatGPT in this way but the students never found out? Things would be more or less the same.
Richard Feynman summed up public education well even if schools have ostensibly changed since: "Everything was written by somebody who didn’t know what the hell he was talking about, so it was a little bit wrong, always!".
Now they just have an extra tool to help them.
I absolutely cannot read a Feynman quote without literally hearing it in his voice.
Things being a little bit wrong is not a huge problem. Much worse is if LLMs remove all the rigor and grit from education, the hard work to learn how to recognize facts.
Teacher saved himself some time by using a chatbot to make a slideshow presentation for a staff meeting. Good!
Teachers use chatbots for everything else, uncritically. Not good!
???
Who cares if he saved himself some time when he completely wasted everyone else's time? He should have questioned whether that meeting was relevant in the first place. Nothing good came out of that meeting, apparently.
Slop should be consumed directly not shared with friends, family, or coworkers
If you can't be bothered to assemble information to present, why should I bother to waste my time staring at a pile of slop?
If I had a choice about whether to give the presentation, I would choose not to. If you had a choice about whether to attend it, you would also choose not to. But, alas, both of us are there -- such is the way of the large bureaucracy.
I do have a choice to zone out and think of England.
I think the real problems are that knowing when to use something appropriately and holding yourself honestly to that are pretty difficult for most people.
No, if you can type in a prompt, just email me the prompt so I know what you intended. I don't need the slop the AI came up with, thank you.
I already feel disrespected in powerpoint presentations where they clearly haven't practiced it for a long time and seem to be discovering the slides and coming up with the argument they want to make on the spot. I usually get up and leave.
There is a middle ground between artisanal powerpoint craftsmanship and AI slop.
People need to realize that the next generation of kids is already unable to differentiate human vs llm generated text, and not only that, but they don't even mind it. They are already using LLMs to generate all their text and so they don't mind reading LLM generated text either.
They won't be reading the text, they will be getting their LLMs to summarize the LLM generated text and read it to them. We are heading for a state where all written communication will be mediated by LLMs - get my people to talk to your people but for everyone.
LLMs will mediate plenty of routine text, but the choke-point shifts from “writing” to “prompting + validating”.
In client projects we see two hard costs pop up:
1. Human review time ⟶ still 2–4 min per 1k tokens because hallucination isn't solved.
2. Inference $: for a 70B model at 16k context you pay ~$0.12 per 1k tokens, cheap for generation, expensive for bulk reading.
So yes, AI will read for us, but whoever owns the *attention budget + validation loop* still controls comprehension. That’s where new leverage lives.
The next generation? Our generation is already there, apparently.
Fairly low quality post, this one.
Agreed, this is just a quote on his link blog. Better to post the Reddit thread directly. I wouldn't expect a plain Reddit link to get to the HN front page though.
You need to read more of the source blog - he's been pretty pro LLM, but is now acknowledging where it's going too far.
Trains allowed us to go further than we could ever walk. Cars caused us to lose the ability to walk completely.
Looms allowed us to produce fabrics of higher quality than we ever could by hand. Fast fashion caused us to lose the ability to care for and mend clothes completely.
Computers allowed us to calculate and "think" faster than we ever could before. AI caused us to lose the ability to think completely...
I suggest linking directly to the Reddit thread; it has the original text (not a snippet) and lots of additional insights and anecdotes in the comments.
https://np.reddit.com/r/Teachers/comments/1mhntjh/unpopular_...
Updated, thanks!
On HN, reddit counts as a negative signal. You link to reddit directly and most HNers will instantly downvote.
I keep telling people here that reddit is actually an underappreciated goldmine, but I guess feeling better than others feels too good to pass on.
In my mind, reddit is like HN, except instead of being just tech and business oriented people, it's every subject under the sun. Most of it is garbage (like on HN) but if you're willing to search it's a goldmine.
Reddit was a goldmine until they decided to put a toll booth on its main entry points.
Every time someone criticizes reddit I automatically assume that they have brain damage.
When I go to reddit, I see posts about astrophotography, vintage computers, ham radio, classic cars, typewriters, film photography, sculpture, gardening, woodworking, firefighting, archery, fencing, outer space, watches, cutaway drawings, the Sega Saturn, and more topics I'm interested in.
When they go to reddit they see stuff they don't like, and people arguing about it.
I just want to shout "yes, you do, because your brain is damaged and you asked it to show those things to you".
It's the same with all social media. When I go to instagram I see people I know personally and have been in the same room with doing things I am interested in. I don't see any rage, titillation, celebrities, or gossip. Just my friends and acquaintances being friendly. (It IS annoying that I keep having to turn off suggested posts)
Even when I click on the magnifying glass, which is where people say Instagram shows them titillating things in order to get them hooked, I see scuba diving, aviation, vintage Macs, watches, and astronomy.
What is going on? Do I have a "only show this guy nice stuff he's interested in" cookie following me around the internet?
And YouTube. People will complain about YouTube showing them shit. When I go to YouTube.com, right now, I just opened it in another tab, the top six videos are: Dutch firefighters battling a stable fire, a homelabber messing around with vintage linksys equipment, a history of the development of nuclear weapons, a review of a handheld video game emulator, a guy with a VAX setup in his basement working on restoring those machines, and a video on new theories about how the moon was formed.
The next six are also laser-focused on my interests including a two hour video about various motifs found in Indo-European mythology and their origins which I am totally going to listen to in the background while at work.
I did nothing, NOTHING, except subscribe to/follow things I like and people I know, and it's great.
When people log into reddit and see people arguing about bullshit, instagram and see models bouncing their tits, and YouTube and see garbage, the only logical conclusion I can reach is that their brains are damaged and they set up the systems to show them these things then decided to complain about it as some kind of hobby or something.
If anything, HN is the worst of them all because I can't tell it "show me more 'floppy disk raid array' and less 'crypto and AI bullshit'".
Agreed. My Reddit experience is always enjoyable, entertaining and informative. I'm always surprised when I see references to crazy/hateful/deviant/cesspool content on Reddit because I never see anything remotely like that. But of course, I'm not looking for that stuff.
Ah yes, blame the users for the algorithm. Makes sense. Blame the overweight for the food they have access to! Blame the kids for failing schools!
Not everything is a personal moral failure when society is literally out to get each and every one of us. Many of us have been damaged by the 'net and its purveyors of crap, intentionally, for their gain.
Don't just turn and point fingers at the end users. They sure as fuck didn't design the algorithms.
Perhaps growing up in an age and (admittedly unusual) setting where I had to deliberately choose the media I consumed, rather than having it fed to me, was a boon.
I am convinced that is a skill that can be learned or taught.
Is there more to it, or are we calling the situation out of control based on a single anecdote from Reddit?
You need to read more of the source blog - he's been pretty pro LLM, but is now acknowledging where it's going too far.
I'm not new to simonw.
It doesn't change that this is just a quote from a reddit post and a link to it.
AI just reveals how lazy/cheap/low-standards they were already trying to be. And if AI keeps progressing at current speeds, those are the people who are going to be most easily replaced by AI-tutors within a few years. The actually-good teachers would still have a job in a sane world, but who knows what will happen.