Based on the case the article presents, I think LLMs are acting as a validation step: the average person who doesn't know how to code creates a minimal, LLM-spaghetti system (e.g. a restaurant menu website) with ChatGPT, validates whether it's something they actually need, iterates on it, creates a specification, and then brings in an actual (costly) engineer who can fix and improve the system.
There are a lot of LLM byproducts that leave a bad taste (I hate all the LLM slop on the internet), but I don't think this is one of them.
Someone tried to generate a retro hip-hop album cover image with AI, but the text is all nonsense, and humans would have to be hired to clean up that AI slop.
In about two years we've gone from "AI just generates rubbish where the text should be" to "AI spells things pretty wrong." This is largely down to generating a whole image with a textual element. Using a model like SDXL with a LoRA like FOOOCUS to do inpainting, plus an input image with a very rough approximation of the right text (added via MS Paint), you can get a pretty much perfect result. Give it another couple of years and the text generation will be spot on.
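For anyone curious what that workflow looks like in practice, here is a rough sketch using the Hugging Face diffusers library. The checkpoint name, input files, and knobs (strength, steps) are illustrative assumptions, not a recipe taken from the comment above.

```python
# Rough sketch of "crude text overlay + inpainting" with an SDXL inpainting model.
# Checkpoint, file names and parameters are illustrative assumptions.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # one published SDXL inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Base render plus a rough text overlay added by hand (e.g. in MS Paint),
# and a mask that is white only over the text region to be redrawn.
init_image = load_image("cover_with_rough_text.png").resize((1024, 1024))
mask_image = load_image("text_region_mask.png").resize((1024, 1024))

result = pipe(
    prompt="retro hip-hop album cover, bold clean typography reading 'STREET POETS'",
    image=init_image,
    mask_image=mask_image,
    strength=0.6,           # keep most of the rough layout, redraw the lettering
    num_inference_steps=25,
).images[0]
result.save("cover_fixed_text.png")
```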
So yes, right now we need a human to either use the AI well, or to fix it afterwards. That's how technology always goes - something is invented, it's not perfect, humans need to fix the outputs, but eventually the human input diminishes to nothing.
This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.
This new approach of machine learning content generation will either keep developing, or it will join everything else in the history of AI by hitting a point of diminishing to zero returns.
But their comment is about 2 years out of date, and AI image gen has got exponentially better at text than when the models and LoRAs they mentioned were SOTA.
I agree we probably won't magically scale current techniques to AGI, but I also think the local maximum for creative output is going to be high enough that it changes how we approach it, the way computers changed how we approach knowledge work.
> This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.
You're talking about the progress of technology. I'm talking about how humans use technology in its early stages. They're not mutually exclusive.
Minor correction: Fooocus [1] isn't a LoRA - it's a Gradio-based frontend (in the same vein as Automatic1111, Forge, etc.) for image generation.
And most SOTA models (Imagen, Qwen 20b, etc.) at this point can actually already handle a fair amount of text in a single T2I generation. Flux Dev can do it as well, provided you're willing to roll a couple of gens.
How is this ironic? Carelessly AI-generated output (what we call "slop") is precisely that mediocre average you get before investing more in refining it through iteration. The problem isn't that additional work is needed, but that in many cases it is assumed that no additional work is needed and the first generation from a vague prompt is good enough.
The irony stems from the fact workers are fired due to being 'replaced' by AI only to then be re-hired afterwards to clean up the slop, thus maximizing costs to the business!
It'll be a large cost reduction over time. The median software developer in the US was at around $112,000 in salary plus benefits on top of that (healthcare, stock compensation), prior to the job plunge. Call it a minimum of $130,000 just at the median.
They'll hire those people back at half their total compensation, with no stock and far fewer benefits, to clean up AI slop. And/or just contract it overseas at ~1/3 the former total cost.
Another ten years from now the AI systems will have improved drastically, reducing the slop factor. There's no scenario where it goes back to how it was, that era is over. And the cost will decline substantially versus the peak for US developers.
Based on... what? The more you try to "reduce costs" by letting LLMs take the reins, the more slop will eventually have to be cleaned up by senior developers. The mess will get exponentially bigger and harder to resolve.
Because I think it won't just be a linear relationship. If you let 1 vibe coder replace a team of 10, you'll need a lot more than 10 people to clean it up and maintain it going forward when they hit the wall.
Personally I'm looking forward to the news stories about major companies collapsing under the weight of their LLM-induced tech debt.
Full self-driving is 1000x more likely than AGI. There are all these maps, lines on the ground and signs; the roads don't wiggle like snakes, appear and disappear; and the other cars and obstacles don't suddenly change number, shape and physics.
I like the idea that mediocre sci-fi show Upload came up with: maybe they can get self-driving to the point where it doesn't require a human in the loop and a squirrel will work.
I for one am super excited for what my kids, and the other children they grow up with, will do in their future careers! I am so proud and cheer this future on, it can’t come soon enough! This is software’s true purpose.
The article has a strong focus on deceptive media, used on social media to capture viewers' attention. It makes me both sad and glad that my kids and family grew up before all this insanity of psychological abuse.
I assume you missed the whole part where 75% of people use LLMs for porn, AI bf/gf roleplaying, spam, ads, scams, &c.
It's like YouTube: in theory, unlimited free educational content to raise the bar worldwide; in practice, three quarters of the videos are braindead garbage, and probably 99% of views are concentrated on those.
The first thing that came to mind when I started seeing news about companies needing developers to clean up AI code was the part of Charlie and the Chocolate Factory where Charlie's father is fired from the toothpaste factory because they bought a new machine to produce the toothpaste, but is then re-hired at a higher salary because the machine keeps breaking and they need someone to fix it.
AI (at least this form of AI) is not going to take our jobs away and leave us all idle and poor, just like the milling machine or the plough didn't take people's jobs away and make everyone poor. It will enable us to do even greater things.
Well, sometimes innovation does destroy jobs, and people have to adapt to new ones.
The plough didn't make everyone poor, but people working in agriculture these days are a tiny percentage of the population compared to the majority 150 years ago.
(I don't think LLMs are like that, tho).
Touching on this topic, I cannot recommend enough "The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger" which (among other things) illustrates the story of dockworkers: there were entire towns dedicated to loading ships.
But the people employed in that area have declined by 90% in the last 60 years, while shipping has grown by orders of magnitude. New port cities arose, and old ones died. One needs to accept inevitable change sometimes.
By the same logic, the number of people working in transportation around the time the Ford Model T was introduced did NOT diminish 100 years later. We went from about 3.2 million in 1910 (~8% of the workforce) to 6–16 million in 2023 (~4–10%, depending on definition). That is the effect of a century of transportation development.
Sometimes demand scales, maybe food is less elastic. Programming has been automating itself with each new language and library for 70 years and here we are, so many software devs. Demand scaled up as a result of automation.
Just as gunpowder enabled greater things. I agree with you; it's just that humans have shown, time after time, an ability to first use innovation to make lives miserable for their fellow humans.
The Industrial Revolution caused a massive shift from fairly self-sustained agrarian communities to horrible poverty in urban factory and mining towns - just look up the number of children who died at work in 19th-century England. It did not make everyone poorer - it made plenty of people a helluva lot richer - but it did increase the number of those in poverty and the effects of their destitution. The mega-rich capitalist class created by industrialisation replaced the old aristocrats, able to buy governments and leaders to do their bidding and to smash, with police violence, the workers' rights and unions created to defend workers from the effects of industrial capitalism; William Hearst and the like were able to essentially control public opinion since they owned the newspapers... Sound familiar? We are not entering an era described in utopian sci-fi, but just returning to the good old 19th century.
Except I still doubt whether AI is the new Spinning Jenny. Because the quality is so bad, and because it can't replace humans in most things or even necessarily speed up production in a significant way, we might just be facing another IT bubble and financial meltdown, since the US seems to have put all of its eggs in one basket.
The number of Luddites angrily answering this comment and downvoting me for taking a positive view of a great new revolution in human knowledge and productivity is very funny, coming from a website supposedly dedicated to innovation :)
In my country, we also have a class (they have achieved the status of social class IMO) that expects to keep their jobs and get ever increasing privileges while not having to upgrade their competences or even having to learn anything new during all their life: our public workers.
> AI was supposed to replace humans
There are really two observations here: 1. AI hasn't commoditized skilled labor. 2. AI is diluting/degrading media culture.
For the first, I'm waiting for more data, e.g. from the BLS. For the second, I think a new category of media has emerged. It lands somewhere near chiptune and deep-fried memes.
> There are really two observations here: 1. AI hasn't commoditized skilled labor.
The problem is, actually skilled labor - think of translators, designers, copywriters - still is obviously needed, but at an intermediate/senior level. These people won't be replaced for a few years to come, and thus won't show up in labor board statistics.
What is getting replaced (or rather, what positions are not being refilled as the existing people move up the career ladder) is the bottom of the barrel: interns and juniors, because that level of workmanship can actually be done by AI in quite a few cases, despite it also being skilled work. But this kind of replacement doesn't show up in statistics, except maybe in the number of open positions - and a change in that number can also credibly be attributed to economic uncertainty thanks to tariffs, the Russian invasion, people holding on to their money and foregoing spending, yadda yadda.
Obviously this is going to completely wreck the entire media/creative economy in a few years: when the entry side of the career funnel has dried up "thanks" to AI... suddenly there will be no interns that evolve into juniors, no juniors that evolve into intermediates, no intermediates that evolve into seniors... and all that will be left at many an ad/media agency is a bunch of ghouls in suits who last touched Photoshop a decade and a half ago, plus the sales teams.
This is my fear for everything. I lay no blame here, but at the same time, I can imagine an entire younger generation not actually learning... anything.
Part of learning is doing. You can read about fixing a car, but until you do it, you won't know how it's actually done. For most things, doing is what turns "reading a bunch of stuff" into "skill".
Yet how far will this go? I see a neuralink, or I see smartglasses, where people just ask "how do I do this" and follow along as some kind of monkey. Not even understanding anything, at all, about whatever they do.
Who will advance our capabilities? Or debug issues not yet seen? Certainly AI is nowhere near either of those. Taking existing data and correlating it and discovering new relationships in that data isn't an advancement of capability.
AI doesn't stop you from learning, it just forces you to reach farther. The bench-league projects that would have forced you to learn solo no longer cut it when GPT-5 can solve basic problems. You need to identify your own personal moonshot, something that's outside your capabilities but not impossibly so, then just keep fighting the problem until you best it. I guarantee you will learn a tremendous amount.
> Taking existing data and correlating it and discovering new relationships in that data isn't an advancement of capability.
What? Are you arguing that anyone in the world who isn't themselves running empirical research is not advancing any capabilities?
Also, on a related note, there's absolutely nothing stopping AIs from designing, running and analyzing their own experiments. As just one example, I'll mention the impressive OpenDrop microfluidics device [0] and a recent Steve Mould video about it [1] - it allows a computer to precisely control the mixing of liquids in almost arbitrary ways.
[0] https://gaudishop.ch/index.php/product/opendrop-v4-digital-m...
[1] https://www.youtube.com/watch?v=rf-efIZI_Dg
> when the entry side of the career funnel has dried up "thanks" to AI... suddenly there will not be any interns that evolve into juniors, no juniors that evolve into intermediates, no intermediates that evolve into senior
This has happened before in other industries. Lots of things that are now automated or no longer exist due to computerisation (e.g. manually processing cheques, fixing many types of bookkeeping errors) were part of the training of juniors and how they gained their first real world experience. There is still a funnel in those careers. A narrower one, but still sufficient.
> What is getting replaced is the bottom of the barrel: interns and juniors
But in the context of highly skilled work, I don't think anyone hires juniors or interns to actually do any productive work. You typically hire juniors in the hope of retaining them as future intermediate talent.
It is not hard to find anecdotes of intermediate or senior translators, designers, or copywriters whose jobs were eliminated, or whose pay decreased. Do you have data?
> and all that will be left for many an ad/media agency are a bunch of ghouls in suits that last touched Photoshop one and a half decades ago and sales teams.
AI killing the ad industry sounds great and I fully support it.
Every AI company is probably desperately trying to become the next "ad industry." AI is just another conduit for mass data vacuuming for "targeted" advertisements. Or they're trying to become a defense contractor.
>AI is just another conduit for mass data vacuuming for "targeted" advertisements. Or they're trying to become a defense contractor.
The only difference between the two is in the delivery of the end product.
"Instead of treated, we get tricked"; as the old broadway show goes.
It's the hard knock life for [some].
> The problem is, actually skilled labor - think of translators, designers, copywriters - still is obviously needed, but at an intermediate/senior level. These people won't be replaced for a few years to come, and thus won't show up in labor board statistics.
This is even more true for construction workers and cooks. The actually, actually skilled, I suppose.
Also, an AI still can't come close to replacing either interns or juniors. But I suppose we're just supposed to act like shouldering more work cleaning up after an AI that can't learn, rather than hiring someone up to the task, is progress.
Consumers aren't going to consume derivative slop forever though. You can do it for quite a while, but as with the MCU, people do get bored. At that point the market would self-correct, as a scrappy firm of passionate individuals would run circles around incumbent giants that have lost all their talent.
Have you seen tv for the last 50 years? Tons of people still consume it.
> At that point the market would self-correct, as a scrappy firm of passionate individuals would run circles around incumbent giants that have lost all their talent.
... which requires these individuals to have skills surpassing AI, and as AI is only getting better... in the end, for large corporations the decision between AI and humans will always come down to price.
Human creativity and explanations have, up to now, used arbitrary units to reach the edges of specificity.
Now the automation of that arbitrariness, trained not for specificity, definitely degrades that leading edge and puts specificity further out of reach - particularly when humans mistake the output for something valid or specific.
It's an act of insanity made tech-normal.
> For the first, I'm waiting for more data, e.g. from the BLS.
I wouldn't hold your breath for accurate numbers, given the way Trump has treated that bureau since they gave a jobs report he didn't like.
Is that so ironic? Think of humans in factories fishing out faulty items, where formerly they would perhaps be the artisans that made the product in the first place.
The difference is that in the factory case the faulty items are outliers and easy to spot. You throw it away and let the machine carry on making another copy. You barely lost any time and in the end are still faster than artisans, which are never in the loop.
In the AI case, you’re not making the same thing over and over, so it’s more difficult to spot problems and when they happen you have to manually find and fix them, likely throwing everything away and starting from scratch. So in the end all the time and effort put into the machine was wasted and you would’ve been better going with the artisan (which you still need) in the first place.
I don't think you've ever talked with someone in manufacturing who's in any way aware of how quality assurance works there...
I can understand how you might have that misunderstanding, but just think about it a little: what kinds of minor changes can result in catastrophic failures?
Producing physical objects to spec, and doing quality assurance for that spec, is way harder than you think.
Some errors are easy to spot, for sure, but that's literally the same for AI-generated slop.
I spent five years working in quality assurance in the manufacturing industry. Both on the plant floor and in labs, and the other user is largely correct in the spirit of their message. You are right that it's not just up to things being easy to spot, but that's why there are multiple layers of QA in manufacturing. It's far more intensive than even traditional software QA.
You are performing manual validation of outputs multiple times before manufacturing runs, and performing manual checks every 0.5-2 hours throughout the run. QA then performs their own checks every two hours, including validation that line operators have been performing their checks as required. This is in addition to line staff who have their eyes on the product to catch obvious issues as they process them.
Any defect that is found marks all product palleted since the last successful check as suspect. Suspect product is then subjected to distributed sampling to gauge the potential scope of the defect. If the defect appears to be present in that palleted product AND distributed throughout, it all gets marked for rework.
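As a toy illustration of that rule, here is a minimal sketch of "defect found, so everything palletized since the last good check is suspect, then distributed sampling decides rework". The data shapes, sampling rate, and "distributed" threshold are my own assumptions, not any plant's actual procedure.

```python
# Toy sketch of suspect-marking and distributed sampling after a defect is found.
# Thresholds and data shapes are made up for illustration only.
import random

def handle_defect(pallets_since_last_good_check, samples_per_pallet=5,
                  is_defective=lambda unit: False):
    """Mark pallets as suspect and decide whether the whole batch needs rework."""
    suspect = list(pallets_since_last_good_check)       # everything since the last successful check
    pallets_with_defects = 0
    for pallet in suspect:
        sample = random.sample(pallet["units"], min(samples_per_pallet, len(pallet["units"])))
        if any(is_defective(unit) for unit in sample):   # defect present in this pallet's sample
            pallets_with_defects += 1
    # Defect both present AND spread across the suspect pallets -> rework all of it.
    distributed = pallets_with_defects > len(suspect) / 2
    return "rework_all" if pallets_with_defects and distributed else "release_after_review"
```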
This is all done when making a single SKU.
In the case of AI, let's say AI programming, not only are we not performing this level of oversight and validation on that output, but the output isn't even the same SKU! It's making a new one-of-a-kind SKU every time, without the pre and post quality checks common in manufacturing.
AI proponents follow a methodology of not checking at all (i.e. spec-driven development) or only sampling every tenth, twentieth, or hundredth SKU rolling off the analogous assembly line.
In the case of AI, it gets even worse when you factor in MCPs - which, to continue your analogy, is like letting random people walk into the factory and adjust the machine parameters at will.
But people won't care until a major correction happens. My guess is that we'll see a string of AI-enabled script kiddies piecing together massive hacks that leak embarrassing or incriminating information (think Celebgate-scale incidents). The attack surface is just so massive - there's never been a better time to be a hacker.
Yeah, a relative has worked in this area. It's eye-opening just how challenging it can be to test "does this component conform to its spec".
It depends entirely on what you’re building. The OP mentioned “humans fishing out faulty items” that would otherwise be built by artisans, so clearly we’re not talking complex items requiring extensive tests, but stuff you can quickly find and sort visually.
Either way, the point would stand. You wouldn’t have that factory issue then say “alright boys, dismantle everything, we need to get an artisan to rebuild every single item by hand”.
A factory produces physical products and “AI” produces intellectual products. One is a little fuzzier than the other.
"Think of humans in factories fishing out faulty items, where formerly they would perhaps be the artisans that made the product in the first place."
But according to this Indian service provider's website, the workers (Indians?) are hired to "clean up" not "fish out" the "faulty items"
Imagine a factory where the majority of items produced are faulty and are easily "fished out". But instead of discarding them,^1 workers have to fix each one
1. The energy costs of production are substantial
Yes, when AI's whole schtick was that it was supposed to be the greatest and smartest revolution in the last few centuries.
Conclusion: we are not in the age of AI.
Dunno. Mass production was clearly a many-orders-of-magnitude improvement on the artisan model, yet still humans are needed.
We still call it the "industrial revolution".
Fair.
My jury is still out as to whether the current models are proto-AI. Obviously an incredible innovation. I'm just not certain they have the potential to go the whole way.
/layman disclaimer
As you say, whether we call it "AI", or "doohickey", it is an incredible innovation. And I don't think that anyone is claiming at the moment that the systems as-is will themselves "go the whole way" - it is a technological advancement, that like all others should inspire practitioners to develop better future systems, that adapt some aspects of it.
Perhaps at some point we will see a self-propelling technological singularity with the AI developing its own successor autonomously, but that's clearly not the current situation.
That will never happen. We may approach that state asymptotically but since AI output is stochastic, and humans' goals change over time, humans will always be part of the loop.
Whatever the formula for the probability of recursive self-improvement of AI may be, I am unfortunately certain that the fickleness of human goals does not factor into it.
Doohickey is so much more relatable ... I may call LLM's that from now on. Thank you.
> And I don't think that anyone is claiming at the moment that the systems as-is will themselves "go the whole way"
Dunno but I see plenty of people making exactly this claim every day, even on this site
I'm a booster, but LLMs are 100% not going to give us true autonomous intelligence, they're incredibly powerful but all the intelligence they display is "hacked," generalization is limited. That being said, people are making a huge mistake with the idea that just because we're not gonna hit AGI in the next few years, that these tools aren't powerful enough to irreversibly transform the the world. They absolutely are, and there's no going back.
> That being said, people are making a huge mistake with the idea that just because we're not gonna hit AGI in the next few years
Because that's what we've been promised, not once but many times by many different companies.
So sure, there's a marginal improvement like refactoring tools that do a lot of otherwise manual labor.
And humans are hired to clean up humans' slop all the time. Especially in software development.
It's not so strange that e-commerce is the first thing that AI has visibly altered. Most "buy this thing" sites really just have one proposition at their core. The presentation is incidental. You can't have a website looking like it's still 2003, but you also don't really care what your 2025 shop front looks like. Your ads are there to draw attention, not to be works of art.
What does AI do, at its heart? It is literally trained to make things that can pass for what's ordinary. What's the best way to do that, normally? Make a bland thing that is straight down the middle of the road. Boring music, boring pictures, boring writing.
Now there are still some issues with common sense, due to the models lacking certain qualities that I'm sure experts are working on. Things like people with 8 fingers, lack of a model of physics, and so on. But we're already at a place where you could easily not spot a fake, especially while not paying attention.
So where does that leave us? AI is great at producing scaffolding. Lorem Ipsum, but for everything.
Humans come in to add a bit of agency. You have to take some risk when you're producing something, decisions have to be made. Taste, one might call it. And someone needs to be responsible for the decisions. Part of that is cleaning up obvious errors, but also part of it is customizing the skeleton so that it does what you want.
Brad Pitt as Rusty: "Don't use seven words when four will do. Don't shift your weight, look always at your mark but don't stare, be specific but not memorable, be funny but don't make him laugh. He's got to like you then forget you the moment you've left his side. And for God's sake, whatever you do, don't, under any circumstances..."
(from Ocean's Eleven)
> And for God's sake, whatever you do, don't, under any circumstances...
I’ve only seen the movie when it came out, so I didn’t remember this scene and thought you might’ve been doing the ellipsis yourself. So I checked it out. For anyone else curious, the character was interrupted.
https://www.imdb.com/title/tt0240772/characters/nm0000093?it...
Indeed. Here's a clip for anyone else out of the loop: https://www.youtube.com/watch?v=VTKgyZZP5KQ
Shopping and ads are great domains for AI: you have clear feedback signals for whether the interaction was successful.
(Of course, you have to actually be using these signals, and not just cargo culting throwing LLM outputs everywhere.)
Like being a middle manager for employees that don't learn :)
I've never heard of a middle manager that fixes errors in other employees' output. Sounds more like a senior developer or creative (which, in fairness, is often a type of low-level manager).
Yeah, maybe the wrong word, it was meant to describe "someone with no power but all the responsibilities" (BTDT).
Before AI even attains the promised AGI level, we all will be driven mad by 'slopocalypse' like in one of the PKD stories:
https://en.wikipedia.org/wiki/Sales_Pitch_(short_story)
I've not read that story in decades - thanks.
I got shudders just re-reading it when I came across:
'“It’s too late to vid your wife,” the fasrad said. “There are three emergency-rockets in the stern; if you want, I’ll fire them off in the hope of attracting a passing military transport.”'
...which sounds exactly like ChatGPT5.
The greatest irony is that the only comment on that article is AI generated
We have not yet entered the AI age, though I believe we will.
LLMs are not AI. Machine learning is more useful. Perhaps they will evolve or perhaps they will prove a dead end.
> LLMs are not AI. Machine learning is more useful.
LLMs are a particular application of machine learning, and as such LLMs both benefit by and contribute to general machine learning techniques.
I agree that LLMs are not the AI we all imagine, but the fact that they broke a huge milestone is a big deal - natural language used to be one of the metrics of AGI!
I believe it is only a matter of time until we get to multi-sensory, self-modifying large models which can both understand and learn from all five human senses, and maybe even some senses we have no access to.
> natural language used to be one of the metrics of AGI
what if we have chosen a wrong metric there?
I don't think we have. Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.
> Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.
But they do close a big gap - they're capable of "understanding" fuzzy ill-defined sentences and "infer" the context, insofar as they can help formalize it into a format parsable by another system.
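A minimal sketch of what that "formalize fuzzy input for another system" step can look like; `call_llm` is a hypothetical stand-in for whatever model API you actually use, and the JSON schema is invented for illustration.

```python
# Sketch: use an LLM to turn a fuzzy request into JSON that a downstream,
# strictly-typed system can parse. `call_llm` is a hypothetical wrapper
# around whatever chat/completions API you actually use.
import json

SCHEMA_HINT = (
    'Return only JSON: {"intent": "book_table" | "cancel", '
    '"party_size": int, "time": "HH:MM"}'
)

def formalize(fuzzy_text: str, call_llm) -> dict:
    raw = call_llm(f"{SCHEMA_HINT}\nUser said: {fuzzy_text}")
    data = json.loads(raw)                      # hard failure if the model drifts from JSON
    assert data["intent"] in {"book_table", "cancel"}
    assert isinstance(data["party_size"], int)  # downstream system only accepts well-typed fields
    return data

# e.g. formalize("can we squeeze in maybe 5-ish of us tonight around 7?", call_llm)
# might yield {"intent": "book_table", "party_size": 5, "time": "19:00"}
```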
The technique itself is good. And paired with a good amount of data and loads of training time, it's quite capable of extending prompts in a plausible way.
But that's it. Nothing here has justified the huge amounts of money that are still being invested. It's nowhere near as useful as mainframe computing or as attractive as mobile phones.
LLMs have shown no signs of understanding.
We keep moving the goalposts...
The goal remains the same - AGI is what we see in sci-fi movies: an infallible human-like intelligence that has access to infinite knowledge, can navigate it without fail, and is capable of performing any digital action a human can.
What changed is how we measure progress. This is common in the tech world - sometimes your KPIs become their own goal, and you must design new KPIs.
Obviously NLP was not a good enough predictor of progress towards AGI and we must find a better metric.
Maybe it is linear enough to figure out where the goalposts will be 10, 20, 50 years from now.
We have been doing this for decades. I was hired to correct and train speech recognition and OCR programs like 20 years ago. A friend of mine corrected geolocated tags.
In the history of AI systems you basically had people inputting Prolog rules into "smart" systems, or programmers hardcoding rules in programs like ELIZA or the General Problem Solver.
> creating this garbage consumes staggering amounts of water and electricity, contributing to emissions that harm the planet
This is highly dependent on which model is being used and what hardware it's running on. In particular, some older article claimed that the energy used to generate an image was equivalent to charging a mobile phone, but the actual energy required for a single image generation (SDXL, 25 steps) is about 35 seconds of running a 80W GPU.
Nobody's running SDXL on an 80W GPU when they're talking about generating images, and you also have to take into account training and developing SDXL or the relevant model. AI companies are spending a lot of resources on training, trying various experiments, and lately they've become a lot more secretive when it comes to reporting climate impact or even any details about their models (how big is ChatGPT's image generation model compared to SDXL? how many image models do they even have?)
IIRC some of the research you're taking estimates from not only used cherry-picked figures for AI image generators, but also massively underestimated the man-hour costs of human artists, by using commission prices and market labor rates without the requisite corroborating work before choosing those values.
Their napkin math went like, human artists take $50 or so per art, which is let's say $200/hr skill, which means each art cannot take longer than 10 minutes, therefore the values used for AI must add up to less than 10 workstation minutes, or something like that.
And that math is equally broken for both sides: SDXL users easily spend hours rolling dice a hundred times without usable images, and likewise, artists just easily spend a day or two for an interesting request that may or may not come with free chocolates.
So those estimates are not only biased, but basically entirely useless.
I did a little investigation. Turns out that GPT-4's training consumed as much energy as 300 cars over their lifetime, which comes to about 50 GWh. Not really that much; it could just be the families on a short street burning that kind of energy. As for inference, an hour of GPT-4 usage consumes less than an hour of watching Netflix.
If you compare datacenter energy usage to the rest, it amounts to 5%. Making great economies on LLMs won't save the planet.
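A back-of-envelope check of that "300 cars over their lifetime ≈ 50 GWh" figure; the fuel energy content, fuel economy, and lifetime mileage below are my own assumptions, not numbers from the comment.

```python
# Back-of-envelope: is ~50 GWh plausible for "300 cars over their lifetime"?
# All inputs are assumptions for illustration.
KWH_PER_GALLON_GASOLINE = 33.7   # approximate energy content of a gallon of gasoline
MPG = 30                         # assumed average fuel economy
LIFETIME_MILES = 150_000         # assumed lifetime mileage per car
CARS = 300

kwh_per_car = LIFETIME_MILES / MPG * KWH_PER_GALLON_GASOLINE   # ~168,500 kWh
total_gwh = kwh_per_car * CARS / 1e6                           # kWh -> GWh
print(f"{kwh_per_car:,.0f} kWh per car, {total_gwh:.1f} GWh for {CARS} cars")
# -> roughly 168,500 kWh per car and ~50 GWh total, consistent with the figure above.
```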
> As for inference, GPT-4 usage for an hour consumes less than watching Netflix for an hour.
This can't be correct, I'd like to see how this was measured.
Running a GPU at full throttle for one hour uses less power than serving data for one hour?
I'm very sceptical.
An hour of Netflix streaming consumes approximately 77 Wh according to IEA analysis showing streaming a Netflix video in 2019 typically consumed around 0.077 kWh of electricity per hour [1], while an hour of active GPT-4 chatting (assuming 20 queries at 0.3 Wh each) consumes roughly 6 Wh based on Epoch AI's estimate that a single query to GPT-4o consumes approximately 0.3 watt-hours per query [2]. That makes Netflix about 13 times more energy-intensive than LLM usage.
[1] https://www.iea.org/commentaries/the-carbon-footprint-of-str...
[2] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
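For reference, the arithmetic behind the "13 times" figure, reproduced from the numbers cited above (a sketch of that comparison as stated, not a judgment on whether it is apples to apples):

```python
# Reproducing the comparison above from its own cited numbers.
netflix_wh_per_hour = 77        # IEA 2019 estimate, ~0.077 kWh per streaming hour
queries_per_hour = 20           # assumed chat pace from the comment above
wh_per_query = 0.3              # Epoch AI estimate for a GPT-4o query
chat_wh_per_hour = queries_per_hour * wh_per_query   # 6 Wh
print(netflix_wh_per_hour / chat_wh_per_hour)        # ~12.8, i.e. "about 13 times"
```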
Jesus Christ, what a poor take on those numbers! It's possible to have a more wrong interpretation, but not by much.
The Netflix consumption takes into account everything[1], the numbers for AI are only the GPU power consumption, not including the user's phone/laptop.
IOW, you are comparing the power cost of using a datacenter + global network + 55" TV to the cost of a single 1shot query (i.e. a tiny prompt) on the GPU only
Once again, I am going to say that the power cost of serving up a stored chunk of data is going to be less than the power cost of first running a GPU and then serving up that chunk.
==================
[1] Which (in addition to the consumption by Netflix data centers) includes the network equipment in between and the computer/TV on the user's end. Consider that the user is watching Netflix on a TV (min 100 W, more for a 60" large screen).
If you look at their figure (0.0377 kWh) for a phone using 4G, the device's own power consumption is minimal and the total is mostly network usage.
Data center + network usage will be the main cost factor for streaming. For an LLM you are not sending or receiving nearly as much data, so while I don't know the numbers, it should be nominal.
> while an hour of active GPT-4 chatting (assuming 20 queries at 0.3 Wh each)
We're not talking about a human occasionally chatting with ChatGPT, that's not who the article and earlier comments are about.
People creating this sort of AI slop are running agents that provide huge contexts and apply multiple layers of brute-force, like "reasoning" and dozens/hundreds of iterations until the desired output is achieved. They end up using hundreds (or even thousands) of dollars worth of inference per month on their $200 plans, currently sponsored by the AI bubble.
35 seconds @ 80W is ~210 mAh, so definitely a lot less than the ~4000+ mAh in today's phone batteries.
I'm going to expose my ignorance here, but I thought mAh/Ah was not a good measure for comparing the storage of quite different devices, because it doesn't take voltage into account. It's fine for comparing Li-ion devices, because they use the same voltage, but I understood that watt-hours were therefore more appropriate for apples-to-apples comparisons between devices running at different voltages.
Am I missing something? Does the CPU/GPU/APU doing this calculation on servers/PCs run at the same voltage as mobile devices?
No, you are completely right. mAh is a unit of charge (current times time), not energy.
The proper unit is watt-hours.
Doesn't this ignore the energy used to train the models? I don't know how much that is "per image", but it should be included (and if it shouldn't be, we should know why it's negligible).
I'm not sure the total will be as high as a full phone charge, but the estimate is incomplete without the resources needed for collecting data and training the model.
Charge (ampere-hours, current integrated over time) and energy (watt-hours, power integrated over time) are two different things.
You should be using kWh.
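A quick illustration of why mAh alone is ambiguous: the same amount of energy maps to different mAh figures depending on which voltage you divide by (both voltages below are assumed for illustration):

    # Sketch: one SDXL generation (~0.78 Wh) expressed in mAh at two different voltages
    energy_wh  = 80 * 35 / 3600                       # ~0.78 Wh, figures from upthread
    mah_li_ion = energy_wh / 3.7  * 1000              # ~210 mAh at a 3.7 V Li-ion cell
    mah_12v    = energy_wh / 12.0 * 1000              # ~65 mAh at a 12 V supply rail
    print(mah_li_ion, mah_12v)                        # same energy, very different mAh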
> the actual energy required for a single image generation (SDXL, 25 steps) is about 35 seconds of running a 80W GPU.
And just how many people manage to 1shot the image?
There are maybe 5 to 20 images generated before the user is happy.
I mean, 80 W is not a ton even if you're generating images constantly. How many people leave the lights on at home?
> I mean, 80 W is not a ton even if you're generating images constantly.
Compared to what?
> How many people leave the lights on at home?
What does that have to do with this?
An ideal measurement would be to convert your utility's water usage per kWh into something like grams of water per token. Of course the number would be small, but it should be calculable and then directly comparable across models. I suspect that, thanks to DC power distribution, they may be more efficient in the data center. You could also get more specific about the water: recycled vs. evaporated, etc.
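A sketch of what that calculation might look like; every number here is an assumed placeholder (a generic water-usage-effectiveness figure, the per-query energy estimate cited upthread, and a guessed token count), not a measurement:

    # Sketch: grams of water per token from assumed WUE and per-query energy
    wue_l_per_kwh     = 1.8                                   # assumed litres of water per kWh
    wh_per_query      = 0.3                                   # assumed energy per query
    tokens_per_query  = 500                                   # assumed tokens per query
    water_g_per_query = wue_l_per_kwh * (wh_per_query / 1000) * 1000   # kWh -> litres -> grams
    water_g_per_token = water_g_per_query / tokens_per_query
    print(water_g_per_query, water_g_per_token)               # ~0.54 g/query, ~0.001 g/token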
Greatest irony of the industrial age: humans hired to operate machines.
From the case the article presents, I think LLMs are acting as a validation step: the average person who doesn't know how to code creates a minimal, LLM-spaghetti system (e.g. a restaurant menu website) with ChatGPT, validates whether this is something they need, iterates on it, creates a specification, and then brings in an actual (costly) engineer who can fix and improve the system.
There are a lot of LLM byproducts that leave a bad taste (I hate all of the LLM slop on the internet), but I don't think this is one of them.
Someone tried to generate a retro hip-hop album cover image with AI, but the text is all nonsense, and humans would have to be hired to clean up that AI slop.
In about two years we've gone from "AI just generates rubbish where the text should be" to "AI spells things pretty wrong." This is largely down to generating the whole image, text element included, in one pass. Using a model like SDXL with a LoRA like FOOOCUS to do inpainting, with an input image containing a very rough approximation of the right text (added via MS Paint), you can get a pretty much perfect result. Give it another couple of years and the text generation will be spot on.
So yes, right now we need a human to either use the AI well, or to fix it afterwards. That's how technology always goes - something is invented, it's not perfect, humans need to fix the outputs, but eventually the human input diminishes to nothing.
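For the curious, a rough sketch of what an SDXL inpainting pass over a hand-pasted text region might look like with the diffusers library. The checkpoint name, file names, prompt and parameter values are illustrative assumptions, not something from the comment above:

    # Sketch: repaint a rough hand-pasted text region with an SDXL inpainting pipeline
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",   # assumed inpainting checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    init_image = load_image("cover_rough_text.png")   # rough lettering pasted in via MS Paint
    mask_image = load_image("text_region_mask.png")   # white over the area to repaint

    result = pipe(
        prompt="retro hip-hop album cover, bold clean hand-painted typography",
        image=init_image,
        mask_image=mask_image,
        strength=0.6,              # keep low-ish so the rough lettering guides the output
        num_inference_steps=25,
        guidance_scale=7.0,
    ).images[0]
    result.save("cover_fixed_text.png")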
> That's how technology always goes
This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.
This new approach of machine-learning content generation will either keep developing, or it will join everything else in the history of AI by hitting the point where returns diminish to zero.
But their comment is about 2 years out of date, and AI image gen has got exponentially better at text than when the models and LoRAs they mentioned were SOTA.
I agree we probably won't magically scale current techniques to AGI, but I also think the local maximum for creative output is going to be high enough that it changes how we approach it, the way computers changed how we approach knowledge work.
That's why I focus on it at least.
> This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.
You're talking about the progress of technology. I'm talking about how humans use technology in its early stages. They're not mutually exclusive.
Minor correction: Fooocus [1] isn't a LoRA - it's a Gradio-based frontend (in the same vein as Automatic1111, Forge, etc.) for image generation.
And most SOTA models (Imagen, Qwen 20B, etc.) at this point can already handle a fair amount of text in a single T2I generation. Flux Dev can do it as well, provided you're willing to roll a couple of gens.
[1] https://github.com/lllyasviel/Fooocus
How is this ironic? Carelessly AI-generated output (what we call "slop") is precisely that mediocre average you get before investing more in refining it through iteration. The problem isn't that additional work is needed, but that in many cases it is assumed that no additional work is needed and the first generation from a vague prompt is good enough.
The irony stems from the fact that workers are fired after being 'replaced' by AI, only to be re-hired afterwards to clean up the slop, thus maximizing costs to the business!
The relative cost of labour will differ: one was priced as subject-matter-expert work, the other will aim for mechanical-turk rates.
When the big lawsuits hit, they'll roll back.
It'll be a large cost reduction over time. The median software developer in the US was at around $112,000 in salary plus benefits on top of that (healthcare, stock compensation), prior to the job plunge. Call it a minimum of $130,000 just at the median.
They'll hire those people back at half their total compensation, with no stock, far fewer benefits, to clean up AI slop. And or just contract it overseas at ~1/3 the former total cost.
Another ten years from now the AI systems will have improved drastically, reducing the slop factor. There's no scenario where it goes back to how it was, that era is over. And the cost will decline substantially versus the peak for US developers.
Cleaning up code requires more skill than creating it (see Kernighan's quote).
Why does that fact stop being true when the code is created by AI?
Based on... what? The more you try to "reduce costs" by letting LLMs take the reins, the more slop will eventually have to be cleaned up by senior developers. The mess will get exponentially bigger and harder to resolve.
Because I think it won't just be a linear relationship. If you let 1 vibe coder replace a team of 10, you'll need a lot more than 10 people to clean it up and maintain it going forward when they hit the wall.
Personally I'm looking forward to the news stories about major companies collapsing under the weight of their LLM-induced tech debt.
Just pour 500 more billion per year into OpenAI and they'll fix it.
... about when Tesla delivers full self driving.
Full self-driving is 1000x more likely than AGI. There are all these maps, lines on the ground and signs; the roads don't wiggle like snakes or appear and disappear; and the other cars and obstacles don't suddenly change in number, shape and physics.
I like the idea that mediocre sci-fi show Upload came up with: maybe they can get self-driving to the point where it doesn't require a human in the loop and a squirrel will work.
> Full self-driving is 1000x more likely than AGI.
GP specifically said Tesla delivering FSD.
> the roads don't wiggle like snakes, appear and disappear
You mean, full self driving only on pre mapped roads then?
I doubt even that will happen, but it's just a subset anyway.
The irony was created by AI companies purposely overpromising and inflating the bubble. They deserve such ridicule.
Someone can drop a sick blog post called LoremIpsum.ai
Man complains about AI giving him a starting point to do his work from.
ironic, like rain on your wedding day
Only if you're a meteorologist.
I for one am super excited for what my kids, and the other children they grow up with, will do in their future careers! I am so proud and cheer this future on, it can’t come soon enough! This is software’s true purpose.
The article has a strong focus on deceptive media, used on social platforms to capture viewers' attention. It makes me sad, and glad that my kids and family grew up before all this insanity of psychological abuse.
> This is software’s true purpose.
I assume you missed the whole part where 75% of people use LLMs for porn, AI bf/gf roleplaying, spam, ads, scams, &c.
It's like YouTube: in theory, unlimited free educational content to raise the bar worldwide; in practice, 3/4 of the videos are braindead garbage, and probably 99% of views are concentrated on those.
> employment for hundreds of thousands of humans: cleaning up the mess AI makes
Stopped reading there. The author is very biased and out of touch.
Irony, or a never-ending capitalist de-escalator of human value.
The first thing that came to mind when I started seeing news about companies needing developers to clean up AI code was the part of Charlie and the Chocolate Factory where Charlie's father is fired from the toothpaste factory because they bought a new machine to produce the toothpaste, but then they re-hire him at a higher salary because the machine keeps breaking and they need someone to fix it.
AI (at least this form of AI) is not going to take our jobs away and leave us all idle and poor, just like the milling machine or the plough didn't take people's jobs away and make everyone poor. It will enable us to do even greater things.
Well, sometimes innovation does destroy jobs, and people have to adapt to new ones.
The plough didn't make everyone poor, but people working in agriculture these days are a tiny percentage of the population compared to the majority 150 years ago.
(I don't think LLMs are like that, tho).
Touching on this topic, I cannot recommend enough "The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger" [0], which (among other things) tells the story of dockworkers: there were entire towns dedicated to loading ships.
But the people employed in that area have declined by 90% in the last 60 years, while shipping has grown by orders of magnitude. New port cities arose, and old ones died. One needs to accept inevitable change sometimes.
[0] https://en.wikipedia.org/wiki/The_Box_(Levinson_book)
By the same logic, people working in transportation around the time Ford Model T was introduced did NOT diminish 100 years later. We went from about 3.2 million in 1910 (~8% of the workforce) to 6–16 million in 2023 (~4–10%, depending on definition). That is the effect of a century of transportation development.
Sometimes demand scales, maybe food is less elastic. Programming has been automating itself with each new language and library for 70 years and here we are, so many software devs. Demand scaled up as a result of automation.
> it will enable us to do even greater things.
Just as gunpowder enabled greater things. I agree with you; it's just that humans have shown, time after time, an ability to first use innovation to make life miserable for their fellow humans.
The Industrial Revolution caused a massive shift from fairly self-sustaining agrarian communities to horrible poverty in urban factory and mining towns; just look up the number of children who died at work in 19th-century England. It did not make everyone poorer - it made plenty of people a helluva lot richer - but it did increase the number of people in poverty and the depth of their destitution. The mega-rich capitalist class created by industrialisation replaced the old aristocrats, able to buy governments and leaders to do their bidding, smashing with police violence the workers' rights and unions created to defend workers from the effects of industrial capitalism; William Randolph Hearst and the like were able to essentially control public opinion since they owned the newspapers... Sound familiar? We are not entering an era described in utopian sci-fi, but just returning to the good old 19th century.
Except I still doubt whether AI is the new Spinning Jenny. Because the quality is so bad, and because it can't replace humans in most things or necessarily even speed up production in a significant way, we might just be facing another IT bubble and financial meltdown, since the US seems to have put all of its eggs in one basket.
The number of luddites angrily replying to this comment and downvoting me for taking a positive view of a great new revolution in human knowledge and productivity is very funny, coming from a website supposedly dedicated to innovation :)
In my country we also have a class (they have achieved the status of a social class, IMO) that expects to keep their jobs and gain ever-increasing privileges while never having to upgrade their skills or learn anything new in their entire careers: our public-sector workers.
I doubt anyone is hiring AI ghostbusters to clean up AI slop. A couple of people added that to their LinkedIn (ewww) to ride the anti-slop hype.
In reality, most devs can clean up after themselves.
> it will enable us to do even greater things.
It doesn't do this.