I've felt similar to the author, a sort of despair that the only point of writing software now is to prop up the valuation of AI companies, that quality no longer matters, etc.
Then I realized that nothing's stopping me from writing software the way I want and feel is best. I've stopped using LLMs completely and couldn't be happier. I'm not even struggling at work or feeling like I'm behind. I work on a number of personal projects too, all without LLMs, and I couldn't feel better.
MIT isn’t “weak” because it allows LLM training; it’s weak because it puts zero obligations on the recipient.
Blocking “LLM training” in a license feels satisfying, but I’ve run into three practical issues while benchmarking models for clients:
1. Auditability — You can grep for GPL strings; you can’t grep a trillion-token corpus to prove your repo wasn’t in it. Enforcement ends up resting on whistle-blowers, not license text.
2. Community hard-forks — “No-AI” clauses split the ecosystem. Half the modern Python stack depends on MIT/BSD; if even 5% flips to an LLM-ban variant, reproducible builds become a nightmare.
3. Misaligned incentives — Training is no longer the expensive part. At today’s prices a single 70B checkpoint costs about $60k to fine-tune, but running inference at scale can exceed that each day. A license that focuses on training ignores the bigger capture point.
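To make point 1 concrete, here's a rough sketch of the asymmetry (my own illustration in Python; the marker strings and file layout are assumptions, not a real audit tool): scanning a source checkout for GPL license text is a few lines, but there is no analogous scan you can run against a closed trillion-token training corpus.

```python
# Illustrative only: scan a repo checkout for GPL license markers.
# The equivalent check against a proprietary training corpus is
# impossible from the outside -- that's the enforcement gap.
import pathlib

GPL_MARKERS = ("GNU General Public License", "GPL-3.0", "GPL-2.0")

def find_gpl_files(repo_root: str) -> list[str]:
    """Return paths under repo_root whose text mentions a GPL marker."""
    hits = []
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if any(marker in text for marker in GPL_MARKERS):
            hits.append(str(path))
    return hits
```

The point isn't the scan itself; it's that nothing like it exists for the other side of the equation, so enforcement falls back on whistle-blowers.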
A model company that actually wants to give back can do so via attribution, upstream fixes, and funding small maintainers (things AGPL/SSPL rarely compel). Until we can fingerprint data provenance, social pressure—or carrot contracts like RAIL terms—may move the needle more than another GPL fork.
Happy to be proven wrong; I’d love to see a case where a “no-LLM” clause was enforced and led to meaningful contributions rather than a silent ignore.
This is also a good opportunity to remember that MIT is not a strong enough open source license, and if you want to prevent corporations making money off your work, make it AGPL or even SSPL, plus a statement that AI training creates a derivative work (the latter may or may not have any legal effect).
MIT is a donation of your labour to corporations. With a stronger license, at least they're more likely to contribute back or to pay you for a looser license.
As far as I was able to tell, every single coding LLM out there still violates the terms of the MIT license, because the license requires attribution - and LLMs rarely (if ever?) provide any.
I've not used AI to program and have very little interest in using AI to program, but I fail to see how laundering code through massive probabilistic lossy compression (silicon) should be treated any differently than laundering code through massive probabilistic lossy compression (biological). Should humans have to keep track of which software codebases they learn each pattern from, too?
The point is that people who think they want permissive licenses usually don't, and eventually regret choosing them when a corporation treats their work as donated labour (because it is), assuming their software is important enough to be picked up by them (if not then license choice doesn't matter anyway).
I always found this stance puzzling. If the point of open source is to give your code to the public, why do people get upset when corporations do exactly what you told them they could do?
If you didn't want to give it to everyone, you shouldn't have chosen that license.
And if you choose a non-commercial license, people get upset that it's "not technically open source because the OSI says so" as if they are somehow the arbiter of this (or even should be). It's not like anyone owns the trademark to the term "open source" for software either.
Ironically, I've seen a lot of people in the last several years quit open source entirely and/or switch to closed source.
Yes I understand... but they already knew that the license explicitly allows this, and they already knew companies regularly take advantage of FOSS without giving back, so I'm not sure why they were expecting to get lucky or something.
To me this is just like getting upset when someone forks your open source project. Which ironically I've seen happen a LOT. Sometimes the original developer/team even quits when that happens.
It's like... they don't actually want it to be open source, they want to be the ONLY source.
Because they don't think about it deeply - that's why reminders are necessary. They think they're only donating to people with similar attitudes to themselves. xGPL licenses (SSPL included) are the license family most similar to that...
... but MIT is what corporations told them they want. There has been a low-level but persistent campaign against xGPL in the past several years and the complaints always trace back to "the corporation I work for doesn't like xGPL." No individual free software developer has a problem with xGPL (SSPL not included).
> No individual free software developer has a problem with xGPL
I do... I consider it the opposite of freedom. I think it places severe restrictions on your project that make it hard or impossible for some users (like companies) to adopt, and if your project contains lots of code from other people, it becomes really hard or impossible to re-license if one day you decide you like/need money (assuming you have no CLA; I don't like those either).
But I also realize there's different kinds of freedom... freedom TO vs freedom FROM.
Some want the freedom TO do whatever they want... and others want freedom FROM the crazy people doing whatever they want.
I wish there was a happy medium but centrism doesn't seem to be very popular these days.
Tangentially, I wonder if logins and click-throughs can help address this on the legal front.
Suppose you set up a login flow with a click-through that explicitly sets the terms of access, specifying no cost for access by a person and some large cost for access by an AI.
Stepping past this prompt into the content would require an AI to either lie, committing both fraud and unauthorized access of content, or behave truthfully, opting the AI's operator into the associated costs.
In either case, the site operator can then go after the company doing the scraping to collect the fees as specified in the copyright contract (and perhaps some additional delta of punitive fines if content was accessed fraudulently).
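A minimal sketch of the mechanism being proposed (the names and fee figures are my illustrative assumptions, not legal advice or a real pricing scheme): the click-through binds the client to a fee schedule, and the server-side check just maps the accepted terms to a fee, refusing access when no terms were accepted.

```python
# Hypothetical fee schedule set by the click-through contract.
TERMS = {
    "human": 0.00,            # no cost for access by a person
    "ai-training": 10_000.00, # large per-access fee for AI crawlers
}

def access_fee(accepted_terms: str) -> float:
    """Return the fee owed under the click-through contract.

    A crawler that selects 'human' while actually collecting training
    data would be misrepresenting itself -- that's the fraud angle
    described above; selecting 'ai-training' opts its operator into
    the fee.
    """
    if accepted_terms not in TERMS:
        raise PermissionError("no terms accepted: access unauthorized")
    return TERMS[accepted_terms]
```

The code is trivial on purpose; the open question is whether the contract formed by the click-through would hold up in court, not whether the gate can be built.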
Given Meta's history of torrenting every book it could get its hands on for training, I'm not convinced that the majority of AI companies would respect that license. Maybe if we also had a better way to prove that such code was part of the training set and see a couple of solid legal victories with compensation awarded.
I'm pretty astounded that "The Stack" at least made an effort, and continues to do so, by weeding out GPL and similar strong-copyleft source code from their trove; they even implemented an opt-out mechanism [0].
They look like saints when compared to today's companies.
They're also getting sued for it, and the judge ruled they had no right to torrent those books so now it's just a matter of calculating how many trillions Meta has to pay, then extracting it from them.
Because Meta got caught. I'm not convinced that every random OSS lib will have the resources to audit every model out there for a hypothetical GPL+no training violation.
*Unless you're a member of the capital class, in terms of being a corporation or a wealthy individual, who can then make our two-tiered justice system work for you. As Disney is seemingly looking to do. Then it will absolutely work for you.
This is why I and people like me so often say "there is no war but the class war." Arguing about copyright misses the entire point: The law serves the large stakeholders in the system, not the people. The only thing that's changed is there is now a large stakeholder of whom a core pillar of their ongoing business is the theft of data at industrial scale which happens to include data of other large stakeholders which is why we're now seeing the slap fight.
By all means enjoy it, it's very entertaining watching these people twist themselves into knots to explain why it's okay for Nintendo to sue people into the ground for distributing copies of games they no longer sell in any capacity but simultaneously it's okay for OpenAI to steal absolutely goddamned everything on the grounds that nothing has been "really" taken due to being infinitely replicable, or because it's a public research org, or whatever flimsy excuse is being employed at the time.
As it has been from the beginning, my position is: whatever the rule we decide on, it should apply to everyone. A very simple statement on very basic ethics that seems to make a lot of people very angry for some reason.
> MIT is a donation of your labour to corporations.
Unless you are willing to spend yourself into financial ruin pursuing legal action against some faceless megacorp - it literally doesn't matter what license you use.
I've lived enough to know there is "what should be" and then there is what actually happens in reality. We don't live in a reality where everyone just does things out of the goodness of their heart...
Adding some text to your project, hosted on a public website for all to see means some people will take your code regardless of the license or your intent - and, realistically, what are you going to do about it? Nothing...
So... please, let's get off this GPL high-horse. It's not some end-all-be-all holy text that solves all of the world's problems.
Perhaps many would have refused to donate if they knew that the project would be archived in a year. Collecting for audit and then archiving the project is, in a way, a violation of expectations.
Would they have refused to donate if they knew the author would be hit by a bus in a year? Or hired by someone who refused to allow them to continue working on it?
I don't think the author had explicit plans to do this a year ago.
Depends on your perspective... If I'd known the project was going to stop soon after I donated, I probably wouldn't donate, even if the purpose of the money was strictly for an audit.
(I'm not trying to throw shade at the author. I know they have no obligation to maintain an open source project. I'm simply having a hard time grasping what's happening.)
Seems like the author is abandoning software because, in his opinion, due to the AI explosion employers don't care about code quality anymore, only quantity.
I don't get it either, because that has always been the case, thus most of his post is borderline nonsense.
IMO, what happened is that he took the opportunity of entering academia to vent some stuff and quit maintaining his project.
> I don't get it either, because that has always been the case, thus most of his post is borderline nonsense.
Yes, making software development cheaper has been the main priority of the industry for a long time. The new development is that there's now a magic "do what I want" button that obviously won't quite do what you want but it's so cheap (for corporations, not humanity...) that you might as well pretend it does. (Compared to paying professionals who might even care about doing a good job, that is.)
I've been a web application developer for nearly 30 years now. I care about the craft and discipline immensely. Then you pull up something like the Jack-In-The-Box menu site and fully realize that managers/executives don't give a damn if the stuff works well... 48MB of built JS?!? My daughter expressed how badly the site was working on her phone, and I got curious.
What's funny is that some will say "use the app" instead for things like this... why should I trust someone to build a safe/secure app who cannot build a reasonably functional website?
To be fair, the app may be developed by a completely different team.
If your argument is still that you don't want to trust a company that can't make both functional, well... maybe you shouldn't be going to Jack-In-The-Box in the first place.
My point is that I'm not going to give app-level access to my phone to a company that doesn't care enough to have a functional website. That said, I'm unlikely to install an app for anything on my device.
I don't actually install that many apps, and generally not retail apps anyway.
He doesn't have interest in the project anymore. He hasn't had it for a long time, and now that he's stopped doing software development and gone into academia, he certainly doesn't. Is that hard to get?
He explained the reasons he went into academia, which is because of AI; but AI is not the reason he stopped development.
- The author enjoyed writing quality open source code
- The author needs to make strategic decisions for his own career and livelihood and he doesn't have enough bandwidth for both
I don't feel he is happy about the decision he needs to make and he is pointing to something dark happening to software development and open source.
Now this is not new, and didn't start with LLM. I am sure if we ask the OpenBSD devs what they think about the modern mainstream open source ecosystem, docker, full stack development, etc. they see it like we might look at LLM generated code.
This is just a question of how much of a purist you are.
I was thinking exactly the same: I also don't get it (even though I totally get that someone may lose motivation to work on a project, and certainly has no obligation to continue, this justification sounds a bit weird to me).
Could this mean that he has been approached by some "men in black" asking to insert some backdoor in the code or to stop working on it, together with a gag order ? (actually I also wondered the same a long time ago with Truecrypt, even though to my knowledge no backdoor has ever been found in further audits...)
I'm starting to feel kinda old and out of the loop. Could someone please explain the conversational style of this post?
It begins with a prompt directed at Gemini, followed by what appears to be an AI-generated response. Are these actual AI responses or is the developer writing their parting message in a whimsical way? I'm genuinely confused. Help much appreciated!
This is a post framed in medias res, from the perspective of the developer, as portrayed by themselves, asking an LLM to construct the post that they post immediately afterwards.
I'm unsure if the post is actually created by Gemini or the developer's imitation. I suspect the latter.
In regards to "As long as you can run the code, archiving this project means nothing, really": I think this section misses the main concern with archived software. What happens when one of those bugs is run into (either something not yet noticed or something due to external changes down the road) and there is no actively maintained version (which could include one you're willing to hack at yourself) to just update to?
The simpler the software the less urgent the concern but "I haven't had a problem in the last 2 years" is something I could say of most software which I end up needing to update, and it makes sense to make myself able to do so at a convenient time rather than the moment the problem occurs.
This project seems popular enough that I'm sure waiting a bit and seeing which successor project emerges would be a safe bet as well, though.
I like the creativity behind this. And I feel sorry for them that the current wave of AI has led to them abandoning their pet project. Maybe they will pick up the project again once the dust has settled. In the end, at least for me, they are pet projects for exactly that reason: an escape route from the day-to-day business. One should still be proud of what they achieved in their spare time. I don’t care if my job requires me to use K8s, Claude or Docker. Or if that’s considered "industry standard".
I understand the author's sentiment but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work with them.
Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore it and go preach to a dying field (specifically, writing code, not CS), or embrace it. We should be happy we no longer need to type away if and for loops 20 times and instead can focus on high-level architecture.
it's not LLMs vs typing for loops by hand. It's LLMs vs snippets and traditional cheap, pattern based code generation, find and replace, and traditional refactoring tools
those are still faster and cheaper and more predictable than an LLM in many cases
Considering the author is explicitly going into AI research, has an AI-generated profile picture, and claims front-and-centre on their website they are excited about LLMs, I don’t think that analogy works. Or rather, it is like a knitter throwing away their needles to eagerly go work in the loom manufacturing industry.
I don't think many people would be excited at the thought of going from handcrafted artisan knitting to babying machines in a knitting factory. You need a certain type of autism to be into the latter.
Fortunately this is the software industry. We've got a lot of those autists, and that automating urge is the best part about software. If someone doesn't like the idea of sitting around babysitting factories of machines, they certainly shouldn't go into DevOps. It would be safest to just avoid programming in general, given how much of the industry centres on figuring out how to deploy huge amounts of computing power in an autonomous fashion.
I'd think it would be more autistic to continue to use and have interest in something that's been superseded by something far easier and more efficient.
Who would you think is weirder, the person still obsessed with horse & buggies, or the person obsessed with cars?
So basically, he’s leaving software development because the job market is bad. Instead, he’s joining AI research which (currently) has a more healthy job market. That seems pretty reasonable to me, given that even widely used open source projects are only barely financially viable. Many open source projects end when the author finally gets a girlfriend, this one ends for a new job. Seems like a good outcome to me. Plus truly fascinating presentation.
As a complete outsider looking at this, without additional context, I just have a hard time believing there aren’t more reasons, they’re just not willing to share them:
* I’m not passionate about it anymore
* I’m tired
* I want to repurpose my free time
* I’m not adding enough value compared to other options now available
In the end, it’s pointless to argue about why someone feels the way they do. If they are firm on their stance, don’t waste anybody’s time, no matter how irrational their argument is. Give up trying to be right.
Probably this guy should have just stopped engaging directly with some of the dialogue, but the fact that he is exploring the idea of trying to hand it off in some manner tells me he really does care about the project.
Where a precipitous drop in earning power, combined with longer working hours, high inflation, and large companies making people unemployed, caused large social unrest.
I mean, if you want that argument then sure; but given that those riots were one step on a long path to workers' rights, the lesson here is that if we avoid exploiting workers and/or throwing them out on their arses, we can sidestep a load of social upheaval.
Or we can not, and just end up having a bloodbath.
> It's not easy to fix in the code either because it'll require major changes to the GUI library which can get messy, especially since GUIs were never a strength of Go.
Immediate mode UI toolkits are designed for pluggable backends, some even can discover the appropriate backend at runtime. If you're writing a game, you're expected to (and actually, you must) build your own integration.
If you're using a portable library that needs to render graphics on mac, it's probably using OpenGL to do it unless it has a platform-specific backend.
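For readers unfamiliar with the "pluggable backends" idea mentioned above, here's a toy sketch (in Python for brevity; the names are my assumptions, not any real toolkit's API). Immediate-mode widgets re-issue draw calls every frame against whatever backend object they're handed, so swapping OpenGL for Metal, or for a software rasterizer, never touches widget code:

```python
# Toy immediate-mode UI: widgets are plain functions that draw
# through whatever backend they're given each frame.

class RecordingBackend:
    """Stand-in renderer that records draw calls instead of drawing.
    A real integration would be an OpenGL/Metal/software backend
    satisfying the same draw_rect/draw_text contract."""
    def __init__(self):
        self.calls = []
    def draw_rect(self, x, y, w, h):
        self.calls.append(("rect", x, y, w, h))
    def draw_text(self, x, y, s):
        self.calls.append(("text", x, y, s))

def button(backend, x, y, label):
    """Immediate-mode widget: no retained state, just draw calls."""
    backend.draw_rect(x, y, 80, 24)
    backend.draw_text(x + 8, y + 6, label)

backend = RecordingBackend()  # selected (or discovered) at runtime
button(backend, 10, 10, "OK")
```

This is why "build your own integration" is the expected path for games: the toolkit only ever speaks to the backend contract, and the platform-specific part is yours to supply.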
I'm holding out hope that there will always be boutique/edge software to be written, which requires enough design and care to be mentally challenging and engaging - the craftsmanship kind.
When AlphaGo was announced, I had to keep telling people, "It's not that computers win at Go, it's that we now have a tool that makes us way better at Go". If an alien race showed up and challenged us to Go to save the species, would we use a human Go player or AlphaGo, if we had to choose?
The problem is LLMs aren't like that, because software isn't like Go. And, they really are annoying to use, frustrating to redirect all the time, and generally cannot do what you want precisely, without putting in more mental energy to provide context and decompose further than you'd need to do the damn thing yourself. And then you lose an hour/two of flow, which is the reward for the whole process.
But at certain times they are a godsend and they have completely replaced some of the more boring parts of my job. I wouldn't want to lose them, not at all.
Like the author, I don't think we're heading to a healthy balance where LLMs help us be better at our job. I do think the hype is going in the wrong direction, and I do worry for the state of our field (Or at least the _ideals_ of our field). Call me naive, but I also thought it mattered what the code was.
The difference I perceive is the split between being the one designing the software, which is what he likes to do, and letting a LLM design the software, without the developer actually understanding what's going on, which is what he dislikes.
(This short comment exchange between us is also a meta commentary on that. Because our comments are much shorter than all the AI summaries, and at the same time, we add nuance and clarification to the ideas exposed, something the AI summaries don't do.)
It was clearly AI generated. So the author is clearly OK with using AI to generate slop in an area they don’t work in, while simultaneously decrying its use in an area they do work in. If they believe so strongly that AI use is destroying their industry, they should reflect on its effect on other industries too (it is well-documented how artists are being negatively impacted).
I agree with the commenters above that it makes the critique fall flat. The author is saying “This thing is so frustrating and harmful it makes me want to stop working in a field because of it. Oh, by the way, I use this tool myself for other things, and will indeed pivot to contribute directly to them”.
Exactly. The self-contradiction exposes this farce of an arrangement for what it is: talented engineers are presented with immense monetary incentives to automate themselves out of the workforce, their carefully honed craftsmanship to be replaced by hordes of monkeys at typewriters producing voluminous slop.
Anybody can plainly see that the emperor is without clothes, but so long as the C-level rhetoric is sung to the tune of "Either you have to embrace the AI, or you get out of your career," [0] you may as well put on your own clown nose and wig and start dancing while the music is still playing and there are still seats left.
I didn't interpret it as decrying the use of AI. Especially because he plans to dedicate his time and energy into researching the very same AI he rants about, basically promoting its use!
Instead, I see it as a deeply personal rant about the state of affairs which he considers inevitable himself. That is why he leaves the ship.
Before AI slop, there has always been the agile slop of the bare(ly) minimum product, good enough to woo the ones making a buying decision, or at least until the career sharks have moved on to the next thing. That kind of slop has always been there, and everywhere, actually. It's called capitalism, or consumerism. The trick is to work for a place that isn't squeezed too hard, because it's still in the investment phase or because it earns money on its own merit.
AI will certainly transform things, just like higher level languages and frameworks have done so. Maybe programming without AI will be the 'micro optimization' of the future: something that is still there and valuable, but only sometimes and only in a certain niche. Slop is eternal, it just has a new face and a new name.
This blog to me is a nice personal rant about a smart young developer coming of age, trying to find his way and guard his ideals or standards against the onslaught of consumerism, just as ambitious young developers always have tried to do.
LLMs are glorified "LMGTFY" tools. AI assistance doesn't make people experts at anything. Some Gen Z vibe coder isn't getting your job, guys; calm the heck down.
I think people who are afraid that AI coding is going to replace them should try using it a bit seriously. They should be quickly reassured.
What worries me more (regarding the coding-related impact of AI; of course all the impact on fake news and democracies is much more worrying IMO) is having to deal with code written by others using AI (yes, people can write shitty code on their own, but at a manageable pace).
I'm not worried that LLMs, as they currently and foreseeably (i.e. 18 months) are, will be a good substitute for high quality developers like me.
But oh boy have I seen a lot of mediocre coders get away with mediocrity for a long time — there's a big risk that employers won't care about the quality, just the price, for long enough that the developments in AI are no longer foreseeable.
As someone who's a bit older and remembers the last wave of offshoring, I can tell you that quality doesn't matter.
Yes, fake news driven by AI slop is a big problem, but that is only enabled by social media personality fiefdoms.
The shit is going to hit the fan if 10% of the highest-paid US working population is laid off for AI outsourcing. That kind of social change brings revolution, and that's before the fracture of the US social fabric.
I’m concerned that AI slop will affect open source projects by tilting the already unfavorable maintainer-contributor balance even more towards low-quality contributions.
Daniel Stenberg (from the curl project) has blogged a bunch about AI slop seeping into vulnerability reports[1], and if the same happens to code contributions, documentation, and so forth, that can turn a fun hobby project into something you dread maintaining.
You don't need an equal replacement to lose your job, just a good enough and more economical one.
Lots of graphic designers lost their jobs or at least a lot of their work now that image generation models have gotten decent at rendering text. Now any idiot can whip up some advertising graphics at half the quality of a designer, but in 1/10th the time and 1/100th the cost (or even for free!). It doesn't matter that it looks like ass and makes no sense in context, they produced an acceptable result for a fraction of the cost.
Quality does not matter in the market, it never has. Whoever can produce the most slop at the lowest price nearly always wins. Yes, there are exceptions, many of them even. But not enough to employ nearly as many of us as there are now.
After the .com crashes of 2000 and 2001, it wasn't until 2005-2008 that pay started to come back up, and still without the crazy signing bonuses. We're still in the downtrend, and the industry is bigger today, so expect a longer/larger impact.
Not only that, but it's pushing market rates down significantly. I'm making about 60% of what I made the past few years... I could only handle not having income for so long. I was juggling two jobs for a while, but just couldn't manage it. Hoping to pick up some side work to fill the gaps. Have to face it, a lot of the high pay contract software jobs have just dried up for now.
Because a lot of devs were getting a free ride off the back of ten years of ZIRP money, and firing people is a surefire way to pump your share options.
The fear may just be a result of thinking about who is making the decisions. I know I'm good, my peers know I'm good. But how far up in management chain does that knowledge go?
TLDR: Author of an open-source project has a crashout over other people using LLMs for coding, believes that AI will replace all developers, and decides to preemptively give up on software engineering entirely because of that.
IMO anyone who understands AI at a technical level will understand that this won't happen. No matter how many parameters, training and compute you throw at it, putting AI in direct charge of anything that's critical and not entirely predictable is going to backfire. Though, based on response from this author, it should be apparent that his response comes from a place of emotion, misunderstanding, and likely conformism to dogmatic anti-AI rhetoric of the same nature, rather than actual reason and logic.
It is a matter of time. Five years, no problem. In 10 years some devs will be replaced. In 15 years, I don't think "pure" developer jobs will exist in most companies.
But in my opinion, it's a bit hypocritical to blame / be mad at LLMs etc. for ruining the fun of coding and then use AI-generated profile pictures.
Why not draw your ghibli styled profile picture yourself? Why use an AI generated image? Doesn't using image generators ruin the fun of drawing? Vibe-drawing?
> He criticizes "Large tech companies and investors" for prioritizing "vibe coding," but not a specific company or individual.
You could rewrite this generated response to match an artist's point of view as well.
> you're not helping anything or making some resonant statement with that thing above or your avatar
> Sorry for breathing and producing CO2.
That's not what the commenter argued, and that response is incredibly petty. It's a way to defuse the argument entirely (is that a straw-man? no idea).
> I originally intended to work in the software engineering industry, but seeing the complete disregard for high quality code, overpowering greed and hype, and the layoffs that follow from it
Replace "software engineering" with literally anything and this is a true statement since the dawn of civilization.
I am thinking of that quotation that said [paraphrasing] "90% of my skills went to $0, but the other 10% are now worth 1000x"
This LLM-fuelled rant/departure is a thought-provoking expression of frustration from someone who focused on the 90%, not the 10% -- namely someone willing to handcraft software like an artisan.
I think we're in the mass-production era for code and nobody wants a hand-crafted anything anymore. Automated and mass-produced please. "Quantity has a quality all its own"
what exactly is the 90% that he focused on, and the 10% he didn't?
Is the 90% just hand-crafting? I don't believe there can be no place for hand-crafting things at all, because there isn't any other business or human endeavor in history that has been 100% mass-produced with no place for artisans in the market.
Even Automobile production has its place in the market for craftsmanship.
Damn. The part about quality over quantity really hit home. I also got into software engineering as a way to exert craftsmanship and am disappointed with the state of things. It's potentially a great field for people who are interested in the pursuit of perfection. There are few areas which are as complex and take so much time and effort to master.
Software engineering provides a window into reality in a way which exposes you to its full complexity. It changes your brain, you start seeing complexity everywhere around you. You start seeing problems that nobody else sees. This is why I got into coding... But now the industry often feels like it's leading you astray and preventing you from truly flourishing.
This is sad because being skilled at coding feels great and it shapes your reality in a very positive way. Being able to think clearly is a great gift and a worthy goal to strive for. Having a logical mind feels inherently good. Being able to approach any topic, with anyone and maintaining full logical consistency feels good.
Yes, the AI hype is real, and yes there's a desire to cut costs by using AI within companies. However, I think the maintainer (Evan Su) has a bit of a narrow view on this matter. Evan is still a student in university.
This doesn't mean his perspective or opinion should be disregarded; it's more that I think he's declaring quite a career-defining absolute for himself before really having a solid foot in the industry. Frankly, this rant seems fueled by intense doom-scrolling on LinkedIn rather than by first-hand experience.
To be fair, the job market is terrible. Not nearly as attributable to AI as people think, I suspect - I'd pin it on interest rates. Still, I'd consider other things if I had anywhere to go and he's not made a bad choice.
If he's deciding to go elsewhere due to the current market, I think that's fair. However, probably his biggest talking point in the whole post is that no one wants to craft artisan code anymore and that this is due to AI taking over and taking jobs. That's where I'd say the jump to that conclusion is quite drastic, given the absence (so far) of hard evidence that AI is actually taking jobs.
That is to be expected for a project that contains encryption code, I'd say. Maybe their userbase isn't big enough to report all the false positives and gain the reputation needed.
When I checked a few years back, even a "hello world" Go application compiled for Windows was flagged as malware by a malware scanner that I investigated.
I'm not at a computer where I could test that hypothesis right now, but back then my conclusion after testing various executables was that any unsigned Go binary would be flagged as malware.
Just as a rule of thumb, doesn't the fact that only a few unknown vendors flag it—and all of the major vendors do not—indicate something? It would suggest a false positive, wouldn't it?
This was really huge. I actually had to pass it to an LLM to get an abstract ....
I didn't know about Picocrypt, but I already have two options for safely encrypting files: 7-Zip with its AES-256 (simple) and VeraCrypt with various algorithms (more involved, but it allows you to mount the encrypted volumes). Actually, these are already mentioned in the tool's readme, great work: https://github.com/Picocrypt/Picocrypt?tab=readme-ov-file#co...
I've felt similar to the author, a sort of despair that the only point of writing software now is to prop up the valuation of AI companies, that quality no longer matters, etc.
Then I realized that nothing's stopping me from writing software how I want and how I feel is best. I've stopped using LLMs completely and couldn't be happier. I'm not even struggling at work or feeling like I'm behind. I work on a number of personal projects too, all without LLMs, and I couldn't feel better.
MIT isn’t “weak” because it allows LLM training; it’s weak because it puts zero obligations on the recipient.
Blocking “LLM training” in a license feels satisfying, but I’ve run into three practical issues while benchmarking models for clients:
1. Auditability — You can grep for GPL strings; you can’t grep a trillion-token corpus to prove your repo wasn’t in it. Enforcement ends up resting on whistle-blowers, not license text.
2. Community hard-forks — “No-AI” clauses split the ecosystem. Half the modern Python stack depends on MIT/BSD; if even 5% flips to an LLM-ban variant, reproducible builds become a nightmare.
3. Misaligned incentives — Training is no longer the expensive part. At today’s prices a single 70B checkpoint costs about $60k to fine-tune, but running inference at scale can exceed that each day. A license that focuses on training ignores the bigger capture point.
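To make point 1 concrete: scanning a source tree for copyleft boilerplate is a few lines of code, while no analogous scan exists for a closed trillion-token corpus. A minimal sketch (the marker string and the `gpl_tainted_files` helper name are just illustrative, not any real tool):

```python
# Sketch of the kind of license audit that is trivial for source code you can
# read, and impossible for a training corpus you can't: walk a tree and look
# for the GPL boilerplate string in each file.
from pathlib import Path

GPL_MARKER = "GNU General Public License"

def gpl_tainted_files(root: str) -> list[str]:
    """Return paths of files under `root` containing the GPL marker string."""
    return sorted(
        str(p)
        for p in Path(root).rglob("*")
        if p.is_file() and GPL_MARKER in p.read_text(errors="ignore")
    )
```

Nothing like this is possible against a model vendor's private corpus, which is why enforcement falls back to whistle-blowers.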
A model company that actually wants to give back can do so via attribution, upstream fixes, and funding small maintainers (things AGPL/SSPL rarely compel). Until we can fingerprint data provenance, social pressure—or carrot contracts like RAIL terms—may move the needle more than another GPL fork.
Happy to be proven wrong; I’d love to see a case where a “no-LLM” clause was enforced and led to meaningful contributions rather than a silent ignore.
My boss has taken this approach, and it took a load off the "LLM pressure".
This is also a good opportunity to remember that MIT is not a strong enough open source license, and if you want to prevent corporations making money off your work, make it AGPL or even SSPL, plus a statement that AI training creates a derivative work (the latter may or may not have any legal effect).
MIT is a donation of your labour to corporations. With a stronger license, at least they're more likely to contribute back or to pay you for a looser license.
Alternatively MIT does exactly what it says it does. It's up to you as an author whether you like those terms or if you'd prefer GPL, AGPL, or SSPL.
If you want a permissive license MIT is perfectly reasonable. If you want more restrictions or stronger copy-left then don't pick MIT.
As far as I was able to tell, every single coding LLM out there still violates the terms of the MIT license, because the license requires attribution - and LLMs rarely (if ever?) provide any.
I've not used AI to program and have very little interest in using AI to program, but I fail to see how laundering code through massive probabilistic lossy compression (silicon) should be treated any differently than laundering code through massive probabilistic lossy compression (biological). Should humans have to keep track of which software codebases they learn each pattern from, too?
The point is that people who think they want permissive licenses usually don't, and eventually regret choosing them when a corporation treats their work as donated labour (because it is), assuming their software is important enough to be picked up by them (if not then license choice doesn't matter anyway).
> MIT is a donation of your labour to corporations.
No, MIT is a donation of your labor to the public. That includes corporations, yes, but it is not only corporations.
I always found this stance puzzling. If the point of open source is to give your code to the public, why do people get upset when corporations do exactly what you told them they could do?
If you didn't want to give it to everyone, you shouldn't have chosen that license.
And if you choose a non-commercial license, people get upset that it's "not technically open source because the OSI says so" as if they are somehow the arbiter of this (or even should be). It's not like anyone owns the trademark to the term "open source" for software either.
Ironically, I've seen a lot of people in the last several years quit open source entirely and/or switch to closed source.
> why do people get upset when corporations do exactly what you told them they could do?
A lot of people have been taught `corporations == bad`, part of the anti-capitalism efforts taught to our youth for a couple generations.
Yes I understand... but they already knew that the license explicitly allows this, and they already knew companies regularly take advantage of FOSS without giving back, so I'm not sure why they were expecting to get lucky or something.
To me this is just like getting upset when someone forks your open source project. Which ironically I've seen happen a LOT. Sometimes the original developer/team even quits when that happens.
It's like... they don't actually want it to be open source, they want to be the ONLY source.
Because they don't think about it deeply - that's why reminders are necessary. They think they're only donating to people with similar attitudes to themselves. xGPL licenses (SSPL included) are the license family most similar to that...
... but MIT is what corporations told them they want. There has been a low-level but persistent campaign against xGPL in the past several years and the complaints always trace back to "the corporation I work for doesn't like xGPL." No individual free software developer has a problem with xGPL (SSPL not included).
> No individual free software developer has a problem with xGPL
I do... I consider it the opposite of freedom. I think it places severe restrictions on your project that make it hard or impossible for some people (like companies) to use, especially if your project contains lots of code from other people, which makes it really hard or impossible to re-license if one day you decide you like/need money (assuming you have no CLA; I don't like those either).
But I also realize there's different kinds of freedom... freedom TO vs freedom FROM.
Some want the freedom TO do whatever they want... and others want freedom FROM the crazy people doing whatever they want.
I wish there was a happy medium but centrism doesn't seem to be very popular these days.
Tangentially, I wonder if logins and click-throughs can help address this on the legal front.
Suppose you set up a login flow with a click-through that explicitly sets the terms of access: no cost for access by a person, and some large cost for access by an AI.
Stepping past this prompt into the content would require an AI either to lie, committing both fraud and unauthorized access of content, or to behave truthfully, opting the operator of the AI into the associated costs.
In either case, the site operator can then go after the company doing the scraping to collect the fees as specified in the copyright contract (and perhaps some additional delta of punitive fines if content was accessed fraudulently).
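The gating idea above is really just a lookup from a self-declared client type to a contractually accepted fee. A hypothetical sketch, where the `declared_as` field and the fee amounts are made up for illustration (whether any of this holds up legally is the real question, not the code):

```python
# Hypothetical terms-of-access gate behind a login click-through: the visitor
# declares what it is, and the declaration fixes the fee it has accepted.
# Categories and amounts are invented for illustration only.

ACCESS_TERMS = {
    "person": 0,          # free for human readers
    "ai_crawler": 50_000, # large per-access fee for AI/scraper access (USD)
}

def agreed_fee(declared_as: str) -> int:
    """Return the fee the visitor accepted by clicking through as `declared_as`."""
    if declared_as not in ACCESS_TERMS:
        raise ValueError("no terms accepted; access would be unauthorized")
    return ACCESS_TERMS[declared_as]
```

An AI that clicks through declared as "person" has misrepresented itself (the fraud branch of the argument); one that declares honestly opts its operator into the fee.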
When are we getting a GPLv4 that's AGPL + no LLM training? This is overdue.
Given Meta's history of torrenting every book it could get its hands on for training, I'm not convinced that the majority of AI companies would respect that license. Maybe if we also had a better way to prove that such code was part of the training set and see a couple of solid legal victories with compensation awarded.
I'm pretty astounded that "The Stack" at least made an effort, and continues to do so, by weeding out GPL and similarly strong copyleft source code from their trove, and even implemented an opt-out mechanism [0].
They look like saints compared to today's companies.
[0]: https://huggingface.co/spaces/bigcode/in-the-stack
They're also getting sued for it, and the judge ruled they had no right to torrent those books so now it's just a matter of calculating how many trillions Meta has to pay, then extracting it from them.
Because Meta got caught. I'm not convinced that every random OSS lib will have the resources to audit every model out there for a hypothetical GPL+no training violation.
"Adversarial Internet" => if it touches the internet it's no longer yours. See a previous comment chain: https://news.ycombinator.com/item?id=44616163
> if it touches the internet it's no longer yours
*Unless you're a member of the capital class, in terms of being a corporation or a wealthy individual, who can then make our two-tiered justice system work for you. As Disney is seemingly looking to do. Then it will absolutely work for you.
This is why I and people like me so often say "there is no war but the class war." Arguing about copyright misses the entire point: The law serves the large stakeholders in the system, not the people. The only thing that's changed is there is now a large stakeholder of whom a core pillar of their ongoing business is the theft of data at industrial scale which happens to include data of other large stakeholders which is why we're now seeing the slap fight.
By all means enjoy it, it's very entertaining watching these people twist themselves into knots to explain why it's okay for Nintendo to sue people into the ground for distributing copies of games they no longer sell in any capacity but simultaneously it's okay for OpenAI to steal absolutely goddamned everything on the grounds that nothing has been "really" taken due to being infinitely replicable, or because it's a public research org, or whatever flimsy excuse is being employed at the time.
As it has been from the beginning, my position is: whatever the rule we decide on, it should apply to everyone. A very simple statement on very basic ethics that seems to make a lot of people very angry for some reason.
Like if LLM training cared about respecting licenses. :(
Be the change you wish to see.
Or just literally call your program's license "AGPL + no LLM training" and that may suffice.
> MIT is a donation of your labour to corporations.
Unless you are willing to spend yourself into financial ruin pursuing legal action against some faceless megacorp - it literally doesn't matter what license you use.
I've lived enough to know there is "what should be" and then there is what actually happens in reality. We don't live in a reality where everyone just does things out of the goodness of their heart...
Adding some text to your project, hosted on a public website for all to see means some people will take your code regardless of the license or your intent - and, realistically, what are you going to do about it? Nothing...
So... please, let's get off this GPL high-horse. It's not some end-all-be-all holy text that solves all of the world's problems.
The (current) last comment by hakavlad (same as hakavlad on HN, perhaps?):
> That audit was one year ago. The money didn't go towards the author. The source continues to be available. The author doesn't owe you zilch.
Yes, I found this a profoundly weird comment. The audited code will be forever available and audited.
Human beings are weird, and donations aren't always based on reason. I say it's better to discuss the feelings than worry about disapproval.
Surely a recent audit only increases the odds of someone assuming responsibility for a fork. Knowing there is a solid baseline to proceed from.
>The money didn't go towards the author.
Perhaps many would have refused to donate if they knew that the project would be archived in a year. Collecting for audit and then archiving the project is, in a way, a violation of expectations.
Would they have refused to donate if they knew the author would be hit by a bus in a year? Or hired by someone who refused to allow them to continue working on it?
I don't think the author had explicit plans to do this a year ago.
Did they perform the audit? That is what is important.
The more and more you start modifying code after the audit, the more and more useless the audit becomes.
Yes, they performed it.
> That is what is important
Depends on your perspective... If I'd known the project was going to stop soon after I donated, I probably wouldn't donate, even if the purpose of the money was strictly for an audit.
I don't get it.
(I'm not trying to throw shade at the author. I know they have no obligation to maintain an open source project. I'm simply having a hard time grasping what's happening.)
Seems like the author is abandoning software because, in his opinion, due to the AI explosion employers no longer care about code quality, only quantity.
I don't get it either, because that has always been the case, thus most of his post is borderline nonsense.
Imo, what happened is that he took the opportunity of him entering academia to vent some stuff and quit maintaining his project.
> I don't get it either, because that has always been the case, thus most of his post is borderline nonsense.
Yes, making software development cheaper has been the main priority of the industry for a long time. The new development is that there's now a magic "do what I want" button that obviously won't quite do what you want but it's so cheap (for corporations, not humanity...) that you might as well pretend it does. (Compared to paying professionals who might even care about doing a good job, that is.)
I've been a web application developer for nearly 30 years now. I care about the craft and discipline immensely. Then you pull up something like the Jack-In-The-Box menu site and fully realize that managers/executives don't give a damn if the stuff works well... 48MB of built JS?!? My daughter expressed how badly the site was working on her phone, and I got curious.
What's funny is, some will say "use the app" instead for things like this... why should I trust someone to build a safe/secure app who cannot build a reasonably functional website?
To be fair, the app may be developed by a completely different team.
If your argument is still that you don't want to trust a company that can't make both functional, well... maybe you shouldn't be going to Jack-In-The-Box in the first place.
My point is that I'm not going to give app-level access to my phone to a company that doesn't care enough to have a functional website. That said, I'm unlikely to install an app for anything on my device.
I don't actually install that many apps, and generally not retail apps anyway.
He doesn't have interest in the project anymore. He hasn't for a long time, and now that he's stopped software development and gone into academia, he certainly doesn't. Is that hard to get?
He explained the reasons he went into academia, which is because of AI; AI is not the reason he stopped development.
I believe you need to separate two things:
- The author enjoyed writing quality open source code
- The author needs to make strategic decisions for his own career and livelihood and he doesn't have enough bandwidth for both
I don't feel he is happy about the decision he needs to make and he is pointing to something dark happening to software development and open source.
Now this is not new, and didn't start with LLMs. I am sure if we asked the OpenBSD devs what they think about the modern mainstream open source ecosystem, Docker, full-stack development, etc., they'd see it the way we might look at LLM-generated code. This is just a question of how much of a purist you are.
I was thinking exactly the same: I also don't get it (even though I totally get that someone may lose motivation to work on a project, and certainly has no obligation to continue; this justification just sounds a bit weird to me).
Could this mean that he has been approached by some "men in black" asking him to insert a backdoor in the code or to stop working on it, together with a gag order? (I actually wondered the same a long time ago about TrueCrypt, even though to my knowledge no backdoor has ever been found in its further audits...)
Did you not read the comment he wrote? It's straightforward
I'm starting to feel kinda old and out of the loop. Could someone please explain the conversational style of this post?
It begins with a prompt directed at Gemini, followed by what appears to be an AI-generated response. Are these actual AI responses or is the developer writing their parting message in a whimsical way? I'm genuinely confused. Help much appreciated!
This is a post framed in medias res, from the perspective of the developer, as portrayed by themselves, asking an LLM to construct the post that they post immediately afterwards.
I'm unsure if the post is actually created by Gemini or the developer's imitation. I suspect the latter.
It is also a demonstration of what he is frustrated with: what software development is becoming.
He posted a conversation with Gemini, including his real posts and then Gemini's responses.
In regards to "As long as you can run the code, archiving this project means nothing, really": I think this misses the main concern with archived software. What happens when one of those bugs is run into (either something not yet noticed, or something due to external changes down the road) and there is no actively maintained version (which could include one you're willing to hack at yourself) to just update to?
The simpler the software the less urgent the concern but "I haven't had a problem in the last 2 years" is something I could say of most software which I end up needing to update, and it makes sense to make myself able to do so at a convenient time rather than the moment the problem occurs.
This project seems popular enough that waiting a bit and seeing who the successor project is would be a safe bet as well, though.
I like the creativity behind this. And I feel sorry for them that the current wave of AI has led to them abandoning their pet project. Maybe they will pick up the project again once the dust has settled. In the end, at least for me, they are pet projects for exactly that reason: an escape route from the day-to-day business. One should still be proud of what they achieved in their spare time. I don't care if my job requires me to use K8s, Claude or Docker. Or if that's considered "industry standard".
My projects, my rules.
I understand the author's sentiment but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work with them.
Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore it and go preach to a dying field (specifically, writing code, not CS), or embrace it. We should be happy we no longer need to type out ifs and for loops 20 times and can instead focus on high-level architecture.
It's not LLMs vs. typing for loops by hand. It's LLMs vs. snippets and traditional cheap, pattern-based code generation, find-and-replace, and traditional refactoring tools.
Those are still faster, cheaper, and more predictable than an LLM in many cases.
https://github.com/Picocrypt/Picocrypt/issues/134#issuecomme...
This to me is the crux of the whole thing.
Almost like a knitter throwing away their needles because they saw a loom.
Considering the author is explicitly going into AI research, has an AI-generated profile picture, and claims front-and-centre on their website they are excited about LLMs, I don’t think that analogy works. Or rather, it is like a knitter throwing away their needles to eagerly go work in the loom manufacturing industry.
I don't think many people would be excited at the thought of going from handcrafted artisan knitting to babying machines in a knitting factory. You need a certain type of autism to be into the latter.
The author is a student at a university. There's many paths to take that early in the career, I don't think people have to read too much into it.
Fortunately this is the software industry. We've got a lot of those autists, and that automating urge is the best part about software. If someone doesn't like the idea of sitting around babysitting factories of machines, they certainly shouldn't go into DevOps. It would be safest to just avoid programming in general, given how much of the industry centres on figuring out how to deploy huge amounts of computing power in an autonomous fashion.
Yeah, the whole industry is just speedrunning Factorio at this point.
I'd think it would be more autistic to continue to use and have interest in something that's been superseded by something far easier and more efficient.
Who would you think is weirder, the person still obsessed with horse & buggies, or the person obsessed with cars?
So basically, he’s leaving software development because the job market is bad. Instead, he’s joining AI research which (currently) has a more healthy job market. That seems pretty reasonable to me, given that even widely used open source projects are only barely financially viable. Many open source projects end when the author finally gets a girlfriend, this one ends for a new job. Seems like a good outcome to me. Plus truly fascinating presentation.
As a complete outsider looking at this, without additional context, I just have a hard time believing there aren’t more reasons, they’re just not willing to share them:
* I’m not passionate about it anymore
* I’m tired
* I want to repurpose my free time
* I’m not adding enough value compared to other options now available
In the end, it’s pointless to argue about why someone feels the way they do. If they are firm on their stance, don’t waste anybody’s time, no matter how irrational their argument is. Give up trying to be right.
Probably this guy should have just stopped engaging directly with some of the dialogue, but the fact that he is exploring the idea of trying to hand it off in some manner tells me he really does care about the project.
I mean your analogy is almost there. The loom is pretty old.
What you're grasping for is https://en.wikipedia.org/wiki/Power-loom_riots
Where a precipitous drop in earning power, combined with longer working hours, high inflation and large companies making people unemployed, caused large social unrest.
And yeah, I can see why they rioted.
Yeah, life has just been on a steady decline since 1826. Who wants all this food and medicine anyway
I mean, if you want that argument then sure, but those riots were one step in a long path to workers' rights. The lesson here is that if we avoid exploiting workers and/or throwing them out on their arses, we can sidestep a load of social upheaval.
or we can not and just end up having a blood bath.
How does "a very small (hence Pico), very simple, yet very secure encryption tool" come to depend on OpenGL, threatening its future on MacOS?
> It's not easy to fix in the code either because it'll require major changes to the GUI library which can get messy, especially since GUIs were never a strength of Go.
There just doesn't seem to be a lot of viable competition to web based UIs these days.
Immediate mode UI toolkits are designed for pluggable backends, some even can discover the appropriate backend at runtime. If you're writing a game, you're expected to (and actually, you must) build your own integration.
ImGUI and Nuklear each have 20+ backends in their repos: <https://github.com/ocornut/imgui/tree/master/backends> <https://github.com/Immediate-Mode-UI/Nuklear/tree/master/dem...>
If you're using a portable library that needs to render graphics on mac, it's probably using OpenGL to do it unless it has a platform-specific backend.
Historically, yes. These days it might well be using wgpu.
Honestly I feel the same.
I'm holding out hope that there will always be boutique/edge software to be written, which requires enough design and care to be mentally challenging and engaging - the craftsmanship kind.
When AlphaGo was announced, I had to keep telling people that "It's not like computers win at Go, it's just that we now have a tool that makes us way better at Go". If an alien race showed up and challenged us to Go to save the species, would we use a human Go player or AlphaGo, if we had to choose?
The problem is LLMs aren't like that, because software isn't like Go. And, they really are annoying to use, frustrating to redirect all the time, and generally cannot do what you want precisely, without putting in more mental energy to provide context and decompose further than you'd need to do the damn thing yourself. And then you lose an hour/two of flow, which is the reward for the whole process.
But at certain times they are a godsend and they have completely replaced some of the more boring parts of my job. I wouldn't want to lose them, not at all.
Like the author, I don't think we're heading to a healthy balance where LLMs help us be better at our job. I do think the hype is going in the wrong direction, and I do worry for the state of our field (Or at least the _ideals_ of our field). Call me naive, but I also thought it mattered what the code was.
I forked it and named it NanoCrypt. Time to rip out the GUI code. muha ha ha!
There's a cli version anyways... https://github.com/Picocrypt/CLI
Wouldn't that make it femtocrypt?
Why else would you use this code other than for its UI?
> muha ha ha!
Feeling ok? Do you need some support?
I can recommend an excellent system prompt, if times are rough
Wow.
That was strangely...something. Simultaneously not what I expected and yet just nailing the vibe of vibe slop frustration.
It's a nice message and I sympathize with the frustration, but the critique falls flat with the author's decision to pivot into AI research.
> Advancements in intelligent AI and LLMs get me excited.
Large and centered on the author's website: https://evansu.com/
What a hypocrite lmao
You can dislike what AI/LLMs are doing in the software development field, but be excited about their medical applications, as an example.
The difference I perceive is the split between being the one designing the software, which is what he likes to do, and letting a LLM design the software, without the developer actually understanding what's going on, which is what he dislikes.
(This short comment exchange between us is also a meta commentary on that. Because our comments are much shorter than all the AI summaries, and at the same time, we add nuance and clarification to the ideas exposed, something the AI summaries don't do.)
Also the AI ghibli pfp...
The entire thing was crafted, including the profile pic.
His low-quality “petty comments” are to the low-effort haters.
What's wrong with it?
It was clearly AI generated. So the author is clearly OK with using AI to generate slop in an area they don’t work in, while simultaneously decrying its use in an area they do work in. If they believe so strongly that AI use is destroying their industry, they should reflect on its effect on other industries too (it is well-documented how artists are being negatively impacted).
I agree with the commenters above that it makes the critique fall flat. The author is saying “This thing is so frustrating and harmful it makes me want to stop working in a field because of it. Oh, by the way, I use this tool myself for other things, and will indeed pivot to contribute directly to them”.
Exactly. The self-contradiction exposes this farce of an arrangement for what it is: talented engineers are presented with immense monetary incentives to automate themselves out of the workforce, their carefully honed craftsmanship to be replaced by hordes of monkeys at typewriters producing voluminous slop.
Anybody can plainly see that the emperor is without clothes, but so long as the C-level rhetoric is sung to the tune of "Either you have to embrace the AI, or you get out of your career," [0] you may as well put on your own clown nose and wig and start dancing while the music is still playing and there are still seats out.
Farcical circumstances prompt paradoxical responses.
[0] https://news.ycombinator.com/item?id=44808645
I didn't interpret it as decrying the use of AI. Especially because he plans to dedicate his time and energy into researching the very same AI he rants about, basically promoting its use!
Instead, I see it as a deeply personal rant about the state of affairs which he considers inevitable himself. That is why he leaves the ship.
Before AI slop, there was always the agile slop of the bare(ly) minimum product, good enough to woo the ones making a buying decision, or at least until the career sharks have moved on to the next thing. That kind of slop has always been there, and everywhere, actually. It's called capitalism, or consumerism. The trick is to work for a place that isn't squeezed too hard, because it's still in the investment phase or because it just earns money on its own merit.
AI will certainly transform things, just like higher level languages and frameworks have done so. Maybe programming without AI will be the 'micro optimization' of the future: something that is still there and valuable, but only sometimes and only in a certain niche. Slop is eternal, it just has a new face and a new name.
This blog to me is a nice personal rant about a smart young developer coming of age, trying to find his way and guard his ideals or standards against the onslaught of consumerism, just as ambitious young developers always have tried to do.
> falls flat with the author's decision to pivot into AI research.
yeah but _what_ part of AI research. There are loads of it that are nothing to do with slop, and might even have practical and useful benefits....
yep, it's a very old internet response, something you rarely see now. creative and interesting, in a meta sense.
LLMs are glorified "LMGTFY" tools. AI assistance doesn't make people experts at anything. Some Gen Z vibe coder isn't getting your job, guys; calm the heck down.
I think people who are afraid that AI coding is going to replace them should try using it a bit seriously. They should be quickly reassured.
What worries me more (regarding the coding-related impact of AI; of course, all the impacts on fake news and democracies are much more worrying IMO) is having to deal with code written by others using AI. (Yes, people can write shitty code on their own, but at a manageable pace.)
I'm not worried that LLMs, as they currently are and will foreseeably (i.e. for the next 18 months) remain, will be a good substitute for high quality developers like me.
But oh boy have I seen a lot of mediocre coders get away with mediocrity for a long time — there's a big risk that employers won't care about the quality, just the price, for long enough that the developments in AI are no longer foreseeable.
Tell that to C-level executives. They don't understand this, and until then, we, developers, can only be afraid of losing our jobs to a mediocre AI.
As someone who's a bit older and remembers the latter wave of offshoring, I can tell you that quality doesn't matter.
Yes, fake news driven by AI slop is a big problem, but that is only enabled by social media personality fiefdoms.
The shit is going to hit the fan if 10% of the highest-paid US working population is laid off for AI outsourcing. That kind of social change brings revolution, and that's before the fracturing of the US social fabric.
I’m concerned that AI slop will affect open source projects by tilting the already unfavorable maintainer-contributor balance even more towards low-quality contributions.
Daniel Stenberg (from the curl project) has blogged a bunch about AI slop seeping into vulnerability reports[1], and if the same happens to code contributions, documentation and so forth, it can turn a fun hobby project into something you dread maintaining.
[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Your last sentence should have been: CEOs calm the heck down.
They can't. Shilling is practically their whole job.
What does that stand for?
https://googlethatforyou.com/what-is-lmgtfy-meaning.html
LMGTFY is "let me google that for you"
That answer is designed for exactly this question.
bing it
I see what you did there
It is an RTFM-ism from the early 2000s.
You don't need an equal replacement to lose your job, just a good enough and more economical one.
Lots of graphic designers lost their jobs or at least a lot of their work now that image generation models have gotten decent at rendering text. Now any idiot can whip up some advertising graphics at half the quality of a designer, but in 1/10th the time and 1/100th the cost (or even for free!). It doesn't matter that it looks like ass and makes no sense in context, they produced an acceptable result for a fraction of the cost.
Quality does not matter in the market, it never has. Whoever can produce the most slop at the lowest price nearly always wins. Yes, there are exceptions, many of them even. But not enough to employ nearly as many of us as there are now.
doesn't change anyone's mind when it comes to layoffs
> Some genz vibe coder isn't getting your job guys calm the heck down.
Then why isn't the software job market recovering?
After the .com crashes of 2000 and 2001, it wasn't until 2005-2008 that pay started to come back up, and still without the crazy signing bonuses. We're still in the downtrend, and the industry is bigger today, so expect a longer/larger impact.
Not only that, but it's pushing market rates down significantly. I'm making about 60% of what I made the past few years... I could only handle not having income for so long. I was juggling two jobs for a while, but just couldn't manage it. Hoping to pick up some side work to fill the gaps. Have to face it, a lot of the high pay contract software jobs have just dried up for now.
Because a lot of devs were getting a free ride off the back of ten years of ZIRP money, and firing people is a surefire way to pump your share options.
The fear is like telling on yourself.
The fear may just be a result of thinking about who is making the decisions. I know I'm good, my peers know I'm good. But how far up in management chain does that knowledge go?
TLDR: Author of an open-source project has a crashout over other people using LLMs for coding, believes that AI will replace all developers, and decides to preemptively give up on software engineering entirely because of that.
IMO anyone who understands AI at a technical level will understand that this won't happen. No matter how many parameters, how much training, and how much compute you throw at it, putting AI in direct charge of anything that's critical and not entirely predictable is going to backfire. Though, based on this author's responses, it should be apparent that his reaction comes from a place of emotion, misunderstanding, and likely conformity to dogmatic anti-AI rhetoric of the same nature, rather than actual reason and logic.
It is a matter of time. In 5 years, no problem. In 10 years, some devs will be replaced. In 15 years, I don't think "pure" developer jobs will exist in most companies.
Nice post, really. I like the meta conversation.
But in my opinion, it's a bit hypocritical to blame, or be mad at, LLMs for ruining the fun of coding and then use AI-generated profile pictures.
Why not draw your Ghibli-styled profile picture yourself? Why use an AI-generated image? Doesn't using image generators ruin the fun of drawing? Vibe-drawing?
> He criticizes "Large tech companies and investors" for prioritizing "vibe coding," but not a specific company or individual.
You could rewrite this generated response to match artists point of view as well
lol
I second this, plus their responses are petty.
As an example:
> you're not helping anything or making some resonant statement with that thing above or your avatar
> Sorry for breathing and producing CO2.
That's not what the commenter argued, and that response is incredibly petty. It's a way to defuse the argument entirely (is that a straw-man? no idea).
Is it a strawman? I think so
> I originally intended to work in the software engineering industry, but seeing the complete disregard for high quality code, overpowering greed and hype, and the layoffs that follow from it
Replace "software engineering" with literally anything and this is a true statement since the dawn of civilization.
The new-era Socratic dialogue. With a machine.
I am thinking of that quotation that said [paraphrasing] "90% of my skills went to $0, but the other 10% are now worth 1000x"
This LLM-fuelled rant/departure is a thought-provoking expression of frustration from someone who focused on the 90%, not the 10% -- namely someone willing to handcraft software like an artisan.
I think we're in the mass-production era for code and nobody wants a hand-crafted anything anymore. Automated and mass-produced please. "Quantity has a quality all its own"
I think if you believe that 90% of skills went to $0 but the other 10% are worth 1000x, that makes sense.
But even if that's true, the 1000x is going to go to far fewer humans. Maybe you're in the lucky % saved, but a lot of people won't be.
It's interesting to consider. I don't have any takes one way or the other, I'm just observing. I have no idea how all of this works out.
What exactly is the 90% that he focused on, and the 10% he didn't?
Is the 90% just hand-crafting? I don't believe there can be no place for hand-crafting things at all, because there isn't any other business or human endeavor in history that has been 100% mass-produced with no place for artisans in the market.
Even Automobile production has its place in the market for craftsmanship.
(income × 0 × 0.9) + (income × 1000 × 0.1) = income × 100
Really looking forwards to the 10000% pay raise
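For what it's worth, the back-of-envelope math above checks out as roughly a 100x multiple (scaling to integer percentages to stay within shell integer arithmetic):

```shell
# 90% of skills drop to 0x value, 10% of skills multiply by 1000x:
# (0 * 90 + 1000 * 10) / 100 = 100, i.e. 100x income overall.
result=$(( (0 * 90 + 1000 * 10) / 100 ))
echo "${result}x income"   # prints "100x income"
```

So "10000%" is the loose way of saying 100x the original income.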
>I think we're in the mass-production era for code and nobody wants a hand-crafted anything anymore.
Not all people like IKEA furniture.
>Quantity has a quality all its own
Do you understand the meaning of this Stalin-attributed quote?
I wouldn’t want that kind of quality in planes, cars, surgical robots, power plants etc.
I think it’s a good time to introduce fines for severe damages by software bugs.
Damn. The part about quality over quantity really hit home. I also got into software engineering as a way to exert craftsmanship and am disappointed with the state of things. It's potentially a great field for people who are interested in the pursuit of perfection. There are few areas which are as complex and take so much time and effort to master.
Software engineering provides a window into reality in a way which exposes you to its full complexity. It changes your brain, you start seeing complexity everywhere around you. You start seeing problems that nobody else sees. This is why I got into coding... But now the industry often feels like it's leading you astray and preventing you from truly flourishing.
This is sad because being skilled at coding feels great and it shapes your reality in a very positive way. Being able to think clearly is a great gift and a worthy goal to strive for. Having a logical mind feels inherently good. Being able to approach any topic, with anyone and maintaining full logical consistency feels good.
Lol what a drama queen
> What the fuck bro
QFT
This is dumb
I don't think this moment will age well. Is this an attempt to create a personal brand story?
I think this is just a bit doomerish honestly.
Yes, the AI hype is real, and yes, there's a desire within companies to cut costs by using AI. However, I think the maintainer (Evan Su) has a bit of a narrow view on this matter. Evan is still a university student.
This doesn't mean his perspective or opinion should be disregarded; it's more that I think he's declaring quite a career-defining absolute for himself before really having a solid footing in the industry. Frankly, this rant seems fueled by intense doom-scrolling on LinkedIn rather than by first-hand experience.
To be fair, the job market is terrible. Not nearly as attributable to AI as people think, I suspect - I'd pin it on interest rates. Still, I'd consider other things if I had anywhere to go and he's not made a bad choice.
If he's deciding to go elsewhere due to the current market, I think that's fair. However, probably his biggest talking point in the whole post is that no one wants to craft artisan code anymore, and that this is due to AI taking over and taking jobs. That's where I'm saying the jump to that conclusion is quite drastic, given the absence (yet) of hard evidence that AI is taking jobs.
Seems like a nice project. I'm still a little bit concerned with the VirusTotal result: https://www.virustotal.com/gui/file/81bbdffb92181a11692ec665...
I'm concerned about your comment.
I envision a future where people can't deduce anything by themselves and will rely on automated, flawed systems.
The problem is not knowing why and how the systems are flawed, and therefore being unable to make nuanced and truly accurate decisions.
That is to be expected for a project that contains encryption code, I'd say. Maybe their user base isn't big enough to report all the false positives and gain the reputation needed.
When I checked a few years back, even a "hello world" Go application compiled for Windows was flagged as malware by a malware scanner that I investigated.
I'm not at a computer where I could try that hypothesis right now, but back then my conclusion after testing various executables was that any unsigned Go binary would be flagged as malware.
Just as a rule of thumb, doesn't the fact that only a few unknown vendors flag it—and all of the major vendors do not—indicate something? It would suggest a false positive, wouldn't it?
This was really huge. I actually had to pass it to an LLM to get an abstract...
I didn't know about Picocrypt, but I already have two options for safely encrypting files: 7zip with its AES-256 (simple) and VeraCrypt with various algorithms (more involved, but it lets you mount the encrypted volumes). Actually, these are already mentioned in the tool's README; great work: https://github.com/Picocrypt/Picocrypt?tab=readme-ov-file#co...