244 comments

  • candiddevmike 2 days ago ago

    > Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.

    Oof

    • grues-dinner 2 days ago ago

      These are probably the same people who are "surprised" when 100 offshore agency dredgings don't magically build the app 10x faster than 10 expensive onshore workers.

      To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.

      • ToucanLoucan 2 days ago ago

        It's almost like the people in charge of these businesses have no goddamn clue what they're actually selling, how it works or why it's good (or isn't).

        It's almost like, and stay with me here, but it's almost like the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.

        • ethbr1 2 days ago ago

          It's almost like that's a consequence of not enforcing anti-trust vigorously enough, allowing capital to accrete in Big Tech, and creating extremely large tech companies whose primary innovation comes from acquiring rather than building.

          The reason a competitive ecosystem of tech companies is effective has less to do with invisible-hand market magic and more to do with big companies being dumb and conservative, largely as a consequence of their leader selection criteria.

          Microsoft missing web and mobile.

          Intel missing mobile and GPU.

          Google missing productizing AI.

        • franktankbank 2 days ago ago

          Cough cough kickbacks

      • charlieyu1 a day ago ago

        I just don't understand why offshore agencies are so prevalent. Surely hiring directly would be cheaper and give you more control when you are hiring >1000 workers.

        • ike2792 a day ago ago

          About 10 years ago I worked for a Fortune 500 company that had outsourced a bunch of its technical work to offshore contractors. A new CEO and CTO came in and got rid of all the contractors, replacing them with talented US-based hires. They cut the size of our tech organization down from something like 12,000 (including contractors) to maybe 3,000. Our tech generally worked a lot better after that, too.

      • SpaceManNabs 2 days ago ago

        I have tried to digest why this is done. It is not because they believe they are 10x faster.

        It is because they think it will 10x their chances of getting a really good engineer at 1/10th the cost.

        At least that is my theory. Maybe I am wrong. I try to be charitable.

        • jajko 2 days ago ago

          No engineer smart and capable enough to be called a 10x one fails to realize what they're worth in Western terms. And we're still talking about corporate cogs; the truly brilliant simply start their own gigs.

          • motbus3 2 days ago ago

            I never knew a single 10x. I know lots of them who say they are 10x, though, but my parrot does a better job than most of them.

            • jajko 2 days ago ago

              I knew at least 2. They could run literal circles around a whole experienced team and sometimes achieved, if given, say, a week, things that would be hard for seniors to do at all.

              I'm talking about, for example, taking ActiveMQ, building it yourself, and tweaking various calls and internal parameters to achieve roughly a 10x performance boost over a vanilla installation. Companies bundling it as part of their product would kill for, and pay serious money for, such a distro. The guy did this in maybe 3-4 days, going from never having touched ActiveMQ or any other similar messaging system to having it working reliably, then moved on to the next thing.

              These folks can be dangerous though; they come up with complex solutions that can be extremely hard for others to maintain, debug, and evolve. So on a long enough time scale their added value can actually be negative, even for a quite senior but not absolutely top-notch team. Not something 'code ninjas' (or, as I call them, brilliant juniors) care about, but if you work on something long term you will see this pattern from time to time.

              Also, these folks are hard to keep, since they get bored when things slow down and big challenges are not around, and they quickly and easily move on, which makes the issue above a pretty serious item to consider.

        • somenameforme 2 days ago ago

          Do you think you could do a better job than a CEO of public company [x] from a technical standpoint - in other words, omitting the connections and public-facing charisma that they typically bring as part of the package?

          I genuinely do, but kind of paradoxically also suspect I'm wrong. It's simply that it's something so far outside my domain that I just can't really appreciate their skills honed over many years of practice and training, because all I get to externally see are their often ridiculous limitations, failures, and myopia.

          I imagine this is, in many ways, how people who have no understanding of e.g. software, let alone software development, see software engineers. I don't think it's uncharitable, it's just human nature. Imagine if we were the ones hiring CEOs. 'That guy's a total asshat, and we can get ten guys in India - hard working, smart guys, standouts in 1.4 billion people - for the same price.' Go go go.

          • dariusj18 2 days ago ago

            I think there is confusion because coding is easy, software engineering is hard.

            • motbus3 2 days ago ago

              Coding was never the hardest problem. And it is hard to say why people are taking so long to realise it

              • aaronbaugher a day ago ago

                People who don't know how to code know they don't know how. They can look over your shoulder and see that it looks like gibberish, and they also have no interest in understanding it even if they could.

                On the other hand, designing the software or engineering a solution to the problem seems like something they could do, as far as they know, because it's not something concrete that they can look at and see is beyond their abilities.

        • grues-dinner 2 days ago ago

          Alternatively, when your outsourcing agency finds they have accidentally assigned you an actually good engineer, term used loosely, that you're not paying for (they know this happens when the engineer gets up to speed with your codebase at all), "your guy" is replaced with another guy who inherits the name, the email address and the SSH key.

          And if the agency doesn't do that, the good engineer will figure out he's being underpaid as slop-for-hire cannon fodder and move on of his own accord.

      • tmaly 2 days ago ago

        that meme with the oversized pants and penny loafers comes to mind.

    • crazygringo 2 days ago ago

      This is no different from the personal computer, and it is to be expected.

      The initial years of adopting new tech have no net return because it's an investment. The money saved is offset by the cost of setting up the new tech.

      But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.

      Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.

      No "oof" at all. This is how investing in new transformative business processes works.

      • TimTheTinker 2 days ago ago

        > transformative business processes

        Many new ideas came along promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web. Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.

        My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.

        [0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.

        • MonkeyClub 2 days ago ago

          > [...] S-expression manipulation.

          Can't wait for Lisp to be the language of the future again.

          Some of my friends reckon it'll happen the year after the year of Linux on the desktop. They're on Windows 11, though, so I don't know how to read that.

      • delusional 2 days ago ago

        The document actually debunks this take:

        > GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.

        They are not seeing the structural "disruptions" that were present for previous technological shifts.

        • signatoremo 2 days ago ago

          Changes over which time window? AI projects in enterprises can't be older than 2 years, which is practically still the testing-the-waters phase; of course very few projects of a disruptive nature exist yet.

      • PhantomHour 2 days ago ago

        > This is no different from the personal computer, and it is to be expected.

        What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.

        With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.

        You can see where this falls apart in the Dotcom Bubble. There are very clear pitches; "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (Not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones)

        But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.

        The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make the "catalogue store but on the computer" work, but Amazon focussed on broad inventory and UX. They had losses, but managed to contain them and became profitable quickly (Q4 2001). Amazon's losses shrank as revenue grew.

        Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" "We'll shift into selling less expensive goods." "How are you going to do that?" "Uhhh....."

        ...

        The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.

        What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference cost down (never mind per-task token usage skyrocketing, so even that appears moot), what's the pitch? Who's going to pay for this?

        Who's going to pay for a Personal Computer? Your accountant.

        • bpt3 2 days ago ago

          The contortions people will go through to defend a technology or concept they like blows my mind. Irrational exuberance is one thing, but denial of history in order to lower the bar for the next big thing really irritates me for some reason.

          Computing was revolutionary, both at enterprise and personal scale (separately). I would say smartphones were revolutionary. The internet was revolutionary, though it did take a while to get going at scale.

          Blockchain was not revolutionary.

          I think LLM-based AI is trending towards blockchain, not general purpose computing. In order for it to be revolutionary, it needs to objectively and quantifiably add value to the lives (professionally or personally) of a significant piece of the population. I don't see how that happens with LLMs. They aren't reliable enough and don't seem to have any path towards reasoning or understanding.

        • crazygringo 2 days ago ago

          > What are you talking about? The return on investment from computers was immediate and extremely identifiable.

          It is well-documented, and called the "productivity paradox of computers" if you want to look it up. It was identified in 1987, and economic statistics show that personal computing didn't become a net positive for the economy until around 1995-1997.

          And like I said, it's very dependent on the individual company. But consider how many businesses bought computers and didn't use them productively. Where it was a net loss because the computers were expensive and the software was expensive and the efficiency gained wasn't worth the cost -- or worse, they weren't a good match and efficiency actually dropped. Think of how many expensive attempted migrations from paper processes to early databases failed completely.

          • PhantomHour 2 days ago ago

            It's well documented. It's also quite controversial and economists still dispute it to this day.

            It's economic analysis of the entire economy, from the "outside" (statistics) inward. My point is that the individual business case was financially solvent.

            Apple Computer did not need to "change the world" it needed to sell computers at a profit, enough of them to cover their fixed costs, and do so without relying on other people just setting their money on fire. (And it succeeded on all three counts.) Whether or not they were a minute addition to the entire economy or a gigantic one is irrelevant.

            Similarly with AI. AI does not need to "increase aggregate productivity over the entire economy", it needs to turn a profit or it dies. Whether or not it can keep the boomer pension funds from going insolvent is a question for economics wonks. Ultimately the aggregate economic effects follow from the individual one.

            Thus the difference. PCs had a "core of financial solvency" nearly immediately. Even if they weren't useful for 99.9% of jobs that 0.1% would still find them useful enough to buy and keep the industry alive. If the hype were to run out on such an industry, it shrinks to something sustainable. (Compare: Consumer goods like smartwatches, which were hyped for a while, and didn't change the world but maintained a suitable core audience to sustain the industry)

            With AI, even AI companies struggle to pitch such a core, nevermind actually prove it.

            • crazygringo 2 days ago ago

              The productivity paradox isn't disputed by any mainstream economists. What is debated is its exact timing, size, and exactly which parts of businesses are most responsible (i.e. was eventual growth mostly about computers improving existing processes, or computers enabling brand-new processes like just-in-time supply chains)? The underlying concept is generally considered sound and uncontroversial.

              I don't really understand what point you're trying to make. It seems like you're complaining that CapEx costs are higher in GenAI than they were in personal computing? But lots of industries have high CapEx. That's what investors are for.

              The only point I've made is that "95% of organizations are getting zero return" is to be expected in the early days of a new technology, and that the personal computer is a reasonable analogy here. The subject here is companies that use the tech, not companies creating the tech. The investment model behind the core tech has nothing to do with the profitability of companies trying to use it or build on it. The point is that it takes a lot of time and trial and error to figure out how to use a new tech profitably, and we are currently in very early days of GenAI.

        • simianwords 2 days ago ago

          I highly doubt that the return on investment was seen immediately for personal computers. Do you have any evidence? Can you show me a company that adopted personal computers and immediately increased its profits? I'll change my mind.

          • Jensson 2 days ago ago

            I know people who bought a computer and automated away a massive amount of work and thus paid it back in a single day in the 70s.

            Back then companies needed a massive amount of people to sit and do all the calculations to do their accounting, but a single person using a computer could do the same work in a day. This was so easy and efficient that almost every bigger company started buying computers at the time.

            You don't need to automate away accountants, you just need to automate away the many thousands of calculations needed to complete the accounting to save a massive amount of money. It wasn't hard to convince people to use a computer instead of sitting for weeks manually calculating sums on sheets.

          • PhantomHour 2 days ago ago

            I'm sorry, but you're asking me here to dig up decades-old data to justify my claim that "spreadsheet software has an immediately identifiable ROI".

            I am not going to do that. If you won't take it at my word that "computer doing a worksheet's of calculations automatically" is faster & less error-prone than "a human [with electronic calculator] doing that by hand", then that's a you problem.

            An Apple II cost $1300. VisiCalc cost $200. An accountant at that time would've cost ~10x that annually and would either spend quite a bit more than 10% of their time doing the rote work, or hire dedicated people for it.
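
            Back-of-envelope with those figures (a rough sketch; the ~$15k salary is my assumption, not a sourced number):

                # Hypothetical payback calculation using the numbers above
                hardware = 1300 + 200            # Apple II + VisiCalc
                accountant_salary = 15_000       # assumed ~10x the hardware cost, annually
                rote_fraction = 0.10             # conservative share of time spent on rote calculation
                annual_saving = accountant_salary * rote_fraction
                print(f"Payback in ~{hardware / annual_saving:.1f} years")  # ~1 year even at 10%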

            • simianwords 2 days ago ago

              >If you won't take it at my word that "computer doing a worksheet's of calculations automatically" is faster & less error-prone than "a human [with electronic calculator] doing that by hand", then that's a you problem.

              Reality is complicated and messy. There are many hurdles to overcome, many people to convince, and a lot of logistics to handle. You can't just replace accountants with computers - it takes time. You can understand why I find it hard to believe that a huge jump like the one with software didn't take time as well.

      • datavirtue 2 days ago ago

        I think it just turns into table stakes.

    • close04 2 days ago ago

      I'm wondering: if the return is that employees get 20 minutes of extra free time per day, is that a good, quantifiable return? Would anyone count anything you can't put on your balance sheet as a "return"?

    • DoctorOetker 2 days ago ago

      For companies competing in the same niche, the same low-hanging fruit will be automated first if they invest in ML. So within the niche there is no comparative advantage.

      It's pay big tech or fall behind.

    • 2 days ago ago
      [deleted]
  • hereme888 2 days ago ago

    AI is already so much better than 99% of customer support employees.

    It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.

    I've used LLMs to help me fix Windows issues using pretty advanced methods, where MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.

    • kldg 2 days ago ago

      As someone who was recently screwed over by LLM CSR, I'd respectfully disagree. Amazon replaced their offshore humans with LLMs recently. They put the "subscribe to Prime" button on the right-hand side of the screen when you go to checkout. It's a one-click subscription. I accidentally clicked it a few days ago.

      I immediately hop on customer service chat to ask for a refund. I was surprised to be talking to an LLM rather than a human, but go ahead and explain what happened and state I want the transaction for the subscription canceled. It offers to cancel the subscription at the end of the 30-day subscription. I decline, noting I want a refund for the subscription I didn't intend to take. It repeats it can cancel the subscription at the end of 30-day subscription. I ask for human. It repeats. I ask for human again. It repeats. I disconnect.

      Amazon knows what it's doing.

      • Nextgrid 2 days ago ago

        This occurrence has nothing to do with AI? The reason AI doesn't want to grant you the refund is because it's not been given the ability to do so. It would be no different with a human.

        If Amazon wanted to give you the ability to get a refund for unused Prime benefits, it would allow the AI to do it, or even give you a button to do it yourself.

        • kldg 2 days ago ago

          I can't really argue this except to say trust me, bro, I've been an Amazon customer for over 20 years, and there has never been an issue, no matter how unusual, the CSR was not able to resolve inside about five minutes. The LLM was completely on rails with only specific whitelisted actions available to it. Even if a human couldn't do the specific action, they could explain why instead of word-for-word repeating an irrelevant part of their script.

          The generous interpretation is that they don't trust the LLM, so they cripple what it can do. I actually think they're intentionally crippling the LLM's access to accounts, though, to reduce their spend not on CSRs but on CSR actions - refunds, for example - where the LLM becomes an excuse for the change; they can hide behind what they'll call technical issues or teething pains.

          • AbstractH24 a day ago ago

            I know exactly what you mean, but it's hard to tell if it's something they are OK with because they are slowly becoming less user-centric and less willing to make refunds/exceptions, or if it's the rigidity of AI.

            If it were the former, they would help you when you escalated it. So I think they are just becoming more greedy.

    • nurumaik 2 days ago ago

      I don't want AI customer support. I want open documentation, so I can ask an AI if I want, or ask human support if it's not resolvable with the available documentation.

      All my interactions with any AI support so far have been repeatedly saying "call human" until it calls a human.

      • aydyn 2 days ago ago

        This is such a HN comment lol.

        Customer support is for when all the documentation has already failed and you need a human.

        • aldonius 2 days ago ago

          I'll put in a good word for a chatbot hooked up to the documentation (e.g. at $dayjob we use Intercom Fin) acting as level 0.5 support.

          At $dayjob our customers are nontechnical so they don't always know what to search for, so the LLM/RAG approach can be quite handy.

          It answers about 2/3 of incoming questions, can escalate to the humans as needed, and scales great.
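
          The pattern under the hood is basically retrieval plus a grounded answer. A minimal sketch of that general shape (not Intercom Fin's actual pipeline; the model names, docs, and the 0.35 escalation threshold here are all assumptions):

              # Docs-grounded "level 0.5" support bot: retrieve the closest snippet,
              # answer only from it, and escalate when nothing matches well enough.
              import numpy as np
              from openai import OpenAI

              client = OpenAI()  # assumes OPENAI_API_KEY is set

              DOCS = [
                  "To reset your password, open Settings > Account > Reset password.",
                  "Invoices are emailed on the 1st of each month and available under Billing.",
              ]

              def embed(texts):
                  resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
                  return np.array([d.embedding for d in resp.data])

              doc_vecs = embed(DOCS)

              def answer(question, threshold=0.35):
                  q = embed([question])[0]
                  sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
                  best = int(np.argmax(sims))
                  if sims[best] < threshold:
                      return "ESCALATE_TO_HUMAN"  # nothing in the docs is close enough
                  chat = client.chat.completions.create(
                      model="gpt-4o-mini",
                      messages=[
                          {"role": "system", "content": "Answer only from the provided doc snippet."},
                          {"role": "user", "content": f"Doc: {DOCS[best]}\n\nQuestion: {question}"},
                      ],
                  )
                  return chat.choices[0].message.content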

      • inquirerGeneral 2 days ago ago

        [dead]

    • onlyrealcuzzo 2 days ago ago

      > AI is already so much better than 99% of customer support employees.

      99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.

      Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.

      AI seems like a dream for some of these companies to offer even worse customer service, though.

      Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.

      • ponector 2 days ago ago

        >> 99% seems like a pulled-out-of-your-butt number

        I don't agree. AI support is as useless as real customer support. But it is more polite and calm, with a clearer voice, etc. Much better, isn't it?

        • Jensson 2 days ago ago

          AI support never solves your issue; it's just there to try to make you go away. Human support sometimes does - even though they mostly also try to make you go away, they can help now and then.

    • bilsbie 2 days ago ago

      This is great but most customer support is actually designed as a “speed bump” for customers.

      Cancel account- have them call someone.

      Withdraw too much - make it a phone call.

      Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.

      Etc.

      • gruez 2 days ago ago

        >Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.

        That doesn't make much sense. Either your system can handle it or it can't. Putting a support agent in front isn't going to change that.

        • Gazoche 2 days ago ago

          Having been a customer to a half-dozen different banks in almost as many countries, I can assure you that this is very common. You'd be surprised how often the user interface stonewalls you with a "please call support" for even the most basic contact details update operation.

        • ipython 2 days ago ago

          The backend can. But what's exposed to customers will be a very, very small subset of that capability. Hence only the CSRs can perform that function.

          The business undoubtedly did a crude cost/benefit analysis where the cost to expose and maintain that public interface vastly outstrips the cost for the few people that have to call in and change their name.

          • com2kid 2 days ago ago

            > The business undoubtedly did a crude cost/benefit analysis where the cost to expose and maintain that public interface vastly outstrips the cost for the few people that have to call in and change their name.

            Haha, not likely.

            In reality the org is so drowned in technical debt that changing the last name involves manually running 3 different scripts that hit three different DBs directly, and the estimate from the 3rd-party dev consultancy that maintains the mess for how long it'd take to make a safe, publicly usable endpoint is somewhere between 2 years and forever.

            • ipython 2 days ago ago

              Sounds like a crude cost/benefit analysis to me :-p

          • lurking_swe 2 days ago ago

            OK, so why not have the customer support bot add the "operation description" to a message queue (SQS, Kafka, whatever) if a formal API doesn't exist for that operation? The CSRs can then handle that task async, and the customer can get an SMS/email when their request is fulfilled. Why force things to be synchronous and irritate the customer?

            It's not exactly a difficult design problem, unless I'm missing something.

        • sedawkgrep 2 days ago ago

          I think you missed the point of the parent. All these things are speed bumps and the "reasons" for having them are mostly incidental, as the main reason is to avoid the expense of having any more customer support personnel / infrastructure than is absolutely necessary to function.

    • pluc 2 days ago ago

      Except AI support agents are only using content that is already available in support knowledge bases, making the entire exercise futile and redundant. But sure, they're eloquent while wasting your time.

      • somenameforme 2 days ago ago

        "Only" kind of misses the benefit though. I'm very bearish on "AI", but this is an absolutely perfect use case for LLMs. The issue is that if you describe a problem in natural language on any search engine, your results are going to be garbage unless you randomly luckboxed into somebody asking, with near identical verbiage, the question on some Q&A site.

        That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.

        • another-dave 2 days ago ago

          Speaking of which, could we apply vector embeddings to search engines (where crawled pages get indexed by their vector embeddings rather than raw text) and use that for better fuzzy search results even without an LLM in the mix?

          (Might be a naïve question, I'm at the edge of my understanding)

          • com2kid 2 days ago ago

            > Speaking of which, could we apply vector embeddings to search engines (where crawled pages get indexed by their vector embeddings rather than raw text) and use that for better fuzzy search results even without an LLM in the mix?

            Yes, this is how all the new dev documentation sites work nowadays, with their much-improved searches. :-D

            • another-dave a day ago ago

              ah cool right! I didn't know that. One for me to check out and understand more. Thanks!

          • esafak 2 days ago ago

            Why stop there? The LLM can synthesize the results and spare you the work.

            • another-dave 2 days ago ago

              I'm talking about the scenario the GP referenced — where if you search for say "holiday" but get no results because the pages only use the word "vacation" which AFAIK is still a problem in regular search.

              LLMs inherently would introduce the possibility of hallucinations, but just using the vectors to match documents wouldn't, right?
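
              Something like this is what I have in mind (a minimal sketch; the model and documents are just examples, and there's no generation step, so nothing to hallucinate):

                  # Pure embedding match: "holiday" finds the "vacation" doc with no LLM generation.
                  from sentence_transformers import SentenceTransformer, util

                  model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

                  docs = [
                      "Our vacation policy: employees accrue 20 days of paid leave per year.",
                      "Expense reports must be filed within 30 days of purchase.",
                  ]
                  doc_vecs = model.encode(docs, convert_to_tensor=True)

                  query_vec = model.encode("holiday allowance", convert_to_tensor=True)
                  scores = util.cos_sim(query_vec, doc_vecs)[0]
                  print(docs[int(scores.argmax())])  # the vacation doc, despite no keyword overlap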

              • esafak 2 days ago ago

                No, LLMs still use similarity search for candidate generation, unless you don't give them any tools.

        • pluc 2 days ago ago

          "Instead of making search smarter we just decided to make everyone stupider"

          Why invest in making users more savvy when you can dumb everything down to a 5-year-old's level, eh?

      • mava_app 2 days ago ago

        There are AI agents that train on knowledge bases but also keep improving from actual conversations. For example, our Mava bot actually learns from mods directly within Discord servers. So it's not about replacing human mods but assisting them so they can take better care of users in the end.

        • pluc 2 days ago ago

          I don't see how this is any different from enriching knowledge bases with feedback and experience. You just find yourself duplicating all the information, locking yourself into your AI vendor, and investing in a technology that doesn't add anything to what you had before. It's utterly nonsensical.

          • hereme888 2 days ago ago

            You're going to browse through a manual every time you need to fix something in some app in some particular OS version?

            I want:

            > Respond with terminal command to do X

            >> `complex terminal command code block`

            > oh we need to run that on all such and such files

            >> script.py

            • pluc 2 days ago ago

              > You're going to browse through a manual every time you need to fix something in some app in some particular OS version?

              Yes, that's literally how you learn things. I can't understand how anyone on this forum thinks otherwise. Hackers are supposed to be people who thrive in unknown contexts, who thirst for knowledge of how things work. What you are suggesting is brain atrophy. It's the death of knowledge for profit and productivity. Fuck all of that.

              • hereme888 2 days ago ago

                My cognitive energies are reserved for other things. This is the point of AI: do boring tasks so humans can spend their energy on loftier/more important things.

                • bluefirebrand 2 days ago ago

                  If you can't be bothered to do the boring stuff ever, you won't ever develop the skills that you need to make anything loftier

                  They might be boring but they are nonetheless foundational

                  • hereme888 a day ago ago

                    But the point is that the boring things I refer to are not foundational or even related to my field. I have zero formal education in computers. LLMs are my personal IT experts, for pennies on the dollar. I'm not interested in formally learning programming, yet I can vibe-code the apps I want.

                    • bluefirebrand 13 hours ago ago

                      > I'm not interested in formally learning programming, yet vibe-code apps I want.

                      In that case I'm really not sure why professional software developers should be interested in your opinion on the technology at all

                      • hereme888 4 hours ago ago

                        I don't understand how your comment is related to anything I've said. "AI is great for customer service" is a statement that can be made by anyone who has experienced both LLMs and human reps.

      • scarface_74 2 days ago ago

        It’s even worse than you think. I work with Amazon Connect. Now the human agent doesn’t have to search the Knowledge Base manually, likely answers will automatically be shown to the agent based on the conversation that you are having. They are just regurgitating what you can find for yourself.

        But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.

      • d1sxeyes 2 days ago ago

        I would agree, but I’ve spent the last ten years or so working with outsourced tech support and I guarantee you, a lot of people call us just because they can’t be bothered to look for themselves.

        • GuinansEyebrows 2 days ago ago

          > a lot of people call us just because they can’t be bothered to look for themselves

          if the service offered is "support" then why is a phone call less acceptable than reading documentation?

          • d1sxeyes 2 days ago ago

            Exactly. It’s not futile or redundant, it’s something people want.

        • pluc 2 days ago ago

          Getting instant answers without having to deploy any effort isn't going to make the problem go away, it's going to make us dependent on the solution.

      • klodolph 2 days ago ago

        Most of my questions are answerable from the support knowledge base.

        • SpaceManNabs 2 days ago ago

          If i am calling support, it is probably because I already scoured the resources.

        Over the past 3 years of calling support for any service or infrastructure (bank, health insurance, doctor, whatever), over 90% of my requests were things only solvable via customer support or escalation.

          I only keep track because I document when I didn't need support into a list of "phone hacks" (like press this sequence of buttons when calling this provider).

        Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is officed in a different state instead of my proper state.

      • hereme888 2 days ago ago

      AI has all sorts of technical knowledge, plus a massive working memory and something like a high IQ. It's vastly, vastly superior to most IT support agents.

        • kbelder 2 days ago ago

          True, partially, but it's also vastly inferior to most IT support agents at the same time.

        I love a good AI to help search through large documentation bases for the particular issue I'm experiencing. But it is clear when the problem I am having is outside the AI's sometimes infantile ability to understand, and I need the ability to bypass it.

    • cafebeen 2 days ago ago

      When asking customers how well they were helped by the customer support system (via CSAT score), I've found industry-standard AI support agents generally perform worse than a well-trained human support team. AI agents are fine at handling some simple queries, e.g. basic product and order information, but support issues are often biased towards high complexity, because otherwise customers could solve them in a more automated way. I'm sure it depends on the industry, and whether the customer's issue is truly novel.

      • aydyn 2 days ago ago

        I think the main problem is access, not quality.

        I.e. AI isn't allowed to offer me a refund because my order never arrived. For that, I have to spend 20 minutes on the phone with Mike from India.

    • pesus 2 days ago ago

      Improves brand reputation? I don't think I've seen a single case where someone is glad to talk to an LLM/chat bot instead of an actual person. Personally, I think less of any company that uses them. I've never seen one be actually useful, and they seem to only really regurgitate links to FAQ pages or give the most generic answers possible while I fight to get a customer service number so I can actually solve the problem at hand.

      • hereme888 2 days ago ago

        I use SOTA LLM chatbots to solve issues that would take a long time via human customer service reps. In fact, LLMs solve things quicker than it takes to get a human on the phone/chat/forum response.

    • nkingsy 2 days ago ago

      It isn’t empowered to do anything you can’t already do in the UI, so it is useless to me.

      Perhaps there is a group that isn’t served by legacy ui discovery methods and it’s great for them, but 100% of chat bots I’ve interacted with have damaged brand reputation for me.

      • another-dave 2 days ago ago

        A chatbot for those sorts of queries that are easily answerable is great in most scenarios, though, to "keep the phone lines clear".

        The trouble is when they gatekeep you from saying "I know what I'm doing, let me talk to someone"

    • GuinansEyebrows 2 days ago ago

      > AI is already so much better than 99% of customer support employees.

      I have yet to experience this. Unfortunately, I fear it's the best I can hope for, and I worry for those in support positions.

    • IAmGraydon 2 days ago ago

      MS customer service is perhaps the lowest bar available. One look at their tech support forums tells you that most of what they post is canned garbage that is no help to anyone.

      AI is not better than a good customer service team, or even an above-average one. It is better than a broken customer service team, however. As others have noted, 99% is hyperbolic BS.

  • jcfrei 2 days ago ago

    IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.

    • xenotux 2 days ago ago

      > IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.

      Then why hasn't it yet? In fact, some lower-wage countries such as China are at the forefront of industrial automation.

      I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.

      We might discover that AI capacity is easier to grow in these markets too.

      • mensetmanusman 2 days ago ago

        Hard to honestly say if China is low wage. On one hand, their wages have risen as the workforce has shrunk for a few years now, and tasks are being outsourced to other countries. On the other hand, their currency is pegged, meaning that the earning power of the workers should be much higher so that they can afford the things they are making and transition to a consumer-driven economy.

        • datavirtue 2 days ago ago

          They are very much devaluing their currency. This is all the rage and I expect a currency devaluation race as the US tries to deal with crushing government liabilities.

          • nradov 2 days ago ago

            It's not just China and the USA. Pretty much all countries want to devalue their currency to improve their balance of trade in a race to the bottom. Logically not everyone can win the race.

            • mensetmanusman 2 days ago ago

              Someone needs to figure out the best hack for what to do with high value currency.

              • chowchowchow 2 days ago ago

                Isn’t it to sell home currency denominated debt…?

      • alecco 2 days ago ago

        > Then why hasn't it yet?

        Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things, too. AI doesn't add much to that. But the new wave of Tech-first companies now have the upper hand since the massive headcount is no longer such an advantage.

        This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business, and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom. See how the S&P 500 excluding the top 10 is flat or dumping.

        • cicko 2 days ago ago

          >They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom

          Right. And AI is here to fix that!

      • tempodox 2 days ago ago

        > We might discover that AI capacity is easier to grow in these markets too.

        If only because someone else has to build all the nuclear reactors that supply the data centers with electricity. /s

    • ckorhonen 2 days ago ago

      I don’t fully agree. Yes, AI can be seen as a cheaper outsourcing option, but there’s also a plausible future where companies lean more on outsourced engineers who are good at wielding AI effectively, to replace domestic mid-level roles. In other words, instead of nullifying outsourcing, AI might actually amplify it by raising the leverage of offshore talent.

      • PhantomHour 2 days ago ago

        Consider the kinds of jobs that are popular with outsourcing right now.

        Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; People rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services & your own organisation, nevermind empowering them to do things is much harder)

        They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.

        AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)

        But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.

        • thewebguyd 2 days ago ago

          > But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.

          I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.

          Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.

        • gausswho 2 days ago ago

          Also consider the mental health crisis among outsourced content moderation staff who have to appraise all kinds of depravity on a daily basis. This got some heavy reporting a year or two ago, in particular regarding Facebook. These folks, for all their suffering, are probably being culled right now.

          You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.

          Still brain damage, but less. A good thing?

      • _DeadFred_ 2 days ago ago

      I see it the other way around. An internal person with real domain knowledge can use AI far more effectively than an outsourced team. Domain knowledge is what matters now, and companies don't want to pay for outsiders to learn it on their dime. AI lets the internal team be small enough that it's a better idea to keep things in house.

      • 2 days ago ago
        [deleted]
      • brandall10 2 days ago ago

      In a vacuum, sure. But when you take two resources of similar ability and amplify their output, it makes those resources closer in cost per output, and in turn amplifies the risk factors for choosing the nominally cheaper resource. So locality, availability, communication, culture, etc. become more important.

    • cantrevealname 2 days ago ago

      > AI and robotics nullify any comparative advantages low wage countries had

      If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?

      • adev_ 2 days ago ago

        > could take back manufacturing dominance from countries with low wages (like China)?

        The idea that China is a low-wage country should just die. It was the case 10 years ago, not anymore.

        Some parts of China have higher average salaries than some Eastern European countries.

        The chance of a robotics industry in the US massively moving jobs away from China due only to a pseudo-AI revolution replacing low-paid labor (without other external factors, e.g. tariffs or sanctions) is close to 0.

        Now, if we're talking about India and the low-skill IT jobs there, the story is completely different.

        • sleepyguy 2 days ago ago

          > The idea that China is a low wages country should just die. It was the case 10y ago, not anymore.

          The wages for factory work in a few Eastern European countries are lower than Chinese wages. I suppose they don't have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.

      • ceronman 2 days ago ago

        China's dominance in manufacturing, at least in tech, is not based on cheap labor, but rather on skills, tooling, and supply chain advantages.

        Tim Cook explains it better than I ever could:

        https://www.youtube.com/watch?v=2wacXUrONUY

        • themaninthedark 2 days ago ago

          But it's not like China had the skills, tooling, and supply chain to begin with... and it's not like the US suddenly stopped having all those things. There are reasons manufacturing moved out of the US, and it was not "They are soooo much better at all the things over there!"

          Tim Cook had a direct hand in this and knows it, and is now deflecting because it looks bad.

          One of the comments on the video puts it way better than I could:

          @cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.

          Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.

          That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."

          • hombre_fatal 2 days ago ago

            > it was not "They are soooo much better at all the things over there!"

            Though I think we should also disabuse ourselves of the idea that this can't ever be the case.

            An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.

            Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.

            • bcrosby95 2 days ago ago

              Hence "had the skills" and "was not". They are not making claims about the present day, they are talking about why the shift happened in the first place and who brought it about.

              • hombre_fatal 2 days ago ago

                Good point. When I commented, the sentence I quoted was the final sentence of their comment essentially leaving it more abstract. Though my comment barely interacts with their point anyways.

                • themaninthedark 2 days ago ago

                  Sorry. I was typing, got distracted and submitted before I meant to. I thought I had edited pretty quickly, normally I put an edit tag if I think too much time had elapsed.

                  • hombre_fatal 2 days ago ago

                    I was just blaming it on that. In reality my comment was making a trivial claim rather than a good observation.

      • Aurornis 2 days ago ago

        Manufacturing isn’t one uniform block of the economy that is either won or lost. US manufacturers focus on high quality, high precision, and high price orders. China excels at factories that will take small orders and get something shipped.

        The reason US manufacturers aren't interested in taking small-volume, low-cost orders is that they have more than enough high-margin, high-quality orders to deal with. Even the small-ish machine shop out in the country near the farm fields by some of my family's house has pivoted into precision work for a big corporation because it pays better than doing small jobs.

        • themaninthedark 2 days ago ago

          I would say it pays more consistently than small jobs, which by their nature are generally not continuous, and are most often piecemeal.

          The other factors are: In any sort of manufacturing, the only time you are making money is when the equipment is making product.

          If you are stopped for a change over or setup you are losing money. Changing over contains risk of improper setup, where you lose even more money since you produce unusable product.

          Where I live, the local machine shops support themselves in two ways: 1. Consistent volume work for an established customer. 2. Emergency work for other manufacturing sites: repair or reverse engineering and creating parts to support equipment (fast turnaround and high cost).

          They are willing to do small batches but lead times will be long since they have to work it into their production schedules.

      • Teever 2 days ago ago

        Probably not because America lacks the blue collar skills necessary to build and service the kind of manufacturing infrastructure needed to do what you're describing.

      • thisoneworks 2 days ago ago

      Hard disagree. You can't just one day wake up and double your energy infrastructure. China is way ahead.

      • daymanstep 2 days ago ago

        China has more robots per capita than the US

      And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.

        • signatoremo 2 days ago ago

        Manufacturing expertise that has been transferred from the West over the last 40 years. Knowledge and expertise are fluid; they can go both ways and can be transferred to other countries as well - India, Vietnam, etc. The world doesn't stand still.

          • daymanstep 2 days ago ago

            I don't get why I was downvoted. I didn't say anything that contradicts what you just said.

        • mensetmanusman 2 days ago ago

          Western engineers worked relentlessly on knowledge transfer to China to make that happen; it might be easy to bring back with the kind of 10x industrial subsidies that the CCP provided to do so.

          • daymanstep 2 days ago ago

            And the US is already starting to do it, for example partnering with South Korea or Japan to rebuild American shipbuilding.

    • Aurornis 2 days ago ago

      Depends where you draw the line. I would expect countries like China will continue to leverage AI to extend their lead in areas like low cost manufacturing. Some of the very low cost Chinese vendors I use are already using AI tools to review submitted pieces with mixed results, but they’re only going to get better at it.

    • 2 days ago ago
      [deleted]
    • burnerRhodo 2 days ago ago

    It's weird, because before, I'd never had an offshore "VA", nor did I think they'd be useful. But after AI, I can just get the VA a subscription to ChatGPT and have them do the initial draft of whatever I need. ChatGPT gets 80% of the way, the VA gets the next 10% (copying where I need it, removing obvious stuff that shouldn't be client-facing, etc.), and I only have to polish the last 10%.

    • FirmwareBurner 2 days ago ago

    Lemme know when robots will make your sneakers and T-shirts and pick fruit from fields at a price competitive with third-world slave labor.

    • kjkjadksj 2 days ago ago

    They will still be the cheaper countries to run your AI models and robotics factories in, by a long shot, compared to the Western world.

  • donperignon 2 days ago ago

    Yes, I agree. And it's not that AI is any good, but those outsourcing shops are most of the time not adding any value; on the contrary, it takes time to babysit them. Some of this even looks like an elaborate scam: someone in the organization launders money through these companies somehow, otherwise I don't understand how they are useful. Obviously there are some good ones, but in my experience that's not the norm.

    • commandlinefan 2 days ago ago

      > launder money through this companies

      That would explain a lot, actually. If so, it'll be interesting to see what happens to the overall software economy when that revenue stream dries up. My wife grew up in Mexico on a border town and told me that the nightclubs in her town were amazing; when she moved to the US, she was disappointed by how drab the nightclubs here were. Later she found out that the border town nightclubs were so extravagant because they were laundering drug money. When they cracked down on the money laundering, the nightclubs reverted back to their natural "drab" state of relying on actual customers to pay the bills.

    • segfaultex 2 days ago ago

      Yeah I think this will be a noticeable trend moving forward. We've frozen backfills in our offshore subsidiaries for the same reason; the quality is nonexistent and onshore resources spend hours every day fixing what the offshore people break.

    • 2 days ago ago
      [deleted]
    • jbreckmckye 2 days ago ago

      You are not wrong. Sometimes I have seen outsourcing relationships that I am sure are suspect in some way.

      It may just be incompetence in large organisations though. Things get outsourced because nobody wants to manage them.

  • toomuchtodo 3 days ago ago

    https://archive.today/dcz9V

    Original title "AI is already displacing these jobs" tweaked using context from first paragraph to be less clickbaity.

    • chihuahua 2 days ago ago

      "You'll never guess which jobs AI is about to replace!"

  • toenail 2 days ago ago

    Makes sense; current LLMs seem to be at a similar level in terms of quality and required supervision.

  • torginus 2 days ago ago

    I wonder if AI automation will even lead to a recession in total software engineering revenue.

    At my job, thanks to AI, we managed to replace one of the boxed vendor tools we were dissatisfied with by rewriting it as an in-house solution.

    I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is 'it's cheaper to buy the product from us than hire a guy who handles it in house'.

    • simianwords 2 days ago ago

      What you are saying is not intuitive. Software engineers are a cost to software companies. With automation, profits would increase, so I'm not sure how it could lead to a recession.

      • torginus 2 days ago ago

        Something not being intuitive doesn't make it untrue. If AI makes engineers 10x as productive, it means we need 1/10th the engineers to produce as much software as we do now; it might induce demand, but demand might not keep up with production. Software engineering might become a buyer's market instead of a seller's market.

        One example I mentioned is SaaS whose value proposition is that it's cheaper than hiring a dedicated guy to do it; if AI can do it, then that software has no more reason to exist.

      • graeme 2 days ago ago

        They used the word in an irregular way. They meant a decline in software company revenue, not an economic recession.

        You might well see more software profits if costs go down, but less revenue. Depends on Jevons paradox, really.

        • simianwords 2 days ago ago

          No this doesn’t make sense either. Why would Amazon‘s profits go down if their engineers are cheaper?

          • torginus 2 days ago ago

            In AWS's case: if AI can replicate what AWS offers as a value add, then you might go with a cheaper cloud provider.

            Like, you have the option of either using AWS RDS, or hiring a DBA and devops who administer your DB, and set up backups, replication and networking.

            If AI (or a regular dev with the help of AI) can do that, it might mean your company decides to take on the administrative burden and save the money.

          • graeme a day ago ago

            I didn't say profits down. The OP was talking about revenue for some companies potentially declining.

          • 2 days ago ago
            [deleted]
      • golol 2 days ago ago

        More middlemen = more revenue/GDP, right?

        • simianwords 2 days ago ago

          I don't think middlemen are counted in GDP, because GDP only counts final value, not intermediate transactions.

          • gls2ro 2 days ago ago

            Trying to understand this and please correct me if I am wrong:

            A is producing something of value 100. That is complex to configure so B comes along and they say: Buy from me at 150 and you will get both the product and the configuration.

            C comes along and says: there are multiple products like this, so I created a marketplace where I make an offering that in the end will cost you 160, but you can switch providers whenever you want.

            Now I am a customer of C and I buy at 160:

            C gets 160, retains 10, but total revenue is 160.
            B gets 150, retains 50, but total revenue is 150.
            A gets the 100.

            Here is the question: How big is GDP in this case?

            I think it is 160.

            Now A adds an LLM for about 4 extra that can (allegedly) do what B and C do, removing the intermediaries, and so now the GDP is 104.

            Am I wrong with this?
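
            (A tiny Python sketch of that arithmetic, using the hypothetical numbers above; counting GDP by final value and by value added both give 160, and cutting out B and C drops it to 104.)

              # Hypothetical prices from the example above (not real data)
              a_price, b_price, c_price = 100, 150, 160

              # GDP by final value: only what the end customer pays counts
              gdp_final = c_price

              # GDP by value added: each link's price minus what it paid its supplier
              gdp_value_added = a_price + (b_price - a_price) + (c_price - b_price)

              assert gdp_final == gdp_value_added == 160

              # If A bundles an LLM for ~4 and sells direct, the chain collapses
              gdp_without_intermediaries = a_price + 4   # = 104, as above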

            • simianwords 2 days ago ago

              This is technically correct but missing some details.

              The real GDP after accounting for cost of living has not changed much because while GDP has decreased, cost of living has also decreased (because A is now priced at 104 instead of 160).

              But it's even better because we have this extra money that we previously spent on C. In theory we will spend this extra money somewhere else and drive demand there. The workers put out of employment due to the LLM will move to that sector to fulfill the new demand.

              Now GDP has not only increased, but the cost of living has also gone down.

            • torginus 2 days ago ago

              Yes exactly. There's the joke of one economist paying the other $100 to dig a hole, then the other one giving back the money to the first one to fill it back up, thereby increasing the GDP by $200.

    • ido 2 days ago ago

      Historically, improvements in programmer productivity (e.g. via better languages, tooling and hardware) didn't correlate with a decrease in demand for programmers, but quite the opposite.

      • scarface_74 2 days ago ago

        This is completely different - said as someone who has been in the industry professionally for 30 years and as a hobbyist before then for a decade.

        There are projects I lead now where I would previously have needed at least one or two junior devs to do the grunt work after I had very carefully specified requirements (which I would have to do anyway) and diagrams, and now ChatGPT can do that work for me.

        That's never been the case before. I've personally gone from programming in assembly, to C, to higher-level languages, and on the hardware side, from personally managing the build-out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a YAML/HCL file.

      • torginus 2 days ago ago

        Imo historically there was no connection between the two: demand for programmers increased while, at the same time, better tools came along.

        I remember Bill Gates once said (sometime in the 2000s) that his biggest gripe, is during his decades in the software industry, despite dramatic improvements in computing power and software tools, there has only been a modest increase in productivity.

        I started out programming in C for DOS, and once you got used to how things were done, you were just as productive.

        The things frameworks and other tooling help with are at most 50% of the job, which means, due to Amdahl's law, productivity can at most double.
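
        (A rough sketch of that Amdahl's-law bound in Python; the 50% figure is just the guess above, not a measurement.)

          # Amdahl's law: if only a fraction p of the work can be sped up by a
          # factor s, the overall speedup is bounded by 1 / ((1 - p) + p / s).
          def overall_speedup(p: float, s: float) -> float:
              return 1.0 / ((1.0 - p) + p / s)

          # Even with an infinitely better framework (huge s), if it only touches
          # half the job (p = 0.5), you top out at about 2x.
          print(overall_speedup(0.5, 1e9))  # ~2.0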

        In fact, I'd argue productivity actually got reduced (comparing my output now, vs back then). I blame this on 2 factors:

        - Distractions: it's so easy to d*ck around on the internet instead of doing what you need to do. I have a ton of my old SVN/CVS repos, and the amount of progress I made was quite respectable, even though I recall being quite lazy.

        - Tooling actually got worse in many ways. I used to write programs that ran on the PC, you could debug those with breakpoints, look into the logs as txt, deployment consisted of zipping up the exe/uploading the firmware to the uC. Nowadays, you work with CI/CD, cloud, all sorts of infra stuff, debugging consists of logging and reading logs etc. I'm sure I'm not really more productive.

      • 2 days ago ago
        [deleted]
    • MangoCoffee 2 days ago ago

      I've done a vibe coding hobby project where I simply give AI instructions on what I want, using a persona-based approach for the agent to generate or fix the code.

      It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?

      I enjoyed Andrej Karpathy's talk about software in the era of AI.

      https://www.youtube.com/watch?v=LCEmiRjPEtQ

      • bcrosby95 2 days ago ago

        As an aside, his talk isn't about using AI to write code, it's about using AI instead of code itself.

  • mickeyp 2 days ago ago

    That has long been my personal theory as well, though I never had a way of firmly backing it up with evidence (and this article hardly does that either).

    But it does make sense on a superficial level at least: why pay a six-pack of nobodies halfway 'round the world to... use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.

    • 2 days ago ago
      [deleted]
  • ChrisArchitect 2 days ago ago
  • peterldowns 2 days ago ago

    “AI reshoring” is what I call it. Makes perfect sense.

  • notyourwork 2 days ago ago

    I can see this. We employ a lot of offshore staff for what we call support engineering: things like JDK upgrades or cert updates. It's grunt work that lets higher-paid engineers use their time on business-value work. As AI continues to grow in scope, it will surely commandeer much of this. Employing a human is more expensive than compute for these tasks at a certain scale.

    • sitzkrieg 2 days ago ago

      I worked for a company that did this. Then they opened an office in India and put US employees on call for India to squeeze 'em out, haha.

    • simianwords 2 days ago ago

      Can I ask how the offshore team manages to deploy these changes? What skills are required for such a role in support engineering?

      • notyourwork 2 days ago ago

        Through the CI/CD deployment pipeline that all code changes get deployed through. Primary engineering team reviews code and ensures things are tested appropriately.

        If it requires a managed change, engineering team helps them draft the execution and schedule.

        Skills would be similar to IT or DevOps, but with the expectation that they can code.

        • simianwords 2 days ago ago

          With CI/CD in place and a reviewer from the dev team, the offshore employees are probably providing very little value.

          Moreover, these kinds of upgrades sometimes involve unforeseen regressions, which again can't be solved by these employees.

          • notyourwork 2 days ago ago

            The value is their salary being dramatically less.

  • yc-kraln 2 days ago ago

    I work with Claude more-or-less how I worked with our Indian colleagues; the difference is Claude is improving over time.

  • bilater 2 days ago ago

    The Indian IT sector is almost certainly going to be decimated (at least in its current form), and we haven’t really wrapped our heads around what that means for the world’s fourth-largest economy.

    https://www.youtube.com/watch?v=CK-gnW3f-q0

  • anelson 2 days ago ago

    The move that I’m fighting in my company now is hiring bargain basement Indian outsourced heads who are very obviously vibe coding slop. It’s a raw deal for us since we’re paying extra for a meat wrapper around an LLM coding agent, but I’m sure it’s a boon for the outsourcing company who can easily put one vibe-coding head on three or four engagements in parallel. It’s hard to imagine LLM coding technologies not being enthusiastically adopted by all of the outsourcers given the economic incentives to do so.

    Whether or not they end up losing business long term, it seems like a nice grift for as long as they can pull it off.

  • alephnerd 2 days ago ago

    Yep! The low value outsourcing firms like Indian WITCH companies have been heavily leveraging LLMs and laying off employees as a result.

    High value product work remains safe from AI automation for now, but it was also safe from offshoring so long as domestic capacity existed.

  • api 2 days ago ago

    This makes a ton of sense. The interaction is similar: write specs, give orders, wait, review and fix results.

  • abullinan 2 days ago ago

    I have hired “outsourced, offshore workers”. Anyone who has similar experience knows the challenges of finding quality talent. Generally, you don't know what you're going to get until you've already written the first check. Sometimes it is good quality, but lots of the time it is “acceptable” quality or poor quality that needs cleaning up (or re-hiring), and 5% of the time it is absolute garbage. Since the costs are typically 1/10th to 1/20th that of a US engineer, you can afford to make a few mistakes. However, I can see a future where I can hire local (US) and outsource to AI with oversight within the same budget.

  • keybored 2 days ago ago

    Pretend for a moment that capital investors do any work. Can AI replace that work?

    • walterbell 2 days ago ago

        I'm sorry Dave. I can't answer that.
    • beastman82 2 days ago ago

      The investment bankers I know worked harder than any software developer to get where they are.

      • bobsmooth 2 days ago ago

        I'm sure parasites work very hard to survive.

      • throwawayoldie 2 days ago ago

        And yet have produced nothing of value for society.

        • mythrwy 2 days ago ago

          In theory they efficiently allocate capital and resources to drive increased standards of living. In theory.

    • sneilan1 2 days ago ago

      They do a lot of work but a lot of market research can be automated too.

  • musesum 2 days ago ago

    So, aside from trust, the biggest barrier is lack of adaptability?

  • pydry 2 days ago ago

    I'm skeptical it's even replacing those.

  • 2 days ago ago
    [deleted]
  • throwaway106382 2 days ago ago

    Good riddance.

  • keeda 2 days ago ago

    One thing I recently realized is that the evolution and discussions of AI very closely mirrors those of offshoring, when offshoring first started off. Back then too discussions were about:

    1) The quality of work produced being sub-par, with many instances of expensive, failed projects, leading to predictions of the death of offshoring.

    2) Unwillingness of offshore teams to clarify or push back on requirements.

    3) Local job displacement.

    What people figured out soon enough was that offshoring was not as easy as "throwing some high-level requirements over the wall and getting back a fully functional project." Instead, the industry realized that there needed to be technically-competent, business domain-savvy counterparts in the client company who would work closely with the offshore team, setting concrete and well-scoped milestones, establishing best practices, continuously monitoring progress, providing guidance, removing blockers, and encouraging pushback on requirements, even revisiting them if needed.

    Offshore teams, for their part, became culturally more comfortable with questioning requirements and engaging in two-way discussions. Eventually offshore companies built up the business domain knowledge such that client companies could outsource higher- and higher-level work.

    All successful outsourcing projects followed this model, and it spread quickly across the industry, which was why the predictions of the death of offshoring never materialized. In fact the practice has only continued to grow.

    It's very interesting how much the same strategies apply to working with AI. A lot of the "how to code effectively with AI" articles basically offer the exact same advice.

    On the job displacement side, however, the story may be very different.

    With outsourcing, job displacement didn't turn out to be much of a concern because a) by delegating lower-level grunt work to offshore teams, local employees were then freed up to do higher-level, more innovative work; and b) until software has "eaten the whole world" the amount of new work is essentially unbounded.

    With AI though, the job displacement could be much more real and long-lasting. The pace at which AI has improved is mind-boggling. Now the technically-competent, business domain-savvy expert could potentially get all the outsourced work done by themselves through an army of agents with very little human support, either local or offshore. Until the rest of the workforce can upskill themselves to the level of "technically-competent, business domain-savvy expert", their jobs are at risk.

    "How many such roles does the world need?", and "How can junior employees get to that level without on-the-job experience?", are very open questions.

  • avrionov 2 days ago ago

    For now!

  • jsnell 2 days ago ago

    The actual report (which this article doesn't link to; bad Axios):

    https://nanda.media.mit.edu/ai_report_2025.pdf

    • modernerd 2 days ago ago
    • bux93 2 days ago ago

      And that report has this to say: "Our analysis reveals that GenAI-driven workforce reductions concentrate in functions historically treated as non-core business activities: customer support operations, administrative processing, and standardized development tasks. These roles exhibited vulnerability prior to AI implementation due to their outsourced status and process standardization. Executives were hesitant to reveal the scope of layoffs due to AI but it was between 5-20% of customer support operations and administrative processing work in these companies."

      They interviewed 52 people, and some of them said yes to "Have you reduced headcount due to GenAI?" - which may indicate that those people believe this to be true.

    • 2 days ago ago
      [deleted]
    • fuzztester 2 days ago ago

      Not Found

      The requested URL was not found on this server. Apache/2.4.62 (Debian) Server at nanda.media.mit.edu Port 443

  • -peregrine- 2 days ago ago

    [dead]

  • jmyeet 2 days ago ago

    AI is going to force the issue of having to deal with the inequity in our economic system. And my belief is that this confrontation will be violent and many people are going to die.

    The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.

    The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.

    AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus, those still working will have their wages suppressed.

    Offshored work originally displaced local workers and created a bunch of problems. AI and automation is a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.

    Our politics is divided by those who want to blame marginalized groups (eg immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and the other side who wants to defend the neoliberal status quo in the name of institutional norms.

    It's about economics, material conditions and, dare I say it, the workers relationship to the means of production.

    • thewebguyd 2 days ago ago

      No war but class war.

      Not sure how long it will take for a critical mass to realize that we are in a class war, and that placing the blame on anything else won't solve the problem.

      IOW, I agree with you. I also think we are beyond the point where electoral politics can solve it; we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.

      What I wonder, though, is why do the wealthy allow this to persist? What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?

      • usefulcat 2 days ago ago

        You write as though "the wealthy" are a unified group acting in concert. They're not; they're just like everyone else in that regard, acting in their own, mostly short to medium term best interest. Seems like a pretty ordinary tragedy of the commons type of situation.

        • jmyeet 2 days ago ago

          Oh, I strongly disagree. If there's one thing the wealthy have, it's intense class solidarity. They're fully aware of the power of class solidarity. You might see conflicts on the fringes, but when the shit hits the fan, they will absolutely stick together.

          They're so aware of the power of class solidarity that they've designed society to ensure that there is no class solidarity among the working class. All of the hot button social issues are intentionally divisive to avoid class solidarity.

      • jmyeet 2 days ago ago

        It's greed and short-term thinking. We shouldn't be surprised by this because we see companies do it all the time. How many times have you thought an employer or some company in the news is operating on a time horizon no further than the next quarterly results?

        To be ultra-wealthy requires you to be a sociopath, to believe the bullshit that you deserve to be wealthy, it's because of how good you are and, more importantly, that any poverty is a personal moral failure.

        You see this manifest with the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that future war and revolution when many people die is actually good because it'll separate the wheat from the chaff, so to speak.

        On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.

        The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estates where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.

    • simianwords 2 days ago ago

      These are common simple Marxist points you are bringing up.

      Your point hinges on: declining material conditions.

      It is completely false: conditions are pretty great for everyone. People have relatively good wages, though sure, inequality is increasing.

      Since your main point is incorrect I don’t think your other points follow.

      • jmyeet 2 days ago ago

        There are many ways to attack this assertion. For example:

        1. The stagnation or decline in real wages in the developed world in recent decades;

        2. Increasing homelessness as a consequence of the housing affordability crisis;

        3. How global poverty has increased in the last century under capitalism. This surprises some because defenders claim the opposite. China is singlehandedly responsible for the massive decrease in extreme poverty in the 20th century.

        Maybe you're looking through the lens of tech. After all, we all have Internet-connected supercomputers in our pockets. While that's true, we're also working 3 jobs to pay for a 1 bedroom apartment where once a single job meant you had a house and enough to eat.

        • simianwords 2 days ago ago

          Your first and last points are egregiously incorrect. A simple google search will tell you this.

          Extreme poverty throughout the world has dramatically reduced. In Western Europe it came down from 50% to less than 1% through the 20th century.

          India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.

          Wages have been increasing in China and India, as well as the USA, after accounting for inflation. They're sort of stagnant in Europe.

          • keeda 2 days ago ago

            > India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.

            What the Wikipedia search won't tell you is that the methodologies and poverty guidelines used in making some of these claims are rather questionable. While real progress has undeniably been made, the extent is greatly exaggerated:

            https://www.project-syndicate.org/commentary/indian-governme...

          • mythrwy 2 days ago ago

            Additionally we can point out the problems of inequality and governmental capture by elite interests (and they are problems) but then the jump to "government will do it better than these greedy people" is a big one and I don't see much evidence for it.

          • kelseyfrog 2 days ago ago

            I'm genuinely glad for the people in India. But that progress doesn't reduce the feeling of inequality here in the U.S.

            Dismissing people with arguments doesn't work either. It doesn’t eliminate the feeling of inequality or change people's perspective about absolute vs relative wealth.

            Why? Because the promise used to justify labor - that hard work will be rewarded - was deeply believed. The contradiction becomes visible when the wealthy hold 36,000 times more wealth than the average person[1]. No one can work 36,000 times harder or longer than someone else, so the belief is no longer tenable.

            That leaves us with two choices: either acknowledge that "hard work alone" was never the full story, or take real steps to fix inequality. Pointing to poverty reduction in other countries doesn’t resolve this. It simply makes people feel unheard and resentful.

            [1] The average billionaire has $7B in wealth; median individual U.S. wealth is $190,000.

            • simianwords 2 days ago ago

              This is not the appropriate way to respond when the poster was clearly incorrect in their main points. Dismissing people with arguments is the rational thing to do.

              Your first mistake is thinking hard work matters. No it doesn't, and it shouldn't. Only work that provides value should matter; you don't deserve more money just for working 10x harder when it doesn't matter to anyone.

              Your entire comment hinges on a zero-sum line of thinking and I don't abide by it. Things have improved for everyone, as I have said above, but I also acknowledged that inequality is increasing. Rising inequality is a real issue... it can be tackled, but let's first acknowledge that prosperity has increased for pretty much everyone in the world.

              • kelseyfrog 2 days ago ago

                > Inequality rising is a real issue.. it can be tackled but lets first acknowledge that prosperity has increased for pretty much everyone in the world.

                I literally acknowledged that prosperity has increased for people in other parts of the world.

                Why don't you rewrite my comment so that it's acceptable to you and then we'll discuss that?

                • simianwords 2 days ago ago

                  If we acknowledge that everyone is more prosperous now than before (which completely contradicts the post I was responding to), what is your point? Inequality? I think it is a problem, but not so much if everyone is getting more prosperous in the meantime.

                  • kelseyfrog 2 days ago ago

                    Yes, as I pointed out in my original comment, inequality is my point.

                    If unaddressed (i.e. by dismissal), it doesn't go away. It simply festers. It will fester until it ruptures. Ignoring it or minimizing it doesn't make it go away.

                    • simianwords 2 days ago ago

                      Sure, and I think solving inequality must be weighed against increasing prosperity. Both have to be considered, because improving one often comes at the expense of the other: increase taxes too much and there are no incentives to work, so prosperity falls. We need to find the right balance between both.

                      I do acknowledge that inequality can have unforeseen consequences and is worth talking about and tackling today, but only by considering the right tradeoffs.

                      • kelseyfrog 2 days ago ago

                        I'm trying to square these:

                        > Increasing taxes too much and there are no incentives to work and prosperity reduces.

                        > Your first mistake is thinking hard work matters.

                        If hard work doesn't matter, then why care what incentives are?

              • keybored 2 days ago ago

                Productivity in the US has gone up steadily since WWII (“only work that provides value should matter”) but wages have stagnated since the 70’s.

                • simianwords 2 days ago ago

                  Wages have _not_ stagnated since the 70's. Nor have they stagnated since the 2000's.

                  Can you provide a source to backup your claim?

                  • GuinansEyebrows 2 days ago ago
                    • simianwords 2 days ago ago

                      I investigated the first link with ChatGPT. All the percentiles have increased except the 10th percentile. But they do not account for after-tax wages and other benefits and transfers.

                      https://www.cbo.gov/publication/59510 shows this. Bottom 20% wages, after accounting for benefits and taxes, have significantly increased. If you want to answer the question "are the bottom 20% materially better off now than in the 1960s?", this is your answer. Hourly wages without accounting for benefits are missing a crucial element, so they're not really indicative of reality.

                      Caveat: this shows the bottom quintile (20th percentile) and after looking at the data it appears to be a change of ~60% of real disposable income from 1978 to 2020. 10th percentile would be similar.

                      TL;DR: if you use real disposable income that accounts for taxes and benefits (what really matters) the wages have not stagnated for anyone but increased a lot - by almost 60%.

                      • GuinansEyebrows 2 days ago ago

                        You're putting in a lot of work (well, I guess you're farming out the work to a third-party service) to prove a portion of your argument with a metric that ignores inflation (including whatever you want to call what's happening right now). Why? Why is it so important to you to try to dispel a notion that is nearly universally shared among scholars, experts, and those actually experiencing ill effects due to the rise in costs of living compared to their income?

                        • simianwords 2 days ago ago

                          It's not excluding inflation, which means you didn't put any effort into actually investigating. You just googled for what you wanted and posted three links without reading them.

                          It's very telling that instead of refuting my point you instead choose to derail the discussion into a personal attack. If you were discussing in good faith, you would try to understand what I said and reply to it.

                          It’s not universal at all that people are less prosperous now.

                          Why don’t you do good faith research and try to answer whether the bottom earners are actually better off now than before? You will come to the same conclusion.

    • kelseyfrog 2 days ago ago

      The ownership class and the labor class both suffer from a coordination problem.

      The former suffer from the coordination problem of extracting wealth, but not so fast that it solves the coordination problem for the labor class, who, like you said, have strikes first and revolt second as their battles of last resort.

      The ownership class can voluntarily reduce wealth inequality, and they have before, but as history progresses and time marches on, the memories fade of what happens when they don't, pushing them closer and closer to options they don't want to admit work.

    • keybored 2 days ago ago

      Whether or not you are correct about the concrete details here, it is laughable for regular people[1] to bicker about whose job will be replaced first when the people who profit from that are just sitting on their ass, ready to get labor for nothing instead of relatively little.

      [1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy

      • simianwords 2 days ago ago

        The people who profit from it are very much not sitting on their ass. It is easy to dismiss them as a way to reinstate your ideology but the reality is they too are working hard because it is a volatile time for them as well. They have to keep up and employ the new technology appropriately or they will lose to their competition.

        • keybored 2 days ago ago

          You’re right. They are working in the sense that they are competing with others to come out as the top parasites. Not to mention that working against laborers takes effort as well. But they are not working in the sense that people bicker about AI “taking jobs”; providing useful labor.

          • simianwords 2 days ago ago

            Competing against others to come out on top _is_ useful labor. The best one usually wins, and as a consumer you want the best products to come out on top.

            • keybored 2 days ago ago

              Maybe this is very Vulgar Marxist but that seems a bit like crediting gangsters competing to shake down construction businesses for building bridges.

              • simianwords 2 days ago ago

                That's not the case at all. Competition is integral to the system working. We have many laws to protect the market so that competition stays viable, like antitrust, etc.

                Competition is why you have good products. Can you explain to me what incentivizes Apple to make functional and impressive iPhones instead of selling us barely working phones without cameras?

    • igleria 2 days ago ago

      > personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.

      I do think more or less this too, but it could be 4 years or 40 before people get mad enough. And to be honest, the tech gap between civilian violence and state-sponsored violence has never been wider. Or, in other words, civilians don't have Reaper drones, etc.

      • jmyeet 2 days ago ago

        I agree on time frames. This system can limp on for decades yet. Or fall apart in 5 years (though probably not).

        As for the tech gap, I disagree.

        The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war (except, arguably, Grenada, if that counts, which it does not) since 1945. And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).

        Obviously war isn't exactly the same thing, but it's honestly not that different from suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.

        Ordinary people may not have Reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.

        Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.

  • DonnyV 2 days ago ago

    H1Bs are replacing workers.

    • GuinansEyebrows 2 days ago ago

      H1B is a type of visa, the holders of which are workers.

  • hunglee2 2 days ago ago

    In hindsight, remote working is an obvious stepping stone to offshoring, which itself is an inevitable milestone toward full automation. It is the work we do in in-person collaboration which will keep the moat high against AI disintermediation.

    • lan321 2 days ago ago

      Doubt. The meaningful work in person is organisational, and it's only marginally better onsite due to whiteboard > Excalidraw. Who does what, how it'll all interact, architecture, etc. If an LLM can code the difficult bits and doesn't fall apart once the project isn't a brand new proof of concept, it'll surely be able to pick the correct pattern and tooling and/or power through the mediocre/bad decision.

      • RaftPeople 2 days ago ago

        > The meaningful work in person is organisational

        And also gaining information about the domain from the business and the business requirements for the system or feature.

      • mensetmanusman 2 days ago ago

        The invisible hand will provide the answer!

      • deadbabe 2 days ago ago

        Have you used a whiteboard recently? It sucks. Writing anything significant takes forever, there’s no undo or redo, difficult to save and version. There’s just no way it’s better.

        • lan321 2 days ago ago

          It takes forever to make a beautiful diagram, but the usual flow is that you have your presentation for the base idea, and then when the questions come, you can all grab a marker and start making a mess on the boards around the room. We also have one in the dev room, which is nice for smaller topics.

          It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.

          • scarface_74 2 days ago ago

            I do the same with Lucid App shared on Zoom. I have the base diagram in Lucid and I start making changes during the meeting and adding sticky notes docs.

            > It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.

            This is 2025: over Zoom, we use Gong, which records, transcribes, and summarizes the action items and key discussion points. No need to take notes.

            My diagrams are already in Lucid with notes

          • deadbabe 2 days ago ago

            That just sounds like office theatre.

            • lan321 2 days ago ago

              We have some of that, but it's not the whiteboards. The dev one gets used multiple times a day in a room with only developers. No management, no power structure around.

              It's my general experience, also in prior workplaces, that sometimes a little drawing can tell a lot, and there's no quicker way to start it than to walk 3 meters and grab a marker. Same for getting attention towards a particular part of the board. On Excalidraw, it's difficult to coordinate people dynamically. On a whiteboard, people just point to the parts they're talking about while talking instinctively, so you don't get person A arguing with person B about Y while B thinks they are talking about D which is pretty close to Y as a topic.

    • scarface_74 2 days ago ago

      While I agree about remote work being a stepping stone to offshoring, I'm not sure about the next step.

      I would have been hard pressed to find a decent paying remote work as a fully hands on keyboard developer. My one competitive advantage is that I am in the US and can fly out to a customer’s site and talk to people who control budgets and I’m a better than average English communicator.

      In-person collaboration, though, is overrated. I've led mid-six-figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid App document, and shared Google Docs.

    • khuey 2 days ago ago

      > remote working is an obvious stepping stone to offshoring

      This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.

      > which itself is an inevitable milestone toward full automation

      But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.

    • tomohawk 2 days ago ago

      [dead]