Avoiding AI is hard – but our freedom to opt out must be protected

(theconversation.com)

189 points | by gnabgib 12 hours ago ago

112 comments

  • Bjartr 11 hours ago ago

    > Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it

    The article says this like it's a new problem. Automated resume screening is a long established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much. Although, it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.

    It's not like companies take responsibility for such automated systems today. I think they're used partly for liability cya anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too of course, but it's a lot harder to show intent, which can affect the damages awarded I think. Of course IANAL, so this could be entirely wrong. Interesting to think about though.

    • JKCalhoun 3 hours ago ago

      > The article says this like it's a new problem.

      I suspect though there might be something different today in terms of scale. Bigger corporations perhaps did some kind of screening (I am not aware of it though — at Apple I was asked to personally submit resumés for people I knew that were looking for engineering jobs — perhaps there was automation in other parts of the company). I doubt the restaurants around Omaha were doing any automation on screening resumés. That probably just got a lot easier with the pervasiveness of LLMs.

    • GeorgeCurtis 9 hours ago ago

      > It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online.

      More frightening, I think, is the potential for it to make decisions on insurance claims and medical care.

      • Saigonautica 7 hours ago ago

        I know someone in this space. The insurance forms are processed first-pass with AI or ML (I forget which). Then the remainder are processed by humans in Viet Nam. This is not for the USA.

        I've also vaguely heard of a large company that provides just this as a service -- basically a factory where insurance claims are processed by humans here in VN, in one of the less affluent regions. I recall they had some minor problems with staffing as it's not a particularly pleasant job (it's very boring). On the other hand, the region has few employment opportunities, so perhaps it's good for some people too.

        I'm not sure which country this last one is processing forms for. It may, or may not be the USA.

        I don't really have an opinion to offer -- I just thought you might find that interesting.

      • danielmarkbruce 8 hours ago ago

        There is an underlying assumption there that is certainly incorrect.

        So many stupid comments about AI boil down to "humans are incredibly good at X, we can't risk having AI do it". Humans are bad at all manner of things. There are all kinds of bad human decisions being made in insurance, health care, construction, investing, everywhere. It's one big joke to suggest we are good at all this stuff.

        • bluefirebrand 32 minutes ago ago

          "A computer can never be held accountable, therefore a computer should never make a management decision"

          Quote from an IBM training manual from 1979

          Seems just as true and even more relevant today than it was back then

        • chii 8 hours ago ago

          The fear is that by delegating to an ai, there's no recourse if the outcome for the person is undesirable (correctly or not).

          What is needed from the AI is a trace/line of reasoning showing how the decision was derived, like a court judgement, which has explanations attached. This should be available (or be made part of the decision documentation).
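
          For illustration only (the field names here are hypothetical, not any real system's schema), a decision record along these lines could retain the trace alongside the outcome so it can be produced on appeal:

            # Hypothetical sketch of "decision documentation" with an attached
            # line of reasoning; nothing here is a real vendor's format.
            from dataclasses import dataclass, field
            from datetime import datetime, timezone

            @dataclass
            class DecisionRecord:
                subject_id: str             # whose application or claim this was
                model_version: str          # exact model, prompt and config used
                inputs_summary: str         # what data the model actually saw
                reasoning_trace: list[str]  # step-by-step justification, like a written judgement
                outcome: str                # e.g. "approved" / "denied"
                decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

            record = DecisionRecord(
                subject_id="application-1234",
                model_version="claims-model-2025-05",
                inputs_summary="claim form, policy terms, two supporting documents",
                reasoning_trace=[
                    "Policy covers outpatient treatment up to the annual limit.",
                    "Claimed amount exceeds the remaining limit, so only part is payable.",
                ],
                outcome="partially approved",
            )
            print(record.outcome, record.reasoning_trace)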

          • stephen_g 8 hours ago ago

            But also an appeals process where it can be escalated to a real person. There should be nothing where any kind of AI system can make a final decision.

            I think the safest approach would be for a reviewer in an appeals process not to have any access to the AI's decision or reasoning, since if the incorrect decision was based on hallucinated information, a reviewer might be biased into thinking it's true even though it was imagined.

            • JimDabell 7 hours ago ago

              > There should be nothing where any kind of AI system can make a final decision.

              This would forbid things like spam filters.

              • guappa 6 hours ago ago

                You never look into your spam folder?

                • JimDabell 4 hours ago ago

                  A huge amount of spam never makes it into spam folders; it’s discarded silently.

                  • guappa 4 hours ago ago

                    If you use Microsoft Outlook, most of the spam never makes it into the spam folder because it goes to the inbox.

                    Do you have a source for your somewhat unbelievable claim?

                    • homebrewer 16 minutes ago ago

                      First paragraph, with references:

                      > by 2014, it comprised around 90% of all global email traffic

                      https://en.wikipedia.org/wiki/Email_spam

                    • bandrami 4 hours ago ago

                      Mail server op here: mail exchangers (mine included) absolutely silently drop insane amounts of email submissions without any indication to the sender or the envelope recipient. At most there's a log the operator can look at somewhere that notes the rejection and whatever rules it was based on.

                      • sokoloff 8 minutes ago ago

                        I run a family email server. I’ve worked with a few mail sysadmins trying to diagnose why some of our mail doesn’t get through. Even with a devoted and cooperative admin on the other side, I can absolutely believe that general logging levels are low and lots of mail is silently discarded as we’ve seen that in targeted tests when we both wanted to see the mail.

                      • guappa 2 hours ago ago

                        With no access to logs the claim is just handwaving.

                        • bandrami 2 hours ago ago

                          But I have access to logs, which is why I can describe to you how minimal the logging of email rejections is

                    • jcranmer an hour ago ago

                      The spam folder (or the inbox, in the Outlook case you describe) isn't for messages known to be spam; it's for messages that the filter thinks might have a chance of not being spam.

                      Most spam is so low-effort that the spam rules route it directly to /dev/null. I want to say the numbers are like 90% of spam doesn't even make it past that point, but I'm mostly culling this from recollections of various threads where email admins talk about spam filtering.

                    • sjsdaiuasgdia 2 hours ago ago

                      Tell me you've never run a mail server without telling me you've never run a mail server.

                      It's pretty standard practice for there to be a gradient of anti-spam enforcement. The messages the scoring engine thinks are certainly spam don't reach end users. If the scoring engine thinks it's not spam, it gets through. The middle range is what ends up in spam folders.
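
                      To make that gradient concrete, here is a minimal sketch in the spirit of score-based filters such as SpamAssassin (the thresholds are made up for illustration, not any filter's defaults):

                        # Route a message based on its spam score: near-certain spam is rejected
                        # outright, and the suspicious middle band lands in the spam folder.
                        REJECT_THRESHOLD = 10.0       # near-certain spam: dropped/rejected, the user never sees it
                        SPAM_FOLDER_THRESHOLD = 5.0   # suspicious: delivered, but to the spam folder

                        def route_message(spam_score: float) -> str:
                            if spam_score >= REJECT_THRESHOLD:
                                return "reject"        # at most a line in the server log
                            if spam_score >= SPAM_FOLDER_THRESHOLD:
                                return "spam_folder"   # the only part of the gradient users ever notice
                            return "inbox"

                        for score in (12.3, 6.1, 0.4):
                            print(score, "->", route_message(score))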

          • jdietrich 2 hours ago ago

            The EU has already addressed this in the GDPR. Using AI is fine, but individuals have a right to demand a manual review. Companies and government agencies can delegate work to AI, but they can't delegate responsibility.

            https://gdpr-info.eu/art-22-gdpr/

            • briandear an hour ago ago

              A manual review will just reject it as well. Some HR person who doesn’t know the difference between Java and JavaScript isn’t going to make better decisions. The problem is, and has always been, top-of-funnel screening.

          • danielmarkbruce 7 hours ago ago

            That's also a silly idea. Companies get sued all the time. There is recourse. If you run a bank that runs an AI model to make decisions on home loans and the effect is no black people get a loan, you are going to find yourself in court the exact same way as if you hire loan officers who are a bunch of racists. There is no difference.

            The trace is available to the entity running inference: every single token and all the probabilities of every choice. But not every decision made at every company is subject to a court hearing. So no, that's also a silly idea.

            • sokoloff 5 minutes ago ago

              I would bet on AI well before humans when it comes to “which of these applications should be granted a home loan?” and then tracking which loans get paid as agreed.

            • kazinator 7 hours ago ago

              Companies pretty much never get sued by anyone who is from a low income or otherwise disadvantaged group. It has to be pro bono work, or a big class action suit.

              • danielmarkbruce 7 hours ago ago

                Completely wrong. Companies get sued by various government agencies all the time. The FTC, DOJ, SEC, EPA, CFPB, probably others I can't think of.

                Even if you were right, AI doesn't change any of it - companies are liable.

                • cess11 6 hours ago ago

                  So when the corporations and the state decide to be nasty, that's OK?

                  • genewitch 6 hours ago ago

                    I've been getting a real "well, they deserved it" vibe off this site tonight. Thanks for not being that way.

                    • ToucanLoucan 2 hours ago ago

                      The user base for this site is incredibly privileged. Not meant as a judgement, just a stated observation.

                • watwut 6 hours ago ago

                  First, to the extent these checks worked, they are being destroyed.

                  Second, have you ever looked at that space? The agencies that did this were already weak, the hurdles you had to overcome were massive, and the space was abused by companies to the maximum.

            • exsomet 5 hours ago ago

              Taking the example of healthcare, a person may not have time to sue over an adverse decision. If the recourse is “you can sue the insurance company, but it’s going to take so long you’ll probably die while you’re waiting on that”, that’s not recourse.

              • jclulow 5 hours ago ago

                Right, this is the bedrock upon which injunctive relief is made available; viz., when money after the fact would not cancel out the damages caused by a party doing the wrong thing. Unfortunately you can't get that relief without having an expensive lawyer, generally, so it doesn't end up being terribly equitable for low income folks.

        • buescher 34 minutes ago ago

          "Human's aren't perfect at X, so it doesn't matter if an unaccountable person who's plausibly wrong about everything does it or an AI". What's the difference, anyway?

        • kazinator 7 hours ago ago

          Bad decisions in insurance are, roughly speaking, on the side of over-approving.

          AI will perform tirelessly and consistently at maximizing rejections. It will leave no stone unturned in search for justifications why a claim ought to be denied.

          • roenxi 3 hours ago ago

            That isn't what the incentives point to; in a free market the insurers are aligned to accuracy. An insurer who has a reputation for being unreasonable about payouts won't have any customers - what is the point of taking out a policy if you expect the insurer to be unreasonable about paying? It'd take an odd customer to sign up for that.

            If they over-approve they will be unprofitable because their premiums aren't high enough. If they under-approve it'll be because their customers go elsewhere.

            • sokoloff a minute ago ago

              [delayed]

            • Coffeewine 2 hours ago ago

              But I expect my insurer to be unreasonable about paying today.

              It’s just that A) I didn’t choose this insurer, my employer did and on balance the total package isn’t such that I want a new employer and B) I expect pretty much all my available insurance companies to be unreasonable.

          • squidbeak 4 hours ago ago

            This has an easy public policy fix through something like a national insurance claim assessment agency, with an impartial prompt, which AI will make reasonably cheap to fund. It's always been perverse that insurance companies judge the merits of their own liabilities.

        • j1436go 5 hours ago ago

          Looking at the current state of AI models that assist in software engineering, I don't have much faith in them being any better; quite the contrary.

        • otabdeveloper4 6 hours ago ago

          AI will be used to justify bad decisions. Now that we have an excuse in the form of "AI", we don't need to fix or own our mistakes.

      • PieTime 4 hours ago ago

        I know people who died years ago because of AI algorithms. States implemented programs with no legal oversight, governed only by an algorithm.

    • BlueTemplar 3 hours ago ago

      Good point: like when a killing is done by a machine that wasn't directly operated, the perpetrator might be found guilty, but of manslaughter rather than murder?

  • beloch 6 hours ago ago

    > "AI decision making also needs to be more transparent. Whether it’s automated hiring, healthcare or financial services, AI should be understandable, accountable and open to scrutiny."

    You can't simply look at an LLM's code and determine if, for example, it has racial biases. This is very similar to a human. You can't look inside someone's brain to see if they're racist. You can only respond to what they do.

    If a human does something unethical or criminal, companies take steps to counter that behaviour which may include removing the human from their position. If an AI is found to be doing something wrong, one company might choose to patch it or replace it with something else, but will other companies do the same? Will they even be alerted to the problem? One human can only do so much harm. The harm a faulty AI can do potentially scales to the size of their install base.

    Perhaps, in this sense, AIs need to be treated like humans while accounting for scale. If an AI does something unethical/criminal, it should be "recalled", i.e. taken off the job everywhere until it can be demonstrated that the behaviour has been corrected. It is not acceptable for a company, when alerted to a problem with an AI they're using, to say, "Well, it hasn't done anything wrong here yet."

    • amelius 5 hours ago ago

      The question I have is: should an AI company be allowed to push updates without testing by an independent party (e.g. in self driving cars)?

    • BlueTemplar 3 hours ago ago

      Not sure why you used the example of a human when those actually have legal responsibility and, as you suggest, are unique?

      Rather, why would LLMs be treated any differently from other machines? Mass recall of flawed machines (if dangerous enough) is common, after all.

      • lcnPylGDnU4H9OF a few seconds ago ago

        Why even look at the LLM? A human deployed it and defers to its advice for decision-making; the problem is in the decision-making process, which is fully under the human's control. It should be obvious how to regulate that: be critical of what the human behind the LLM does.

  • hedora 9 hours ago ago

    Maybe people will finally realize that allowing companies to gather private information without permission is a bad idea, and should be banned. Such information is already used against everyone multiple times a day.

    On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!

    This tradeoff has basically nothing to do with recent advances in AI though.

    Also, with the current performance trends in LLMs, we seem very close to being able to run models locally. That’ll blow up a lot of the most abusive business models in this space.

    On a related note, if AI decreases the number of mistakes my doctor makes, that seems like a win to me.

    If the AI then sold my medical file (or used it in some other revenue generating way), that’d be unethical and wrong.

    Current health care systems already do that without permission and it’s legal. Fix that problem instead.

    • heavyset_go 9 hours ago ago

      > On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!

      There's a difference between reading something and ripping it off, no matter how you launder it.

      • MoltenMan 9 hours ago ago

        I think the line is actually much blurrier than it might seem. Realistically everything is a remix of things people have done before; almost nothing is truly brand new. So why specifically are people allowed to build on humanity's past achievements, but not AI?

        • strogonoff 4 hours ago ago

          > why specifically are people allowed to build on humanity's past achievements, but not AI?

          Let’s untangle this.

          1. Humanity’s achievements are achievements by individuals, who are motivated by desires like recognition, wealth, personal security, altruism, self-actualisation.

          2. “AI” does not build on that body of work. A chatbot has no free will or agency; there is no concept of “allowing” it to do something—there is engineering it and operating a tool, and both are done by humans.

          3. Some humans today engineer and operate tools, for which (at least in the most prominent and widely-used cases) they generally charge money, yet which essentially proxy the above-mentioned original work by other humans.

          4. Those humans engineer and operate said tools without asking the people who created said work, in a way that does not benefit or acknowledge them, thus robbing people of many of the motivations mentioned in point 1, and arguably in circumvention of legal IP protections that exist in order to encourage said work.

          • TeMPOraL an hour ago ago

            That's a long-winded way of spelling out a Dog in the Manger approach to society, coupled with huge entitlement issues:

            There is something valuable to others, that I neither built nor designed, but because I might have touched it once and left a paw print, I feel hurt no one wants to pay me rent for the valuable thing, and because of that, I want to destroy it so no one can have it.

            Point 2) operates on a spectrum; there are plenty of cases where human work has no agency or free will behind it - in fact, it's very common in industrialized societies.

            RE 3), "engineers" and "operators" are distinct; "engineers" make money because they provide something of immense value - something that exists only because of the collective result of 1), but any individual contribution to it is of no importance. The value comes from the amount and diversity and how it all is processed. "Operators" usually pay "engineers" for access, and then they may or may not use it to provide some value to others, or themselves.

            In the most basic case, "engineers" are OpenAI, and "operators" are everyone using ChatGPT app (both free and paid tiers).

            RE 4) That's the sense of entitlement right there. Motivations from point 1. have already been satisfied; the value delivered by GenAI is a new thing, a form of reprocessing to access a new kind of value that was not possible to extract before, and that is not accessible to any individual creator, because (again) it comes from sheer bulk and diversity of works, not from any individual one.

            IMO, individual creators have a point about AI competing with them for their jobs. But that's an argument against deployment and about what the "operators" do with it; it's not an argument against training.

        • chii 8 hours ago ago

          > So why specifically are people allowed to build on humanity's past achievements, but not AI?

          because those people seem to think that individuals building on it will be at too small a scale to be commercially profitable (and thus publishers are OK with it as a form of social credit/portfolio building).

          As soon as it is made clear that these published data can be monetized (if only by large corporations with money), they want a piece of the pie that they think they deserve (and aren't getting).

      • danielmarkbruce 8 hours ago ago

        Just like there is a difference between genuine and disingenuous.

      • protocolture 8 hours ago ago

        >There's a difference between reading something and ripping it off, no matter how you launder it.

        Yes, but that argument cuts both ways. There is a difference, and it's not clear that training is "ripping off".

    • BlueTemplar 3 hours ago ago

      > On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!

      > This tradeoff has basically nothing to do with recent advances in AI though.

      I am surprised someone on HN would think this, especially considering the recent examples of DDoS via LLM crawlers compared to how websites are glad to be crawled by search engines.

      For the first part: why do you think that robots.txt even exists? Or why, say, YouTube constantly tries (and fails) to prevent you from using the 'wrong' kind of software to download their videos?

  • lacker 12 hours ago ago

    I think most people who want to "opt out of AI" don't actually understand where AI is used. Every Google search uses AI, even the ones that don't show an "AI panel" at the top. Every iOS spellcheck uses AI. Every time you send an email or make a non-cryptocurrency electronic payment, you're relying on an AI that verifies that your transaction is legitimate.

    I imagine the author would respond, "That's not what I mean!" Well, they should figure out what they actually mean.

    • tkellogg 12 hours ago ago

      Somewhere in 2024 I noticed that "AI" shifted to no longer include "machine learning" and is now closer to "GenAI" but still bigger than that. It was never a strict definition, and was always shifting, but it made a big shift last year to no longer include classical ML. Even fairly technical people recognize the shift.

      • poslathian 6 hours ago ago

        I’ve worked in this field for 20+ years and as far as I can tell the only consistent colloquial definition of AI is “things lay people are surprised a computer can do right now”

        • mjburgess 2 hours ago ago

          AI has always been a marketing term for computer science research -- right from the inception. It's a sin of academia, not the public.

          • TeMPOraL an hour ago ago

            As if anyone in the public cared about marketing for CS research. Hardly anyone is even exposed to it.

            AI in the public mind comes from science fiction, and it means the same thing it has meant for the past 5+ decades: a machine that presents recognizable characteristics of a thinking person - some story-specific combination of being as smart as (or much smarter than) people in a broad (if limited) set of domains and activities, and having the ability (or at least giving the impression of it) to autonomously set goals based on its own value system.

            That is the "AI" general population experiences - a sci-fi trope, not tech industry marketing.

            • mjburgess 36 minutes ago ago

              The sci-fi AI boom in the 60s followed the AI research boom. This was the original academia hype cycle, and one which still scars the public mind via that sci-fi.

      • jedbrown 8 hours ago ago

        The colloquial definitions have always been more cultural than technical, but it's become more acute recently.

        > I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. https://ali-alkhatib.com/blog/defining-ai

      • JackeJR 11 hours ago ago

        It swings both ways. In some circles, logistic regression is AI, in others, only AGI is AI.

    • simonw 11 hours ago ago

      Came here to say exactly that. The use of "AI" as a weird, all-encompassing boogeyman is a big part of the problem here - for a growing number of people it's quickly coming to mean "any form of technology that I don't like or don't understand".

      The author of this piece made no attempt at all to define what "AI" they were talking about here, which I think was irresponsible of them.

    • leereeves 12 hours ago ago

      I imagine the author would respond: "That's what I said"

      "Opting out of AI is no simple matter.

      AI powers essential systems such as healthcare, transport and finance.

      It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online."

      • mistrial9 11 hours ago ago

        Tech people often refer to politicians as somehow dumb, but the big AI safety legislation of two years ago, on both sides of the North Atlantic, dives deeply into exactly this as "safety" for the general public.

      • Robotbeat 11 hours ago ago

        Okay, then I guess they’ll agree to paying more for those services since they’ll cost more to deal with someone’s boutique Amistics.

        • codr7 9 hours ago ago

          If I was given the choice, I would without exception pay more for non-AI service.

          • Gigachad 9 hours ago ago

            Some businesses have done something similar. Some banks in Australia announced they will be charging a fee for withdrawing cash at the counter rather than at ATMs; other businesses charge fees for receiving communications by letter rather than email.

        • lacker 8 hours ago ago

          It's not even about paying more. Think of email. Every time you send an email, there's an AI that scans it for spam.

          How could there be a system that lets you opt out, but keep sending email? Obviously all the spammers would love to opt out of spam filtering, if they could.

          The system just fundamentally does not work without AI. To opt out of AI, you will have to stop sending email. And using credit cards. And doing Google searches. Etc etc etc...

    • drivingmenuts 11 hours ago ago

      I'm just not sure I see where AI has made my search results better or more reliable. And until it can be proven that those results are better, I'm going to remain skeptical.

      I'm not even sure what form that proof would take. I do know that I can tolerate non-deterministic behavior from a human, but having computers demonstrate non-deterministic behavior is, to me, a violation of the purpose for which we build computers.

      • simonw 11 hours ago ago

        "I'm just not sure I see where AI has made my search results better or more reliable."

        Did you prefer Google search results ten years ago? Those were still using all manner of machine learning algorithms, which is what we used to call "AI".

        • lacker 8 hours ago ago

          Even 20 years ago, it wasn't using AI for the core algorithm, but for plenty of subsystems, like (IIRC) spellchecking, language classification, and spam detection.

    • BlueTemplar 3 hours ago ago

      Yes, yet another example of why the word 'AI' should be tabooed: use 'machine', 'software', "software using neural networks" (or another specific term) instead.

  • roxolotl 11 hours ago ago

    Reminds me of the wonderful Onion piece about a Google Opt Out Village. https://m.youtube.com/watch?v=lMChO0qNbkY

    I appreciate the frustration that, if not quite yet, it’ll soon be near impossible to live a normal life without exposure to GenAI systems. Of course, as others say here, and as the date on the Onion piece shows, it’s sadly not a new concern.

  • tim333 5 hours ago ago

    The trouble with his examples of doctors or employers using AI is that it's not really about him opting out; it's about forcing others, the doctors and employers, not to use AI, which will be tricky.

  • yoko888 9 hours ago ago

    I’ve been thinking about what it really means to say no in an age where everything says yes for us.

    AI doesn’t arrive like a storm. It seeps in, feature by feature, until we no longer notice we’ve stopped choosing. And that’s why the freedom to opt out matters — not because we always want to use it, but because knowing we can is part of what keeps us human.

    I don’t fear AI. But I do fear a world where silence is interpreted as consent, and presence means surrender by default.

    • chii 8 hours ago ago

      > silence is interpreted as consent

      Silence is indeed consent (to the status quo). You need to vote with your wallet, personal choices and such - if you want to be comfortable, choosing the status quo is the way, and thus consent.

      There's no possibility of a world where you get to remain comfortable but still get to dictate a "choice" contrary to the status quo.

      • yoko888 5 hours ago ago

        That’s fair. You’re speaking from a world where agency is proven by cost — where the only meaningful resistance is one that hurts. I don’t disagree. But part of me aches at how normalized that has become. Must we always buy our way out of the systems we never asked to enter? I’m not asking to be safe or comfortable. I’m asking for the space to notice what I didn’t choose — and to have that noticing matter.

        • BlueTemplar 3 hours ago ago

          Quite a lot of people try(ied) to buy it with blood and still fail(ed):

          https://samzdat.com/2017/06/01/the-meridian-of-her-greatness...

          (Note how, depending on how one (mis)reads what you wrote, this is human nature, there's no escaping it, and you would likely be miserable if you tried.)

          • yoko888 an hour ago ago

            You’re right — many have tried to resist, even with blood, and still the gears turned. Systems are excellent at digesting rebellion and turning it into myth.

            But still, I think there’s something in refusing to forget. Not to win — but to remember that not everything was agreed to in silence.

            Maybe noticing isn’t power. But maybe it’s the thing that keeps us from surrendering to the machinery entirely.

  • djoldman 12 hours ago ago

    > Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it. Or imagine visiting a doctor where treatment options are chosen by a machine you can’t question.

    I wonder when/if the opposite will be as much of an article hook:

    "Imagine applying for a job, only to find out that a human rejected your resume before an algorithm powered by artificial intelligence (AI) even saw it. Or imagine visiting a doctor where treatment options are chosen by a human you can’t question."

    The implicit assumption is that it's preferred that humans do the work. In the first case, probably most would assume an AI is... ruthless? biased? Both exist for humans too. Not that the current state of AI resume processing is necessarily "good".

    In the second, I don't understand as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.

    • userbinator 12 hours ago ago

      Humans can be held accountable. Machines can't. AI dilutes responsibility.

      • kevmo314 12 hours ago ago

        AI didn't bring anything new to the table, this is a human problem. https://www.youtube.com/watch?v=x0YGZPycMEU

        • whilenot-dev 8 hours ago ago

          GenAI absolutely brings something new to the table! These models should be perceived as human-like intelligence when it's time to bring value to shareholders, but are designed to provide just enough non-determinism to avoid responsibilities.

          All problems are human and nothing will ever change that. Just imagine the effects on anyone hit by something like the British Post Office scandal[0], only this time it's impossible to comprehend any faults in the software system.

          [0]: https://en.wikipedia.org/wiki/British_Post_Office_scandal

      • djoldman 10 hours ago ago

        At least in the work setting, employers are generally liable for stuff in the workplace:

        https://en.wikipedia.org/wiki/Vicarious_liability

      • tbrownaw 12 hours ago ago

        How fortunate that AIs are tools operated by humans, and can't cause worse responsibility issues than when a human employee is required to blindly follow a canned procedure.

        • whilenot-dev 8 hours ago ago

          I can't tell if this is sarcasm...

          GenAI interfaces are rolled out as chat products to end users; they evaporate the last responsibility that remains with any human employee. This responsibility shift from employee to end user is made on purpose; "worse responsibility issues" are real and designed to land on the customer side.

      • andrewmutz 11 hours ago ago

        What do you mean held accountable? No HR human is going to jail for overlooking your resume.

        If you mean that a human can be fired when they overlook a resume, an AI system can be similarly rejected and no longer used.

        • theamk 11 hours ago ago

          No, not really. If a single HR person is fired, there are likely others to pick up the slack. And others will likely learn something from the firing, and adjust their behavior accordingly if needed.

          On the other hand, "firing" an AI from an AI-based HR department will likely paralyze it completely, so it's closer to "let's fire every single low-level HR person at once" - something very unlikely to occur.

          The same goes for all other applications too: firing a single nurse is relatively easy; replacing an AI system with a new one is a major project which likely takes dozens of people and millions of dollars.

          • YetAnotherNick 8 hours ago ago

            If the system is built externally by a vendor, you can change the vendor, and that creates pressure on the vendor ecosystem not to create bad systems.

            If it is built internally, you need people responsible for creating reliable tests and someone to lead the project. In a way it's not very different from when your external system is bad or crashing: you need accountability in the team. Google can't fire "Google Ads", but that doesn't mean they can't expect Google Ads to reliably make them money and expect people to be responsible for maintaining quality.

        • locopati 11 hours ago ago

          Humans can be held accountable when they discriminate against groups of people. Try holding a company accountable for that when they're using an AI system.

          • andrewmutz 11 hours ago ago

            You haven’t explained how it’s different

          • SpicyLemonZest 11 hours ago ago

            I don't think humans actually can be held accountable for discrimination in resume screening. I've only ever heard of cases where companies were held accountable for discriminatory tests or algorithms.

        • drivingmenuts 11 hours ago ago

          You cannot punish an AI - it has no sense of ethics or morality, nor a conscience. An AI cannot be made to feel shame. You cannot punish an AI for transgressing.

          A person can be held responsible, even when it's indirect responsibility, in a way that serves as a warning to others, to avoid certain behaviors.

          It just seems wrong to allow machines to make decisions affecting humans, when those machines are incapable of experiencing the world as a human being does. And yet, people are eager to offload the responsibility onto machines, to escape responsibility themselves.

          • SoftTalker 11 hours ago ago

            Humans making the decision to use AI need to be responsible for what the AI does, in the way that the owner/operator of a car is responsible for what the car does.

            • userbinator 10 hours ago ago

              "Humans" is the problem. There's one driver in a car, but likely far more than one human deciding to use AI, so who takes responsibility for it?

              • SoftTalker 9 hours ago ago

                If there's more than one possibility, then their boss. Or the boss's boss. Or the CEO, ultimately.

    • linsomniac 12 hours ago ago

      >Or imagine visiting a doctor where treatment options are chosen by a human you can’t question

      That really struck a chord with me. I've been struggling with chronic sinusitis, without much success treating it. I had ChatGPT o3 do a deep research run on my specific symptoms and test results, including a negative allergy test (on my shoulder) even though the doctor observed allergic reactions in my sinuses.

      ChatGPT seemed to do a great job, and in particular came up with a pointer to an NIH reference reporting that 25% of patients in a study had "local rhinitis" (isolated allergic reactions) in their sinuses that didn't show up elsewhere. I asked my ENT if I could be experiencing a local reaction in my sinuses that didn't show up in my shoulder, and he completely dismissed that idea with "That's not how allergies work, they cause a reaction all over the body."

      However, I will say that I've been taking one of the second gen allergy meds for the last 2 weeks and the sinus issues have been resolved and staying resolved, but I do need another couple months to really have a good data point.

      The funny thing is that this Dr is an evening programmer, and every time I see him we end up talking about how amazing the different LLMs are for programming. He also really seems to keep up with new ENT tech; he was telling me all about a new "KPAP" algorithm they are working on FDA approval for, which is apparently much less annoying to use than CPAP. But he didn't have any interest in looking at the NIH reference.

      • davidcbc 11 hours ago ago

        > I do need another couple months to really have a good data point.

        You need another couple months to really have a good anecdote.

        • linsomniac 10 hours ago ago

          I think whether I'm cured or not only slightly minimizes the story of a physician who discounted something that seemingly impacts 25% of patients... It's also interesting to me that ChatGPT came up with research supporting an answer to my primary question, but the Dr. did not.

          The point being that there's a lot that LLMs can do in concert with physicians; discounting either one is not useful or interesting.

    • leereeves 12 hours ago ago

      > In the second, I don't understand as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.

      I wish that were the case, but in my experience it is not. Every time I've seen a doctor, they offered only one medication, unless I requested a different one.

      • linsomniac 11 hours ago ago

        >they offered only one medication

        I've had a few doctors offer me alternatives and talk through the options, which I'll agree is rare. It sure has been nice when it happened. One time I did push back on one of the doctor's recommendations: I was with my mom and the doctor said he was going to prescribe some medication. I said "I presume you're already aware of this, but she's been on that before and reacted poorly to it, and we took her off it because of that." The doctor was NOT aware of that and prescribed something else. I sure was glad to be there and be able to catch that.

      • zdragnar 11 hours ago ago

        There's a few possible reasons this can happen.

        First is that the side effect profile of one option is much better known or tolerated, so the doctor will default to it.

        Second is that the doctor knows the insurance company / government plan will require attempting to treat a condition with a standard cheaper treatment before they will pay for the newer, more expensive option.

        There's always the third case where the doctor is overworked, lazy or prideful and doesn't consider the patient may have some input on which treatment they would like, since they didn't go to medical school and what would they know anyway?

  • daft_pink 11 hours ago ago

    I’m not sure it’s just code; it’s an algorithm like any other. And I’m not sure that you can opt out of algorithms.

  • bamboozled 4 hours ago ago

    It's absolutely never going to happen...there I said it.

  • Nasrudith 7 hours ago ago

    This seems like one of those ‘my personal neuroses deserve to be treated like a societal problem’ articles. I’ve seen the exact same sort of thing in complaints about the inability to opt out of being advertised to.

  • mianos 11 hours ago ago

    Using a poem from 1897 to illustrate why AI will be out of control? The web site name is very accurate. That's sure to start a conversation.

  • lokar 8 hours ago ago

    They include no functional definition of what counts as AI.

    Without that, the whole thing is just noise.

  • JimDabell 7 hours ago ago

    Article 22 of the GDPR already addresses this, you have the right to human intervention.

  • caseyy 6 hours ago ago

    You can’t outlaw being an asshole. You can’t outlaw being belligerent. And you can’t outlaw being a belligerent asshole with AI. There isn’t a question of “should we”. We have no means, as things stand.

    Our intellectual property, privacy, and consumer protection laws were all tested by LLM tech, and they failed the test. Same as with social media — with proof it has caused genocides and suicides, and common sense saying it’s responsible for an epidemic of anxiety and depression, we have failed to stop its unethical advance.

    The only winning move is to not play the game and go offline. Hope you weren’t looking to date, socialize, bank, get a ride, order food at restaurants, and do other things, because that has all moved online and is behind a cookie warning saying “We Care About Your Privacy” and listing 1899 ad partners to which the service will feed your behavioral measurements for future behavior manipulation. Don’t worry, it’s “legitimate interest”. Then it will send an email to your inbox that will do the same, and it will have a tracking pixel so a mailing list company can get a piece of that action.

    We are debating what parts of the torment nexus should or shouldn’t be allowed, while being tormented from every direction. It’s actually getting very ridiculous how too little too late it is. But I don’t think humanity has a spine to say enough is enough. There are large parts of humanity that like and justify their own abuse, too. They would kiss the ground their abusers walk on.

    It is the end stage of corporate neo-liberalism. Something that could have worked out very well in theory if we didn’t become mindless fanatics[0] of it. Maybe with a little bit more hustle we can seed, scale and monetize ethics and morals. Then with a great IPO and an AI-first strategy, we could grow golden virtue retention in the short and long-run…

    [0] https://news.ycombinator.com/item?id=33668502

  • Imnimo 9 hours ago ago

    This is going to end with me having to click another GDPR-style banner on every website, isn't it?