ChatGPT may be polite, but it's not cooperating with you

(theguardian.com)

23 points | by nickcotter a day ago

9 comments

  • OgsyedIE a day ago

    This is a hint of the direction of a future backlash. People who use LLMs and other AI programs as collaborators, and even as coworkers, will be stigmatized and attacked by the easily automated in-groups, because of their marginally less poor career prospects.

    Learning the cognitive framings for working with AI to boost your own efficiency is moderately easy, but it feels impossible to the uninformed. Some even deeply believe that lifelong learning is itself a hoax.

    • conartist6 a day ago

      Puhleeez, when will this efficiency boosting myth die?

      Everyone who already knew how to code knew that efficiency in writing software does not come from typing speed, full stop.

      Nobody wants to call out an employer on AI magical thinking in a tight job market though. You could lose your job.

      • OgsyedIE a day ago

        Efficiency comes from agency, vision, and creativity, not typing speed. All three of these can be and are boosted by having a full-time career coach on the phone in your pocket.

        • conartist6 11 hours ago

          I mean, using it as a career coach is a different direction, and one that I think is a moderately good use. Passing on rote knowledge is basically what this thing is for. Except... I'm not sure anybody really knows what good career advice is right now. The best you could do is go find a bunch of people who disagree and then decide which arguments convince you and why. A leg up, ok, but I don't see shortcuts to that process as increasing creativity, agency, or vision.

        • interstice 12 hours ago

          Ah yes, three things you get from a training corpus consisting of blogs and Stack Overflow posts. I’m frankly surprised it doesn’t say “closing this question as off-topic” halfway through a chat.

  • theothertimcook a day ago

    Went in to pull the curtain back—ended up doing PR for the wizard.

    • johnea 16 hours ago

      Seriously. One has to wonder how many of those "reviews" and "articles" were LLM generated.

      Like its glowing appraisal of Altman, the tool is also self-promoting.

      To me, this represents one of the most serious issues with LLM tools: the opacity of the model itself. The code (if provided) can be audited for issues, but the model, even if examined, is an opaque statistical amalgamation of everything it was trained on.

      There is no way (that I've read of) to identify biases, or intentional manipulations of the model, that would cause the tool to yield certain intended results.

      There are examples of DeepSeek generating results that refuse to acknowledge Tiananmen Square, etc. These serve as examples of how the generated output can be intentionally biased, without any way to readily predict this general class of bias by analyzing the model data.
