I believe this is via the original at https://timesofindia.indiatimes.com/technology/tech-news/aft... (possibly an LLM redraft of the original)
Recent and related...
Salesforce regrets firing 4000 experienced staff and replacing them with AI - https://news.ycombinator.com/item?id=46384781 - Dec 2025 (121 comments)
And Wall St has begun punishing AI driven layoffs instead of rewarding them. Not saying they understand the problem space but prediction markets are important signals when trying to understand complex spaces.
You have to treat LLMs like humans under time pressure. If you give a person too many tasks in a short amount of time, they will probably also skip and/or rush some tasks. It's important to clearly identify which parts ought to be deterministic.
It's difficult to get both human-level intelligence and machine-level precision at the same time. You still need deterministic tools to do the precision work.
That said, I think LLMs definitely have a place and can provide a degree of flexibility to processes which was not possible before.
It's unclear how they intend to fix these fundamental problems, tbh. Things like "Automate the 'must not fail' moments with rules, APIs, and triggers" and "Use policies, templates, function calling, and explicit do/don't constraints" make sense, but if you have deterministic workflows, what do the LLMs still add?
Using APIs makes sense, but isn't the whole point of these things that they can automate stuff away? It feels like we're building really big, complicated frameworks to put these things in. Is there still any actual benefit for stuff like this?
I'm not Salesforce obviously, but a combination of an LLM as the input interface, with deterministic APIs doing the recurring automation behind it, would work.
For instance, you talk with the LLM for a while, it gives you a workable set of DSL commands, you check what they do and make sure they match your needs, and then you set them to run as frequently as needed.
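A minimal sketch of that split, assuming a hypothetical DSL: the LLM only drafts commands, a human reviews them, and a validator rejects anything outside a fixed command set before it is ever scheduled.

```python
# Hypothetical sketch: the LLM is an authoring tool only; execution is deterministic.
# The DSL, command names, and workflow here are invented for illustration.

ALLOWED_COMMANDS = {"export_report", "sync_contacts", "send_digest"}

def validate_dsl(lines):
    """Parse reviewed DSL text into a plan, rejecting unknown commands."""
    plan = []
    for line in lines:
        cmd, _, arg = line.partition(" ")
        if cmd not in ALLOWED_COMMANDS:
            raise ValueError(f"unknown command: {cmd!r}")
        plan.append((cmd, arg))
    return plan

# The LLM conversation produces this draft; a human checks it, then it runs on a schedule.
draft = ["sync_contacts crm", "send_digest weekly"]
plan = validate_dsl(draft)
```

The point of the validator is that the probabilistic step can only ever propose plans; everything that actually runs comes from the deterministic allow-list.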
Love how "unreliable" is recast as "probabilistic"...
People went so far as to let agents discover workflow steps dynamically. That is abusing probabilistic logic to perform deterministic work. The hype has gone overboard, blurring the distinction.
Yeah, it's not their kingdom so it makes sense.
Eventually AI will simply be integrated in the OS and there will be no room for small players.
Windows is doing that, but no one's interested.
Just wait until they have sufficient data.
"While LLMs are amazing, they can't run your business by themselves... We ground AI in tight guardrails and deterministic frameworks, optimizing LLMs to deliver enterprise-grade reliability. Trusted. Reliable. Secure."
this sounds like it's copy and pasted straight from an LLM
Presumably the 4000 sacked workers won’t be coming back though?
Why would they? Sacking them had zero to do with LLMs, that was just an excuse.
"The message is clear: probabilistic models alone won't run mission-critical operations."
Sigh, it's LLM output all the way down. All jokes aside, it seems pretty obvious, right? There are things that should be more 'flexible'; this is where LLMs can shine. There are things that should be more rigid, which is where old-fashioned if/then logic should work. Did execs just try to plop it all in without giving a more 'balanced' solution a chance?
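The 'balanced' split can be sketched like this, with all names hypothetical: a probabilistic step interprets free-form input, and hard-coded if/then rules make the decision.

```python
# Sketch of the flexible/rigid split. classify_intent stands in for an LLM call;
# the placeholder keyword match and the business rules below are invented.

def classify_intent(message: str) -> str:
    # Flexible part: in reality this would be a model call interpreting language.
    return "refund" if "refund" in message.lower() else "other"

def handle(message: str, order_total: float) -> str:
    intent = classify_intent(message)   # probabilistic: free-form language in
    if intent == "refund":              # deterministic: hard business rules out
        if order_total <= 50.0:
            return "auto_refund"
        return "escalate_to_human"
    return "route_to_support"
```

The LLM never decides whether money moves; it only maps messy input onto a small set of intents that the rigid logic then handles.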
edit: "While LLMs are amazing, they can't run your business by themselves... We ground AI in tight guardrails and deterministic frameworks, optimizing LLMs to deliver enterprise-grade reliability. Trusted. Reliable. Secure."
Why does it feel like the management class suddenly got scared straight?
> when models were given more than eight instructions
I mean, you can call a model 8 times; this seems like they're looking for excuses.
> If users asked unrelated questions
If you have an enumerable set of business processes, why do you have a chat interface? Put down a set of buttons to start each business process.
This seems like egregious misuse of the tech, which has its problems, but hasn't had a fair chance here.
I wonder if the whole project was just smoke and mirrors to justify the layoffs.
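The buttons-over-chat point above can be sketched like this (process names and handlers are made up): enumerable processes become explicit entry points, so there is no model call to drift off-topic.

```python
# Sketch: enumerable business processes exposed as explicit actions ("buttons")
# rather than routed through a chat interface. Names are hypothetical.

PROCESSES = {
    "reset_password": lambda user: f"reset link sent to {user}",
    "cancel_order":   lambda user: f"cancellation started for {user}",
}

def trigger(process: str, user: str) -> str:
    handler = PROCESSES.get(process)
    if handler is None:
        return "unknown process"   # no LLM involved, nothing to hallucinate
    return handler(user)
```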
This reads like a list of obvious things that we have been saying for the past 3 years.
There are plenty of GPT evangelists too on HN. Plenty of people who have a few successful interactions with a code model and extrapolate that to LLMs reasoning and solving general problems etc.
Salesforce... A company worth so much, and able to lay off almost half its employees, but does what...? Agentforce doesn't seem to be anything but another GPT wrapper with a lot more buzzwords, and now they are already backtracking?
[flagged]
Please avoid internet tropes on HN.
https://news.ycombinator.com/newsguidelines.html