I had an ELIZA-like "chatbot" written in BASIC on the laptop I carried in high school (1991-1995). I added logging, let classmates interact with it, and then read the logs. The extent to which people treated the program as though it had agency was kind of horrifying. I can only imagine what's happening with LLMs today. It scares the willies out of me.
re: my ELIZA-like logs - I was at least somewhat ethical, insofar as I didn't share the logs with others, never told anybody that they had been logged, and never acted on what I read in them. Still, I was pretty shitty to the people who interacted with my computer. I assume current "AI" companies will show far less restraint toward their users than I did back then.
I am curious: was there any improvement on ELIZA-type chatbots before the advent of LLMs? What was the state of the art of conventional chatbot tech? Perhaps some IRC chatbots were more advanced?
We developed ALICE and AIML (https://en.wikipedia.org/wiki/Artificial_Intelligence_Markup...) as a way to program bots (some of my work included adding scripting and a learning mechanism). It was open sourced at the time, and AOL literally threw it into its AIM service at certain points. There were plenty of "connectors" for different services, but the ironic bit was that the central Graphmaster class was extremely memory intensive. This was all before AWS and the cloud.
I made a private fork of ALICE back in the day and maintained my own response ruleset to give it a bespoke personality. I extended the main ALICE codebase with a TCP-based API server and wrote another service that connected ALICE to IRC channels. I also made a GTK-based UI for starting, stopping, reloading, and monitoring ALICE, and to make writing rule files easier. This gave me an IRC buddy that joined me in chatrooms.
If I remember correctly, I also modified the Graphmaster to add support for rule priorities, so that I could manage rules beyond what the tree-based matching approach allowed.
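To make the Graphmaster idea concrete, here is a minimal sketch of a word-keyed pattern trie with per-rule priorities, where `*` is a wildcard. This is illustrative only, not the actual ALICE/AIML implementation, and all patterns and names below are invented.

```python
# Minimal sketch of a Graphmaster-style pattern trie with rule priorities.
# Illustrative only; not the real ALICE/AIML code.

from dataclasses import dataclass, field

@dataclass
class Node:
    children: dict = field(default_factory=dict)   # word or "*" -> Node
    rule: tuple | None = None                       # (priority, template)

class Graphmaster:
    def __init__(self):
        self.root = Node()

    def add(self, pattern, template, priority=0):
        """Insert a pattern like 'HELLO *' with a response template."""
        node = self.root
        for word in pattern.upper().split():
            node = node.children.setdefault(word, Node())
        # Higher priority wins if two rules end on the same node.
        if node.rule is None or priority > node.rule[0]:
            node.rule = (priority, template)

    def match(self, text):
        """Return the highest-priority template matching the input."""
        words = text.upper().split()
        best = None

        def walk(node, i):
            nonlocal best
            if i == len(words):
                if node.rule and (best is None or node.rule[0] > best[0]):
                    best = node.rule
                return
            # Try an exact word match, then let "*" consume one or more words.
            if words[i] in node.children:
                walk(node.children[words[i]], i + 1)
            if "*" in node.children:
                for j in range(i + 1, len(words) + 1):
                    walk(node.children["*"], j)

        walk(self.root, 0)
        return best[1] if best else None

bot = Graphmaster()
bot.add("HELLO *", "Hi there! What would you like to talk about?")
bot.add("* YOUR NAME *", "My name is ALICE.", priority=5)
print(bot.match("hello my friend"))
```

Because every word of every pattern becomes a node, a large ruleset keeps the whole tree resident in memory, which is roughly why the real Graphmaster was so memory hungry.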
One of the first things people would do, upon discovering that she was a bot, was to try to break her responses.
All of this was for private use; nothing was open sourced. Unfortunately, I think I forgot to copy it over from an old hard drive during a computer hardware migration, so it's gone now.
I remember Richard Wallace writing something along the lines of "if I were to build an artificial intelligence, I wouldn't use flesh and bones, that's just a bad choice" (not a verbatim quote) in response to people accusing AIML of being too simple/dumb an approach and favoring more complex approaches instead. In the age of LLMs, that statement has aged both well and badly.
Right before LLMs broke onto the scene, we had a few techniques I was aware of:
* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.
* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests (a rough sketch of that intent-matching idea follows below). This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind.
Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like its architecture has changed substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques, but in the voice assistant space rather than as a full chatbot.
[0] https://www.personalityforge.com/developers/how-to-build-cha...
[1] https://web.archive.org/web/20200104080459/https://rasa.com/ (old link because Rasa's marketing today is ambiguous about whether they're adding LLMs now).
[2] https://github.com/RasaHQ/rasa
[3] https://rhasspy.readthedocs.io/en/latest/
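For a sense of what "small-model ML to match intents" meant in practice, here is a toy sketch using scikit-learn, a common choice for this kind of pipeline though not necessarily what Rasa itself ships: a TF-IDF bag-of-words classifier over hand-labeled intents. The intents and utterances are invented for illustration.

```python
# Toy pre-LLM intent matcher: TF-IDF features + logistic regression.
# Illustrative only; not Rasa's actual pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training utterances per intent (invented examples).
training = [
    ("turn on the kitchen light", "lights_on"),
    ("switch the lamp on please", "lights_on"),
    ("turn off the lights", "lights_off"),
    ("kill the bedroom light", "lights_off"),
    ("what's the weather today", "weather"),
    ("will it rain tomorrow", "weather"),
]

texts, labels = zip(*training)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# At runtime the bot maps the predicted intent to a handler or canned reply.
print(model.predict(["please turn the hallway light on"])[0])  # expected: lights_on
```

The chatbot's "understanding" is just this classification step plus slot extraction; the conversation logic around it is still hand-written rules and dialogue trees.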
For Emacs users, see also: M-x doctor
From its commentary [1] in the source code:
;;; Commentary:
;; The single entry point `doctor', simulates a Rogerian analyst using
;; phrase-production techniques similar to the classic ELIZA demonstration
;; of pseudo-AI.
From the docstring [2] of the command:
"Switch to *doctor* buffer and start giving psychotherapy."
Also, here's an example session, in case someone is curious:
M-x doctor
I am the psychotherapist. Please, describe your problems. Each time
you are finished talking, type RET twice.
How can I explain to my friends that Emacs is a lifestyle choice?
What do you think?
That's not an answer, Doctor. Are you avoiding the question?
I'll ask the questions, if you don't mind!
I do mind, actually. This is my buffer, remember?
Why do you say that?
Because I'm the one holding the Ctrl and Meta keys here.
Is it because you are the one holding the ctrl and meta keys there
that you came to me?
C-x k

[1] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...
[2] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...
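For anyone wondering how this style of response is produced, here is a minimal sketch of the keyword-plus-pronoun-reflection trick that ELIZA-style programs (doctor.el included) rely on. The rules below are invented for illustration and are not taken from doctor.el.

```python
# Minimal ELIZA-style responder: keyword rules plus pronoun reflection.
# Invented rules for illustration; not doctor.el's actual phrase tables.

import re
import random

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["Do you really think {0}?", "Why do you say that?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please go on.", "What do you think?"]),  # fallback
]

def reflect(text):
    """Swap first- and second-person words so the echoed phrase makes sense."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, answers in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(answers).format(*groups)

print(respond("I feel my computer understands me"))
```

A fallback rule that matches anything ("Please go on.") is what lets such a tiny program keep a conversation going indefinitely.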
Once, way back when, I ported ELIZA to $lang and hooked it up to my AIM account. All well and good till the boss interacted with it for a couple of minutes before twigging on.
Authentic ELIZA in the browser: https://anthay.github.io/eliza.html
(A port/rewrite, I think. More details here: https://github.com/anthay/ELIZA )
I fondly remember M-x psychoanalyze-pinhead as well. (Though the actual Zippy the Pinhead quotes have long since been removed.)
Discussed in January: <https://news.ycombinator.com/item?id=42746506>
You can use elizallm.com (it also offers the OpenAI API, just in case you need that).
ELIZA is not an LLM. This site also doesn't say what program it is actually running, or give any details at all. It's just a chat box without any explanation.
HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM