Before the advent of AI, I wrote a Slack bot called slackbutt that built Markov chains of random lengths between 2 and 4 out of the chat history of the channel. It was surprisingly coherent. Making an "LLM" like that would be trivial.
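Something like this minimal Python sketch (not the actual slackbutt code; I'm reading the "random lengths between 2 and 4" as the number of words in the prefix the chain keys on):

    import random
    from collections import defaultdict

    def build_chain(messages, order):
        # Map each `order`-word prefix to the words that followed it in the history.
        chain = defaultdict(list)
        for msg in messages:
            words = msg.split()
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(messages, max_words=30):
        order = random.randint(2, 4)          # random order between 2 and 4
        chain = build_chain(messages, order)
        if not chain:
            return ""
        out = list(random.choice(list(chain.keys())))
        while len(out) < max_words:
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # Feed it the channel history and post whatever comes out.
    history = ["the build is broken again", "sounds like the build is fine on my machine"]
    print(generate(history))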
Reddit has at least one sub where the posts and the comments are generated by Markov-chain bots. More than a few times I've gotten a post from there in my feed and read through it, confused, for several minutes before realizing. IIRC it's called subreddit_simulator.
The original subreddit simulator ran on simple Markov chains.
Subreddit Simulator GPT2 used GPT-2, and was already so spookily accurate that IIRC its creators specifically said they wouldn't make a GPT-3 version, out of fear that people wouldn't be able to tell the difference between real and generated content.
It's actually kinda easy. Neural networks are just weirder versions of ordinary logic gate circuits. You can program them the same way and insert explicit, controlled logic and deterministic behavior. Somebody who doesn't know the details of LLM training wouldn't be able to tell much of a difference: it would still be packaged as a bundle of node weights and work through the same interfaces.
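A toy sketch of what I mean: hand-picked weights that compute XOR exactly, run through the usual forward pass. From the outside it's just a bundle of node weights; nothing reveals they were written by hand rather than trained.

    import numpy as np

    # Hand-picked weights for a tiny two-layer network that computes XOR exactly.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])   # hidden unit 1 = (a OR b), hidden unit 2 = (a AND b)
    W2 = np.array([1.0, -1.0])    # output = OR minus AND, i.e. XOR
    b2 = -0.5

    def step(x):
        # Hard threshold instead of a smooth activation, to keep it deterministic.
        return (x > 0).astype(float)

    def forward(a, b):
        h = step(np.array([a, b], dtype=float) @ W1 + b1)
        return int(step(h @ W2 + b2))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", forward(a, b))   # prints the XOR truth table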
The reason this doesn't work well when you try to insert strict logic into a traditionally trained LLM, even though the node properties are well known, is how intricately interwoven and mutually dependent all the different parts of the network are (that's why it's a LARGE language model). You can't just arbitrarily edit things, insert more nodes, or replace logic; you don't know what you might break. It's easier to place the inserted logic outside of the LLM network and train the model to interact with it ("tool use").
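Rough shape of what that looks like (hypothetical names and message format, not any particular vendor's tool-calling API):

    import json

    # The deterministic logic lives outside the network; the model is only
    # trained/prompted to emit a structured call when it wants to use it.
    def lookup_price(sku):
        prices = {"A100": 19.99, "B200": 4.50}
        return prices.get(sku)

    TOOLS = {"lookup_price": lookup_price}

    def run_turn(model_generate, user_msg):
        # model_generate stands in for whatever LLM call you use (hypothetical).
        reply = model_generate(user_msg)
        try:
            call = json.loads(reply)   # e.g. {"tool": "lookup_price", "args": ["A100"]}
        except json.JSONDecodeError:
            return reply               # plain text answer, no tool needed
        result = TOOLS[call["tool"]](*call["args"])
        # Hand the exact tool output back to the model to word the final answer.
        return model_generate(user_msg + "\nTool result: " + repr(result))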
It would make me laugh if they could train an LLM that could only regurgitate content verbatim.
https://en.wikipedia.org/wiki/Markov_chain
Well, it’s not an LLM, but “AI” doesn’t have a defined meaning, so from that perspective they kind of already did.