No AI slop
Posted by p_r0@reddit | vintagecomputing | View on Reddit | 152 comments
Content made primarily (or entirely) by generative artificial intelligence is not allowed. This includes AI images, AI videos, AI text, and AI code.
As a general rule, if it's recognizable as AI, it's not allowed in /r/vintagecomputing. Please continue reporting these posts if you see them.
NatteringNabob69@reddit
Vintage dude here. I remember in the vintage era we looked forward to every speed increase, every new development environment, every new compiler or dev tool - to make us faster. Every step of the way allowed us to solve more problems with less effort. That’s the vintage ethos. Constant improvement, ever higher levels of abstraction.
Young people born after this era look back at it nostalgically as something it most decidedly wasn’t. This was the most pro-technology era in modern memory. AI was born then.
There’s a guy making modern ROM replacements using modern microcontrollers. He uses AI in his dev process. But the products work and they can make it easier to use your vintage hardware. Most of his customers care only about that. I guess you are free to care about other things.
MousseHuge8339@reddit
I don't think this should be encouraged. Devs are now being replaced with prompt jockeys who don't actually know how the generated code works. People in general are becoming too reliant on AI. And this is giving authority yet another thing to "cut off" to force the population into submission.
NatteringNabob69@reddit
That argument can be made about every technological improvement that's happened during your lifetime. You are just drawing an arbitrary line now.
FowlZone@reddit
wild that this would need to be said in a /vintage computing/ subreddit
pinkocatgirl@reddit
The only AI we should allow is whatever can run on a 486 lmao
bigbigdummie@reddit
Vintage AI would be on-topic. Eliza anyone?
SDogAlex@reddit
How bout MacinAI Local? ;) https://oldapplestuff.com/blog/MacinAI-Local/
OhCrapImBusted@reddit
Clippy?
flecom@reddit
AI, banned
flecom@reddit
nope, eliza is AI, so banned
so stupid
Culbrelai@reddit
Bonzi Buddy
RichardGereHead@reddit
I remember a DOS product called "Guru" by MDBS. Came out in the late 80s! We were an ISV for MDBS back then and played around with it and everything. Just think of it: AI in 640K!
harexe@reddit
Better on vintage supercomputers: an AI that runs on a Cray would be insanely cool
Illuminatus-Prime@reddit
"Good afternoon, Doctor Chandra. I am ready for my lesson."
hexavibrongal@reddit
It's been a problem in almost every retrocomputing forum I follow, and in a couple cases the mods refused to ban it. And it's definitely not just bots, it's sometimes long-time members of the community who for some reason are obsessed with AI image generators.
Contrantier@reddit
Makes me wonder if it's only bots that would do something as stupid as posting AI content here. Maybe this post won't help as much as we hope. But we can dream.
sputwiler@reddit
Makes sense; AI is not vintage.
Mithgaraf@reddit
THIS generation of AI is not vintage -- we're at something like gen5-gen7 AI these days (I've lost track). I think gen2 AI was spearheaded in the mid-1980s (I took a class in Prolog, which was supposed to be a building block for it). Gen1 AI was, I believe, exclusively keyword/table-driven (ELIZA) with a very limited data set.
How far AI has come!
How far the human race has fallen...
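For anyone curious what "keyword/table-driven" means in practice, a gen1 ELIZA-style responder can be sketched in a few lines: scan the input against a fixed pattern table and fill a canned template. This is only an illustrative sketch; the rules below are invented for the example, not Weizenbaum's original DOCTOR script.

```python
import re

# Hypothetical keyword/response table: (pattern, response template).
# Real ELIZA used a larger hand-written script, but the mechanism is the same.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0}? Go on."),
]
DEFAULT = "Please, go on."

def respond(text: str) -> str:
    """Return the first matching canned response, or a default prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

No learning, no model, no data set beyond the table itself, which is why it fits comfortably on 1960s-80s hardware.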
flecom@reddit
but you can use its output on vintage machines? A guy I watch on YouTube used an LLM to write software in Forth, which I thought was pretty neat
a lot of these machines predate me so having something that can assist in writing code I think is pretty handy especially as the knowledge base (literally) dies off
sputwiler@reddit
Like, why tho. That defeats the whole point.
We are about preserving the knowledge before it dies off. That's not a foregone conclusion that means we must turn to LLMs. In fact, why not turn to /r/vintagecomputing and ask here? By continuing to use these machines and writing about what we learn, we pass the knowledge on. It's also not how machines were used back then; you had to actually program the thing. The 'user/programmer is the same person and therefore knows their machine well' is part of the experience of using it, though of course not everyone wants to go that far. If you don't want to interact with a vintage computer (so you have an LLM do it instead), why not just use a modern computer?
NOW some things are difficult to look up, and sometimes an LLM can point you in the right direction like some kinda fuzzy-accuracy librarian ('the info you want is probably this' type beat). That's totally OK and seems useful, but that's not something you would normally post about; it's something you would do in service of what you want to post about. Basically, people want to hear about what you did, not what some rental robot did.
flecom@reddit
because I like vintage computers, but I don't know how to program vintage computers very well if at all... I have a LOT of vintage machines, being an expert on all of them is not really possible
if an LLM can help me write an application for a vintage machine that is useful I guess that's not worth sharing? seems really stupid that just because AI assisted in something the end result is somehow always bad
sputwiler@reddit
No, it is not worth sharing.
Sharing what an LLM did for you is just advertising LLMs; it's not something you did. Anyone can do that; they just ask an LLM themself. Your post doesn't contribute anything. It's basically the same as "anyone can use google" but with extra steps and money. Basically, posting that you got an LLM to do something for you is the equivalent of posting that you paid someone on fiverr or one of those gig economy websites to do something. It's just as interesting now as it was then, which is nil.
If you find a totally new way of applying LLM technology to a problem, that's interesting, but in that case you're still not posting the output of the LLM, you're posting the things you did (AI research) to change LLM technology to make it applicable to a problem.
By all means, knock yourself out using LLMs yourself though. Its output is just not useful content for discussion or posting. Especially since most LLM output is inscrutable to the person who requested it, the OP is often incapable of discussing their own post!
flecom@reddit
wow imagine this kind of luddite attitude when these vintage machines existed... GUIs mean anyone can do anything! real men program with toggle switches!
sputwiler@reddit
You've completely missed the point.
sparkyblaster@reddit
What's vintage these days? A 2006-2008 Mac Pro can take quite a bit of RAM. 32-64 GB might be able to pull off something decent.
sputwiler@reddit
The AI itself is not vintage. It was released less than 5 years ago.
sparkyblaster@reddit
So, no modern software on old hardware at all here then?
sputwiler@reddit
That's a different topic.
sparkyblaster@reddit
How? Local AI models are just modern software.
sputwiler@reddit
Nobody said you couldn't do that, just don't post the AI slop that comes out. The post about getting it to run at all could be interesting, but that wouldn't be an AI-generated post.
sputwiler@reddit
The topic is "No AI [output]," not running AI on vintage hardware.
Scorpius666@reddit
This should be a must in every subreddit.
Contrantier@reddit
Unless it's explicitly an AI sub, yes.
Madness_Reigns@reddit
Even then that shit should be out. Gimme my RAM back.
Contrantier@reddit
I'd just stay off those subs if I were you. If I don't like a particular sub's content, I don't go there.
Madness_Reigns@reddit
I couldn't give a shit about the content, so way ahead of you. But notice how it's still affecting me and the planet.
Illuminatus-Prime@reddit
It affects you because you let it live for free in your mind.
Contrantier@reddit
Yeah, I don't get it. If I don't want to bother with AI, then I don't, and it doesn't bother me.
Madness_Reigns@reddit
I haven't had deepfakes of me produced by Musk's bot, but I've seen fake footage passed off as real, and now chips cost 10x what they did a few months ago. This is not an issue of closing your eyes and letting it wash over you.
AffectionateMight182@reddit
I hate that stuff as well. But AI is a tool; it has no will. It does those things at the order of the initiator of the task: a person. I blame the users and the companies who took chips from the consumer market. I use AI in all kinds of ways. I don't use it to replace interacting with people, and I def don't use it for greed like Meta or OpenAI. The right tool for the right job. It is great for analyzing data and writing code; this is what I do for a living. But why would I ever want to use it to be deceptive or even cruel? That is a human problem, not an AI problem.
Madness_Reigns@reddit
It affects me because of the air I breathe and the RAM and storage I now have to pay 10x for. Stop that shit you're doing where you paint your detractors as unreasonable.
istarian@reddit
The thing is that you don't actually have any right to dictate what other people do.
Environmental concerns are much more valid than complaining about the costs of RAM and storage.
The former affects everyone negatively, while the latter are mostly inconveniences. And the prices of memory and storage are inevitably subject to market forces; corporate decisions have a far larger impact than any particular individual's actions.
Illuminatus-Prime@reddit
The same fixed, standard test for EVERY subreddit.
new2bay@reddit
There is no such test.
istarian@reddit
Get a group of people (10?) to independently check the image for signs of AI. If they give it a pass then it gets posted.
You'll still get some stuff that slips through, but people are pretty good at noticing the things AI screws up. That's especially true when they're actually looking for it.
As with any such system there is some potential for abuse, but if they only see the image and not the user posting that will reduce the effects of favoritism.
Illuminatus-Prime@reddit
There should be.
The current situation is untenable -- all that's needed to get a post taken down is for a mob of trolls to comment with the words "AI Slop". No proof, no checklist, and no accountability for false accusations.
Individual_Agency703@reddit
Except r/weirddalle .
srhubb@reddit
Understood
GrantExploit@reddit
I almost entirely agree with this, but ever since the AI boom really got going in 2022, I've badly wanted to see someone run† a modern‡ AI model on a retro computer, even a "peri-retro"‖ machine; it would be a really cool limit-pushing intersection of two computing eras. Would demonstrating this be allowed under the new rules?
(Also, like on other subreddits and online communities, I'm worried about any writings I may submit here may be being judged as AI-generated, as I use a rather verbose text style with lots of formatting—including the dreaded em-dash. This is despite the fact that I personally use GANs/LLMs as little as feasible {largely because I don't want to offload my cognitive abilities too much} and have had this writing style since 2016, before Attention Is All You Need...)
†I don't mean "be a thin client to a separate, much more powerful AI server", which is what every example I'm aware of of a vintage computer being used to "do" modern AI really is; I mean actually do the computation... or at least attempt to.
‡That is, based on research post-Attention Is All You Need, Generative Adversarial Nets, or at least AlexNet.
‖Like a G5/Pentium 4 Prescott+/K8 Opteron/Athlon 64/Core-based system with a pre-GeForce 9 series/Radeon HD 3000 GPU.
p_r0@reddit (OP)
Tech demos are always allowed.
flecom@reddit
but you said
so which is it?
p_r0@reddit (OP)
Go back and read the first sentence of this post.
tpimh@reddit
llama2.c was ported to Win9x and even DOS
sputwiler@reddit
At that point you're not posting the AI's slop, you are posting your efforts to get the AI to slop in a new vintage place.
sparkyblaster@reddit
Yeah, I agree. If a 2006-2008 Mac Pro is vintage yet, that could pull off some decent AI. They go up to about 32-64 GB of RAM, which could make something usable.
Illuminatus-Prime@reddit
By what metrics do you determine whether or not a block of text is "AI Slop"?
Whorehammer@reddit
They submit the work to the Council of the Minds: ChatGPT, Claude, and Grok working in unison to perceive beyond human ability.
Illuminatus-Prime@reddit
How would you determine if they did?
When presented with a five-point essay on the feeding habits of mallard ducks (for example), how would you determine if it was AI slop?
TygerTung@reddit
It is pretty obvious for the most part. Emojis, bold text segments, bullet points, lack of spelling and grammatical errors, and the style and tone is fairly standard for chatgpt.
Illuminatus-Prime@reddit
Emojis are also common in human writing. Some comments I've seen have been nothing but emojis.
Bullet points are common among technical writers, especially in articles such as:
I take pride in my writing skills, even though I employ spelling checkers.
Ain't you gots no more examples?
TygerTung@reddit
It is all about pattern recognition. If one has basic pattern recognition skills, they will recognise the AI style, but that's just my impression. I'm not certain that people tapping stuff out on their phone are putting in all the AI type formatting but I could be wrong.
ILikeBumblebees@reddit
And it's a profoundly incorrect impression. Unbounded pattern recognition leads us astray all the time -- best case, you're seeing faces in the clouds, worst case, you're down the rabbit hole of crazy conspiracy theories and turning yourself into a paranoid wreck.
This is especially true here, where you're likely zeroing in on certain patterns as a matter of confirmation bias. I doubt you actually know how many false positives or false negatives you're generating, because you'd already have to know in advance whether something was written by a human or by an LLM to test the accuracy of your criteria.
What about people typing well-thought-out comments on their full-size keyboards?
TygerTung@reddit
Sure, but even on a keyboard, I'm not certain people are putting in all the bold text segments, indented bullet points, and other things like that. I'm not sure it's that convenient to use the Reddit web client like that. I suppose they could write and format their response in LibreOffice and copy it, but usually it isn't so handy to get those emojis. Maybe they could search for those online?
ILikeBumblebees@reddit
Well, let me share my own certainty with you:
Bullet points are trivially easy to include in a Reddit comment with some basic Markdown.
Bullet points have been a common feature of writing for decades. Using them is actually explicitly recommended in many business writing courses!
Features for including bullet points have been ubiquitous in software for decades: everything from traditional word processors to modern Markdown, as I mentioned above, makes them extremely convenient to use. There are dedicated HTML tags for them!
LLMs learned to use bullet lists because people do use them with great frequency, leading to them being all over the training data.
Bold text has equally been used for emphasis for decades, and all of the above applies to it, too: ubiquitous in business writing, supported by tons of software, dedicated HTML tags, and dead simple to include in a Reddit comment with Markdown.
I'm not sure what "reddit web client" you're talking about. The client is the browser, and comments are written in a standard text input box. Reddit includes a "formatting help" link right under it that even conveniently lists all of the Markdown it supports!
TygerTung@reddit
I'm not disputing that emojis, bold text, bullet points, and italics have been used for decades, not to mention indents and other formatting features; it's just that I haven't seen all of those things used at once, in the combinations favoured by LLMs, in human-written posts and answers on Reddit. I mean, it could happen, but it isn't really something I've seen.
Illuminatus-Prime@reddit
Subjective pattern recognition. One person may claim a pattern is normal human behavior, while another may claim that it can only be produced by a bot.
Again, objective standards are needed.
TygerTung@reddit
Do you think the average person would interpret that a copy pasted AI post would be written by a real person?
ILikeBumblebees@reddit
Sure. The whole point of LLMs is that they're intended to emulate human writing.
Illuminatus-Prime@reddit
I know that some will and some won't. Subjectivity is like that.
ILikeBumblebees@reddit
Apart from the emojis in formal writing, none of that is valid, sorry. All of those are features of educated writing by humans.
The reason why LLMs include those patterns is because those patterns are all over their training data, and they're in the training data because they're all standard features of writing by educated English-speakers.
Think about the long-term consequences of using these criteria, too: LLMs will just be retrained or re-prompted to avoid using those patterns in their output, while humans will continue to use them as they always have.
This will result in you increasingly getting both false positives and false negatives, to the point where you may actually end up excluding humans who write well, and interacting primarily with a combination of LLMs trained to sound dumb and actual dumb people.
TygerTung@reddit
Sure, but I feel that most people are tapping away on their phone on Reddit, and I don't think they're putting in the effort for all this extremely complicated formatting. I'm just saying that currently the generic copy-pasted LLM stuff is fairly obvious. I could be wrong though.
kabekew@reddit
Numerous reports by humans would be a good metric I think.
OnetimeRocket13@reddit
This has been shown to be a really bad metric, actually.
We've reached a point where people have begun mistaking real, human-made posts for AI and mass-reporting them as AI. I've seen mods in other subs express frustration that their communities are made up of people who simply cannot tell whether something is AI or not, since AI has just gotten that good.
Using the masses as a metric is a really bad idea.
ILikeBumblebees@reddit
And there are memes going around that convince people to use a particular set of indicators -- certain punctuation marks, terminology, etc. -- to classify what is and is not AI-generated, and it leads a lot of people to end up presuming that anything well written must have come from an LLM.
I really don't want to encourage a situation where the only way to prove you're a human is to write like a doofus.
Illuminatus-Prime@reddit
i no watchu meen
kabekew@reddit
But Reddit itself uses the masses to decide what's good and bad. And Wikipedia. Do you have a better way?
OnetimeRocket13@reddit
I don't think there is a better way. We have unfortunately reached the point where your average person's mediocre knowledge and skills concerning AI and spotting it have been outpaced by AI, which has led to a lot of people on Reddit being unable to tell the difference between a real image/video and AI. Unless all AI companies implement something like what Google does (where images generated by their models can be directly checked for the presence of SynthID), we simply won't be able to truly tell 100% of the time. Using the opinion of the masses is hardly a good alternative, since the masses often give false positives, which just creates a "boy who cried wolf" situation with AI.
At best, we can only hope that the mods are good at spotting AI and diligent in checking whether something is AI or not, but from what I've seen in other subs, this also causes a lot of headaches for mods, at least on larger subs, because false positives are just too common: your average Redditor has no idea what is AI-generated these days.
kabekew@reddit
You mention the "average person's mediocre knowledge and skills concerning AI," but I think the whole point of the Turing test (which only requires having a "human" judging the AI) is that it doesn't require skills or knowledge for a human to know it's talking to or seeing another human. That I think is ingrained in our DNA, and I think any person can tell if something seems off ("uncanny valley"). I think multiple reports of "this is AI" is a good guideline for the mods here.
ILikeBumblebees@reddit
No, the whole point of the Turing test was to come up with a heuristic for determining whether a machine can be regarded as having achieved human-level intelligence -- the idea was that we can attribute intelligence to software when it reaches the point where the average person can't distinguish whether they're interacting with another human vs. interacting with software.
And modern LLMs, which are specifically designed to mimic human writing patterns in a context-aware way, have reached the point where they are passing the Turing test.
The status quo also sort of invalidates the Turing test, because while many people already can't distinguish with certainty between LLMs and humans, at least in text form, LLMs themselves have not actually reached a point where they exhibit human-level intelligence where reasoning and semantic awareness are concerned. So I don't think the Turing test criterion really holds anymore.
Illuminatus-Prime@reddit
The Turing Test relies on subjective perceptions, not objective measurements.
THAT is why it is not an effective test for determining the "bot-ness" of a post.
ILikeBumblebees@reddit
Worse than that: LLMs can easily be trained to avoid the tropes that people are misidentifying as indicators of AI, so over time, people who employ those indicators presumptively will end up with false negatives too, and may actually end up preferring interactions with bots.
Illuminatus-Prime@reddit
Correction: Reddit allows the masses to decide what they like and don't like, not what is good or bad (although you can report violations of the rules).
Wikipedia allows its editors to correct data errors, and even asks for citations when a claim seems "iffy".
Hjalfi@reddit
Last week I got accused of being a chatbot...
flecom@reddit
couple subs I frequent just call everything AI, it's pretty funny at this point... kinda think it's the bots trying to fit in at this point
ILikeBumblebees@reddit
Not necessarily. It's becoming unfortunately common for people to misidentify other users' posts as AI-generated. There are some memes going around that try to use a particular set of indicators to determine whether something is AI generated -- certain punctuation marks, turns of phrase, etc. -- that are generally erroneous, and lead some people to presume that anything well-written must be AI-generated.
Illuminatus-Prime@reddit
No.
"Ad Populum" refers to a logical fallacy where a claim is considered true simply because many people believe it to be true. This type of argument relies on popularity rather than evidence to support the claim, and is therefore invalid as proof.
kabekew@reddit
The Turing test has been well studied and long established. A group form of that (multiple judges instead of a single judge) could only be more accurate.
Illuminatus-Prime@reddit
This would be a misapplication of the Turing Test -- a method proposed by Alan Turing to determine if a machine can exhibit intelligent behavior indistinguishable from that of a human.
What I am trying to develop is a "test" to distinguish human intelligence from AI Slop.
kabekew@reddit
But if it fails the test, then hasn't it been distinguished?
Illuminatus-Prime@reddit
Maybe.
It still relies on subjective opinions, which can vary widely from person to person.
Metrics, on the other hand, are objectively defined by experts -- real experts -- and not your run-of-the-mill armchair bot-hunter.
kabekew@reddit
Turing was certainly an expert though. But group consensus (the jury system, legislatures, democracy), while not perfect, is pretty much the best we have. If you have a better way to detect AI, though, please do share, because this is a problem everywhere and will only get worse.
Illuminatus-Prime@reddit
Turing was the expert. The average redditor is not.
DAN-attag@reddit
One of the main things is confidently incorrect information, e.g. "Yes, Windows 95 runs on the 80286 because..." or really obvious posts of vintage computers that don't exist (like, what is the point of posting a gaming rig that exists only as an image generated in Gemini?)
robot_ankles@reddit
That's a good question -and something a lot of people don't usually inquire about. You've really identified a core challenge related to this issue.
To determine whether a block of text or code qualifies as what some colloquially call “AI slop,” one must move beyond subjective distaste and instead apply structured evaluative criteria. The assessment generally falls across five measurable dimensions:
However, it is worth noting that not all verbose writing is "AI slop." In some cases redditors may utilize a similar writing style in an attempt at humor. Often referred to as "shitposting," this style of communication is often viewed by the author as far more humorous than it is in reality.
/jk
ILikeBumblebees@reddit
Most of it isn't, and the only reason that AI writes the way it does is because it's trained on how people write.
robot_ankles@reddit
"You mean it was human slop all along?!"
[drops to knees and pounds beach sand]
Illuminatus-Prime@reddit
I don't know whether to applaud you for your humor or excoriate you for simulating AI Slop.
robot_ankles@reddit
I really hope the mods get the joke. I completely agree with the new rule #3!
Infamous-Umpire-2923@reddit
Going to take a wild guess the sole metric would be vibes alone.
Illuminatus-Prime@reddit
If by "vibes" you mean subjective impressions, then no.
I am looking for more objective methods that cannot be countered with mere opinion.
Infamous-Umpire-2923@reddit
There isn't one.
Illuminatus-Prime@reddit
There ought to be, if only so that bot-hunting trolls can no longer gang up on human professional writers, accuse them of posting AI slop, and then gloat openly over their ability to act as literary gatekeepers.
ILikeBumblebees@reddit
There ought to be pots of gold at the end of rainbows, but there aren't.
You should not just hesitate to post it, but hesitate to use it, for two reasons.
First, it's extremely doubtful that the techniques you've come up with are the result of anything other than confirmation bias, since you'd already have to know whether the content you're testing was or was not AI-generated in order to validate your test.
I suppose you could do an experiment by mixing lots of your own writing with your own AI-generated content, and then measure how well your tests work, but if you did that, you might just end up with tests that are only good at distinguishing your own writing from LLM output generated by your own prompts. The only way for this to work would be a large-scale study against many participants' curated data.
Second, any criteria that people use to distinguish LLMs is something that the LLMs themselves will adapt to -- training datasets will be updated, and prompts will be constructed to deliberately exclude those writing patterns from the output by the same people who are intentionally trying to use AI to create spam. So the more widespread adoption any particular method for detecting LLMs gains, the sooner it will stop working.
berrmal64@reddit
Can you dm it to me? I'm curious what "objective standard" you're working with.
In general, I think you and I might have a similar style of writing, which I'd call "literate" but a lot of people would suspect as AI.
One of the ways to avoid it is to favor brevity. AIs always seem to use about 300 words when 30 would have done.
Illuminatus-Prime@reddit
I would post it in r/vintagecomputing if one or more mods would request it.
Plaidomatic@reddit
Ok. A lot of subjective criteria in your objective standard
Illuminatus-Prime@reddit
Which is EXACTLY why I'm asking others how they do it. My current system is flawed, and I want to make it more accurate.
Infamous-Umpire-2923@reddit
It used to be that ChatGPT was easy to spot, but now if you tell it to avoid the usual AI-isms and do some manual editing, it's nearly impossible.
p_r0@reddit (OP)
As opposed to infallible reddit mods who can remove any post at their sole discretion? :)
Illuminatus-Prime@reddit
I have written a rather long article on the methods I use to play Spot-A-Bot. I hesitate to post it (again) because there seem to be a lot of redditors (mods and mundanes alike) who would rather see a post taken down by subjective fiat than by following objective standards.
stuffitystuff@reddit
Emoji in the code comments, for one. No human takes time to do that since they don't even write comments
Illuminatus-Prime@reddit
Oh, ☻ really?
MWink64@reddit
I fully support this, yet worry because I've been falsely accused of using AI before.
flecom@reddit
so now you will just get banned?
Walkera43@reddit
You only have to look at YouTube or Instagram to see how AI slop degrades a platform.
Illuminatus-Prime@reddit
It's the users who don't understand their tools.
2raysdiver@reddit
Amen, brother.
Trevgauntlet@reddit
I thought that should've been the standard. Did someone try to post AI-Slop here?
anothercatherder@reddit
What if I use a LISP machine to generate AI slop? Will that work here?
tpimh@reddit
The only catch is "primarily or entirely", so AI edits are allowed? Like AI restoration of old photos and such?
StefanCelMijlociu@reddit
Best idea ever!
sparkyblaster@reddit
Love this,
However (bear with me), what if it's AI slop generated by vintage computing? As long as 2006 is vintage, haha. I'd say that's as old as you could realistically go. A 2006-2008 Mac Pro can take 32-64 GB of RAM, which could pull off something. Might take a week at least.
AgingSeaWolf@reddit
Thank you!
spymonkey73@reddit
In the beginning they feared us.
p47guitars@reddit
Does this mean I can't use bonzi buddy or the sandblaster parrot?
justananontroll@reddit
What about Clippy?
0riginal-Syn@reddit
Even Clippy hates AI
justananontroll@reddit
I bet Clippy hates himself.
0riginal-Syn@reddit
Very likely
p47guitars@reddit
He's got a template for that.
Contrantier@reddit
Bonzi Buddy isn't AI
cazzipropri@reddit
Good.
thomasbeagle@reddit
What if it's an 'expert system' running on an old CP/M system?
spilk@reddit
or the type of AI that Lisp machines were designed for
https://en.wikipedia.org/wiki/Lisp_machine#Historical_context
Illuminatus-Prime@reddit
Was there ever such a thing?
Those whom the gods would destroy, they first teach to use CP/M.
thomasbeagle@reddit
Expert systems were trending pretty hard in the 80s so I'm sure some were running on CP/M!
They were basically hand-crafted decision trees, at least as far as I could tell.
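That hand-crafted style is easy to illustrate: an 80s-era expert system was essentially a rule table plus a matcher, all written by hand. A toy Python sketch (the diagnostic rules below are invented for the example, not from any real product):

```python
# Toy 1980s-style expert system: a hand-written rule table, no learning.
# Each rule pairs a set of required facts with a piece of advice.
RULES = [
    ({"no_power"}, "check the power supply"),
    ({"powers_on", "no_video"}, "reseat the video card"),
    ({"powers_on", "video_ok", "no_boot"}, "check the boot floppy"),
]

def diagnose(facts: set) -> list:
    """Fire every rule whose conditions are all present in the known facts."""
    return [advice for conditions, advice in RULES if conditions <= facts]
```

The entire "knowledge base" is the rule list, and covering every combination of symptoms by hand is exactly what makes these systems balloon in size.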
Illuminatus-Prime@reddit
Yes, and I've coded a few for past employers. The code quickly becomes overwhelming when you're trying to cover all possibilities.
0riginal-Syn@reddit
100% agree. If there was ever a subreddit that shouldn't allow AI slop it is this one.
PsychoMaggle@reddit
Amen
catlord@reddit
Thank you, mods.
vanetti@reddit
That’s what’s up, fuck AI
IowaNobody@reddit
https://imgur.com/PqDvZaX
cR_Spitfire@reddit
GOOD!!
nismo2070@reddit
It is appreciated! I'm sooooo tired of it infiltrating every aspect of life.
Necessary-Score-4270@reddit
All praise be to our mighty and benevolent mods!!! May their queues be short and their bans be swift!
AppendixN@reddit
THANK YOU
Zilch1979@reddit
The hero we need.
codykonior@reddit
Yay!!!!!!
fragglet@reddit
Thank you
SAPianoman490@reddit
Big W from the mods
jessek@reddit
Good.
chupathingy99@reddit
Mods = gods
Thank you!