Nostalgia for just 3 years ago…
Posted by Dion-AI@reddit | LocalLLaMA | View on Reddit | 37 comments
Is it just me, or has anyone else had this feeling recently thinking back on AI? I remember the days of the early ChatGPT page, my first time getting an API key and trying out Open Interpreter, and how GPT-4 was the king at that time. The days of ol’ gpt-3.5-turbo, the original ChatGPT. They also had other models at the time, like text-davinci-003. Oh, and before the whole Gemini series, Google had PaLM 2? Remember Gecko? Never heard more about it, although to be fair, Google had already been doing that anyway — releasing open-source edge models at that.

All the projects at the time using the APIs, like BabyAGI, attempting agentic actions and failing 99% of the time because the models just weren't capable of it. Don't get me wrong, I was able to accomplish quite a bit with Open Interpreter and 3.5-turbo, but projects like BabyAGI didn't return anything fruitful.

Then GPT-4. Oh, GPT-4, with the limited quota but (at that time) goated responses. Making sure to save all your difficult prompts for when that quota reset. Setting up accounts through external services that gave out GPT-4 messages. So many apps and websites offering “Get x amount of GPT-4 messages free!” that I signed up to just to get some valuable code.

The first stages of DALL·E 3 were amazing too, with the external platform. Microsoft adding it to Bing so you could generate a bunch of free images until you ran out of daily points. ElevenLabs releasing scarily accurate voice models and even cloning. Then Advanced Voice, with the demo where they show it off as an obvious Her ripoff. The location finding based on images. The photo trends. Then Mythos recently. So, so much.

Honestly I'm leaving out a lot, but if I included everything we would be here all day. My point is, it's incredible how much has happened.
Like, I obviously know that's an inherent property of Moore's Law, of computing, and definitely of AI development, but it's still astounding to see and experience. Personally, when I think back on all this stuff, I literally get this nostalgic feeling like it's been ages… but it's only been 3 years.
TL;DR:
AI has evolved insanely fast — what feels like a whole era (early ChatGPT, GPT-3.5, GPT-4 limits, BabyAGI, DALL·E, voice cloning, etc.) all happened in just ~3 years, and it already feels nostalgic.
ArchdukeofHyperbole@reddit
I'm nostalgic. I really enjoyed ChatGPT 3.5. It felt like magic, or like someone was on the other side typing really fast lol. But at the same time, they lied to everyone. They were supposed to be open-sourcing AI.
rm-rf-rm@reddit
Complete opposite perspective for me - I can't wait for the pets.com era to be over. So, so much crap, hype, and overselling everywhere in this bubble that it's hard to tell what is actually valuable and innovative.
GamerHaste@reddit
Agreed, I can’t go on any tech-related website without it all being complete bullshit atp
Ha_Deal_5079@reddit
saving gpt4 messages for the important stuff and burning 3.5 on everything else was such a mood. now i got qwen running locally for free lmao
Scared_Bedroom_8367@reddit
Qwen is nowhere near GPT-4
CYTR_@reddit
Qwen 27b is so much better because he is far more capable, even though he doesn't have all the knowledge of GPT4 (anyway, I prefer a web search to trusting LLM's internals).
Thump604@reddit
He?
Dion-AI@reddit (OP)
Exactly! lol that and I personally love the new Gemma series. 31B as the architect/planner and a few Qwen3.5 9bs as the workers.
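The architect/worker setup OP describes can be sketched roughly like this — a minimal, hypothetical orchestration loop where a large "planner" model decomposes a task and smaller "worker" models execute the pieces. The model names and the `call_llm` stub are illustrative only, not a real API; a real implementation would call a local inference server instead.

```python
# Minimal sketch of an architect/worker pattern: one large "planner"
# model decomposes a task, several small "worker" models handle the
# subtasks. call_llm is a stand-in for a real local inference call
# (e.g. an OpenAI-compatible local server); it is stubbed here so the
# control flow is runnable on its own.

def call_llm(model: str, prompt: str) -> str:
    # Stub: a real version would POST the prompt to a local server.
    return f"[{model}] {prompt}"

def plan(task: str) -> list[str]:
    # The big "architect" model breaks the task into steps.
    call_llm("architect-31b", f"Decompose into steps: {task}")
    # Pretend the planner returned two concrete subtasks.
    return [f"{task}: research", f"{task}: draft"]

def work(subtask: str, worker_id: int) -> str:
    # A small "worker" model executes a single step.
    return call_llm(f"worker-9b-{worker_id}", subtask)

def run(task: str) -> list[str]:
    subtasks = plan(task)
    return [work(s, i) for i, s in enumerate(subtasks)]

results = run("summarize a paper")
for r in results:
    print(r)
```

The appeal of the split is that only the planner needs to be a large model; the workers can be small enough to run several of them in parallel on one machine.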
LegacyRemaster@reddit
I remember the first house I lived in: cold, drafts coming from the windows, mold. Now I have a warm, clean house, new windows... No, I'm not nostalgic.
a_beautiful_rhind@reddit
I remember unlimited GPT4 from the wang for a couple of months. Quitting half way through because it suddenly got censored.
Honestly more blown away by how quickly the time passed than what happened with the models. Local is pretty comfy though.
breadfruitcore@reddit
I'm nostalgic about when machine learning was just cool videos on YouTube about cars learning pathfinding, and I didn't know at all how they worked.
uti24@reddit
Ahh, remember those times — we had like 10 free messages for ChatGPT, and also the rise of local models starting with llama-7B that could not respond coherently. What times those were! But ChatGPT was good.
BidWestern1056@reddit
rest in peace 3.5 turbo you were a goat.
Mart-McUH@reddit
None of those are local, I did not use them so no nostalgia at all.
If we go before Llama2 - just load some Vicuna 33B, Pygmalion 6B or something else from Llama1 era and nostalgia quickly disappears.
In the Llama2 era there are some models that I liked, and some I still keep on HDD or even SSD and sometimes run. Those local models are generally not lost. For example CommandR 35B, Mythomax 13B, Midnight Miqu 70B (and plenty of others) - they do not stand up to the models of today, but they are still fun to run occasionally.
Altruistic_Heat_9531@reddit
And all LLMs were limited to a 2048-token context window, even the 175B GPT-3.
No mention of BLOOM? Flan? Mixtral?
Dion-AI@reddit (OP)
Oh yeah true! And it was Claude first that beefed up context a lot
Altruistic_Heat_9531@reddit
If I'm not mistaken, the open model that first jumped to 32K was a Mistral variant https://www.reddit.com/r/LocalLLaMA/comments/1soyoat/this_is_my_opinionated_language_model_tech_tree/
Dion-AI@reddit (OP)
Honestly, at that time the only open-source models I was interested in were the small ones I could manage to run on my old laptop. Got Vicuna 7b 'running' (more like crawling) on an old Lenovo Ideapad 14. I knew I was leagues away from models like Mistral and Mixtral. Claude is the one I remember the most — such a huge gain. Although people did argue later that it wasn't the best on needle-in-a-haystack tests, it was still a massive context increase.
Altruistic_Heat_9531@reddit
damn, I forgot the LLaVA Vicuna
StardiveSoftworks@reddit
It was definitely the context for me, I remember thinking how incredible it was when we got to 4096.
DinoAmino@reddit
WizardLM-2 8x22. Back then everyone said it was too verbose. Lol.
PyrDeus@reddit
I worked with LSTM
Tight-Requirement-15@reddit
I remember when ChatGPT had the green logo and would forget things 6 messages ago. They grow up so fast 🥹
jannycideforever@reddit
This may be weird but I have absolutely zero nostalgia. All I can ever think about is how much shit required me to bang my head against the wall a year ago and how much shit I won't have to bang my head against the wall a year from now.
I can get the mindset though. Just very different from mine hahaha.
Equivalent-Repair488@reddit
Yeah, as someone who had never coded before back then, installing oobabooga, sillytavern, and a Discord bot made by Desinc the youtuber, I spent like a few weeks straight, almost sleepless, just trying to get them working in the first place.
jannycideforever@reddit
Part of my lack of nostalgia may be the same lack of sleep and then waking up tired as fuck before work hahahahaha
Dion-AI@reddit (OP)
True. Especially back then, it was basically required that you learn more about computers, code, IDEs, the CLI, etc. Now every app has got its own chatbot lol
Dion-AI@reddit (OP)
Haha, I definitely understand your perspective. It has gotten so, so much better. I can't say every bit of it was fun, but I think I more so miss the novelty of it now, if that's strange to say.
jannycideforever@reddit
That makes total sense. I feel like (for the time being), we've left the realm of "Wow, I wonder what is possible with this technology". Now it's more "I can't wait until I can finally do something like X in a year".
Less uncharted territory, but also a clearer picture of what to look forward to.
Dion-AI@reddit (OP)
Well put. It's like, "Oh, I'm used to it now. When can it code me my own entire custom AAA-quality game?"
Exactly. We were amazed by it at its worst, now we just want to see what it can do at its best
RepulsivePurchase257@reddit
Saving GPT-4 messages like they were gold was real
Dion-AI@reddit (OP)
facts
false79@reddit
I was on that Bard beta. What a joke.
In the end, I'm loving Gemma 4 more and more every day.
Dion-AI@reddit (OP)
Yeah Bard was just a hallucination machine. The Gemma series? Absolutely incredible. I definitely know what you mean. I just built an entire offline voice assistant using E4B
false79@reddit
Very awesome
abitrolly@reddit
I feel outdated, not overqualified. :D
Dion-AI@reddit (OP)
lol outpaced as well