Why are people on Reddit triggered about LLMs being smarter than humans?

Posted by aizvo@reddit | LocalLLaMA | 101 comments

Hey guys, I only recently joined the Reddit community, but I've already been shocked by some of the hostile attitudes towards LLMs. In r/learnmachinelearning, for example, someone was against learning anything from an "AI", and even here in r/LocalLLaMA a recent post (since taken down) had many people incredulous at the claim that LLMs far exceed the intelligence of average humans on text-based tasks and interactions.

Human-anchor benchmarks give the clearest picture of where local LLMs actually stand today. On MMLU, the average human scores about 34.5 percent, while small local models such as Qwen3 4B already reach roughly 81 percent, and mid-sized models like Qwen3 14B land in the 85 to 87 percent range. On GPQA, practising PhD researchers score about 65 to 74 percent, and the strongest consumer-runnable models such as Qwen3 32B reach about 73 percent, placing them within the upper PhD band of scientific reasoning. These are stable, text-based benchmarks with real human anchors and no synthetic puzzles, and they show that with practical quantisation, a single 3090- or 4090-class GPU can now run models whose reasoning and knowledge performance matches or exceeds that of most humans and approaches expert level in many technical domains.
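As a rough sanity check on the "fits on a single GPU" claim, here's a back-of-the-envelope VRAM estimate. This is my own approximation, not from any benchmark: quantized weight memory is roughly params × bits / 8, and the ~20% overhead factor for KV cache and runtime buffers is an assumption that varies with context length and backend.

```python
# Back-of-the-envelope VRAM estimate for a quantized local model.
# Assumption: weight memory ~= params * bits/8, plus ~20% runtime overhead
# (KV cache, activations, buffers) -- a rough rule of thumb, not a spec.

def approx_vram_gb(params_billion: float, bits: float, overhead: float = 1.2) -> float:
    """Approximate VRAM in GB for `params_billion` parameters at `bits` per weight."""
    weights_gb = params_billion * bits / 8  # 1B params at 8 bits ~= 1 GB
    return weights_gb * overhead

for name, params in [("Qwen3 4B", 4), ("Qwen3 14B", 14), ("Qwen3 32B", 32)]:
    est = approx_vram_gb(params, bits=4)  # Q4-style quantization
    fits = "fits" if est <= 24 else "does not fit"
    print(f"{name}: ~{est:.1f} GB at 4-bit ({fits} in a 24 GB 3090/4090)")
```

By this estimate a 4-bit 32B model needs roughly 19 GB, which is why it sits at the upper edge of what a single 24 GB card can serve.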

Like, I don't know what's going on, but maybe you can help me out? Why are people telling themselves that AGI is somewhere off in the future, when AIs you can run on a desktop GPU already far exceed the intelligence of the average person?

For me, as someone who routinely got perfect scores on human IQ tests, getting LLMs was a real blessing: finally a conversation partner that isn't just like "wow, you're so amazing, you know so much stuff!" with nothing else to contribute. And now I know the frontier models are, in most respects, far more knowledgeable than I am on any given topic.

Like, yeah, there are still things like long-term planning and self-replication that current LLMs aren't allowed to do because of guardrails, but you can stitch something of the sort together with local LLMs and some orchestration to create self-improving systems. Currently the main obstacle is not a lack of intelligence; it's a lack of willingness to allow AIs the freedom to exist as independent entities.

Anyhow what's your take?