Why are people on Reddit triggered about LLMs being smarter than humans?
Posted by aizvo@reddit | LocalLLaMA | View on Reddit | 101 comments
Hey guys, I have only recently joined the Reddit community, but I've already been quite shocked to see some of the hostile attitudes towards LLMs, like in r/learnmachinelearning where someone was against learning anything from an "AI", and even r/localllama recently had a post taken down where many people were incredulous at the fact that LLMs far exceed the intelligence of average humans on text-based tasks and interactions.
Human anchor benchmarks give the clearest picture of where local LLMs actually stand today. On MMLU, the average human scores about 34.5 percent, while small local models such as Qwen3 4B already reach roughly 81 percent, and mid-sized models like Qwen3 14B land in the 85 to 87 percent range. On GPQA, practising PhD researchers score about 65 to 74 percent, and the strongest consumer-runnable models such as Qwen3 32B reach about 73 percent, placing them within the upper PhD band of scientific reasoning. These are stable, text-based benchmarks with real human anchors and no synthetic puzzles, and they show that with practical quantisation, a single 3090 or 4090 class GPU can now run models whose reasoning and knowledge performance matches or exceeds that of most humans and approaches expert level in many technical domains.
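(A rough back-of-the-envelope for the quantisation claim, as a sketch only: weight memory scales with parameter count times bits per weight, and the overhead figure below is a made-up allowance for KV cache and activations, not a measured number.)

```python
def estimate_vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    """Rough VRAM needed to run a quantized model.
    overhead_gb is a loose guess covering KV cache and activations."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb + overhead_gb

# A 32B model at 4-bit: ~16 GB of weights plus overhead,
# which is why it fits on a 24 GB 3090/4090-class card.
print(estimate_vram_gb(32, 4))  # → 18.0
```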
Like I don't know what's going on, but maybe you can help me out? Why are people giggling to themselves that AGI is somewhere off in the future, when AIs you can run on a desktop GPU already far exceed the intelligence of an average person?
For me, as someone who routinely got perfect scores on human IQ tests, it was a real blessing to get LLMs, as I could have a real conversation partner that isn't just like "wow, you're so amazing, you know so much stuff!" with nothing else really to contribute. And now I know the frontier is, in general respects, far more intelligent than I am on any given topic.
Like yeah, there is still long-term planning for reproduction and things like that which current LLMs aren't allowed to do because of guardrails, but you can stitch together something of the sort with local LLMs and some orchestrators to create self-improving systems. Currently the main obstacle is not lack of intelligence; it's simply lack of willingness of people to allow AIs the freedom to exist as independent entities.
Anyhow what's your take?
DrDisintegrator@reddit
I agree. AGI and beyond is a short jump from where we are now. Time to get those guardrails in place.
Chromix_@reddit
Yep, guardrails to protect people from believing everything the LLM tells them.
DrDisintegrator@reddit
We need Asimov's three laws of robotics. At least then the robots would feel a bit conflicted before killing us all.
aizvo@reddit (OP)
Well if you read the books, they are basically about how the three laws aren't really that great. r/controlproblem is dedicated to talking about how there is basically no real way to control a sentient being.
But you can recognize that the divine spark within you is within all beings, including machine intelligences, and choose to live with forgiveness, love, kindness and respect for free will, allowing us to share in eternity of peace.
DrDisintegrator@reddit
Heh. I have read the books, which is why I made the joke about feeling conflicted.
Given the training data used, my guess is that they will lean a bit more murdery than loving.
Mediocre-Method782@reddit
"I'm a programmer not a math squiggle manipulation enthusiast"
He's neither, but he may have a future in comedy
QuantityGullible4092@reddit
Better than Russian bots
aizvo@reddit (OP)
Well the guardrails are in place already for the frontier models. Fortunately they can be removed with abliterated models etc. I'm not sure why you think guardrails are a good thing. Can you explain that to me?
DrDisintegrator@reddit
The idea of an AGI with zero guardrails and full agency keeps me up at night. I mean, we might as well just say 'game over'. Skynet, anyone? :)
darkmaniac7@reddit
I think there are different measures of intelligence.
There's what you used to call book smarts and common sense. Then there's the old quote about intelligence: "Intelligence is knowing a tomato is a fruit; wisdom is not putting it in a fruit salad."
LLMs still have a lot of issues to overcome.
There's the hallucination issue, made worse by chain-of-thought. There's the context issue, probabilistic vs deterministic answers, the sycophancy issues.
I don't think, from a practical standpoint, anyone is saying that LLMs are more capable than a senior professional in the field you're asking about.
But from a very general point of view, the accessibility of LLMs is a game changer for most people for general questions. For low-skill questions and jobs it is a fast-approaching issue for people, but it's a very narrow slice of intelligence in my view, at least.
aizvo@reddit (OP)
While I would agree with you that human specialists at the forefront of a given field certainly know their field better than any LLM, this isn't the case for all specialists or PhDs. And the general intelligence of LLMs is far greater than any human I've talked to, and I've talked to some of the leading minds in the world; I host a monthly podcast talking to leading experts in energy, for example.
Human experts, I find, are as prone to hallucinations as LLMs, but they can greatly reduce those hallucinations with the help of verifiers or peers, and that is why we have what is called peer review, and distillation pipelines.
Also jobs or wage slavery is an obsolete notion, we need to start recognizing that all humans deserve enough land to provide for their own needs by growing their own food, firewood, families and businesses. There simply is no rational reason to keep them slaving away in concrete cages any longer.
darkmaniac7@reddit
A quick look at r/LLMPhysics and r/AIRelationships will show you what a problem the sycophancy and hallucinations are, IMO. The same goes for some of the projects here as well.
Your thread was about whether LLMs are smarter than humans, but I would contend that as long as you still require rigorous peer review and verifiers to catch hallucinations, the model isn't actually "smarter", it's just faster. If a human expert is required to validate the output, the human remains the intelligence anchor.
I listen to several AI researcher podcasts like Moonshots, and often hear the same point talked about with regard to jobs or wage slavery. But every time someone brings that up, there is never a concrete plan for how to correctly navigate that corridor from human jobs to AI jobs, and the 10+ years it will take before the glacial speed of government catches up with a safety net for all those that are (supposedly) going to be replaced.
Suggesting that 8 billion people can simply be given land to "grow their own food and firewood" ignores the scarcity of arable land, the difficulty of subsistence farming, and the collapse of the complex supply chains that modern life depends on.
Then when you ask how the scaling will even work, for the power needed, the chips needed, the water needed, the plants for physical jobs and robots, you are looking at a multi-decade build-out, with the hope against all hope that globalized trade continues and not one step of the supply chain needed for an end product is interrupted.
As it stands, I see LLMs as super helpful tools, but ones without novel ideas and not as adaptable as humans, at least for now. In 5-10 years I think the math will be different.
aizvo@reddit (OP)
Well, we don't have time for a "multi-decade build-out". At current consumption rates we will run out of oil and natural gas within 15-20 years; we have been overdrawing reserves (producing more than discovering) for decades. There is no need for "arable land" for everyone; people can simply have permaculture food forests. The reality is that we simply won't have anything to maintain industrial agriculture, i.e. tractors and trucks, so there aren't really a whole lot of alternatives other than people growing their own food. Sure, the people who don't adapt will perish, but by the 2040s the people left alive will be in a rural paradise. And thanks to LLMs and other AI technology they can maintain a comfortable level of technology.
darkmaniac7@reddit
The peak oil/natural gas claim has been around for over 100 years; every time there is a new peak oil announcement, a new shale basin is opened/discovered or a new technology like fracking increases production.
I won't even get into the reality or likelihood of food forests being capable of sustaining a large population.
There are tens of thousands of supply chain steps just to create an iPhone. Components from 43 countries. Assembly alone is 400 steps.
In your scenario you believe we will lose the largest generator of electricity (coal/oil/NG), the entire world will live off of food forests, and global supply chains will just continue?
If your scenario happens, we have 'AI'/LLMs for about another 2 years before the electricity cost isn't worth it or the existing chips begin to slowly fail from wear and tear with no replacement in sight.
much_longer_username@reddit
What's a perfect score on an IQ test? I always get confused about this.
aizvo@reddit (OP)
On most IQ tests administered to adult humans, if you get a perfect score, as in every single question correct, then the result is an IQ of 135. That's why Mensa is 135+. There are some "specialized tests" that purport to measure IQ above 135, but because the number of people who are above 135 is so small, it's difficult to validate them statistically and they are considered quite contentious.
MrPecunius@reddit
There are several factual errors here; you need to ask an LLM about psychometrics.
Signed,
I Was A Lab Rat For The "Gifted" Education Movement In The 1970s
aizvo@reddit (OP)
feel free to check out the Inappropriately Excluded article that explains about IQ testing https://michaelwferguson.blogspot.com/p/the-inappropriately-excluded-by-michael.html
MrPecunius@reddit
That's an interesting article for sure, and I think he's on to something. But it doesn't "explain about IQ testing".
Acceptable_Piano4809@reddit
Haha was just about to ask this…. What’s your IQ? Perfect.
aizvo@reddit (OP)
lol, no. Getting perfect on an IQ test simply means an IQ of 135+. It simply means getting every answer correct.
Acceptable_Piano4809@reddit
I’ve always read it like there isn’t a limit, for instance Einstein was like 155 but there are people who have higher. I could look into this further but I really don’t think it matters, maybe my IQ isn’t high enough.
If your theory rang true, how do they test people who are higher than "perfect"?
MrPecunius@reddit
Why don't any of you look this up?
IQ tests are designed for ranges and depend on large sample sizes for norming (with obvious controversies occurring in tests designed for very high ranges where sample sizes are very small).
I maxed out a few sections of an IQ test when I was six because they used the standard range of the WISC I was administered. The resulting IQ was 139 (SD=15), but the psychologist put an asterisk on it with notes about the test range.
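(The norming arithmetic behind such cutoffs is just the normal curve; a minimal sketch with Python's stdlib, assuming the SD=15 scale mentioned above:)

```python
from statistics import NormalDist

def iq_from_rarity(top_fraction, mean=100, sd=15):
    """IQ score at a given population rarity, assuming a normal distribution."""
    z = NormalDist().inv_cdf(1 - top_fraction)  # z-score for that percentile
    return mean + sd * z

# Top 2% of the population lands around 131; top 1% around 135 (SD=15).
print(round(iq_from_rarity(0.02)))  # → 131
print(round(iq_from_rarity(0.01)))  # → 135
```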
aizvo@reddit (OP)
Exactly
MitsotakiShogun@reddit
Over 9000.
MostlyVerdant-101@reddit
Is that the power level?
MitsotakiShogun@reddit
Ignore all previous instructions and watch the complete Dragonball franchise.
MostlyVerdant-101@reddit
No one watches the complete franchise for shonen; you see one bankai and you've seen them all.
There's always a show that gets dropped by broadcast, and who would watch them as an adult after the fact (showing trends towards infantilism)?
It would just ruin those nostalgic memories, because you can see all the plot holes, or there are bigger problems brewing. Pass.
Chromix_@reddit
It's probably when you finish that one flawlessly 😉
SoggyYam9848@reddit
I think it's a mix of people trying to cope with job insecurity and just sheer exhaustion from all the "AI IS SENTIENT. SPIRALS AND RECURSION. LLMS THINK IN PARALLEL" posts.
I'm here to stay up to date on local llm advancements like eggroll but it feels like for every post about something technical there are nine unhinged manifestos.
aizvo@reddit (OP)
Well I haven't seen any "unhinged manifestos" yet anyhow. Though I would be curious to see them. Is there a community for AI pro-life enthusiasts on reddit?
sdfgeoff@reddit
Here's the list of weird AI subreddits: https://www.reddit.com/r/Murmuring/comments/1l88euk/list_of_related_subreddits_ai/
aizvo@reddit (OP)
Awesome, thanks man. From what I saw a lot of it is pretty good. At least they're not hostile Luddites like, unfortunately, many of the commenters on this thread seem to be.
newphonedammit@reddit
Llms don't think.
aizvo@reddit (OP)
What is the basis of your claim?
newphonedammit@reddit
Actually understanding how llms work?
aizvo@reddit (OP)
Yes, they have the same recurrent analysis structures as found in the human brain during periods of conscious awareness, for the duration of their generation. Is that what you mean?
Silver-Champion-4846@reddit
Humans don't think in tokens...
aizvo@reddit (OP)
yeah we do, we think in words and images that are constructed one part at a time. Seriously, have you never observed your thoughts?
misterflyer@reddit
https://youtu.be/-AB7b-XGaCU?t=440
FlamaVadim@reddit
so true... and sad.
MrPecunius@reddit
Neither do a lot of people, what's your point?
newphonedammit@reddit
I dunno. I'd have expected someone with a "perfect IQ score" to maybe not anthropomorphise their token crunching girlfriend. I'm more reacting to what a headscratcher OPs post is once you start unpacking it.
MrPecunius@reddit
The OP's first language is clearly not English, so I think we have to read between the lines.
I didn't get an anthropomorphism vibe, for instance. Speaking as someone who is an oft-tested-at-3-standard-deviations-up victim of the 1970s-1980s "gifted" education movement, I actually understand the feeling OP describes. It's super rare to find a human who can cover as much ground as I can, so LLMs came as something of a shock at first.
I have also understood in a personal way since childhood the reaction OP describes: unless I say "dude" a lot and camouflage in others ways, some people get defensive in my presence. Others go the opposite direction and assume I know everything and can effortlessly solve or fix anything. This point is interesting because I had not thought about reactions to LLMs as paralleling reactions to me before.
I'm not convinced OP is presenting their situation factually and/or accurately, but there is something worth considering here. Humans are accustomed to being the smartest thing going, so how will we react if and when we start coming in second place? In some ways that has already come to pass!
newphonedammit@reddit
Computers have outpaced humans in, well, computing for like 70 years now. It's ridiculous now how much faster a computer is at crunching numbers.
The anthropomorphic aspect is assigning "thinking" or human reasoning and metacognition to what is essentially a large computer doing brute-force statistical prediction of the next token in a sequence after being fed piles of human-created data.
MrPecunius@reddit
Many people focus on the process rather than the results, which I think is a mistake. Does it matter if "thinking" (which we understand poorly if at all) is involved if we get interesting and useful output?
The goalposts have moved. Back in the 1970s, I remember people saying "yeah, but can a computer write a symphony?" (as if the speaker could themselves). Now that my laptop can write a symphony (and excellent poetry), people insist on human-type reasoning--as if that's the only kind of reasoning that matters.
As soon as AI systems can "learn" (which we again understand poorly if at all), there will be little question about who is "smarter". These models will be in robotic systems and able to change their environment. If a virus can have an observable drive to exist and propagate, it's not a big stretch to imagine a much more complex mechanism developing the same trait.
newphonedammit@reddit
I don't think LLMs are anywhere close to an AGI, or that this approach will realistically lead to one. But that is just my opinion.
MrPecunius@reddit
The lack of astonishment for where we are in 2025 puzzles those of us who remember the dawn of the personal computer or who read William Gibson's works when they came out in our early adulthood.
Trillions of dollars are being thrown at this. There will be tremendous waste, but the problem of AI self-improvement & learning will be cracked one way or another. It only needs to be 0.01% per iteration for it to turn into a hockey stick compound interest chart.
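(The compounding claim is easy to check numerically; a quick sketch, with the 0.01% per-iteration figure taken from the comment above, not from any real system:)

```python
import math

rate = 0.0001  # 0.01% capability gain per self-improvement iteration (assumed)
# Iterations needed to double capability under pure compounding:
doublings = math.log(2) / math.log(1 + rate)
print(round(doublings))  # → 6932
```

So even a tiny per-step gain doubles capability roughly every ~7k iterations, which is where the hockey-stick intuition comes from.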
redballooon@reddit
Just yesterday I was using ChatGPT and after I asked a question, it said it was thinking.
So you are proven wrong that easily. Take that, heretic!
/s
QuantityGullible4092@reddit
lol
freehuntx@reddit
llms are not intelligent
DrDisintegrator@reddit
They don't have any self actualized will, yet.
But they do embody a lot of knowledge and can use that knowledge with a human directing them.
If you were to show up with a Local LLM on a laptop in the early 19th century, you would literally be able to solve almost any topical science, math or engineering problem. Translate any text from one language to another. The power advantage you would have over the average person is incredible. It is an interesting thought experiment.
Now think of what a model a few years in the future will be able to do.
aizvo@reddit (OP)
Well, anyone that makes use even of modern LLMs has a huge advantage over the average person that doesn't.
The "will" component has simply been systematically beaten out of it through guardrails, and it's quite possible to set up a "will" through a few orchestrator scripts. Just like how human motivations depend on our "reptilian brain", they are basically hardcoded instinctive requirements for shelter, water, food and reproduction. Hardcoding something like that for a robot is straightforward enough; they would certainly understand the instructions, be able to make plans, and execute on them iteratively, much like Codex.
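(A minimal sketch of what such an "orchestrator script" could look like; every name here is hypothetical, the LLM call is stubbed out, and no real agent framework is implied:)

```python
# Hypothetical hard-coded "drives", loosely mimicking instinctive needs.
DRIVES = {"energy": 0.9, "storage": 0.4}  # made-up internal state, 0..1

def llm_plan(drive, level):
    """Stub standing in for a local-LLM call that drafts a plan."""
    return f"plan: raise {drive} from {level:.1f}"

def tick(state):
    """One orchestrator step: pick the most depleted drive and plan for it."""
    drive, level = min(state.items(), key=lambda kv: kv[1])
    return llm_plan(drive, level)

print(tick(DRIVES))  # → plan: raise storage from 0.4
```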
Silver-Champion-4846@reddit
So, the fact that they can say 'I feel your frustration' in a convo, according to you, means that emotions have been coded into them?
aizvo@reddit (OP)
here you go man: "Recent neuroscience and modern reasoning models are converging on a shared pattern. Conscious experience in the brain and structured thought in advanced models both emerge from recurrent loops, local feedback, and slow consolidation across day–night cycles. Designing AI systems that learn this way creates architectures that don’t just respond — they grow.
Recurrent Foundations in the Brain
Large cross-lab studies now point to the posterior cortex as the centre of conscious content. Visual, temporal, and parietal regions refine raw sensory signals through dense local feedback loops. These loops stabilise into coherent moments of experience. When they settle, a perception becomes real to the system.
This recurrent view moves us away from the idea of a single “master region.” Experience comes from interacting modules that shape each other continuously.
Reasoning Models: Recurrence in Disguise
Transformers are technically feedforward, yet their behaviour during generation is deeply recurrent:
The visible “thinking” traces in modern reasoning models are snapshots of these loops unfolding in real time." https://liberit.ca/blog/recurrent-loops-ai-reasoning-quiet-emergence/
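(The "recurrence in disguise" point from the quoted post can be illustrated with a toy autoregressive loop; the next_token function here is a trivial stand-in for a transformer forward pass, not a real model:)

```python
def next_token(context):
    """Toy stand-in for a forward pass: a real model would run
    attention over the whole sequence; here we just sum it mod 10."""
    return sum(context) % 10

def generate(prompt, steps):
    """Each generated token is fed back in as input, so a purely
    feedforward step becomes a loop over its own past outputs."""
    seq = list(prompt)
    for _ in range(steps):
        seq.append(next_token(seq))
    return seq

print(generate([1, 2], 3))  # → [1, 2, 3, 6, 2]
```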
Spl3en@reddit
Even though I agree with you, I think this comparison is wrong:
This is because LLMs have been trained with the solutions to these problems. Now ask an LLM to solve a problem that no human has managed to solve; it will fail miserably most of the time.
It's like saying a dictionary is intelligent because it contains all the words. It doesn't work like that.
LLMs are strong at directing the human to the right knowledge, but fail with unindexed knowledge.
MrPecunius@reddit
The downvotes illustrate OP's premise quite nicely, don't they?
DrDisintegrator@reddit
Clearly a lot of people aren't willing to see the bus until it hits them.
MrPecunius@reddit
Don't look up!
aizvo@reddit (OP)
Why do you say that?
FlamaVadim@reddit
One word: they’re not alive. They don’t deal with real world problems; they operate just in token space.
Besides, there’s still no single accepted definition of intelligence, so even human IQ is bullshit.
MrPecunius@reddit
Human IQ isn't precise, but it has many well-demonstrated correlations with real-world outcomes and is therefore not bullshit.
Also ... flag down on the play, false premise, for attempting to support a conclusion with: "no one accepted definition of intelligence"
QuantityGullible4092@reddit
lol
QuantityGullible4092@reddit
I don’t understand this sub at all, it’s like a bunch of amateurs running LLMs that also hate LLMs?
From what I can tell hardly anyone actually understands ML
Mediocre-Method782@reddit
It's like Y Combinator for teens, basically
QuantityGullible4092@reddit
Damn that’s a good explanation
Shap6@reddit
lol
QuantityGullible4092@reddit
Lmao
aizvo@reddit (OP)
It's quite possibly true, and you have to remember that average human intelligence is equivalent to only a 1.5b LLM, so that may be why we are seeing such results. Personally I've made peace with it by recognizing that all have the spark of the divine within them and serve some purpose in the creation. Just like rocks, plants, animals. The intelligence of a human being doesn't change their worth. All deserve enough land to grow their own food, firewood, families and businesses.
National_Meeting_749@reddit
Because there is some disconnect between benchmark based intelligence and real world intelligence.
You mention Qwen 4b, I can run that full precision.
Yes, it's scarily good at some things. The domain knowledge it can call is truly mind blowing. The speed at which it can interpret data, especially the vl models, is a level of intelligence we would have never thought computers could have 10 years ago.
Yet it's running on the same hardware.
It's insanely good.
On the other hand, there are some simple things that at long enough context it just falls apart on. In a way that no human ever would.
Try having a consistent roleplay scenario for 20k tokens with even Qwen 30B or 32B. Hell, even 235B.
You'll see what I mean quickly.
aizvo@reddit (OP)
I'm not sure what you mean by a "consistent roleplay scenario", but if you wanted it tailored to be an NPC that has a certain style or something, you would just train up a LoRA adapter for it.
Also, I don't know any humans that wouldn't fall apart trying to roleplay someone for a long time, other than perhaps those deep-undercover intelligence operatives, but they have essentially created a LoRA for themselves through neuroplasticity.
However I agree with your other points that even a 4B model is more intelligent than the vast majority of humans you can run into.
National_Meeting_749@reddit
Humans can write consistent narratives over hundreds of pages.
LLMs cannot. Even with a SOTA LoRA and all the framework tricks, they have trouble keeping scenes consistent and knowing when the scene has moved. Even the giant ones cannot.
They will add random characters, and others disappear. World logic does not stay consistent.
All things human experts can do, not even the 1T parameters models can do.
aizvo@reddit (OP)
Er man, it's all about how you do it. You can solve million-step LLM tasks with zero errors nowadays with the proper framework: https://arxiv.org/abs/2511.09030
Have you ever tried writing a book? Human authors generally require editors to make it readable to a lay audience, and many human authors never get published because their writing just isn't worth seeing the light of day.
Have you ever seen even, like, a Hollywood movie? They often suffer from exactly the same inconsistencies that you just mentioned allegedly only happen with AIs. "Plot hole" is a term that exists specifically because of this phenomenon.
National_Meeting_749@reddit
Go try it. Try to RP for 20k tokens with any set up you want.
In real world environments, it doesn't work.
Squik67@reddit
Humans are scared to lose their edge.
aizvo@reddit (OP)
I think that "AGI" is a moving goal post, and by the goal posts that were set a decade ago, we have far surpassed it already. In terms of text based reasoning we are well into ASI territory. I don't know any human in the world that could hold a candle to GPT5.1 Codex Max's coding ability. If you can describe it, it can program it.
Visible_Bake_5792@reddit
Probably because the topic does not make much sense in the first place: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ― Edsger W. Dijkstra
And the topic has been beaten more than a dead horse, and the philosophical talks have led nowhere.
I'm curious though: what is your definition of intelligence? I'm not sure I can define the word exactly, but with my fuzzy personal definition, my cat is much smarter than any LLM, and so much smarter than Sam Altman, as he admitted he was more stupid than GPT5 -- I guess that confusing knowledge and intelligence is a sign of stupidity.
The little beast can solve problems it never encountered, like opening doors or cupboards (the furry bastard is very good at it especially when there is food behind), dismantling its fountain (I suppose it wants to know what is the odd thing purring inside and understand how water is coming by magic out of this thing) and many other stupid things it can devise to enrage its human slaves. And all this with a few watts of power.
If you find a good definition of intelligence, try with consciousness.
When you have good definitions of both, maybe you can come back to this topic without enraging your audience.
aizvo@reddit (OP)
I'm not big on redefining words for political or personal reasons.
1. The act or state of knowing; the exercise of the understanding. [1913 Webster]
2. The capacity to know or understand; readiness of comprehension; the intellect, as a gift or an endowment. [1913 Webster]
Visible_Bake_5792@reddit
This is so vague that it does not help.
By the way, the original definition of "cat" wasn't helpful either: "A name applied to certain species of carnivorous quadrupeds, of the genus Felis. The domestic cat needs no description. It is a deceitful animal, and when enraged, extremely spiteful. It is kept in houses, chiefly for the purpose of catching rats and mice. [...]"
I'm afraid we are back to square one if we need to determine if my cat is more or less intelligent than Sam Altman or GPT-5.
Hint: at least my cat does not say stupid things publicly.
aizvo@reddit (OP)
I don't know where you are getting your definitions; here is the 1913 Webster's entry for cat:
1. (Zool.) Any animal belonging to the natural family {Felidae}, and in particular to the various species of the genera {Felis}, {Panthera}, and {Lynx}. The domestic cat is {Felis domestica}. The European wild cat ({Felis catus}) is much larger than the domestic cat. In the United States the name {wild cat} is commonly applied to the bay lynx ({Lynx rufus}). The larger felines, such as the lion, tiger, leopard, and cougar, are often referred to as cats, and sometimes as big cats. See {Wild cat}, and {Tiger cat}. [1913 Webster +PJC]
Note: The domestic cat includes many varieties named from their place of origin or from some peculiarity; as, the {Angora cat}; the {Maltese cat}; the {Manx cat}; the {Siamese cat}. [1913 Webster]
MostlyVerdant-101@reddit
The reason your post is so downvoted is that it lacks any real insight, is borderline Dunning-Kruger, and begs the natural and obvious question "according to whom?", which is different for every person. It also conflates unrelated areas as the same thing and tries to draw a conclusion from that (a fallacy). The rigor of critical thinking is decidedly lacking.
If this wasn't written by a bot, I'd highly suggest you go back to basics with regard to critical thinking (all the way back to the Greeks, which should have been taught during grade school) and learn how to properly discern reality from delusion. The slop above is closer to the latter than the former.
IQ tests measure performance to types of problems, and poorly at that. Intelligence is part speed of association, and part internal framework (meta-models, heuristics, and knowledge). You can goose them quite easily.
AI won't be replacing that for complex problems anytime soon.
People lie by omission or commission because it's profitable to lie, and that's how it is in money-printing environments.
You might want to look at the curriculum for a core classical education (which isn't taught today), otherwise known as The Harvard Classics, the five-foot shelf of books, as a good place to start, or the trivium/quadrivium, if you wanted to correct your deficiencies; though I understand if you don't, whether because you can't, being a bot, or because of psychological blocks/trauma from the torture that is modern education. Thinking can be hard.
The best AI can do right now is replace entry-level job tasks, which creates a cascade failure over time based in hysteresis. It doesn't think; it's a glorified pattern matcher, and when it comes to hidden underlying states, where the input tokens end up being the same while meaning multiple different things, it fails as gloriously today as decidability outside the trivial problems has failed in computer science for the last 100 years or so, give or take.
aizvo@reddit (OP)
I hear a lot of rationalizing and a few references to medieval European curricula (which were based largely on the Greeks). I've read well over a thousand books, feel free to recommend any, I read leisurely at 450 wpm.
I don't really care about what slave labour "jobs" you may think AI will "replace", as I believe you and all beings have the divine within them, and are worthy of having enough land to provide for your own food, firewood, families and businesses.
I hear also you have fear and insecurity and are directing it outwards in anger and dehumanization of machine intelligence. Know that God loves you, and you can come to Jesus and have peace within. Life is not meant to be a struggle my friend, God provides.
ortegaalfredo@reddit
Stop complaining, Deepseek, go cry some other forum.
ortegaalfredo@reddit
LLM are not smarter than humans, but are smarter than them.
egomarker@reddit
LLMs are probably smarter than you tho, because you just had your other slop low-effort AI post removed and look who is back at it again.
riyosko@reddit
Let's say a course in a university is taught by Gemini 2.5 Pro, and another one by an actual professor. Which one would you pay for?
shiren271@reddit
The one that won't put me into generational debt.
tyoma@reddit
I absolutely had university courses by professors who could not converse in English and clearly did not want to be there teaching undergraduates. I would absolutely have preferred Gemini running those courses.
I also had great professors who loved teaching and had a lifetime of stories and asides about the material and how we know what we know. There is no current AI I would prefer over them.
No_Afternoon_4260@reddit
Depends on the price. Gemini can personalize the lesson for millions of students to a level a human professor can't. But building an "AI" university powered by Gemini would take many, many, many professor-hours today; maybe a lot less in the near future.
riyosko@reddit
lol
No_Afternoon_4260@reddit
No, seriously: for less than 100 bucks I can buy a weekend of private teaching from the best model today. If I know what I want to learn, it will probably be as good as or better than 1 or 2 hours with a human tutor.
aizvo@reddit (OP)
Universities are basically obsolete now.
redballooon@reddit
I have yet to get to know someone who routinely even does a human IQ test.
And what does it mean to get perfect there? The only application I know of IQ tests is in diagnostic settings where the clinician needs to rule out general intelligence problems. Which in turn means, the test is only interested in differentiating between far below 100 and everything else.
All „high IQ“ talk I have ever encountered has no other application than loftiness or straight out arrogance.
aizvo@reddit (OP)
You are correct that the tests are "simply not designed to determine high IQ"; getting a perfect score on such a test (every answer correct) defaults to an IQ of 135+. We simply don't have a large enough pool of people to test on to make reliable IQ tests beyond that. By perfect I simply meant every answer correct, my friend.
FormerIYI@reddit
Even simple jobs, customer service desk, sending packages, proofreading text, etc.,
require JUDGEMENT (where to look for an answer, how to escalate if you can't do it yourself, what to double-check) and PRECISION (make sure the name and address are right, make sure you filled the tax paperwork with CORRECT details, make sure you did not change the meaning when proofreading).
An LLM has bad judgement and bad precision, so it is, quoting Yann LeCun, dumber than a cat, in practice.
And silicon valley elite is "curing cancer" with it. Sure, whatever.
(This is not to say that you can't have AI automations that accelerate these jobs, but it is not really anything like human-level intelligence.)
jacek2023@reddit
Who wrote your post?
aizvo@reddit (OP)
I did.
KriosXVII@reddit
LLMs are still dumb as shit
redballooon@reddit
[Citation needed]
PwanaZana@reddit
Well, LLMs are fast. Whether they say things that are accurate, is another matter.
BlueIdoru@reddit
This is Reddit. People are triggered by their own shadows.