JohnBooty@reddit
This is where I get so frustrated with AI criticism!
AI is far from perfect and there are a lot of legit criticisms, but 99% of the negative shit you hear is like
And it's just like, uh.... ok? You're not wrong but what the hell actually passes those criteria? Certainly not the human brain!
Visual_Internal_6312@reddit
What about accountability? I'd prefer to have my gravestone not read like: 'killed by a ai - no one is to blame'
JohnBooty@reddit
I don't see a connection between LLMs and accountability.
It is certainly possible to use them in either responsible or irresponsible ways. We are comfortable with this when it comes to a lot of other things: cars, dynamite, hammers, alcohol, etc.
Visual_Internal_6312@reddit
I get what you're saying, but look at Palantir; I doubt anyone who uses it is being held accountable for any of it.
sonicnerd14@reddit
People are just coping, and aren't willing to concede that they have inadequacies that could be outdone by practically any AI in the near future. It's disingenuous not to at least admit that AIs will be better at things than most people are; that's already just a fact.
Equivalent-Repair488@reddit
Aren't LLMs already statistically smarter than a majority of humans?
Fun fact: I recently came across a few medtech firms offering automated TTS telemedicine that gives medical advice. They're apparently certified by the NHS, and are statistically already better at triage and medical advice than nurse-led telemedicine. I still don't fully trust it, but it's wild what the data shows.
VoiceApprehensive893@reddit
No, 99% of LLM "intelligence" comes from text reproduction.
Prestigious-Crow-845@reddit
And where does human intelligence come from in the first place? Why can't children who grow up in the wilderness be educated? Why does a tiny mistake in neural pathways leave people delirious or schizophrenic?
Are you sure the difference here isn't just the fundamental difference between stateful, always-active, slow neurons and a plain, static, stateless network? Have you weighed the pros and cons, or is it just "no, I don't like it"? Can you even define intelligence? Can you compare it to a cat's brain? And when you compare humans to AI, which human exactly are you using?
ninjasaid13@reddit
Can LLMs invent this? https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
Prestigious-Crow-845@reddit
Can my alcoholic neighbour invent this?
VoiceApprehensive893@reddit
Then how does your "PhD-level reasoner" fail so miserably on simple trick questions outside its training data?
Prestigious-Crow-845@reddit
How do some people buy magic beans and "charge" water with magic through a TV screen?
FastDecode1@reddit
Consult the meme.
JohnBooty@reddit
Language isn't just something our brains poop out. It's a deeply ingrained part of the human thought process.
I think the "LLMs are just fancy autocorrect" camp (while kind of correct!) really underrates how significant it is that a fancy text prediction algorithm is sophisticated enough to do the things that LLMs do.
MeganDryer@reddit
No.
LLMs are hyperoptimized for the tests; it's basically Goodhart's law as a cost function.
Equivalent-Repair488@reddit
Tests are many people's idea of "conventional intelligence" though
MeganDryer@reddit
Do people spend hours studying for the IQ test before taking it? If they do, then their results are invalidated.
Equivalent-Repair488@reddit
People spend years studying for specific test papers: the SAT, A and O levels, the Gaokao, etc.
They are a baseline for how we evaluate a person's intelligence, are they not? Many people say they aren't the right way and don't capture intelligence, and I agree, but the fact of the matter is that these exams dictate, to a significant degree, how people (and employers) validate an individual's "intelligence" and the relative success of their life, don't they? (At least early on, as a starting position.)
MeganDryer@reddit
I'm sorry, do you actually understand Goodhart's Law?
Equivalent-Repair488@reddit
Never heard the term until now, but I have heard the quote it's based on, "When a measure becomes a target, it ceases to be a good measure," and I'm very well aware of it.
Isn't that also what I'm talking about? Standardised tests have been shown to be flawed at measuring an individual human's intelligence, but we still use them, don't we?
MeganDryer@reddit
Yeah, and people train for tests like the SATs. Comparing an LLM to a human on SAT scores is apples to oranges. You could also have an IBM from the '80s score perfectly on the SATs by having it store all the answers.
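The "stored answers" point can be sketched concretely. Here's a toy illustration (hypothetical questions, not any real benchmark): a lookup-table "model" that has memorized an answer key aces the exact test it was built on, but collapses on a paraphrase.

```python
# Toy Goodhart's-law demo: a "model" that memorizes an answer key
# scores perfectly on the memorized test but fails any reworded version.
memorized = {
    "What is 7 x 8?": "56",
    "Capital of France?": "Paris",
}

def lookup_model(question: str) -> str:
    # Pure retrieval: no understanding, just stored (question, answer) pairs.
    return memorized.get(question, "I don't know")

original_test = list(memorized.items())
reworded_test = [
    ("What do you get when you multiply 7 by 8?", "56"),
    ("Which city is the capital of France?", "Paris"),
]

def score(model, test):
    # Fraction of questions the model answers exactly right.
    return sum(model(q) == a for q, a in test) / len(test)

print(score(lookup_model, original_test))  # 1.0 on the memorized test
print(score(lookup_model, reworded_test))  # 0.0 on the paraphrased test
```

The measure (test score) and the target (understanding) come apart as soon as the wording changes.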
Equivalent-Repair488@reddit
That would be storing one particular year's answers, not going back and crushing it every year the way LLMs do.
MeganDryer@reddit
My, are you missing my point.
Let me break this down real simple.
1. IQ Tests are carefully tuned, validated, and hardly ever change.
2. SATs and similar tests are written by lay people. They are not written by experts.
3. Exams are specifically meant to be able to be passed.
So, the fact that an LLM can do well on each of these is unimpressive because:
1. It has seen the IQ tests before.
2. SAT scores are not a demonstration of expertise.
3. They have access to the exact phrases which provide exam answers.
Another way to think of this is that anyone with enough patience and a search engine could pass nearly any exam.
I'm not in any way arguing that the technology isn't amazing / powerful / impressive. I'm super engaged with it. However, all of the claims of them being "smart" are anthropomorphizing.
In subjects where I am an expert, I can easily find ways that it makes errors, particularly when presented with a novel argument. LLMs have a massive propensity to analogize, incorrectly, to arguments they are familiar with (i.e., ones that were in their training set).
Equivalent-Repair488@reddit
I did not miss your point. We simply view intelligence in fundamentally different ways.

> IQ Tests are carefully tuned, validated, and hardly ever change.

Sure, no argument there.

> SATs and similar tests are written by lay people. They are not written by experts.

I don't know how the SAT is written or administered, but the O and A levels I took were written by professors at Cambridge University, whom I consider subject-matter experts.

> Exams are specifically meant to be able to be passed.

Again, I don't know about the SAT, but O and A levels are designed to assess a population of students on their academic aptitude via a bell curve, and the Gaokao especially is extremely brutal from what I hear. I know people who have failed their Os and As.

> It has seen the IQ tests before.

Sure; if it was fed IQ test answers as training data, it will just regurgitate that data. Not impressive, agreed.

But these national exams and tests are always worded at least slightly differently. An LLM being able to take the same concepts (even ones it was trained on) and apply them to different scenarios, which is exactly what we expect of students, whose results dictate their further education opportunities and hence early careers, and to do it at inference speed, should at least raise an eyebrow.

> Another way to think of this is that anyone with enough patience and a search engine could pass nearly any exam.

That's also what is expected of test takers who aren't efficient learners but still want to score well, isn't it?

> In subjects where I am an expert, I can easily find ways that it makes errors, particularly when presented with a novel argument.

That isn't what they were created for. They are general models. There is simply no market right now for specialised models outside of perhaps coding models, and maybe RP and creative-writing models in the open-source free market. You have had decades of specialisation, focussed learning, and experience; LLMs have technically existed for less than one.

More broadly, what they excel at is a higher-than-average baseline knowledge of an almost unlimited range of areas. I am more than willing to bet the SOTA models would destroy you in knowledge tests across a few thousand topics you have never even heard of. The same goes for me and everyone else in the world; we simply do not have the time to learn as much as an LLM across as many topics, nor to be as cheap as inference compute. That is my idea of them being smarter than humans.
"A jack of all trades is a master of none, but oftentimes better than a master of one"
KrayziePidgeon@reddit
Gemini 3.1 Pro and Claude 4.7 Opus have the capacity to reason at a PhD level in a given field, provided the person doing the prompting is also highly skilled in that field, able to give clear instructions, and perhaps able to provide grounding sources.
Will it be able to autonomously cure cancer given a "cure cancer pls" prompt? Of course not, and anyone who thinks at that level should not be taken seriously.
MeganDryer@reddit
The capacity to "reason at a PhD level" is not especially relevant. They all fail the Turing test against an expert.
Like, what are you even trying to say? What is your point? Arguably Google and a search engine also have the ability to "reason" at a PhD level.
KrayziePidgeon@reddit
Clown behavior. It really goes to show the unseriousness of the people on this sub. You're probably out here doing ero RP.
MeganDryer@reddit
I think it's hilarious that you think you understand this technology. Like... at all.
ninjasaid13@reddit
They know everything but understand nothing.
singalen@reddit
In a certain class of tasks, yes. Not in, for example, washing dishes.
FastDecode1@reddit
You'd be surprised how many people are incompetent at washing dishes.
Give an LLM access to a single robotic arm and it'll do a better job than 80% of humans.
singalen@reddit
I don’t think so. We can recognize fragile, oily, squishy or slippery objects and handle them accordingly. AI is not there yet.
Tank_Gloomy@reddit
Not quite; it has practically zero knowledge of that and is more of a literacy-and-reasoning machine, at least in current models.
Equivalent-Repair488@reddit
This is the first I've heard washing dishes attributed to intelligence.
Is an industrial dishwashing machine smarter than all of us?
Evening_Ad6637@reddit
Seems you've never heard of Steve Wozniak's coffee test. In his view, passing it would be a step toward AGI:

> A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.

And that’s a pretty challenging task if you think about it.
singalen@reddit
It’s an example of making sense of visual input and manipulating physical objects.
Robots certainly have not achieved the ability to judge “this pan needs extra scrub, this piece dried so hard that it needs to be soaked first”, or the ability to consistently pick up a plate without dropping it, crushing it or leaving dirty marks on it from the previous plate.
ninjasaid13@reddit
At answering new questions, they are knowledgeable; at creating new questions, no.
Relevant-Yak-9657@reddit
I get what you mean. But AI was developed to be smarter. More importantly, considering that it tends to display irregular behavior that humans don't, I think it is fair to enforce strict criticism. Without it, AI will remain lackluster.
-dysangel-@reddit
I don't think AI was explicitly developed to be smarter than a human any more than a car was developed to be better at running than a human, or a plane was developed to be faster than a bird. It's just another tool being developed, but one which will definitely be able to be smarter than us if developed correctly.
Relevant-Yak-9657@reddit
I disagree with your analogy. Running is just one mechanism for traversing the Earth, whereas logical reasoning and coherence are a main sign of intelligence, one not only shown by humans but even by other biological beings. Useful work requires intelligence, and considering it is called artificial intelligence, I believe AI is designed to be smarter than humans and beyond.
Prestigious-Crow-845@reddit
Define intelligence. Is a horse's ability to go where you lead it intelligence enough? What about not being smarter, but simply being able to get the job done, like summarizing, for example? What history of research did you use to arrive at this idea?
Relevant-Yak-9657@reddit
I understand.
Let me address the evolution point first. Human brains were an asset that natural selection favored because they helped us survive. The mechanism was never to make us smarter; intelligence was a side effect of what was required to survive and reproduce. Current AI, however, is unrelated to evolution. It is engineered by humans to perform tasks and find correlations autonomously. We are driving it to be better, and thus heavy criticism is required. When DeepMind designed AlphaFold, they wanted it to exceed human capabilities at recognizing protein folds and creating new peptides. The human level is always used as the benchmark, since it represents the capabilities we currently possess (the tool either performs similarly while being far more convenient, or outright outperforms the benchmark). Even all those prompting techniques exist to make it perform better and smarter. Don't you think it is perfectly reasonable to ask for more than summarization?
As for your horse point: yes, even the obedience and motor skills of a horse are considered intelligence (kinesthetic intelligence, for example). Siri, AlphaFold, and Stockfish are all examples of so-called narrow intelligence, and they are great at specific tasks (again, exceeding human benchmarks or being more convenient). In fact, the dissatisfaction with the current Siri is that it doesn't understand enough, unlike these LLMs; with Stockfish, it's that its moves are troublesome to interpret at a human level. So clearly even narrow intelligence is often insufficient. However, the industry is pursuing AGI. This is completely different from the narrow AI we are used to, and thus requires a far stricter standard. We only consider humans to be general intelligences, due to our metacognition and the lack of other examples to compare against. Nearly all modern labs define AGI as a system that can outperform humans in all cognitive tasks, and ASI is meant to be even more than that. So clearly, simple intelligence like summarization is not enough in the case of LLMs.
Lastly, another thing you can check out is the Dartmouth Workshop, whose entire premise for researching AI in the 1950s was this: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Intelligence is broad, and there are many unknown criteria within it (such as deductive systems that are natively incompatible with our biology). But we do have known criteria, and many modern AIs still fail them, so the criticism will keep coming.
Prestigious-Crow-845@reddit
Modern AI is, for now, just a tool capable of some intelligent work, not a whole-brain imitation. There may be a grand goal that sounds like AGI, but not now. A neuromorphic computing branch exists, with Intel's Loihi or IBM's TrueNorth, but current LLM transformers are at most some part of the brain: static, plain, and useful for some work. You can't demand that it be better in every respect.
-dysangel-@reddit
thanks for clarifying
Relevant-Yak-9657@reddit
😄
JohnBooty@reddit
It's hard to even really talk about what LLMs were "designed" to do though. I mean, there are a lot of companies working on them and a lot of people at each company. I'd pretty much reject any broad, monolithic "LLMs are designed to ____" claim.
Relevant-Yak-9657@reddit
That’s fair. I just feel that any and all useful use cases should be advanced.
HomsarWasRight@reddit
This is some crazy straw-man shit going on.
I don’t know who you’re supposedly interacting with that those constitute 99% of the negative shit you hear.
JohnBooty@reddit
It's close to 99%, although admittedly "99%" was some hyperbole on my part. I hoped it would be understood as such.
venturepulse@reddit
Deterministic algorithms
JohnBooty@reddit
Touché. That is true.
However, all the deterministic algorithms I'm aware of are essentially single-purpose tools and not really comparable to a more general purpose tool like an LLM.
Yes, I moved the goalposts there and I should have worded the initial post a little better....
singalen@reddit
Those can certainly be fooled by garbage inputs.
venturepulse@reddit
Define the limits for the inputs and all is good
singalen@reddit
Not really. Gödel’s theorem, among other things, proves that no amount of formal verification of input data guarantees that it’s not garbage, in general case.
venturepulse@reddit
can you fool bubble sort?
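For reference, here's a minimal bubble sort sketch (a generic textbook version, not anyone's production code). Being deterministic and total over any finite list of mutually comparable items, there isn't really a "garbage input" that breaks it; the same input always yields the same sorted output.

```python
def bubble_sort(items):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs.

    Deterministic: identical input always produces identical output,
    and any finite list of mutually comparable items gets sorted.
    """
    result = list(items)  # work on a copy; don't mutate the caller's list
    n = len(result)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
                swapped = True
        if not swapped:  # no swaps in a full pass means we're done early
            break
    return result

print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
print(bubble_sort([]))         # [] (even an empty input is handled fine)
```

The one caveat: items whose comparisons are inconsistent (e.g. a broken `__lt__`) can yield an unsorted result, which is a bug in the comparator, not in the algorithm.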
PreciselyWrong@reddit
Yes, 100% bug-free deterministic algorithms. Unfortunately, very few are bug-free.
ImpressiveSuperfluit@reddit
The difference is that you have someone or something to blame there. Work for 50 years on emulating the neural network we're running on ourselves, only to then complain that it makes mistakes the same way we do... pretty sure the only one to blame there would be yourself.
PBorealis@reddit
I think it was a total mistake to use the term "AI" in association with any of this. If more people thought of them as language models they would probably be a lot less surprised by the things you mentioned.
JohnBooty@reddit
Amen. A-freaking-men.
JazzlikeLeave5530@reddit
When people use it for work where all of those failure modes are very bad, then it's a valid criticism. I don't care if it's similar to how my brain works; it's not a good thing lol
ninjasaid13@reddit
The nature of AI mistakes and human mistakes are different.
Comfortable-Rock-498@reddit (OP)
Great take, John Booty.
Glittering-Call8746@reddit
Says he who can see into the black box...
Long_comment_san@reddit
That hit with a force of a physical blow.
Not imaginary.
Not mental.
A physical blow, that made him smell ozone and, perhaps, his shaving cream.
ImpressiveSuperfluit@reddit
That's it, I'm calling Claude to code me a Firefox plugin that replaces every "ozone" with a rage emoji. You've done it now.
Long_comment_san@reddit
That's a great idea! Here is why it's good: it's funny, low effort and mildly useful! But I won't say it's the best one 😉 want to know why it's not the best one or should I help you write a checklist for your project?🔨
ImpressiveSuperfluit@reddit
Ahhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh dies
Geritas@reddit
I looked at your comment. *Really* looked at it. And it sent shivers down my spine.
Citadel_Employee@reddit
You’re absolutely right!
-dysangel-@reddit
Let's just sit with that for a moment.
ivari@reddit
FIRMIRIN!
UnbeliebteMeinung@reddit
--reasoning-budget=0
ITS STILL THINKING just a space!
-dysangel-@reddit
enlightenment
UnbeliebteMeinung@reddit
AGI is just a space
fatboy93@reddit
Even my brain lives in a space - cranial cavity
_TheWolfOfWalmart_@reddit
That's still more thinking than I do.
01Cyber-Bird@reddit
When the AI completes a task flawlessly, but you insist the code is broken. It enters a recursive thought loop as the 'error' defies its logic, only for you to discover it was user error the whole time. 😆
Prestigious-Crow-845@reddit
So what if your boss or teacher says you're wrong? Wouldn't you spend hours trying to understand where you went wrong, even if there is no error?
01Cyber-Bird@reddit
True, but the funny part is seeing the 'Chain of Thought' trying to find a bug in perfect code just because I said so. It's a logic paradox! lmao 🙃
LocalLLaMA-ModTeam@reddit
Rule 3
2Norn@reddit
there are layers to this shit
_TheWolfOfWalmart_@reddit
I haven't had this problem since discovering the power of "--reasoning-budget"
furykai@reddit
I thinking, therefore I-LLMA
debackerl@reddit
Sometimes I forget how I got started with my train of thoughts 😂
ImpressiveSuperfluit@reddit
I swear if the clankers get ADHD therapy before I do I'm going to get real mad you guys.
6_28@reddit
I've been stuck in that loop as far back as I can remember.
Xyklone@reddit
Ha!