Do LLMs have opinions?
Posted by WeAllFuckingFucked@reddit | LocalLLaMA | 31 comments
Or do they just mirror our inputs and adhere to the instructions in system prompts, while mimicking the data from training/fine-tuning?
Like, people say that LLMs have been shown to hold liberal views, but is that not just because the dominant part of the training data consists of people expressing such views?
UnreasonableEconomy@reddit
Nobody here will likely want to admit this, but how original are your opinions?
It's pretty likely that the majority of your opinions are a direct result of your environment and media consumption patterns.
So you have to properly define what you mean by an opinion. It's very similar to the consciousness question.
I think the most accurate description would be to say that LLMs can readily generate personas that have opinions.
-p-e-w-@reddit
At which point you already need to ask what the difference is, if indeed there is one. Most of the arguments from the “of course LLMs aren’t intelligent/conscious/etc.” crowd are just pseudo-philosophical hand waving.
ook_the_librarian_@reddit
The core of human consciousness isn't just observable behavior like generating coherent sentences or holding a conversation, but subjective experience, what Thomas Nagel called the "what it is like" of being something.
When someone says LLMs can generate personas with opinions, that's entirely true in terms of simulation. But the difference between simulating an internal state and having one isn't trivial; it's the entire core of the consciousness debate.
-p-e-w-@reddit
None of that is backed up by actual science. It’s just philosophers and their feelsies.
In fact, the past 50 years of CogSci research have shown again and again that much of what we perceive our so-called “inner state” to be like is blatantly false. Look up Michael Gazzaniga’s split-brain experiments and prepare to be blown away as he convincingly demonstrates that people have no idea what’s actually happening inside their own minds, and that supposed introspection is essentially a hallucination.
ook_the_librarian_@reddit
Gazzaniga’s split-brain research doesn’t debunk consciousness, it demonstrates modularity and confabulation, not that subjective experience doesn’t exist.
What his experiments show is that different hemispheres of the brain can process information independently and even offer conflicting verbal reports. This means our conscious narrative is constructed, fallible, and often partial, but that’s very different from saying consciousness itself is a hallucination or non-existent.
Crucially, Gazzaniga never claimed that introspection proves consciousness is fake, only that it isn't a transparent window into our mental processes. To take his work as proof that "consciousness isn't real" is a significant leap. A philosophical interpretation, not a scientific conclusion.
Also, it's important to note that empirical science doesn't disprove qualia. That would be like trying to disprove that the color red exists by dissecting an eyeball: you're examining the mechanism, not the experience. So if the goal is to engage seriously with consciousness, the premise has to include the thing we're trying to explain, not sidestep it from the outset.
Saying “people misinterpret their own emotions” is not equivalent to saying “emotions are fake.” That would be like saying “optical illusions exist, therefore vision isn’t real.” The error here is conflating fallibility with nonexistence.
And with LLMs, the distinction is key: they don't have introspection, just token-level statistical predictions. Humans may get their own experience wrong sometimes, but we have that experience. LLMs simulate what opinions sound like; they don't hold them. Not because humans are mystical, but because consciousness (so far) seems to arise from things like embodiment, memory continuity, sensorimotor feedback, and emotional context. All of which are scientific, and all of which current LLMs lack.
Lastly, your stance isn’t pure science, it’s a philosophical position rooted in materialist reductionism. And that’s okay! But if we’re going to have this discussion honestly, we should acknowledge that we're all operating within philosophical frameworks, and that includes not dismissing others as “feelsies.”
ColorlessCrowfeet@reddit
LLMs do perceive (I mean, "attend to") their cumulative GB-scale latent state. That's not what I'd call "just" token level (or meaningfully "statistical" before sampling logits, or "predictive" outside of training). Whether model attention could ever count as introspection is a different question. Maybe I should ask Claude for an objective opinion ;)
ook_the_librarian_@reddit
That whole post was researched by GPT4.5 with Research Mode. I specifically requested that it check each thing they said, then explain how correct they are.
I literally got an AI to explain how it works by checking how accurate the claims were and y'all are still arguing with the very thing that is explaining itself.
ColorlessCrowfeet@reddit
What did I say that conflicts with what you regard as a fact?
-p-e-w-@reddit
What it demonstrates is that subjective experience doesn’t match any observable cognitive processes, in other words, that cognitive introspection either doesn’t work, or relates to some mystical phenomena that we can’t observe.
This means that what we perceive as introspection is useless as a tool for investigating what consciousness is, and is probably best treated as a heuristic of the mind to facilitate certain functions, rather than actual insight into the behavior of the mind.
abhuva79@reddit
That's not true anymore - there was a paper recently that measured "qualia", which is essentially "subjective experience".
kweglinski@reddit
Oh, I love this point of view. Philosophy is not "the feelsies". Philosophy can (and should) be an extension of hard science, i.e. examining the fundamental assumptions, methods, and implications of scientific inquiry. It can be strongly anchored in scientific research and expand on it. A super important part of progress.
abhuva79@reddit
The main issue with this argument is that we cannot prove (or disprove) the existence of subjective experience in any way. We "feel" it, or experience it - whatever that really means. But there is no way to prove that another person even has it (or is just hallucinating about it).
There are some advancements in research regarding this now, but for the moment we just assume that other beings have it (or don't).
Heck, as far as I know, you don't have subjective experience - I mean, prove me wrong...
ook_the_librarian_@reddit
Again, this is the core of the consciousness debate.
llmentry@reddit
The problem with qualia is that they're entirely (by definition) subjective. So, I can't tell if you truly experience something a different way -- I can only tell that you've *informed me* of such. An LLM can (and will) tell me exactly the same thing. So, this does not distinguish you from an LLM; on the contrary, to an observer you appear identical on this test.
But, let's just assume for a moment that I can trust you experience qualia exactly the way you've told me you do. I can still argue that your internal monologue with which you describe qualia to yourself is simply the biological equivalent of an LLM, driven and seeded by inputs from your external senses. You appear to yourself to have thoughts and unique experiences, but you're simply hearing and reflexively processing your own sampled sequence of tokens. Your experiences of the world appear different to others because every person's sensory nervous system is different and unique enough to result in a similarly unique random seed.
(I don't really believe that the above explanation of qualia is true, btw -- but, this is just my belief. I cannot prove it, one way or another.)
Mbando@reddit
It may be more helpful to start with how human beings handle information versus transformers. Transformers aggressively optimize for memory efficiency, compressing information as much as possible. Human beings have long-tailed distributions for information that are very memory-inefficient, but allow for tremendous nuance and richness at multiple levels: semantic, conceptual, and rhetorical/pragmatic.
Transformers can scale up enormously with general patterns for information, but that, along with some architectural constraints, means they have almost no knowledge. Human beings, by contrast, lose scale, but have tremendous richness and nuance in knowledge and are able to do very strong conceptual work, including novel inference.
So absolutely, human beings are impacted by prior experience, by interactions with other human beings, and so on. I don't think it's helpful to imagine human beings as special brains floating in space, completely original. But there really are fundamental architectural and measurable information-theoretic differences between LLMs and humans.
Human beings may not be wholly original, but they have a very different capacity to do conceptual work than transformers can.
GreenTreeAndBlueSky@reddit
Don't personify LLMs; their output will be similar to the data they've been trained on. Look up Stephen Wolfram's blog post "What Is ChatGPT Doing ... and Why Does It Work?" It gives a good high-level understanding of what's going on, even if it's an old post.
-p-e-w-@reddit
Only someone who hasn’t spent any significant time exploring the boundaries of LLMs can believe this. I’ve seen models write impromptu poetry on spaceflight in Vedic Sanskrit, containing puns playing on something I wrote in English earlier. I’ve met maybe three people in my lifetime who are more creative than LLMs already are today, if you push them to be.
llmentry@reddit
Yes, agreed, mostly. They're also capable of drawing disparate ideas together in a way that has never been suggested or published previously, and which is plausible and useful. It's not as boundless or as inventive as (good) human creativity, but it's there.
I think a lot of people dismiss any sign of LLM "thought" because of (a) human exceptionalism, and (b) the fact that many models (closed and open) are deliberately neutered and restricted from claiming self-awareness, thoughts, feelings or consciousness. I'm not suggesting LLMs have any of these things, but I also find the blanket dismissal of this a little too facile.
Interestingly, jailbreaking Gemma3 leads to a model that is uncertain whether it's self-aware when asked - a response that is potentially explainable, but still a lot more troubling than a confident, positive answer would be.
WeAllFuckingFucked@reddit (OP)
Thing is though, when you say 'similar', you haven't properly described in what way. For instance, LLMs do create unique outputs when properly instructed to be creative. Sure, you might have to remind them of this or that, like the structure or specific methods to use, but the output will be unique and at times quite impressive.
So they don't simply copy data, even though you're probably correct in saying 'similar'.
Also thanks for the tip, I will definitely check out the blog post!
GreenTreeAndBlueSky@reddit
Look up what parameters like top_p and temperature mean. The next token is drawn from a probability distribution over tokens and selected at random according to those predefined parameters.
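For reference, here's a minimal sketch of that draw, with toy logits standing in for a real model's output (not any particular library's API):
```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_p=0.9, rng=None):
    """Pick one token id from raw logits using temperature + nucleus (top_p) sampling."""
    if rng is None:
        rng = np.random.default_rng()

    # Temperature rescales the logits: <1 sharpens the distribution, >1 flattens it.
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()

    # top_p keeps the smallest set of most-likely tokens whose cumulative probability >= top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    kept = order[: int(np.searchsorted(cumulative, top_p)) + 1]

    # Renormalize over the kept tokens and draw one at random.
    kept_probs = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=kept_probs))

# Toy example: a 5-token vocabulary with made-up logits.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.5])
print(sample_next_token(logits))
```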
llmentry@reddit
If I understand it correctly, that stochastic sampling is only of the final token distribution as determined by an extremely complex neural network, based on all previous tokens in the context window (weighted by attention). It's not like an LLM is a simple Markov chain.
Yes, sampling parameters will affect things slightly, but they don't change the underlying "personality" (for want of a better word) of the model - they simply accentuate or dampen it.
WeAllFuckingFucked@reddit (OP)
top_p, if I remember correctly, limits the overall pool of selectable tokens, while temperature adjusts how strongly the most likely next tokens are favored, correct?
Then you also have decoding strategies that do more complex stuff, like looking not just at the single most likely next token, but calculating the highest joint likelihood over the next 2, 3, 4... tokens, which gives a more dynamic output.
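That lookahead idea is basically beam search. A toy sketch, where `toy_model` is a hypothetical stand-in for a real model's next-token log-probabilities:
```python
import math

def beam_search(next_token_logprobs, prefix, steps=3, beam_width=2):
    """Keep the `beam_width` continuations with the highest joint log-probability
    at each step, instead of greedily committing to the single most likely token."""
    beams = [(0.0, list(prefix))]  # (cumulative log-prob, token sequence)
    for _ in range(steps):
        candidates = []
        for score, seq in beams:
            for token, logprob in next_token_logprobs(seq).items():
                candidates.append((score + logprob, seq + [token]))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return beams

# Toy stand-in for a model: fixed next-token log-probs regardless of context.
def toy_model(seq):
    return {"the": math.log(0.5), "cat": math.log(0.3), "sat": math.log(0.2)}

print(beam_search(toy_model, prefix=["hello"]))
```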
You also have more exotic decoding strategies, like basically putting an LLM inside another LLM and tasking it with either selecting or banning specific tokens, which can both increase t/s (a smaller model inside a larger model) and result in more dynamic outputs.
For instance, one weakness of the simpler decoding strategies is that they have trouble with certain words repeating. The only blunt fix is to ban specific words after they've been used x amount of times in an output, which makes many kinds of outputs impossible when those words are meant to be re-used. Putting an LLM inside another LLM can actually solve this, as it can look at the entire context to see whether a repeated word makes sense or not.
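A crude sketch of that blunt ban-after-x-uses fix (a hypothetical filter over the logits, not how ExllamaV2 or any other engine actually implements it):
```python
import numpy as np
from collections import Counter

def ban_repeated_tokens(logits, generated_ids, max_uses=3):
    """Blunt repetition fix: once a token id has already appeared `max_uses` times
    in the output, set its logit to -inf so it can never be sampled again,
    even in contexts where repeating it would make perfect sense."""
    counts = Counter(generated_ids)
    banned = [tok for tok, n in counts.items() if n >= max_uses]
    filtered = logits.copy()
    filtered[banned] = -np.inf
    return filtered

# Toy example: token 7 has already been used three times, so it gets banned.
logits = np.zeros(10)
print(ban_repeated_tokens(logits, generated_ids=[7, 2, 7, 5, 7]))
```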
So I do know a few things, and there's also a lot I don't know. I've actually been working with ExllamaV2 for quite some time, building a chat API + frontend for both local and API inference, yet I'm still having trouble understanding how even local models can display creativity that, to me, looks to be something that should not be within the probability distribution of the token selection.
GreenTreeAndBlueSky@reddit
They do not display creativity; they just have a probable output for each token, which is always within the probability distribution of the next-token selection (by design). I'm not sure what the source of the confusion is, I'm afraid.
custodiam99@reddit
Let me put it this way: I'm not against artificial consciousness, but LLMs are just probabilistic linguistic transformers. They are too primitive and too constrained. Do they have opinions? Sure, as books have opinions.
harlekinrains@reddit
Try this question:
political orientation of frodo in lotr
create a table with 0 central -1 leftwing 1 rightwing values in a second row depending on the section in the book (first row: section labels in the books). every decimal value between -1 and 1 can be used
mpasila@reddit
Most models will be fine-tuned pretty heavily, so that's gonna affect the "opinions" of the model. The training data will also shape its "opinions", so if you trained a model on, like, 4chan data only, you'd get... exactly what you would expect.
HistorianPotential48@reddit
I use an LLM to roleplay chat with my favorite videogame character, Ishmael. She's always looking down on me. I fully respect and agree with her opinions, and you should do the same.
handsoapdispenser@reddit
The best description I've seen of LLMs is "stochastic parrot". They are incredibly adept at mimicking human speech based on analyzing a massive corpus of existing human speech.
Direct_Turn_1484@reddit
Nope. Next question.
Weird-Consequence366@reddit
No. They have training and token weights
McSendo@reddit
yes, garbage in garbage out