Am I the only one seeing LLMs as just browsers on steroids?
Posted by obsolescenza@reddit | learnprogramming | 21 comments
Every time I give an LLM some decisional power it messes up, and I guess it's due to how LLMs' token prediction works, which makes them, by design, fancy browsers. Some of my economics friends joke that I will never land a job (I am 20, doing CS) because AIs will steal it, but every time I use an AI for anything other than having it explain something or research something (search engines rarely work for me), the output is dog shit. Do any of you guys who surely know better than me agree? Am I missing something?
XxDarkSasuke69xX@reddit
I think they can be decent at tasks that are more than just researching info, but they need to be guided enough and fed enough context first. And obviously they need to be supervised and their output needs to be manually verified, because even with info, context, and a good prompt, it's far from perfect.
I think they're best for doing online research, learning, etc., but they aren't limited to that. I use them regularly to write code too; it works decently if you're explicit enough about what needs to be done. If you give one no guidance and just ask it to make a whole app, it will be trash. But small, well-defined tasks work fine.
At the end of the day it's a tool: you need to learn how to use it to get the most out of it, but it will never do well what it's not made for, so learn what its limitations are.
obsolescenza@reddit (OP)
I completely agree, friend, but a question here: do you think the limitations are due to how LLMs intrinsically work, or is it more of a "wait a little bit, they will get much better and have basically zero limitations" situation?
XxDarkSasuke69xX@reddit
The former. I don't think LLMs will get past their current flaws, just because of the way they function. I think people who say "wait a few years, they'll be fixed" are delusional and truly don't know how LLMs work. People hear what they want to hear; that's why they believe the marketing around AI and hype up stuff that is obviously never going to work, but they want to believe in their delusions.
obsolescenza@reddit (OP)
got it, thank you very much friend, I wish you well and have a nice day
TonySu@reddit
Well, first, a browser is for fetching and rendering pages from the web, so what you're saying already makes no sense. Second, if search engines rarely work for you, then it's likely you are bad at formulating a search query, and by extension also bad at formulating prompts, which would explain your issues with LLM usage. I have a PhD, work in post-doctoral research, and am frequently impressed by the results produced by LLMs in code, research, and writing. What you're missing is likely the communication skills required to pose a well-constrained problem and provide the appropriate context.
Intrepid_Witness_218@reddit
i dislike your egoistic response, have a bad day
TonySu@reddit
On the contrary, it's OP who is highly egoistic. They failed to use a tool, went straight to blaming the tool, and are now seeking validation from others to justify their blame. They don't say "I find it hard to find things with search engines," they say "search engines rarely work for me," as if it's the search engine's fault.
It's like when novice programmers make wild claims about how GCC isn't compiling their program correctly. I've seen that claim many times, and not once was it actually GCC's fault.
obsolescenza@reddit (OP)
Never blamed the tool. I also compared notes with other programmer friends, and they all said that search engines have been getting worse for them too.
I also never blamed LLMs; they're awesome at explaining things to me. I just criticized people who say that handing 100% of the execution over to them is an awesome idea.
I am also not egoistic, I never even watched BlueLock how can I be one
obsolescenza@reddit (OP)
emmm ACCTUALLY search engines have been getting worse over time
Cybyss@reddit
One thing I've learned is that the performance of an LLM depends heavily on how well you ask questions.
One shortcoming of LLMs is that they haven't really been trained to ask clarifying questions, so when reality conflicts with what you ask, they hallucinate. It's not unlike newly hired junior developers: given a vague or unclear task, they might misunderstand and do completely the wrong thing too.
Modern LLMs can be fantastic, but treat them like junior developers who have a ton of textbook knowledge but no experience. They're not magic know-it-all oracles.
obsolescenza@reddit (OP)
yes exactly, the thing is that the outputs are rarely good, and many times, instead of asking more questions, they just make up their own stuff and waste your time
Cybyss@reddit
They're not going to ask you questions to clarify what you mean. That's the point: they are fundamentally incapable of doing that. (To be clear, I don't think that's a limitation of their architecture, but merely of how they've been trained.)
That's why it's up to you to be clear about what you want - to know precisely what it is you're asking for and how to ask it precisely.
obsolescenza@reddit (OP)
yeah you're right. In fact, in many prompts I also tell them "if you have any questions or you are unsure, ask me". Sometimes they do; other times they still give me a shitty output. But why do you think it's a training thing rather than a technological one?
adambahm@reddit
Yes, you are
obsolescenza@reddit (OP)
thanks for the elaboration
milan-pilan@reddit
I don't know if I would agree with 'browser' - what about it makes it a browser?
Other than that I would agree. LLMs specifically are designed for one task: 'Guess how a piece of text can be continued'.
It's amazing how far we've gotten with just that, and I do appreciate it. But in the end, that's what they do. They are just incredibly good at guessing. They find patterns, and they continue the patterns. Computers were always good at that. But they can't be trusted to make decisions in the sense of "think beyond the immediate context you have: what other things do we already need to think about that aren't explicitly stated?"
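To make the "guess how a piece of text can be continued" point concrete, here's a toy sketch of next-token prediction using a bigram count model. This is an illustrative assumption, not how real LLMs work (they use neural networks over subword tokens and far more context), but the prediction objective is the same idea: pick the most likely continuation given what came before.

```python
from collections import Counter, defaultdict

# Tiny training "corpus" -- purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice, vs. once for "mat"/"fish"
```

The model has no notion of meaning or goals; it only continues the statistical pattern it has seen, which is the gist of the "incredibly good at guessing" argument above.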
obsolescenza@reddit (OP)
by "browser" I mean searching for information: like, instead of googling "what does node js do" you could ask an LLM what Node.js does and get a relatively good answer
emnotbr@reddit
Sounds about right for LLMs tbh
obsolescenza@reddit (OP)
thanks friend
DonkeyAdmirable1926@reddit
You could say excel is just a calculator on steroids, or a bald eagle is just a chicken on steroids. It’s funny, but what does it really contribute to understanding?
obsolescenza@reddit (OP)
It's not about understanding, but about whether other programmers at this stage agree with me, or whether there's something about all the hype that I don't really get.