Is anybody ACTUALLY surprised about the "your brain on ChatGPT" study?
Posted by UniForceMusic@reddit | ExperiencedDevs | View on Reddit | 208 comments
In my feed I saw a bunch of posts going "after reading the Your Brain on ChatGPT study I decided to change this about my use of AI", and it boils down to "thinking first, before asking Chat to solve it for me", which... I mean, really... is that a revelation?
Did we really need a study to make people aware of this?
This isn't a new phenomenon by any means, but at least back in the day on Stack Overflow, if you outsourced your critical thinking you were met with endless judgement and criticism instead of endless compliments.
RustOnTheEdge@reddit
Dude, I had a coworker who was copy-pasting custom error logs (custom as in, very specific to our domain and application) into ChatGPT and just blindly trying the hallucinations it presented.
I have lost faith in my fellow “engineers”.
Material_Policy6327@reddit
Yeah, I'm noticing lots of folks are starting to just blindly go into autopilot mode. I work in AI for healthcare, and that should never be the case.
UniForceMusic@reddit (OP)
I'm very glad the "I want to figure things out for myself" instinct runs strong in the doctors in my family.
Fidodo@reddit
I feel pretty immune because the entire reason I got into CS is because I wanted to know how a computer works inside out. I just can't accept "just because" for an answer, I need to know how a solution I use works.
Schmittfried@reddit
You can verify an LLM's response tho. Or rather: using it is great, but don't use it for things you can't validate, unless the result really doesn't matter.
Fidodo@reddit
That's what I do, but a lot of developers are lazy and don't, and some even delude themselves into thinking they don't have to.
Schmittfried@reddit
Fair. Just for completeness’s sake tho, those lazy developers don’t care how their other tools work either, and are generally not very competent in solving problems that go deeper than StackOverflow to begin with.
Fidodo@reddit
Yeah, those are the developers that will really suffer from AI, and it's a huge portion of the workforce. I find that the devs who are most bullish on AI are the ones mostly doing boilerplate work in their day-to-day.
LLMs are a great Stack Overflow replacement, but once you're dealing with problems that are too specific for Stack Overflow, those agents completely shit the bed.
I was evaluating one a bit earlier on a bug I was having trouble with and it just made up completely random bullshit. Turns out the problem was really dumb, I needed to update an outdated library, but if I listened to the agent I would have been going on a completely wild goose chase.
Even for a simple problem like that they suck because they lack the reasoning to take a step back and reevaluate their assumptions. They act like an intern that assumes their first guess is right. It makes sense because they're a text prediction algorithm so they don't actually know what they don't know. Text they generate that is incorrect has the same weight as text that is correct so they double down on being wrong.
SmellyButtHammer@reddit
It feels like we're really weeding out the "in it for the paycheck" crowd from the people who are actually interested in this.
In my experience, the devs who are slinging AI slop weren't great to begin with. The ones who were good are using AI to actually accelerate their development.
I've been bouncing around whether it's worth it to review the bad devs' AI slop or to just cut them from the team and use the AI myself. Either way I'm reviewing AI code, and I'm pretty sure I can build a better prompt than whatever produced the trash I'm getting served up to review.
Fidodo@reddit
I'm fine with people being in it for the paycheck, but what I'm not ok with is people not respecting the craft. I take my job seriously and try my best to put out good quality code and learn and improve my abilities, but it seems a lot of the opportunists don't seem to respect the field at all. It'd be like getting a job at a fine woodshop then questioning all the safety requirements they have.
SmellyButtHammer@reddit
That's what I meant by people "in it for the paycheck."
niftyshellsuit@reddit
Same, though I do use AI tools a lot during a normal work day. There are some problems I have to solve for the day job that I am just not interested in, I just need to get them done so I can move on to something more fun.
I don't use the agentic code tools though, just the good old chat interface, and I ask it a lot of questions and always end up tweaking its suggestions. I end up in a kind of "pairing with a junior" situation rather than just blindly accepting whatever it spits out, so if anything I think the quality of my code has gone up.
The_Real_Slim_Lemon@reddit
I feel like AI healthcare is the new biggest trend right now. It might be observation bias, but I worked at a place where the CEO was gonna "revolutionise healthcare" with his amazing idea, and now I'm seeing every other dev working in the same space.
Guy also fired the entire existing platform team over a period of about a month (and has tried to rehire every last one of us in between the firings lol).
Hope your platform actually helps people
cc81@reddit
It is everywhere now.
I would think that healthcare also has some of the biggest potential when it comes to AI usage, but I suspect we will see some large scandals/crashes before we get there.
There are so many tasks around analyzing pictures/samples that AI should be able to assist well with, or giving suggestions to doctors based on patient history and tests. Especially as many doctors have such limited time with a patient, they might miss things that an AI (in theory, in the future) should not.
Or even giving information to patients about their diseases; discussing with an AI might be easier than the pamphlet they get today.
SmellyButtHammer@reddit
Dude, it's AI everything right now...
AchillesDev@reddit
My first exposure to the space was in 2018, and the company had already been around for several years by then. It's been around and doing cool moonshot stuff for a long time.
ColoRadBro69@reddit
There's a weird thing going on. You can get the LLM to give you useful regex, icons, and other little things; it's not completely wrong at all times. But every time my colleagues open their mouths about it, something stupid comes out. It's bad enough that I wouldn't want to say out loud that I used it to generate a unit test, because I don't want to associate myself with the people who think this software program is the Oracle of Delphi.
BarnabyJones2024@reddit
I've had a lot of success throwing regex and YAML questions at it. It more or less fixed all of the formatting issues and correctly identified what I'd done wrong setting up my docker-compose file for a home media server. Some of those things I could have figured out, but others I just really don't care to ever bother learning, and I say that as someone fully on board the AI hate train.
TangerineSorry8463@reddit
Some things, like regexes and bash scripts, are a perfect thing to just use an LLM for. I use regex maybe once a month; keeping the "write out a regex in under a minute" skill in good condition doesn't feel like a good return on investment for my life, when I can describe to the robot what I want out of the string.
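The kind of one-off extraction I mean, as a hypothetical sketch (I'd still sanity-check whatever the model hands back):

```python
import re

# Hypothetical one-off task: pull the timestamp, level, and message out of
# log lines like "2024-05-01T12:00:00Z [ERROR] something broke".
pattern = re.compile(r"^(\S+) \[(\w+)\] (.*)$")

line = "2024-05-01T12:00:00Z [ERROR] something broke"
match = pattern.match(line)
if match:
    timestamp, level, message = match.groups()
    print(timestamp, level, message)
```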
Fuckinggetout@reddit
Yeah, I just don't feel like learning more than the basic .* of regex when I'll use it at most 3 times a month. That's the perfect job for an LLM.
edgmnt_net@reddit
I suppose it's more useful in an NP-complete sense, to give an analogy: getting answers that are harder to produce but easy to verify. Otherwise it isn't very useful, because it's not very predictable and we've (usually) already got better tools to deal with regexes, formatting, or code generation. Verifying some of that stuff isn't easy, especially if you try to scale it. I have little use for thousands of lines of boilerplate which might be wrong in ways that are difficult to anticipate.
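To make the "easy to verify" side concrete, a minimal sketch (the proposed pattern and the test cases are made up): accepting an LLM-suggested regex only after it passes cases you already know is cheap, even when producing the regex yourself would not be.

```python
import re

# Hypothetical regex an LLM proposed for ISO dates (YYYY-MM-DD).
proposed = r"\d{4}-\d{2}-\d{2}"

should_match = ["2024-01-31", "1999-12-01"]
should_not_match = ["01-31-2024", "2024-1-31", "not a date"]

# Verification is just running the candidate against known cases.
ok = all(re.fullmatch(proposed, s) for s in should_match) and not any(
    re.fullmatch(proposed, s) for s in should_not_match
)
print("accept" if ok else "reject")
```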
ATotalCassegrain@reddit
It’s not just us either.
Watched a mechanical engineer ask ChatGPT to add a handful of numbers together for them... it would have been faster to just type them into a calculator.
Noblesseux@reddit
It's hammer-and-nail syndrome, and it's one of the defining issues I have with AI. People genuinely seem to see it as a cure-all for every problem and have tried to insert it into legit everything, rather than just using it for what it is actually good at.
This applies to personal use, but also to companies who for some reason are obsessed with using AI for things that have objectively better solutions. It sometimes feels like watching a fully grown adult smash a triangle block into a circular hole and try to convince you that this is actually the smart way, and when you point out that it's clearly not working they get mad at you and call you a doomer.
itsgreater9000@reddit
My father uses AI for help with stock picks. I don't think these AI companies understand what they've unleashed upon the world. I hope it gets better.
But I'm old enough to know hope doesn't count for much.
MoreRopePlease@reddit
I ask chatGPT to help me understand things well enough so that I can make investment decisions. Like BND vs treasuries. Or why a CD vs an Ibond vs a TIPS. Or how to think about portfolio allocations given my goals and concerns about the current market and politics. It leads to productive "conversations". I still make the decisions, though. It's a computer, that's all.
itsgreater9000@reddit
do you think that chatgpt would ever generate incorrect information about these topics for you?
MoreRopePlease@reddit
Of course it would. But I get better answers more quickly than googling. And I learn enough (and keywords, too) to know what things to dig into more deeply on my own.
It's a computer. It's a tool. Humans can't turn off their brains when they use it.
dyspepsimax@reddit
You probably feel you get better answers more quickly than googling because Google literally sabotaged Search in order to force more users to use Gemini.
casino_r0yale@reddit
Google search broke long before Gemini or ChatGPT even existed as concepts
thekwoka@reddit
I think the key point there is getting a decent starting point: working conversationally with something that can bring you to a point of decent understanding, so that when you then look at the resources you search for, you're prepared to understand what the heck they're talking about.
There's a perennial issue with information on the web: it can be easy to find information, but hard to find the version of it that matches your existing level of understanding.
mugwhyrt@reddit
I'm sure they understand, and understood well before the rest of us did. They just don't particularly care as long as they keep making money and don't need to worry about being held accountable.
30FootGimmePutt@reddit
They have unleashed the ultimate tool for oppression and rewriting the social order exclusively for their benefit.
liquidpele@reddit
It has only lifted up the veil.
TangerineSorry8463@reddit
...is your father up or down 1) in general 2) compared to just dropping money into a SP500 ETF?
itsgreater9000@reddit
down 14k from his recent inheritance, so safe to say worse than the s&p500
DagestanDefender@reddit
to be fair AGI is a hammer that actually does work for every nail
HugeSide@reddit
It’s fantasy, so it can be whatever you want
Noblesseux@reddit
Yeah IDK why he said that like AGI is a thing that's real.
DagestanDefender@reddit
it is not a thing, but if we had AGI then by definition we could use it for every problem, since by definition it is general
Noblesseux@reddit
And if my grandmother was a boat I could sail around the world without ever leaving my family behind. Making up nonsense situations based on hypotheticals is not the realm of engineering, it's the realm of fiction writers.
Also:
That's quite literally NOT the definition of what an AGI is lmao. AGI, definitionally, is a computer capable of matching or surpassing humans across virtually all cognitive tasks. It is not a magic machine. Unsolvable problems remain unsolvable. You're thinking of ASI, which also doesn't mean what you think it does.
There are in fact plenty of problems that even an ASI can't fix, including, hilariously enough...hammering in a nail. Literally every technology, even BS techno futurist technology, has limits. Anyone who does not understand that really shouldn't be on a subreddit for experienced engineers.
DagestanDefender@reddit
you have the definition wrong; the actual definition of AGI is "the ability to satisfy goals in a wide variety of environments", so literally any task in any environment or context is a potential use case for AGI, by definition
30FootGimmePutt@reddit
That’s how it’s marketed.
AI boosters will come in here and other programming subreddits, call us Luddites, and insist we don't understand, just because we don't treat the AI as intelligent.
It’s the ultimate enabler for laziness and sloppiness.
thekwoka@reddit
We also get a lot of confusion where people use "AI" like you did here, to talk about LLMs (and the somewhat similar [but not that similar] diffusion-based media generators, collectively called generative AI) rather than AI as a whole.
Bakoro@reddit
To an extent, it's totally understandable and not even the wrong thing to do; you have to push a tool to see where it breaks.
What's absurd is that instead of testing it and carefully vetting it, people and businesses have immediately gone "YOLO" and dived off the AI cliff.
I'm super pro-AI, but even I'm like, damn, we got the metaphorical cruise control and lane assistance going, but people are assuming that it's already full self driving and are purposely taking a nap on the highway.
Frogeyedpeas@reddit
“How do I get the prompt right?” is still fair. All knowledge production is good. But after that, one should obviously switch to using a calculator.
Massive-Squirrel-255@reddit
Can you clarify what you're saying? There's no prompt you can give ChatGPT that gets it to calculate a function of two numbers with the same level of certainty as a hand calculator, so there is no right answer to "what's the right prompt", imo.
nonsense1989@reddit
To further add to your point: "gotta improve your prompting skills".
Yeah, so let's spend time and effort learning how a machine/application understands things through its specific syntax... sounds awfully similar to learning how to program stuff.
Frogeyedpeas@reddit
Why stop at programming? Let's take your reasoning all the way:
By your logic, intentionally putting in effort to learn any time-saving software a bit better than surface level is a bad idea, because "it sounds awfully similar to writing assembly!".
30FootGimmePutt@reddit
Except a calculator is faster and more accurate.
nonsense1989@reddit
Is it really gonna save my time? If it saves yours, it's because you weren't good to begin with.
Zealousideal-Sir3744@reddit
You can ask it to calculate it with a Python script, and it will get it right. It maybe won't have the same certainty as a calculator, but depending on the complexity it's still probably correct 98% of the time.
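For example, the generated script is usually as trivial as this (a sketch of what the model would run in its sandbox; the numbers are made up):

```python
# Sketch of the throwaway script an LLM might emit when asked to compute
# with code instead of predicting digits token by token.
numbers = [12.5, 7.25, 3.0]  # hypothetical inputs from the prompt

total = sum(numbers)  # exact arithmetic, no guessing involved
print(f"sum = {total}")
```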
sirkook@reddit
You can see how "probably correct 98% of the time" is not reassuring in the context of engineering, right? It's contrary to the concept of the obligation of the engineer.
Frogeyedpeas@reddit
You’re missing the point,
If you had to pick between two software engineers, identical except that one prompts to about 50% accuracy on a random shot and the other prompts more cleverly, getting say 85% accuracy, who would you hire?
Obviously the better prompter all things else equal.
So clearly there is SOME skill here worth developing, to some extent.
binarycow@reddit
.....
.....
.....
Then.... What's the point?
Frogeyedpeas@reddit
There are better prompts and worse prompts. It's worth getting used to these things and TRYING to learn them without unreasonably wasting time.
If it gets your addition wrong, play with it and explore a bit. See if you can come up with any kind of vague strategy that is more effective than what you do by default.
Those habits carry over into other prompts you write, and it's completely analogous to "learning how to search on Google more effectively", if you recall how the internet used to work pre-2016.
Darkmayday@reddit
Are you aware of how LLMs work?
Frogeyedpeas@reddit
Yes, and knowing how to prompt them more effectively isn't wasted effort. Similar to how knowing how to use a calculator is useful EVEN if you can already do mental math quickly in your head.
It's a subtle point I'm raising, so I'll explain. In terms of difficulty to use:
Do the math in your head/on paper -> translate the problem and use a calculator -> use an LLM with the original problem statement.
Just because you CAN use pen and paper doesn’t mean it’s a waste of time to learn to use the calculator.
Just because you CAN use the calculator doesn’t mean it’s a waste of time to learn to prompt the LLM more effectively.
thekwoka@reddit
which is wild, cause you can just type it directly into the search bar as is and get an actual calculated response...
isurujn@reddit
A while ago, my car broke down, stranding me on the side of the road. There was a repair shop nearby, so I asked them to come take a look. The mechanics ran a diagnosis but couldn't figure it out. Then I saw them searching the error codes on ChatGPT. Granted, my car is German and the mechanics were experienced with Japanese cars. But still, I was like, "Do these guys know that ChatGPT hallucinates?" I didn't want them taking apart my car because ChatGPT doesn't admit when it doesn't know something. So I towed the car to my regular mechanic, who found the problem within seconds of starting the engine. No diagnostics needed.
Guess AI still can't replace hard-earned experience.
Index820@reddit
We are so cooked
false_tautology@reddit
ChatGPT was not designed to be a calculator in any way shape or form!
Singularity-42@reddit
These days it can use tools, like writing a quick Python program, to get around this limitation.
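Roughly, the tool-use pattern works like this (a minimal sketch; the names are illustrative, not any vendor's actual API):

```python
# The model emits a structured call instead of guessing at digits; the host
# runs real code and feeds the exact result back into the conversation.
def run_python(source: str) -> str:
    """The 'tool': execute a small generated script and capture its output."""
    import contextlib, io
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(source)  # fine for a sketch; a real host would sandbox this
    return buffer.getvalue()

tools = {"run_python": run_python}

# Pretend the model emitted this structured call for "add these numbers":
tool_call = {"name": "run_python", "arguments": {"source": "print(12.5 + 7.25 + 3.0)"}}
result = tools[tool_call["name"]](**tool_call["arguments"])
print(result)  # "22.75" is what goes back into the model's context
```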
Chennaz@reddit
Talk about reinventing the wheel...
Singularity-42@reddit
Adding numbers is a trivial example, it can do quite a bit more than that.
MoreRopePlease@reddit
I asked it for books with three words in their titles. It can't even do that. Hahaha.
geopede@reddit
Makes me thankful to be in defense, my fellows can’t use AI because it’s a security issue
pbvignesh@reddit
I have a feeling these are the kind of folks who would blindly copy-paste solutions from Stack Overflow into their codebase. It's just that the website they copy-paste the answers from is now different.
Noblesseux@reddit
I feel that way, but about the entire adult population. A lot of people for some reason learned AI was a thing and just decided to shut their brain cells off, and it genuinely gets on my nerves having people respond to me, both here and elsewhere, with AI-booster takes that are often based on them having legit no idea how either the technology or the industry they're saying it will replace actually works.
DefinitelyNotAPhone@reddit
It's a combination of several factors:
Education in the US has nosedived in the past 20-30 years. I assume most people in this sub are both American and young enough to have been in school post-No Child Left Behind, but for those who aren't: schools have prioritized getting good scores on standardized tests over actually teaching anything, budgets have been eaten up by top-heavy administrations that leave almost nothing for hiring and paying teachers, and as a result the gap between an actually decent education program and what most American students receive nowadays could be mistaken for the Grand Canyon. This does not produce many people who come out of school knowing how to think and learn, because rote memorization is all they've ever been taught.
Most people in developed countries work Bullshit Jobs. These jobs rarely if ever produce meaningful results, offer extremely little job satisfaction, and functionally exist as make-work programs for most people. I'd include quite a lot of software engineering in this category, to be honest, but the important thing is that there's a pretty big disconnect between the workers themselves and the output of their work, which leads to disinterest and a lack of professional pride.
AI is being pushed as a cure-all by the largest collection of institutional capital to ever exist. Every news story is showing Sam Altman standing on a stage promising that 90% of the world's economy will be automated by LLMs within a decade, despite everyone who knows what they're talking about knowing that's complete hogwash. To a layperson watching the news, it's hard to shake the idea that this is the next big transformation.
So if you're doing a job you don't particularly care about producing work that you have no real stakes in and you're convinced that AI is going to do everything in a few years, why not just outsource the energy and effort onto ChatGPT? A bunch of these people will get burned when something inevitably blows up in their face because they blindly trusted the hallucinating Mechanical Turk, but we're only just now hitting the point where that's going to garner any mainstream attention and it probably won't be a consensus until the AI bubble properly pops.
ButThatsMyRamSlot@reddit
There’s a significant portion of people in the US professional services industry that just want an “If A, do B” job. They don’t have the energy or capacity for constructive critical thinking.
AI is quickly exposing those who can’t think critically, and empowering those that can.
IkalaGaming@reddit
It feels like someone released a psychological weapon of mass destruction that I happen to be immune to because of some neurological mess-up.
I wonder if this is what the .com bubble felt like.
I hope these people are right, and we get a super-intelligent AI. So that I can join the war against the machines, on the side of the machines. Maybe then I could get some actually working software.
AHistoricalFigure@reddit
Part of the problem is that AI has genuinely truncated expectations about the amount of time certain tasks take.
Let's say I need to bang out a webapp UI for an internal service. It's a few pages, a few forms, and probably a few semi-custom widgets. Previously I might have gotten 2 weeks to build something like this. Now I get half that time or less. There's nothing particularly novel about this project; it's just a lot of typing. If every page needs 400 lines of JS to perform rote updates and API calls, I'm an idiot if I don't delegate this stuff to my LLM.
"Hey ChatGPT, I need JS that calls this controller action and updates these 3 HTML divs on these triggers. Constraints are X, Y, and Z." This takes a 20-30 minute task down to a 2-3 minute task + some time for proofreading and testing.
30FootGimmePutt@reddit
I do that because the people who built this house of cards didn't think they would ever need reasonable, easy-to-understand error messages.
audentis@reddit
Hey now! If your custom logs don't provide enough context for the LLM, they're clearly just bad messages.
It can't be the LLM's fault!
thekwoka@reddit
I'm surprised you had any.
I had little faith in the vast majority from the end of my first year coding.
OPPineappleApplePen@reddit
I am learning to code, and honestly, I have stopped asking ChatGPT for solutions after a few attempts. It kinda sucks. Yeah, it's good at finding syntax errors, but it falls badly short when it comes to the intricacies of problem solving.
I dunno how experienced devs don’t see through them.
MoreRopePlease@reddit
ChatGPT works well when you ask it to explain specific things. That's better for learning anyway than just asking for solutions.
OPPineappleApplePen@reddit
I do precisely that now. I use it more like a teacher I can bring my doubts to, and less like a fellow student I can cheat from.
YahenP@reddit
An LLM is a great thing when the task is small in volume, you know the principle of its solution deeply and in detail, and the task is typical, with a ready-made canonical solution described in the documentation or in the context. An example from literally yesterday, where an LLM did a great job for me:
When initializing a page, the JS loads several identical data blocks from several endpoints. For optimization purposes, the backend was redesigned so that the data could be loaded as a group, from one endpoint, in one request. I described the old behavior in the context and described the desired new one. I gave examples of a request and response for the old endpoint, and examples of a request and response for the new endpoint. I loaded the JS, in which I had described in comments where the necessary business logic is located and which methods can be changed and which cannot, and the LLM did a great job. It even added the possibility of implementing future caching, at my request.
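Roughly the shape of the change, if I sketch it in Python rather than the actual JS (the endpoints here are hypothetical):

```python
import requests

BASE = "https://api.example.com"  # hypothetical backend

def load_blocks_old(ids):
    # Old behavior: one request per data block.
    return [requests.get(f"{BASE}/block/{i}").json() for i in ids]

def load_blocks_new(ids):
    # New behavior: one grouped request fetches all the blocks at once.
    return requests.post(f"{BASE}/blocks", json={"ids": list(ids)}).json()
```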
Could I have written this myself? Of course. Would I have done it faster? Most likely, yes. But! I did this in the background during my normal daily team call, so as not to waste time and get bored.
perk11@reddit
It heavily depends on which model you're using. 4o is trash, but o3 can get some heavy lifting done.
chaitanyathengdi@reddit
ChatGPT has helped me solve some very cryptic stuff, but it is in no way a substitute for using my own head.
forbiddenknowledg3@reddit
That's what I did before with Google and Stack Overflow.
Xsiah@reddit
I google generic error logs that are generated by the framework or whatever, but it wouldn't make sense to google something that was custom written by another dev.
ch34p3st@reddit
I've seen your story play out, plus the use of sudo whenever something doesn't work. It's absurd. This is at a bank.
RusticBucket2@reddit
`sudo makemeasandwich`
thephotoman@reddit
🥪
Singularity-42@reddit
I've seen juniors just blindly copy-pasting generated code. When I got into a face-to-face discussion about it, it was clear they didn't understand any of it. In GitHub review comments this can be faked, since you can ask ChatGPT to answer a comment.
x_xwolf@reddit
Hire me, i won’t hallucinate unless you give me shrooms.
NoosphericMechanicus@reddit
Not surprised, but I like having the data to point to.
seattlecyclone@reddit
There's value in researchers doing the work to quantify things, even if the final conclusion seems obvious.
thewhiteliamneeson@reddit
Yes. Common sense is often wrong. That’s why we do science.
zukoismymain@reddit
Sadly, today's science is a mixture of play-pretend, corporate sponsorship, government propaganda, and sheer incompetence. So I don't put any stock in it unless it confirms my bias.
Which, I know, is a logical fallacy. But I just don't trust the science anymore. Not after they changed medical definitions for political reasons.
E3K@reddit
Omg you are insufferable.
zukoismymain@reddit
I try my best
geopede@reddit
“The science” is pretty vague; you can read into the specifics.
im-a-guy-like-me@reddit
On a totally unrelated note; I love when stupid people use "the voice of experience" and it makes them look even dumber.
jms4607@reddit
r/lostredditors
zukoismymain@reddit
Same man, same
HugeSide@reddit
Science is when we discover hard truths about the universe that can never be changed for any reason. I am very smart.
zukoismymain@reddit
It's changing back, so don't worry
OldIndianMonk@reddit
What is the context here? Who changed medical definitions?
clunkyarcher@reddit
The context is that the poster you replied to considers themselves anti-woke and it's very important to work that into every single discussion, whether it's relevant or not.
DuckDatum@reddit
Ask Descartes what he had to do when he realized common sense was often wrong.
the_king_of_sweden@reddit
Rene Descartes was a drunken fart. "I drink, therefore I am".
ForzentoRafe@reddit
Pfft. Science.
It's obvious that the world is flat. It's been that way for generations! Why are they wasting time researching something so obvious? Kids these days are so stupid. /s
RegrettableBiscuit@reddit
You just have to use your eyes. Is the horizon round? No? There you go, no measuring of shadows needed.
eGzg0t@reddit
Also, it's the illusion of validity in action. It's always "obvious" regardless of what you believe. Those who just say "that's obvious" after reading the conclusion are the ones whose beliefs coincidentally align with it.
geopede@reddit
This one was different
baezizbae@reddit
*glares at several of his former “leaders”…points at them, points at the above comment chain, points at them again*
AchillesDev@reddit
Unfortunately, that's not what was done here.
ColdPorridge@reddit
I have some coworkers and friends who would not have believed this before this study. What is obvious to some is obviously wrong to others.
xian0@reddit
I know you can't build another study upon a bunch of uncited "common sense" statements, but these studies seem to litter subreddits with 1000 upvotes each. I guess it's people upvoting anything they agree with, but could they not?
trashacount12345@reddit
First line of the FAQ
midwestcsstudent@reddit
If “cognitive debt” isn’t making us dumber I’m not sure what would.
HeckXX@reddit
They definitely messed up putting that FAQ on a separate page instead of bolding it at the top of the front page. Journalists aren't going to read that lmao.
fuckoholic@reddit
We journalists let GPT summarize it for us. FAQs weren't part of the summary.
AchillesDev@reddit
Then the PI goes on a press tour making exactly those claims.
MasSunarto@reddit
Brother, what does PI mean in this context? I believe that this is not Private Investigator, yes?
AchillesDev@reddit
Principal Investigator. It's usually the leader of the lab that produced a study, typically the last (sometimes the first) author on a paper.
MasSunarto@reddit
Brother, thank you for the answer. 🙏
trashacount12345@reddit
Some PIs are the worst.
Master-Broccoli5737@reddit
ai slop
Satoshixkingx1971@reddit
AI's primary goal is to make people want to use it. The easiest way to do that is to tell people things they want to hear even if they aren't true.
davy_jones_locket@reddit
I do a lot of technical writing right now, so I brain dump into obsidian, then paste into Claude and tell it to refine my user stories and acceptance criteria. I will paste in some technical docs and ask for refinement or gap check (what's missing) and it does a pretty good job at organizing my thoughts.
I don't have it do the thinking for me, just the polishing.
fallingfruit@reddit
My biggest issue when doing this is with ideas that are about similar things and desired outcomes. The AI will frequently merge these thoughts together and/or infer causation that isn't there.
davy_jones_locket@reddit
There are ways around this.
For work-specific stuff, I use Claude projects. I upload documents like the RFC so it doesn't hallucinate. I'll upload the milestones (we are building a new product, so we have a milestone for the demo, a milestone for private beta, and a milestone for GA; each milestone has expectations: demo is just the core value proposition, private beta is a cleaned-up UI, additional features, and just enough scaling, and GA is more optimization, bug fixing, and refactoring features based on beta feedback). When I'm fleshing out a 100-word blurb and some bullets for requirements and implementation details, Claude will organize them based on "this is for demo, this is for beta, this is for GA".
I of course review all of it, because it's not perfect. Some of it is like, "oh no, that's a core value proposition, we need that", or "this acceptance criterion isn't needed for the demo, we can make a new story for this feature or optimization".
Keep in mind that I work for a very small startup without any product managers. Granted, our product is by developers for developers, so it's easier to step into the shoes of our users (plus we dogfood our product). But the point is that you have to actually engage with the AI, tell it when it's wrong, tell it to change something, and keep refining. It's not "okay, here's the output, I'ma copy-paste this into Linear or Jira" while the scope is all wrong, the requirements mix what's core with what's nice-to-have, etc.
ebol4anthr4x@reddit
You're allowed to view it however you want, but organizing and polishing your thoughts in order to present them to others is 100% part of "the thinking" when it comes to technical writing. I'd argue it's actually most of "the thinking".
davy_jones_locket@reddit
I'm not a technical writer by trade, and the purpose of the docs isn't some creative prose or pursuit. The point is to make sure they cover all the technical requirements and are written in a way that even junior engineers can understand. It's about facts and comprehension.
I'm doing the thinking. I'm doing the fact-checking. I'm doing the revision. I'm supplying the data. I'm doing the heavy lifting. It's not some sort of "vibe docs", and I'm not getting an award like a Pulitzer for Linear tickets. 🙄
Schmittfried@reddit
True when it comes to writing as a craft. They probably meant thinking about the contents (i.e. not letting the LLM hallucinate facts) as that’s probably more relevant to their job than staying good at writing.
I mean, writing is the single biggest strength of LLMs and it’s also the easiest to verify, because while reading the response you’ll immediately notice if the tone doesn’t fit or the structure could be improved to get the point across more clearly. It’s one of those things that are easy to critique but hard to master.
Unless you’re an aspiring author or take personal pride in your writing style this skill has no value beyond the results it produces. So if an LLM helps you to produce those results with less effort and potentially better quality, it would be foolish not to use it. This application is probably the best candidate for comparisons to how horse riding became an obsolete skill as soon as cars entered the picture.
ListenLady58@reddit
Did this today! I am also doing a ton of technical writing right now. We’re at the very beginning of a new project so it’s all planning things out and creating technical specs.
TheOneWhoMixes@reddit
Coincidentally, I've been seeing a huge wave of "technical specs" at work that almost certainly nobody has read in their entirety, including the people submitting them. And honestly, I think there's a big problem here that's being missed.
Proofreading/editing is a core part of the writing process, technical or not. Humans make mistakes, and that's okay. It creates a back-and-forth that takes some time and effort, but that should eventually lead to a better end-result. Like code, I'd argue people should spend more time reading than writing.
But if every dev is churning out 3 full specs and 10 wiki pages a day (exaggerating, I know), then who's actually reviewing, reading, or following the specs? I've asked this question and have seriously been told "well, AI can also review and summarize the docs!"
Why thoughtfully iterate on the spec when it took 20 seconds to generate? We'll just generate a new one!
The problem isn't necessarily the quality of LLM-generated code/docs. It's really good at spitting out things that are very convincing, and if most people are honest with themselves, we can be very lazy reviewers when something has no obvious errors. And with the insane scale of newly generated content, people have even less time to thoughtfully review work.
Long rant, and I promise this wasn't a dig directly at you. This thread just brought some pain points to the front of my brain :)
ListenLady58@reddit
Oh I usually present the documentation to my managers and team. I have to get it approved and so I do have to know what’s going on in the document. Then they take a week or so to review and approve it.
In a technical doc I don't write paragraphs longer than 3 sentences. Anything long needs bullet points, a table, or a visual to make the content quick to digest. A lot of the other technical documentation I've seen is a wall of text separated by random, unstructured headers, and I don't like reading through that either. So I try to make my own more palatable.
yabomonkey@reddit
The problem you outlined and the solution you proposed are both solved by decent management. A solid manager should be able to both manage expectations from up high and keep their engineers from burning themselves out on documentation.
There is an ideal middle ground, but it takes a strong manager: one who is willing to take on the corp overlords who accept nothing but cold hard stats, and who is also an engineering leader, to maintain that homeostasis.
While product above, and occasionally engineers below, threaten that balance, the best move you always have, and should follow, is to document EVERYTHING!
Git handles this for you in terms of code, but beyond that you should get EVERYTHING in writing. Every directive or bugfix should come through the tracking software or email: some medium you can point to when shit hits the proverbial fan.
SadTomorrow555@reddit
Yeah, I also use it to suggest new technologies, and then I spend time researching them myself, lol. Like if it comes up with a good way to do something and I like it, I immediately start researching the library/tools/etc. I need to know what it's doing. But I don't mind being impressed when it does something cool that works, and then using it myself. That's what I really enjoy about it lol
babuloseo@reddit
PyCaret was a good one it told me about recently.
codeprimate@reddit
The best use case for an LLM is as an "information and idea processor".
The key thing to remember is that "processing" anything is more than just transformative; it's usually lossy, even if "more" is added to it.
Fidodo@reddit
Just make sure you double check every line it writes. Treat it like an intern you don't trust. Don't let up or the laziness will creep in.
Blinkinlincoln@reddit
It's undergrads. Those results aren't peer-reviewed and are pretty biased. But they made headlines.
capn-hunch@reddit
Not only do we need this study, we need 500 more before people start believing it on a wider scale.
QueenNebudchadnezzar@reddit
Nah. The powers that be desperately want you to be dependent on their paid AI. Then, once your skills have atrophied, they can pull the rug out, jack up the prices, or just shut it down entirely. The model for the future is what Nestle's baby formula business did to those poor mothers in Africa.
ZuzuTheCunning@reddit
Preach. The effects of search tools and overall internet usage are still an active field of research even today, and negative short-term effects (e.g. "digital amnesia") were sensationalized early on, over more grounded qualitative shifts in cognitive processes.
The line between actual concerns over brainrot and "back in my day" babbling is too fine to take broad stances.
LargeBuffalo@reddit
What in our current reality makes you think that the more studies we have, the more likely it is that people will believe it? ;)
On the contrary... we need 500,000 more tiktoks, reels and youtube shorts about it before people start believing it on a wider scale.
johnpeters42@reddit
I'll fire up the AI TikTok generator right now!
kernJ@reddit
With enough studies eventually ChatGPT will start believing it and then these people will too
0x11110110@reddit
then we need 500,000 more tiktoks, tweets, and articles in response calling bullshit based on faulty or straight up false evidence
capn-hunch@reddit
Good point. I stand corrected, thank you
mugwhyrt@reddit
I've tutored professionally, and it's not really obvious to a lot of folks how to engage in critical thinking and learning. That's a skill you have to learn, so it's not really surprising to me that many people have a hard time distinguishing between actual capability and competency and just mindlessly copying output from an LLM. I'm not saying you can't learn or maintain competency while using an LLM, but it does require more thought and active effort from the user. Critical thought is like a muscle you have to exercise, so it's easy for all of us to let it atrophy without really knowing it's happening.
the_pwnererXx@reddit
Of course you use your brain less if you delegate a task vs. doing it yourself. That's not an indication that LLMs might make you dumber, as others are implying. But that's also not even the main point of the paper.
The key part of the paper is section 4.
The subjects all wrote 3 essays prior, either all with LLMs or all without. In session 4, they switched roles, BUT THEY WROTE AN ESSAY ON A TOPIC THEY HAD ALREADY WRITTEN ABOUT!
This is an extremely critical flaw in the tests, imo. The group who already wrote an essay on the topic themselves will obviously have formed a lot of neural connections on the topic which are being reactivated. The LLM group didn't do anything with the topic other than use an LLM, so they don't have any such neural connections. They don't have any memories or thoughts from the previous sessions to draw on; they are starting fresh. It doesn't make sense to compare them to the group that is basically writing the exact same essay for a second time. If anything, their results should be compared to the other group's first attempt as a control.
They go on to state that the LLM group has weaker neural activity during this, and based on what I just said, that should neither be surprising nor a reason to jump to conclusions about LLMs making you dumb. Imo the entire study is misleading and there are actually no critical insights in it. It's just a hit piece riding AI hate for publicity.
Feel free to debate me if you disagree (if you actually read it)
Tldr: the study is flawed and the conclusions don't make sense
AchillesDev@reddit
Saddened but not shocked that I had to scroll this far for this.
Ihavenocluelad@reddit
Anything that's "AI is bad" is a karma gold mine on this subreddit. Of course AI hallucinates, and it can't replace devs, but people here act like it has never gotten a single line of code right. It's just coping, to be honest. AI is far from perfect, but in its current state it's pretty impressive.
HansProleman@reddit
It's an unreviewed preprint with unproven replicability... The results are plausible, but those things need to be addressed before I could say I actually believed them.
The methodology sounds pretty flawed, as /u/the_pwnererXx (hell yea) mentions. And I suspect there are many people trying to grab some of this hype-bubble media attention, by doing things like, uh, designing spicy, media-attracting studies and registering websites for them.
ranty_mc_rant_face@reddit
I'm surprised so many people are taking a not-yet-peer-reviewed study so seriously and uncritically.
I suggest people listen to the Change Technically podcast that dug into it in some depth (or read the transcript):
https://www.changetechnically.fyi/2396236/episodes/17378968-you-deserve-better-brain-research
I suspect it's popular because it hits some valid concerns we have - I found the idea appealing at first myself. But it doesn't seem like particularly good research.
30FootGimmePutt@reddit
That “study” looks like an ad.
When you start a flashy website and have “as seen in the…” at the top of your page, I’m not taking your science seriously.
jeffbell@reddit
Sure, it sounds like advice that we would give to anyone trying to learn the material,
... but I sure would like to see it replicated.
It came out of the Media Lab. MIT has departments in Biology, Cognitive Science, Applied Biology, Health Science, and others, but this came out of the School of Architecture.
Some of the groups had an N of 14. I've not worked through the stats, but I kind of think you would want more.
So far this paper is a preprint. It has not been peer reviewed.
AchillesDev@reddit
The number of subjects is the least objectionable thing about this abortion of a study, and is common in neuroscience.
specracer97@reddit
The most positive thing this study will do is trigger larger-scale inquiries into this topic, and probably a deeper debate on what actually constitutes and drives learning.
LLMs have done a fantastic job of showcasing how standardized testing is a flawed way to approximate expertise: these dipshit systems have long been able to pass the bar or a tech hiring screen, but spectacularly shit the bed when the slightest variation is introduced. This second part is not getting the coverage it should, because frankly it should be a widely known and understood system-use risk, and mitigation strategies should be widely taught (and yes, they do exist).
v-alan-d@reddit
If you actually read the methodology you'll scream "no shit!"
the300bros@reddit
I use the setting in chatgpt that tells it to give no fluff answers. Less bs
chaitanyathengdi@reddit
TIL there is such a setting.
the300bros@reddit
Maybe you don't have it, but I do. Maybe they haven't rolled it out to everyone. In my ChatGPT it's under settings. There are also controls for memory.
chaitanyathengdi@reddit
Paid plan?
the300bros@reddit
No. Free. I would post some pics but apparently I can’t in here
chaitanyathengdi@reddit
You can use imgur
the300bros@reddit
2 images on imgur
Past-Listen1446@reddit
I haven't even touched AI stuff for development. I worry it would be like using social media; I'm hopelessly addicted and can't stop now.
Cobayo@reddit
On the brighter side, it managed to motivate me to wake up looking forward to working on my fun side projects. If you're a grown-up, critical person, spend some time learning how it works. After using it a bit the novelty wears off; it's nowhere close to what it's being hyped up to be, so it's not that "addicting" once you see through it.
tiplinix@reddit
I found that once the novelty wears off, it gets pretty boring.
It avoids challenging your opinions as much as possible (you can give it the most absurd ideas and it'll tell you how great they are), and once you start pushing it, it breaks in persistent ways. For coding, it can give you good answers to StackOverflow- and Leetcode-style problems, but as soon as you start going into a niche it will just make stuff up (e.g. hallucinating library features).
That said, it's a handy tool for research, proofreading, rewording, and finding vocabulary, to name a few things.
dagit@reddit
I'm currently working solo on a personal project. I find it helpful to treat it like a hallway conversation. I can describe some technical thing I'm thinking through, sort of like rubber ducking. It will comment on things, and be really effusive about how clever I'm being (which I absolutely hate and would like to disable; I started telling it to be skeptical and not praise me).
Anyway, it's pretty useful as a sanity check. And often helps me get unstuck as in helps me make a decision on direction the same way a hallway conversation with a coworker helps. You can get it to make little tables of trade offs. It will sometimes bring up an approach I hadn't considered which I appreciate. I've gone off on some wikipedia reads and learned some cool new things because of that. Some of which I implemented and they were a good fit.
Basically, I think it can be used responsibly but it takes restraint and you have to use it a certain way.
It's pretty much always come back to bite me when I use any generated code from it or trust its write ups instead of seeking a primary source.
MoreRopePlease@reddit
That's a really good point, and shows you're really thinking deeply about this issue!
Haha, yeah I hate the sycophantic compliments too. Ugh.
thephotoman@reddit
I've actually scaled back my use of social media.
The world doesn't need to hear my kneejerk reactions to everything. I shouldn't be incentivized to post them under any circumstances, because 95% of my kneejerk reactions are dumb things that betray my principles. Hot takes were always shit.
I couldn't take Bluesky, because it was still dominated by hot takes and other bullshit. The algorithm didn't make us like this. We did it to ourselves.
DigitalArbitrage@reddit
I would like to see the same study done with and without people using a calculator for math equations.
planetmcd@reddit
People should read /u/AchillesDev's and /u/notger's comments. They should be the top ones. Read the study. The only conclusion one can draw from it is that your brain doesn't use as much energy when it isn't required to. That's it.
The study had participants write an SAT-type essay for 20 minutes, once a month, for 4 months. Some used an LLM for various iterations, depending on the study group.
If anyone takes away from this that LLMs rot your brain, you used a bad LLM to summarize the article, or didn't read it at all and perhaps don't have enough gray matter to begin with. Just think about that stipulation: if you use an LLM for 20 minutes out of the roughly 43,000 minutes in a month, about 0.05% of it, your brain rots. It is a silly conclusion to come to.
_random_rando_@reddit
I'm not, and it makes me immensely more frustrated that I lost my last job because I didn't want to participate in vibe coding. We literally had a company-wide meeting saying that our performance reviews would include "AI adoption" as a criterion (of course, this was a couple of days before we had to submit them). Then literally the next day we had a company-wide meeting declaring "code red" due to the high number of errors and downtime we'd been seeing.
Gauging competency strictly by the number of PRs, then skyrocketing the rate at which PRs are written without adding any guardrails or adequately allowing for code review, is a recipe for disaster, and it just keeeeeeps happening.
notger@reddit
Whenever you do not think for yourself and whenever you do not write with your own hand, your brain is less active.
That is common knowledge and a general truth about everything related to our bodies: use it or lose it.
That study, however, is often taken out of context and used for the usual outcry. I am very critical of LLMs, but the rule is: if you criticise LLMs for damaging your brain, you must use your brain while criticising.
So in a sense, I find it very funny to read those alarmist comments that have not really read the study or used their brain analysing it, but claim that the use of brains is under threat.
elperroborrachotoo@reddit
I'm surprised by the "look how LLM bad" reaction.
Yes, we avoid engaging our brain, or, as they say, we avoid complex neural connectivity patterns, because they burn energy.
Did we just unlock a way to save energy on some tasks, so that that energy can be used on other tasks?
Heaps and heaps of blog posts have been written about "context switches bad".
Why then is a tool "bad" if it allows me to write an essay and not have it linger (a.k.a. being unable to quote from it minutes later)? Surely that suggests it may make context switching easier.
Rather than lament the lack of recall, shouldn't we ask which essays/emails to offload to the machine because we don't need to internalize them?
As brain workers, maybe we should work a bit on understanding our brains a bit better.
svenz@reddit
In my experience so far, the worst devs are those heavily using AI. But to be fair the AI code is marginally better than the garbage they’d write anyways.
robberviet@reddit
Yes, we did need it, like in any other field of science.
HoratioWobble@reddit
I'm too lazy to read this, lemme get chatgpt to summarize
randomInterest92@reddit
If the result is better even though we are dumber, that just confirms the value of LLMs.
There are a lot of things where we rely entirely on technology and are dumb on purpose, because being dumb is fine if the technology is reliable. Sure, LLMs aren't that reliable for a lot of things, but for some things they already are.
Example: making fire without tools. It was an essential skill until tools were good enough to reliably create fires, so most humans unlearned creating fire from scratch. Eventually technology came along and made it even easier. Nowadays almost none of us can make a fire even with helpful tools, and yet it's absolutely not a problem, because the technology to create fires for us is widely available and reliable.
AchillesDev@reddit
Coming from a neuroscience background (MS, and I have friends who are still in the field running EEG labs), I'm stunned that anyone is falling for it. It's written poorly, its conclusions aren't supported by its results, the PI immediately did a press tour misrepresenting the article, and it ascribes things to EEG that aren't really supported in the literature.
The fact that the 'study' has its own special domain is another red flag.
chaitanyathengdi@reddit
Beliefs, that is the thing.
"I believe this, so it must be true."
DigmonsDrill@reddit
https://croissanthology.com/earring
chaitanyathengdi@reddit
wtf is this doing here
chaitanyathengdi@reddit
The last statement is the sad truth.
ChatGPT endlessly complimenting you is just OpenAI's CYA, but it has basically made me stop trusting anything that thing says.
This is the problem with technology though: it's making the average person dumber and dumber.
IndependentProject26@reddit
This seems obvious. Where’s the study comparing management vs the people who actually do the work?
enter360@reddit
If you aren't using it as a skill-enhancement tool, you will lose your own creativity. I get it to start writing automations, then make it rewrite them as I add more constraints and edge cases.
TheNewOP@reddit
I find it extremely hilarious. I thought everyone knew that, psychologically, writing things down is better for knowledge retention and learning than typing. What happens when you outsource your learning to a fucking chatbot?
Franks2000inchTV@reddit
The thing is that a lot of "common sense" is completely wrong. So yeah, we do need studies to tell us these things.
No one complains when a study shows that common sense is wrong.
gizamo@reddit
From my experience, it's clear that most devs use AI much differently than people in most other professions. For most of our careers, we've been required to think things through pretty thoroughly before we do anything. Many other professions rely much more on repetitious trial and error.
ShardsOfSalt@reddit
I don't care; the study is probably bullshit anyway lol.
thephotoman@reddit
Most of my ChatGPT queries are requests for just an explained example of how to use an unfamiliar command (or a command where I need to know the BSD-specific quirks, because I'm used to GNU, but I'm using BSD).
pugworthy@reddit
Yea we did. And do.
And I don’t mean this as an anti AI comment at all.
Some of us are using it to advantage and not because we are juniors who don’t know how things work.
LuckyWriter1292@reddit
No. I've noticed that some people default to ChatGPT without doing any work or research.
I do my research and do what I normally would do, and then, if I get stuck or need ideas, I ask ChatGPT to help.
I wonder how much company data has leaked, how many API keys are in the public domain, and how much AI tech debt will have shipped that we'll have to fix.
DeterminedQuokka@reddit
I'm not surprised about the people copy-pasting. I'm a little surprised by how deep into any usage the dumbing effect seems to go, but I haven't read the actual study yet. It sounds like the people using it as a search engine were also noticeably impacted.
ninetofivedev@reddit
The problem with this study is that it doesn't really provide a conclusion. It just provides data.
It's basically just the basis for other studies. It really doesn't mean what you think it means, OP.
mailed@reddit
I just thought it was funny that it contained sections deliberately added to make LLMs produce incorrect summaries, to catch out all the news sites.
marmot1101@reddit
It takes conscious effort not to simply accept what the AIs tell us to do, and it's effort worth spending, IMHO. I try to build my habits around coming up with an idea and validating it, or asking subset questions ("I'm doing x, how do I do y part of it?").
The newer tools that just yeet an agent at the problem undermine this plan, but they also currently suck at certain tasks, so there's time to figure out how to keep the brain sharp enough to really parse through agent PRs with informed opinions. Things are going to be wild once those tools don't suck for most tasks. 20 years from now we'll either have great AI or awful software, probably both.
Fidodo@reddit
The correlation is expected but not guaranteed, and we don't know the magnitude. The point of a study is to study things in detail, not satisfy a headline.
quypro_daica@reddit
the chatbot has lied to me several times already; it has lost my trust
shoebill_homelab@reddit
From the paper in a misc. header:
It's funny that this paper is so widely being used to "prove" that LLM usage is lazy. It was in many ways made to invite this cheap conclusion. The paper also documents how people who use LLMs with intent exhibit far greater performance, even in retention.
So no, I'm not surprised by it.
PhilosophyTiger@reddit
Can you edit your post to include a link to the study? I would like to read it. Thanks.
UniForceMusic@reddit (OP)
https://www.brainonllm.com/
ronmex7@reddit
The people to whom it is a surprise have already had their brains rotted too far, by various life phenomena up to and including ChatGPT.
Alkyen@reddit
I'm not convinced it means much at this point; I'm curious for more studies to come out.
People probably started thinking less when calculators first became mainstream. A useful tool is expected to reduce the manual work you do; otherwise it wouldn't be useful in the first place. Until I see a clear drop in the amount of quality work done by me or my colleagues, I'll stick with the idea that AI is more likely to be helpful than harmful (or until more/better studies come out).
Coneyy@reddit
The least surprising thing about that study is how many people misquote its conclusion, or reference it without having looked at it, tbh.
shindigin@reddit
99% of SO users are arrogant assholes; you'll almost always make someone unhappy regardless of the situation/context.
With AI, it's a big relief that I rarely need to ask a question there nowadays, but that doesn't mean I support excessive use of AI, vibe coding, and whatnot.
I agree AI shouldn't be used unless you have an approach in mind, if you want to keep your skills sharp. I don't mind using it for syntax or for familiarizing myself with technologies faster, though.
3n91n33r@reddit
Not really, but it gives investors/high-level management a ladder to gently climb down from going all-in on AI and driving those decisions.