Am I in the minority for not wanting to use AI in my development?
Posted by tiny_cat@reddit | ExperiencedDevs | 613 comments
At the company I work for, the most senior engineers (and seemingly everyone on my team) seem to use AI for every stage of development: SQL queries, API design, FE design, documentation. And I’ve been asked why I don’t want to use it.
I have “feelings” about why I don’t like AI, or about where it’s worse for other industries (e.g. energy consumption, why read/look at something someone couldn’t be bothered to write, stealing, etc.), but nothing really concrete, so I’m worried I’m just being an old fart.
Anyone have any thoughts about this?
doomedbunnies@reddit
Here's my argument regarding AI for code:
For an experienced programmer, if we're rating by difficulty, writing new code is somewhere around a 2/10. Reading and understanding somebody else's code is maybe around a 4/10. Debugging a subtle bug in code you wrote is maybe a 5/10. Debugging a subtle bug in code that somebody else wrote is maybe an 8/10. (we can have a discussion about the specific numbers I've chosen, but I hope we're all in agreement that tasks get more difficult in this order)
My problem with AI for code is that it takes the initial "writing code" step -- by far the easiest of the tasks -- and turns it into "reading and understanding somebody else's code", a harder task. And then for maintenance and debugging it also puts you onto the "somebody else" versions of the task. It just.. geez, it seems like it must make everything harder.
That's purely a theoretical argument, though. I've never actually used AI to write code (largely because of the argument above, which seems simple and obvious to me), so.. maybe I'm wrong somehow? But geez the argument seems pretty airtight to me.
Now, if you're a *novice* programmer, and writing the code is actually more difficult for you than understanding somebody else's completed code, that's a different calculation and I could understand somebody wanting to use AI in that case.
But that's not an experienced programmer. And probably won't ever *become* an experienced programmer.
fried_duck_fat@reddit
Honestly this is a brilliant take
chills716@reddit
It’s a tool. What you choose to use is entirely up to you.
penguinmandude@reddit
This is true, it’s a tool. As such you may need to use it to keep up with the industry.
Extreme example but imagine you’re a builder that refused to use power tools. Well, everyone else will use them and you will be left behind.
It’s not a good idea to say “I don’t like these AI tools just because, and I won’t use them.” It’s fine now, but if progress keeps up, you simply won’t be competitive with engineers that do use them to boost their productivity.
OP, I’d say approach it with an open mind. You don’t need to use it for everything, or even anything right now. But be aware of it, keep up to date, and it wouldn’t hurt to learn how to use it anyway to protect yourself in the future.
blueish55@reddit
Idk, the difference vs power tools is that I cannot physically do what the tools do
You can write code without a bot to help you
TheRealBobbyJones@reddit
People can indeed physically do what tools do. Some people could even do it quite quickly. Power tools still beat out all such people.
blueish55@reddit
what are you even saying
TheRealBobbyJones@reddit
You said people can't replicate what power tools do. They can.
blueish55@reddit
Please do go find me someone that can cut thick materials without additional tools like a manual saw or someone that can grind metal
Different_Doubt2754@reddit
This was a while ago, but a power tool would be like a chainsaw. The manual saw would be the non power tool alternative. Most power tools are just the "powered" version of the original tool.
blueish55@reddit
Who cares, the comparison is still shit lol
Different_Doubt2754@reddit
Maybe in another decade it would be a better comparison
FoxRadiant814@reddit
Ever use a power drill?
TheRealBobbyJones@reddit
I mean obviously tools must be used. Similarly we must use tools to program. The point was about power tools and AI tools.
wirenutter@reddit
Yeah that is the analogy I always use. I’m sure there were roofers who refused to use nail guns. You’ll just find yourself outclassed by everyone who does.
cserepj@reddit
Yeah, but these "nail guns" shoot a large percentage of the time in a random direction instead of where I'm pointing. They tell me I should "prompt" them better, but still.
Forward_Recover_1135@reddit
If you’re using AI to write code that you don’t understand that’s on you. The AI isn’t committing directly to main. You use it to enhance your productivity, not do your job for you.
cserepj@reddit
That is the general idea these CEOs envision, though - no human interaction during coding.
Schmittfried@reddit
Not really tho.
Wonderful-Habit-139@reddit
No, for real. It hallucinates so much that if you're using it for something you don't know about (because you won't ask it something you already know, right?), you can't even tell whether it's saying something correct or not. And the odds are not good.
Schmittfried@reddit
Yes you can and the odds are mostly fine. If you’re experienced it’s noticeable when you reach its limits.
ElderWandOwner@reddit
I think most devs use it for things they "know". Boilerplate code, simple methods, etc. I will know right away if it's wrong, but it still saves me time with that stuff.
Wonderful-Habit-139@reddit
A good dev env with snippets, macros and autocomplete seems good for these kinds of things, and are correct. But maybe I'm biased because of how I write software.
ElderWandOwner@reddit
Those don't always exist
Wonderful-Habit-139@reddit
I see. But LLMs will struggle even more in situations where there aren't many tools and resources made to manipulate data/code/envs, no? Considering they hallucinate on things they know about, and they can't do anything when it comes to new things.
Shanix@reddit
I love how everyone has to include this whenever they talk positively about machine generation. Just a wonderful subtle reminder it ain't doing it right now. And if we're being honest, it won't do it in the future either.
FoxRadiant814@reddit
Used o1-mini recently?
Shanix@reddit
Does it lie to you?
FoxRadiant814@reddit
Can you read code?
Shanix@reddit
Nah, don't throw it back to me. Does the large language model still lie to you?
Because if it does, then it's less than worthless.
FoxRadiant814@reddit
Nah, that's an insane requirement. It's part of the technology that it will be wrong sometimes. So will other people. So will the internet. You need to be capable of research and critical reading, which is required for coding anyway. But clearly, the inaccuracies are worth it. If you have some asinine 100% source accuracy requirement in your life, go live in a cave and stop talking to people.
Shanix@reddit
Dude what? Making sure the thing doesn't lie to you is an insane requirement? That's foundational.
FoxRadiant814@reddit
To what degree of confidence?
Galuda@reddit
But it’ll be so amazing someday! Exponential growth! Cool, call me when it’s ready, it’s not challenging to use once (if) it works.
Wonderful-Habit-139@reddit
Even then, if they want progress I think there needs to be a different way to create an AI, and not by using LLMs. We need an AI that can actually critically think and know how to do things despite never seeing it before.
Thormidable@reddit
There are attempts to improve this; personally I'm not entirely convinced by them. Don't get me wrong, I'm not saying it will never be solved, but I'm not convinced by the current attempts.
ghost_jamm@reddit
What if the power tools screwed things in randomly and you had to go back and check each one to make sure it was actually in the correct hole? Would it be unreasonable for me to just use my screwdriver and do it myself?
Rabbyte808@reddit
For you personally? No.
But if you were working with a team of carpenters who were successfully using power drills while you insisted things had to be hand screwed because you couldn’t trust a power drill to not strip screws or overtorque them, then that’s a you problem and not a problem with the tool.
FetaMight@reddit
How do you successfully use a power drill that screws things in randomly?
The point is that if the new tool isn't reliable and, on average, requires you to spend just as much (if not more) time reviewing and fixing its mistakes than you would have spent without it, then why use it?
That a team of carpenters are choosing to waste their time is not a good reason for me to waste mine.
FoxRadiant814@reddit
You can type faster than you can read?
FetaMight@reddit
I think you just revealed more of your hand than you'd care to.
If you're using AI to generate code that is trivial to review what are you gaining from it?
This just further supports my belief that the people claiming AI tools boost their productivity are just not writing complex systems.
FoxRadiant814@reddit
I write pretty advanced code in lots of languages. I have carpal tunnel. I like things to write for me. The goal of devs for years has been to do more with fewer keystrokes, and that’s 100% my goal too.
FetaMight@reddit
For the systems I tend to work with, reading is definitely slower than writing. And that's typical (not just me).
It's possible we just work with different kinds of systems and AI assistance works in your environment but not in mine.
I don't understand why some of you insist I MUST be lying when I say that for my work AI tools slow me down. Not all software development is the same.
Recently I completely overhauled a data acquisition system written to run on a PLC. The language used wasn't typical, the hardware wasn't typical, neither were the architecture, use cases, performance bottlenecks, or the optimisations I used. The domain was also incredibly niche.
AI was 100% useless to me there.
More recently, I worked on implementing a new data visualiser using a popular library. AI would have been helpful there, but the documentation was already so good I didn't need it.
Sometimes AI is a viable tool, other times not.
I'm getting tired of the AI fanboys insisting I must be a troglodyte if I don't use AI.
What kind of stuff do you write? If it's common web, crud, or small scripts I get it.
FoxRadiant814@reddit
Oh yeah on PLC I’m sure it doesn’t have a clue.
I’m all over the place at work helping different teams: AI/ML, DevOps, backend, and architecture, and in my spare time I do game dev. AI tends to work well at both the trivial and the academic, as long as the academic is within certain grades of common knowledge. It’s helped me well with diff eq when doing sims for my game, but it also writes queries well, solves random bugs from logs in the variety of languages I have to randomly work in day to day, and is super useful for scripts and functions of all kinds.
iupuiclubs@reddit
The rest of the team is using their power drills great. I would have the question why only yours is screwing things in "randomly".
FetaMight@reddit
iupuiclubs@reddit
We're in stand-up, you're the only person on our 7 person team that has this opinion. Are you thinking you're just plain smarter than everyone else on the team, yet failing at using similar tools?
I dont think we would be on the same team long.
FetaMight@reddit
I really don't see what your point is.
We have different opinions due to our different experiences. So do the teams we work with.
I don't think that's enough information to accuse each of being difficult to work with.
Can you not imagine a team that had tried AI tools and found that the quality of the output just didn't justify the time invested in using it?
iupuiclubs@reddit
I mean you could scroll up and orient on the conversation. I'm supposed to explain AI, stand ups, comms, hubris, and lack of understanding on a project to someone asking me how many years of experience I have, not wondering about themselves lol.
The first post orients you on "if everyone else is using power tools fine, and you insist on hand-torquing", and your response is "everyone else isn't using power tools fine". Visualize this on a construction yard. You would appear to have schizophrenia, as everyone around you keeps drilling away.
Which is why I'm not engaging you in explaining the benefits of power tools over hand-torquing. You refuse to even hear the drills going around you, and are ready to cast out everyone else on the construction site for not hand-torquing.
I would say you were hallucinating in that case, funny huh.
FetaMight@reddit
Holy crap. I certainly wouldn't want to work with you based on just that first paragraph.
limeda1916@reddit
Downvoted because you sound like a know-it-all. Ironic that you speak of narrow windows; I think your viewpoint is narrow-minded.
epoci@reddit
Have you seen how some people that are just starting out use google to search for information? They do it in a way that is very sub-optimal and struggle to find what they need. Imo the same applies with LLMs, once you learn how to use it and what to use it for, it's a power tool
ghost_jamm@reddit
I think the problem is that I do not agree with the analogy that AI is a power tool. My refusal to use AI is not me being a Luddite and being left behind by a refusal to use new technology. I’m fine adopting new technology, but I have to believe that the tech is genuinely helpful and saves times. Just today, I saw this article:
Hence my question. If I can do just as good a job in just as much time with my screwdriver, why do I need the power tool that will mess up 41% of the time, requiring me to re-do the work?
certified_fkin_idiot@reddit
I'm not agreeing or disagreeing with you in any way.
I'm just saying that "study" isn't actually any comprehensive study or anything. It's literally a marketing document Uplevel put together to get leads for their sales team.
Some guy in marketing probably threw that study together in a few days.
FoxRadiant814@reddit
Yeah and it can’t possibly be right. People aren’t reading their outputs.
Copilot is actually pretty dumb compared to 4o or o1. No way they are using it. But still, it can do boilerplate as well as most of them, and that’s all I want in my IDE.
certified_fkin_idiot@reddit
Does copilot not use 4o under the hood?
FoxRadiant814@reddit
Judging by the quality, it doesn't. But I can't say for certain. There are night-and-day differences in quality, though. I don't think they could offer their service at the token rates they churn through (it's constantly reading your code and getting responses back) with the highest-end models.
stephenjo2@reddit
I saw that article as well but I think Copilot is old right? I think it was introduced in 2021. Newer models like Claude 3.5 Sonnet are much better at coding.
paradoxxxicall@reddit
Copilot uses the latest version of chatgpt
AmputatorBot@reddit
It looks like you shared an AMP link. These should load faster, but AMP is controversial because of concerns over privacy and the Open Web.
Maybe check out the canonical page instead: https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
FoxRadiant814@reddit
Can experienced devs not read and unit test code? That’s what I don’t understand about people concerned about AI hallucinations. The boilerplate being faster than I can type perfectly justifies me reading it line by line to ensure it’s right. And it usually is. Not only that but it can document and unit test, and then all you have to do is check the unit tests and ask for specific ones.
Experience means you can fact check. It’s the noobs who can’t.
ghost_jamm@reddit
That’s my point though. There are code and doc generators already that don’t rely on AI guessing that something might be correct. If I use create-react-app to stand up a new React project or JSDoc to document a class or method, I don’t have to go over everything with a fine-tooth comb to make sure the script didn’t “hallucinate” some nonsense. I just don’t see what there is to be gained from it.

As for tests, once you have the basic infrastructure set up, many tests are small adjustments that can essentially be copy-pasted with whatever changes are necessary. I’d argue that writing your own tests is good practice because it reinforces your knowledge of the system and, if done well, acts as another form of documentation on how your system works.
FoxRadiant814@reddit
You’re just talking about templating engines right? I don’t see how that compares at all.
ghost_jamm@reddit
Right. What I’m saying is “Is it actually a time saver over writing things yourself if you just have to go back over everything again?” Can I do that? Sure. I can also just write the code and not worry about it.
I also think there are positives to writing your own code above and beyond whether or not AI is slightly faster. I know personally that I learn best by actually doing something. If all I’m doing is copy-pasting someone else’s work, whether it’s from Stack Overflow or AI, it might work fine, but how much do I actually understand of what it’s doing? If I think through the logic of something and write the code and find a bug, I often have intuition about what is going wrong; that might not be the case with code I didn’t write.
To each their own. If you think it helps you, great. I need to see way, way more reliability from it. Frankly, given that AI simply isn’t designed to give a “correct” answer to anything (as opposed to a “correct-seeming” answer), I have my suspicions about how well it can ever work out, at least in its current approach.
FoxRadiant814@reddit
Can you read faster than you can type? I certainly can. Let alone iterating on a design.
I've been in the industry long enough to know what I'm doing. These days I'm trying to go as fast as I can. Because I have work to get done.
IMO I spend 50%-80% less time writing 90% of code with LLMs.
ghost_jamm@reddit
Look man, if you want to use it, I’m not stopping you. It’s just not for me, at least as it currently exists.
FoxRadiant814@reddit
And I’m just saying that in industry, the people I know that don’t use LLMs are starting to be known for not using LLMs, because they are slower.
FoxRadiant814@reddit
Let me just walk you through a few things that AI has helped me with recently.
Just today, I had to implement an algorithm I had never used, called dirty rectangles, in Rust. There was very little information about it online, at least outside the context of graphics. GPT o1-mini accurately held a conversation with me discussing many different implementations of the algorithm and their time complexity. It also gave me the implementations, and they seem correct; if they aren’t, my tests will catch it, and I’m sure it’ll just be a small thing. I needed to write tests anyway, and now I can focus on those. To me, this frees up time to write tests and benchmarks, which improves the quality of my code.
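The dirty-rectangles idea itself is small: track which regions changed since the last frame, merge overlapping ones, and redraw only those instead of the whole screen. A minimal sketch of the merging step, in Python for illustration rather than the Rust the commenter used, with made-up rectangle tuples:

```python
# Illustrative sketch of dirty-rectangle merging -- not the commenter's Rust code.
# Rectangles are (x, y, w, h) tuples.

def union(a, b):
    """Smallest rectangle covering both a and b."""
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    right = max(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1] + a[3], b[1] + b[3])
    return (x, y, right - x, bottom - y)

def overlaps(a, b):
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def merge_dirty(rects):
    """Repeatedly merge overlapping rectangles until none overlap.
    O(n^2) per pass, which is fine for the small counts typical per frame."""
    merged = list(rects)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if overlaps(merged[i], merged[j]):
                    merged[i] = union(merged[i], merged[j])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged

# Each frame: redraw only the merged dirty regions.
dirty = [(0, 0, 10, 10), (5, 5, 10, 10), (100, 100, 4, 4)]
print(merge_dirty(dirty))  # [(0, 0, 15, 15), (100, 100, 4, 4)]
```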
A little while back, I needed to write a Kubernetes policy in Rego. I don’t know Rego, and apparently neither does ChatGPT. The code it output was completely wrong. However, it was close enough to right, and logically held up enough, that I was able to look up the syntax errors and get the project done in a day without knowing the language. Rego is hard, and everyone hates it. There were no examples of similar code online, and this language is esoteric enough I don’t think I would’ve been able to translate the logic very easily without completely learning the language.
Strus@reddit
I still meet a lot of engineers who cannot use a debugger at all, or who just use simple breakpoints and step through the code. Or they write code in a simple text editor without any autocomplete. Or they cannot use a terminal and do everything in GUIs, which takes them a lot more time.
Also, many companies are hesitant to give AI access to their codebase and ban usage of AI tools.
People that won't adopt them will be fine for a long time, or even forever.
the_wind_effect@reddit
I heard someone at my place describe it as "you'll be the person in the office who was still reading books while everyone else moved onto the internet"
wuzzelputz@reddit
Which is not the best analogy, because there are books, which are vastly better than content from search engines or LLMs.
the_wind_effect@reddit
Absolutely.
I probably paraphrased it badly. The office I worked in used to have 100s of books on languages or software development practices on little book cases dotted around. Slowly over time they all became monitor stands.
No-one went to the books for quick questions, they went to the internet.
Soon no-one will go to Stack Overflow to get some quick implementation; they will use AI.
Schmittfried@reddit
Since basically all relevant books are available online, the Internet is for all practical purposes a strict superset of books.
The only advantage books have is that they are distraction-free, which to be fair is a very good reason to use them. But there are also many advantages to digital content.
eddie_cat@reddit
How embarrassing, reading books
What a weird thing to say lol
djnattyp@reddit
Yeah, why should I "reed books" when I can get all my information from Mr. Beast, TikTok and Facebook.
qpazza@reddit
I don't know about that analogy. I'd change it to "being the person still going to the library instead of using the Internet for information."
Or maybe I'm old
Forward_Recover_1135@reddit
Yeah the actual appropriate analogy here is you’re still pulling, like, encyclopedias off a shelf and manually paging through them to look something up instead of just searching online.
TheRealBobbyJones@reddit
As the other comment mentioned, if it makes most other devs more productive, then not using it is really not an option.
Pad-Thai-Enjoyer@reddit
This. Reddit has such a hate boner for using AI, even if you’re just using it as an aid
bokmcdok@reddit
But you should use it for what it was designed to do. It wasn't designed to create SQL queries. If you use a bad query because an LLM told you to, the problem isn't with the tool.
chills716@reddit
If you follow anything blindly it is less of a tool and more a reliance or crutch.
While I’ve seen “bad” output both from it and from devs that didn’t use it, what “bad” actually means is also very subjective. To your point, however, I have seen it spit out really crappy code in general. I don’t see a difference between devs blindly copying from Stack Overflow or various other sources and devs using AI; the source of the information is probably the same anyway.
Also, I don’t know what it was designed to do. Copilot and a few others I know of were made with the intent of making coding faster; I haven’t used those and can’t comment on their effectiveness. Still a tool - if you don’t know how to hammer a nail, is it the hammer’s fault?
bokmcdok@reddit
I'd say it's akin to using a hammer to force in a screw. It can work, kinda. But it's going to lead to problems because you aren't using the right tool. At least with Stack overflow there's context for the information. With AI it can just make stuff up and it's still working as intended since it isn't designed to deliver accurate information.
chills716@reddit
I find it absolutely hilarious that attorneys have used it and it cited made up cases!!
voodoo_witchdr@reddit
I had and have reservations as well. I use it in a very limited capacity. Kinda like an advanced auto-complete.
Ilookouttrainwindow@reddit
IntelliJ has upgraded their auto complete recently. I personally prefer the old approach, it really worked better. Now they added AI suggestions (or whatever they call it) that completes whole statements. Eh. Hit and miss for me. If it's a lot of repetitive code it works rather nicely, but if it's actual coding then it can become a distraction. At this point I simply miss old autocomplete logic.
maleldil@reddit
Yeah, I find it most helpful when I'm writing rote, boilerplate code which can't be auto-generated. I absolutely do not trust it when I'm making any changes to business logic, though, as it tends to guess at what conditionals I want in an if-statement and usually guesses wrong, which is easy to miss in the moment, and you end up having to go back and debug the whole thing to find the logic error. So I just ignore its suggestions for anything more complex than copying fields between objects these days.
IntelHDGraphics@reddit
This new local AI in IntelliJ IDEA started to use a lot of RAM, I had to disable it
jmking@reddit
Same experience here. When it nails it, it's great. When it doesn't, it's just an annoyance that causes me to have to type more because it won't give up on its suggestion, and I can't even get simple auto-complete for inferred types and whatnot, so I have to type it all out until it finally f's off.
The worst is when it looks like it's nailed it, but there's something like a suggested function param that is incorrect that I didn't catch and it ends up just making things confusing until I suss out what I'm missing and what's not needed.
binarycow@reddit
You can turn off the full line completion, and revert to normal behavior.
nevermorefu@reddit
Oh thank God. I was just thinking I need to remember not to update.
funguyshroom@reddit
Same with visual studio. Most of the time it suggests some nonsense that derails my train of thought and makes me type more since it no longer autocompletes just variable/method names.
Tinister@reddit
I wish there was a way to tune it to not autocomplete comments.
Western_Objective209@reddit
Yeah, the first day it was kind of cool, but I prefer the old autocomplete and went back, because the AI one almost never gets it right, so I have to go back and edit it, which is much slower than just getting it right the first time.
nickisfractured@reddit
The problem is that even what it suggests needs to be explicitly read over a few times to make sure it’s half-decent code, which most of the time it’s not. It’s almost more time spent trying to decipher which answer from Stack Overflow it’s spitting out, when 99/100 answers are just shameful 😝
joshdotsmith@reddit
This exactly. I am not yet clear as to whether I’m faster or slower using it. Sometimes faster, but I certainly notice the times when I am so much slower. The red green refactor cycle here feels far more tedious than me writing tests and code.
Ok-Pace-8772@reddit
That's objectively untrue.
It's good at suggesting basic code. For example in a language you don't know or code you'd have to otherwise write manually.
It's not good at complex code. If you know these two facts you can utilize it to save a few minutes here and there.
maigpy@reddit
it can guess things properly if you use consistent naming; it learns from your code base.
EdMan2133@reddit
It does not learn from your codebase. At best it will learn from your query history, up till the point you saturate the number of tokens it can handle.
maigpy@reddit
that's just not true.
nickisfractured@reddit
Simple code I can write myself without having to use a crutch like ChatGPT; it’s the complex code where it would actually be helpful, which it isn’t.
scragz@reddit
it's saved me so much computer time on basic stuff that I can overcome my physical disability and do coding again professionally.
Imposter24@reddit
Exactly. It’s weird seeing the ego here. “I can write the simple stuff myself” ok no one said you couldn’t. This tool is made to generate text very quickly and so it’s the perfect use case for getting the simple stuff generated fast so you have more time for the more complex tasks. It’s not a crutch it’s a tool.
eddie_cat@reddit
But like....typing is never the bottleneck when writing code. Most of us probably type really fast. I spend much more time thinking about code, typing it is the easy part.
Ok-Pace-8772@reddit
It's just their fragile ego talking.
kerabatsos@reddit
People who say that probably aren’t as good as they think they are. And I disagree that it doesn’t write and work well with complex code. If your prompt is simple, with limited insight, you get back simple answers with limited insight. If you have a complex problem, it serves you well to understand the details of the problem.
breakslow@reddit
Having it take over the simplest (most boring) stuff you write makes coding much more enjoyable.
Ok-Pace-8772@reddit
Yeah keep writing the simple code every junior can write over and over again. That's the right strategy right there.
nickisfractured@reddit
Wut ?
AcesAgainstKings@reddit
I'm with you. It also learns to match your coding style. Garbage in, garbage out.
If you structure your code to be small functions completing basic tasks it essentially writes it for you.
ActuallyFullOfShit@reddit
I think this is partially true and partially not. Even with simple code, it doesn't always do very well. It might get the structure right, but it will hallucinate APIs left and right.
Western_Objective209@reddit
Last month I had it generate some code for me rather than just reading the library docs; it was taking audio from a microphone and piping it into something else. It just was not working, so I saved off the audio to a file, listened to it, and it was just white noise. I could not figure it out and struggled with it for hours; then I noticed in the documentation that the samples were supposed to be 32-bit integers, but the ChatGPT code was setting them to 32-bit floats. Literally hours on a bug I never would have written myself, in code I could have written in like 15 minutes. This has happened to me like a dozen times now, so I've decided to barely use AI-generated code. Even if it helps me with a concept, I'll just write the code myself.
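The audio library involved isn't named above, but the bug class is easy to reproduce: 32-bit integer PCM samples reinterpreted as 32-bit floats are meaningless values, which play back as white noise. A minimal NumPy illustration (not the commenter's actual code):

```python
# Reproducing the bug class described above with NumPy -- not the commenter's code.
import numpy as np

# Stand-in for real PCM audio: a ramp of 32-bit integer samples.
pcm_int32 = (np.linspace(0, 1, 8) * 2**30).astype(np.int32)
raw_bytes = pcm_int32.tobytes()

correct = np.frombuffer(raw_bytes, dtype=np.int32)    # what the docs called for
wrong = np.frombuffer(raw_bytes, dtype=np.float32)    # what the generated code assumed

print(correct)  # the original ramp, intact
print(wrong)    # the same bits read as floats: meaningless values, i.e. noise
```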
stevefuzz@reddit
Exactly my experience.
SketchySeaBeast@reddit
Yeah. Every once in a while Copilot presents me something that I'll use, but I never actively go for it. Every time I ask it to /fix, it makes random, unhelpful changes to my code.
8Eternity8@reddit
I fed it a set of scanned spreadsheet images that it turned into a CSV. THAT was useful.
HelloYesThisIsFemale@reddit
My barrier for the code I will manually type has dropped a lot. For example, if I need a variable from an outer function, I will type the first two letters and let it autocomplete, delete the end of the signature, add a comma and let that autocomplete the parameter, then go to the usage and let that autocomplete passing the variable.
It's that sort of low-level stuff, avoiding keystrokes, that it's great at.
Plus working with new libraries: I don't know the fundamentals of pandas yet, but if I go step by step with comments and autocompletes I can do anything.
eddie_cat@reddit
IDEs have been able to do this forever if you know how to navigate them properly
Xist3nce@reddit
Autocomplete or IntelliSense isn't nearly as robust. While they don't make mistakes, it's crazy to load up a severely tech-indebted, large codebase, start typing a function, and have the AI go "Oh, I think you mean this instead" and bring you to the right location in source for a specific thing you couldn't find with IntelliSense due to the aforementioned spaghetti. The errors suck, and it can't be relied on for anything deeper than boilerplate and autocompletion for now, but it's definitely useful.
Logical-Volume9530@reddit
Kinda, but for entry level or for learning a new programming language it helps a lot. It also helps for finding specific functions and how to apply them, with better examples than in the docs.
hoodieweather-@reddit
I recently started trying to use it for scaffolding out unit tests, and especially for backfilling untested code it does a decent job. It saves a lot of tedious effort; that seems to be the best use for me - let it handle the boilerplate.
behusbwj@reddit
Isn’t that the primary use case? You’re using it the way most people are lol
Economy-Force-9014@reddit
this is the way
microwavedHamster@reddit
It's pretty good to generate/explain regex (regexes?).
poralexc@reddit
IDK, one of my colleagues was using some crazy Perl-style lookahead expression from GPT when it really should have been a relatively basic pattern.
nobuhok@reddit
regi
EscapeGoat_@reddit
regexen
abrandis@reddit
Agree - for me it's a better Stack Overflow or Google. I would say it's only good for 50% of the questions, since the other 50% are edge cases it will never have enough training data on and will not produce reasonable answers for. It's just a tool...
Steinrikur@reddit
I like to think of it as pair programming with a fast programmer who has read a lot of books on the subject but doesn't really understand what he writes. If you use the results without checking them, that's on you.
I rarely use it myself but if I need boilerplate code or info on how to use a tool it's fast and easy.
stevefuzz@reddit
That's literally what it is ok at. Everything else is a bug factory.
chunky_lover92@reddit
Somewhere around my 1000th for loop I stopped giving a fuck if it was me typing it or if the AI did it for me.
Meepsters@reddit
We've had autocomplete and templates for years
chunky_lover92@reddit
exactly.
KronktheKronk@reddit
This is the way.
It gets me 80% of the code skeleton I need to implement some algorithm, and then I refine it.
Except tests. I love that it can spit out an extra few tests based on my first test. That shit is magic.
c_law_one@reddit
I'm trying to build something in Godot at the moment.
If I run into issues there or linux issues I am having difficulty resolving I might query it for ideas.
owiko@reddit
I used to be against it, thinking that I can write code faster. I’ve been doing this for 30+ years and old dogs/new tricks is a real thing.
However, as part of my job, I need to help others. So I spent some time working through a lot of scenarios to see the good, bad, and ugly. You need to be really explicit, like you’re telling an intern what to do, to get the best output. I’ve found that utility code and general functions are great targets, and having that generated gives me more time to focus on more challenging problems and on helping others. It’s definitely saved me a bunch of time.
ramen_eggz@reddit
I use it for stuff like "How can I use Azure AD B2C to authenticate my mobile app with my API using OAuth 2.0 Authentication code flow with PKCE"
I don't ask it to write code, though the code samples it includes along with the rest of the response can be helpful
GloriousShroom@reddit
Nope. Do you worry about your other tools that automate parts of your job?
sieabah@reddit
I don't personally use it because over time it would turn me into a developer analyst, more focused on the correctness of the prompt's output than on solving the problem given the business needs and constraints. I'd be crafting specific prompts in order to coerce a system into spitting out the most correct answer. I may waste a lot of time doing that, when if I just thought about the problem I could see how it fits into the larger context (something the AI may not be able to do because of nuance in a business need).
What I do see value in is sometimes asking CoPilot specifically to give me leads for API documentation. Another thing I've toyed with, but haven't done much with, is giving CoPilot the documentation or an example plus my specific modification requests and having it spit out code. It hallucinated way too much to be useful the times I tested it.
All the AI-generated code I've kept amounts to a single function. It was just a helper which took some arbitrary data and put it on the Puppeteer clipboard depending on the format of the data. It also happened to get it right; every other time I've prompted for that snippet, it couldn't produce anything that worked.
Professional_Bank50@reddit
It makes me uncomfortable. If something happens due to AI who is held accountable? Feels like a lawsuit waiting to happen
68696c6c@reddit
I’m with you. I much prefer a search engine + StackOverflow if I had a question about something. I don’t need an LLM, or anything else really, to write queries or solve design problems for me; that’s my job and I’m good at it. I suppose an LLM could be helpful with documentation or other softer stuff like that but tbh, I just have no desire whatsoever to interact with an LLM.
KillPenguin@reddit
I'm mostly with you. I think it can come in handy when you need to write a lot of boilerplate, or do some kind of batch refactor which can't quite be automated through normal means. But whenever I've had copilot on for general development, it really annoys me. It's like someone constantly trying to finish your sentence for you before you can even fully articulate what you want to say.
damondefault@reddit
Yeah, I found this too - if the task is easy enough that an LLM can do it, then it's usually no problem for me either, and if it's something that needs a bit of thinking about, then the LLM just wastes time making suggestions that I have to double-check and then reject before doing it the way I needed it to work.
labouts@reddit
There are harder tasks they can do well, although the quality of your system prompt is critical. Here's one I've been iterating on that's working well for me.
It depends on why the task is hard. They do worse when the difficulty comes from key context spread across a large surface area, but they handle more self-contained difficulty better.
For example, I've been writing a lot of experimental loss functions. I can write a naive version that's missing key logic and say, "make this differentiable, add a term that penalizes X, and expose anything that makes sense as a configuration parameter with init and setters".
The logic can be a bitch, especially keeping it performant, adding stability measures, and keeping it differentiable, depending on my goal. I can do it myself, but it'd take much longer and have more bugs.
Localized difficult tasks like that are my biggest use for LLMs.
prehensilemullet@reddit
What’s an example of a real-world requirement you used that type of prompt for? The prompt itself contains a toy example
labouts@reddit
One example is a transformer model I’m training to generate token sequences with specific CLIP features. Initially, I trained it using teacher forcing for the first few epochs to bootstrap it—essentially matching specific tokens to guide it early on. However, that approach doesn’t fully align with the model’s true purpose. Instead, I want to take the token outputs, feed them into a CLIP model, and calculate the cosine similarity between the resulting features and a target.
The challenge is that the embedding layer, which converts discrete tokens into inputs for the encoder, acts as a lookup operation that kills gradients. Additionally, since my output is logits over the vocabulary, taking the argmax to get tokens also breaks gradient flow.
One potential solution is multiplying the logits with the embeddings matrix. However, that involves multiplying a tensor with the shape (batch_size, 77, 49807) by a (49807, 768) matrix on every batch. This not only explodes memory usage but also makes gradient calculations extremely complicated.
Fortunately, I found a loss function with help from LLMs that avoids the massive memory overhead while still being differentiable, making it a more practical solution for my training process.
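For concreteness, the memory-heavy differentiable bridge described above (not the commenter's final loss, which isn't shown in the thread) looks roughly like this in PyTorch:

```python
# A sketch of the naive differentiable bridge -- NOT the commenter's final,
# memory-friendly loss. Shapes are the ones quoted in the comment.
import torch
import torch.nn.functional as F

batch, seq_len, vocab, dim = 2, 77, 49807, 768

logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
embedding_matrix = torch.randn(vocab, dim)

# argmax -> embedding lookup kills gradients; instead, soft-select embeddings
# by multiplying token probabilities with the embedding matrix. Differentiable,
# but it materializes a (batch, 77, 49807) probability tensor every step.
probs = F.softmax(logits, dim=-1)            # (batch, 77, 49807)
soft_embeddings = probs @ embedding_matrix   # (batch, 77, 768)

# soft_embeddings would then be fed to CLIP to compute cosine similarity
# against the target features.
```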
Another experiment I’m running involves taking two sets of latent features—one representing “original” features and the other representing “theme” features—and combining them in a specific way to produce new latent features.
I’m doing this by generating a third set of features, which start as the mean of the original and theme features and serve as parameters for an optimizer. The goal is to minimize a loss function designed to position the new latent features in a part of latent space that has the desired properties relative to the original and theme features, in ways that closed-form arithmetic operations can’t achieve.
I also have a large set of static “reference” features, and I calculate the similarity between these references and the original, theme, and optimized features. The loss function operates on these similarities, using them to create a gradient that shifts the optimized features based on how the original and theme reference similarities relate to each other.
The loss function breaks down into several discrete cases based on these relationships:
Case 1: The theme similarity is roughly orthogonal to the reference features. The optimized features should lean toward the original, with more weight if the theme is on the same side of the orthogonal line as the original.
Case 2: The original similarity is roughly orthogonal. The optimized feature should be pushed toward matching the theme’s reference similarity but with moderate weight, regardless of the original’s position relative to the orthogonal line.
Case 3: Neither is orthogonal, and the original and theme are in opposite directions. The optimized feature should strongly match the theme reference similarity, making it very close to the theme for these reference features.
Case 4: Neither is orthogonal, but both lie on the same side of the orthogonal line. Here, the optimized feature should be pushed moderately toward matching the average of the original and theme reference similarities.
These four cases need to blend smoothly without any discontinuities, which is tricky. Using torch.where isn’t sufficient because the loss changes in a complex way across multiple reference similarities per original-theme pair.
The loss function is critical to my experiment for reasons a bit too complicated to explain here. It's challenging to design loss algorithms that are acceptably efficient while ensuring smooth loss curves with respect to the optimized similarities for all possible original-theme similarity pairs.
LLMs also noted ways to improve stability in edge cases that wouldn't have been obvious to me. Otherwise, I'd have needed to notice the subtle problems those stability issues sometimes cause and diagnose them after the fact--a significant time sink.
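For illustration only, and not the commenter's actual loss: one generic way to blend discrete cases smoothly is to replace hard torch.where branches with sigmoid weights that sum to 1, so the combined value stays continuous and differentiable across case boundaries. The per-case targets below are placeholders standing in for the four cases listed above:

```python
# A generic sketch of smooth case-blending with sigmoid weights -- an assumption
# about the shape of the problem, not the commenter's actual loss function.
import torch

def smooth_blend(orig_sim: torch.Tensor, theme_sim: torch.Tensor,
                 sharpness: float = 10.0, orth_band: float = 0.1) -> torch.Tensor:
    """Blend four per-case targets with weights that sum to 1 and vary
    smoothly, so the result stays continuous where torch.where would jump."""
    # Soft "roughly orthogonal" indicators: near 1 when |similarity| is small.
    theme_orth = torch.sigmoid(sharpness * (orth_band - theme_sim.abs()))
    orig_orth = torch.sigmoid(sharpness * (orth_band - orig_sim.abs()))
    # Soft "opposite directions" indicator: near 1 when the signs differ.
    opposite = torch.sigmoid(-sharpness * orig_sim * theme_sim)

    # Placeholder targets standing in for the four cases in the comment.
    case1 = orig_sim                      # theme orthogonal -> lean original
    case2 = theme_sim                     # original orthogonal -> match theme
    case3 = theme_sim                     # opposite directions -> strongly theme
    case4 = 0.5 * (orig_sim + theme_sim)  # same side -> match the average

    # Weights multiply out so that w1 + w2 + w3 + w4 == 1 everywhere.
    w1 = theme_orth
    w2 = (1 - theme_orth) * orig_orth
    w3 = (1 - theme_orth) * (1 - orig_orth) * opposite
    w4 = (1 - theme_orth) * (1 - orig_orth) * (1 - opposite)
    return w1 * case1 + w2 * case2 + w3 * case3 + w4 * case4
```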
prehensilemullet@reddit
Hmmm, that’s certainly deep into theory I’m not familiar with. But it sounds like this is more of a case of making mathematical decisions and translating them into code that can be run in a scripting context? Most of my work is engineering full stack web application features that tie a lot of systems together, and I don’t know where I would begin if I were trying to prompt an LLM to code up a feature
labouts@reddit
There is a significant code aspect to making loss classes that plug into my model, dataloader, and training-related classes. I focused on explaining the differentiable loss part since you asked about that.
Prompts like that inherently tend to contain a lot of proprietary information. I'm getting to sleep, but I can try to remember to scrub a decent example to make it shareable.
The gist is feeding it context in a particular structured manner and leaning heavily into chain of thought techniques. Never ask for a solution in one shot.
Start by asking it to repeat its understanding of the task and predict what details are important to understand before attempting a solution. After that response, prompt it to study the relevant code based on what it identified in the last response, giving verbose details about what it notices. Correct any misconceptions you see during this and ask it to dig deeper into anything it neglects.
Once that prep is finished, the context is primed to start solving it. Ask it for a detailed step-by-step plan of bite-sized subtasks based on the conversation so far. Tell it to wait for your approval to start. Once it has a plan you like (ask for modifications if you see a flaw), have it proceed with the first step.
From there, give feedback on each step and tell it to proceed when you're satisfied. Once it's completely done, ask it to output the complete in-context code it wrote with all steps and any changes you requested applied.
It's more involved than simply giving it a task and copy-pasting the response. You need the ability to understand what it's doing at each point. Despite that, it can still turn a 2 to 4 hour task into a one hour task. It has the bonus effect of being a built-in pair programmer that can notice things you would have missed, like better error handling approaches or ways to make code more flexible for future changes. It'll do tedious things like writing comprehensive tests quickly as well.
How well it performs depends a lot on the quality of your system prompt (or first prompt if you're not using it as a system prompt). The one I posted works well as a general one even though it has some AI specific language. It's easy to swap those lines with instructions that focus on the type of work you do as well.
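Mechanically, the staged flow described above is just a multi-turn conversation. A minimal sketch with the OpenAI Python client; the model name, system prompt, and stage wording are placeholders, not the commenter's actual prompts:

```python
# A minimal sketch of the staged workflow described above, using the OpenAI
# Python client. Model name and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You are a careful pair programmer."}]

def step(user_prompt: str) -> str:
    """Send one stage of the conversation and keep it in context."""
    messages.append({"role": "user", "content": user_prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text

# Stage 1: have it restate the task and predict what matters.
print(step("Here is the task: <task>. Restate your understanding and list "
           "what details you'd need to understand before attempting a solution."))
# Stage 2: feed it the relevant code and ask for verbose observations.
print(step("Here is the relevant code: <code>. Study it and describe what you notice."))
# Stage 3: ask for a bite-sized plan, then approve steps one at a time.
print(step("Propose a step-by-step plan of small subtasks. Wait for my approval."))
```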
The_Hegemon@reddit
I mean this sounds like just as much, if not more, work than just doing the work in the first place.
The thing is, by the time I've typed out all of this to make the LLM understand all of the necessary context, I have already solved the issue anyway, and typing it out takes almost no time.
labouts@reddit
You're correct for small tasks/subtasks. That's why I said
Once you get the flow down, the process I'm talking about usually takes 30-60 minutes. LLMs are great for tackling tasks that are either tedious or complex enough to take hours if you do them manually.
Saving a few hours daily gives one a huge productivity boost compared to people who aren't using LLMs to streamline their work.
It's like the edge a developer would get from working much longer hours—except better because you're not dealing with burnout or the dip in productivity from killing yourself over it.
As a side bonus, I've picked up new design and coding techniques from noticing LLMs using different approaches that could be useful in the future. Using it as a replacement for learning will eventually ruin many developers; however, those willing to put the effort into proactively using it as a tool to accelerate learning will thrive.
prehensilemullet@reddit
Hmmm I see. I think I’m always afraid to give it a try because all of that prompting feels like it would be so much work, for me to put in writing at least.
labouts@reddit
It’s complicated. The workflow I’m describing is different from traditional approaches, which can be off-putting—either because people don’t like the process itself or because it’s a novel approach that has a steep learning curve. In this case, many people don't recognize the learning curve because one can lazily get okish results with low effort.
That can make it uncomfortable to attempt. Further, it's easy to rationalize quitting when you hit difficulties and bad results instead of learning how to get better ones.
The hard truth: people who effectively use LLMs are more efficient than those who don’t. Even when you’re using them for tasks you could easily do yourself, it’s faster and often introduces ideas or perspectives you might have missed, like a free pair programming buddy (well, slave since they have no motives except fulfilling your task)
I believe the future will remember those who choose not to integrate LLMs into their workflow the same way we now look back at hardcore programmers who, in the past, insisted that people using compilers were inferior for not understanding the assembly code the compiler produced. That group of highly skilled programmers became obsolete, competing for the few jobs that still required assembly programming.
Similarly, those who reject LLMs and refuse to practice using them effectively may survive if they’re exceptionally good—but over time, their opportunities will become more limited as competition rises.
koreth@reddit
I too would love to see a scrubbed non-toy example if you have time to put one together. Thanks for the detailed writeup of how you use the tool; it's quite interesting to read.
damondefault@reddit
Ok well I guess I should really try it out on a side project and see. I did want to do something with web audio processing so maybe that's a good opportunity. I do think the "working with power tools" analogy makes sense.
lambofgod0492@reddit
Use it where it works, don't use it where it doesn't, simple!
yeah666@reddit
I'm not completely opposed to it but I haven't really found much use for it given that the majority of the time I'm using a language I'm very familiar with to solve domain/application specific problems.
I could see it being very useful when writing code that falls outside of my expertise though. I've seen coworkers use AI successfully when they know what and how they want to do something, but in a language they aren't working with every day.
SP-Niemand@reddit
Staff Engineer, 10+ years in the field.
I do not want to use it. There is a real risk for my only profession to lose demand because of it, it doesn't feel as fun to research and solve things with its help, it feeds the pockets and egos of people whom I dislike and don't trust (hi Sam).
BUT
I use it. And try to find more applications in my daily work. I do believe that not using something like GPT now is the same as not using search engines back in the day. There was also a cohort of people proudly memorising books of references and checking if you remember the exact signatures of some library's functions in interviews. Well those guys either died out or adapted to the new tools.
transcendalist-usa@reddit
We have a de facto ban on using it at all.
I've used it at home for some small stuff, but it makes me worried that I'll be way behind the eight ball, not knowing how to use these tools while some actually smart kid does.
iupuiclubs@reddit
No chance my current workload / problem-solving capability would be possible at all without it. It's gone from "ah, there's no way they would know about its capabilities yet, so their uninformed opinions make sense"
To
"This guy sounds exactly like if I made a custom gpt for a newspaper fictionally talking about how bad gpt is for clicks"
Like, I don't even know where to start with the opinion "it can't code", short of offering a university course's worth of explanation for how wrong that is.
Schmittfried@reddit
Right? Like with most anti-anything trends, the majority are just parroting stuff without ever trying it.
iupuiclubs@reddit
GPT has been completely eye-opening to me about people, specifically around this. People 100% will spout stuff they have read in the news, or opinions, with obviously no firsthand connection themselves. People telling me GPT-4 codes wrong clearly don't have a premium subscription, at which point why am I wasting my time interacting with them?
Here on Reddit it's a dark forest, where 1 in 10,000 will show up. Don't be fooled: they are still the 1 in 10,000 that is completely daft/contrarian that we are talking about.
I wish I had learned this back when BTC was invented.
FetaMight@reddit
I repeat: how many years of experience do you have?
FetaMight@reddit
How many years of experience do you have? Your Reddit account makes it seem like 2 at most.
FetaMight@reddit
How can you possibly know that? It sounds like you just don't want to accept that some people's experience doesn't line up with yours so you're inventing this story that they're lying.
Schmittfried@reddit
Because that’s usually how groups of people form a collective opinion.
Other examples of this are language wars, where most people making fun of the languages involved have no experience with one of them, or with any of them whatsoever.
FetaMight@reddit
It still sounds like you're assuming this. What evidence do you actually have?
Schmittfried@reddit
Also, don’t tell me you’ve never encountered the contrarian stereotype. You know the concept of hipsters, right?
FetaMight@reddit
So no evidence.
Yes, I'm aware of the types you mention. I don't think it makes sense to just assume everyone who disagrees with you must be one of those types.
iupuiclubs@reddit
You are the contrarian. You have no skills with GPT4, no current projects, no prior attempts, no realistic knowledge of current happenings. Yet you will say "I don't think you have enough evidence to say I'm wrong". Like, dude, no one cares how far behind you are lol.
FetaMight@reddit
Do you have any evidence to support these claims?
You're hallucinating.
Schmittfried@reddit
Evidence for what? That my coworkers don’t have actual experience in some things they debate? Because I know? I work with them, I chat with them. One of them is literally a Spring Boot + PostgreSQL-only developer. He doesn’t know anything else. To be fair, he only worked 2 prior jobs and both of them were similar, so that’s to be expected. But it’s not like he’s open to learning entirely new technologies anyway.
Regarding how groups of people form collective opinions: if you’re actually questioning this, I’m quite surprised it isn’t common knowledge yet. Please forgive me for not looking for sources right now as I‘m on the phone, but this is basically why fake news works. People rarely question superficial news when it confirms their prior beliefs.
FetaMight@reddit
Your original comment wasn't just about your colleagues.
I am not.
iupuiclubs@reddit
How do I know this? Seeing people say things that have no connection to reality where AI's capability is concerned. The only way they would have this opinion is if they have no data or software engineering skills and have literally never touched a premium version.
At a certain point, its painfully obvious if someone has not used a technology they are bad mouthing. To the point I'll just stay quiet now and laugh.
FetaMight@reddit
I... wasn't asking you...
sonobanana33@reddit
Or… other people work on harder problems which haven't been posted thousands of times on stackoverflow?
Just an idea…
iupuiclubs@reddit
If we were in a logic classroom, the professor would explain how your idea presupposes exactly the delusional projection my comment was describing.
Nothing I do these days has a nice and easy Stack Overflow answer; that is exactly why it's not possible to bridge the productivity gap this new tooling opens up.
sonobanana33@reddit
I've actually been in a logic classroom and you weren't mentioned at all :D
prolemango@reddit
Why?
engineered_academic@reddit
Lots of concerns about IP leakage/espionage and copyright infringement that hasn't been litigated yet. Nobody wants to be the test case.
chazmusst@reddit
Not sure those concerns are genuine. It sounds like a policy from 2022 when ChatGPT 3.5 burst onto the scene. I think we can trust Microsoft not to leak enterprise customers data. That's their whole business model
Schmittfried@reddit
We can absolutely not, at least not the „we“ outside the US. American 3 letter agencies have been known for corporate espionage in the past.
GameRoom@reddit
Sure, but do you consistently apply that principle to all other cloud services? Do you also refuse to use Google Docs, etc?
Schmittfried@reddit
All clouds but one are forbidden, yes. The one allowed is basically a necessary evil, we use their European data centers (yeah, I know…) and they have been audited and evaluated.
acommentator@reddit
What about potential copyright infringement?
chazmusst@reddit
Yes I think this is a greater issue than IP being leaked. My understanding is that training data included only code from public repos that have open licences - GPL/MIT/Apache etc
oursland@reddit
chazmusst@reddit
Not according to the recent lawsuit
oursland@reddit
You're going to need to cite this.
chazmusst@reddit
https://www.developer-tech.com/news/judge-dismisses-majority-github-copilot-copyright-claims/
FickleQuestion9495@reddit
A court case getting dismissed doesn't set legal precedent. I wouldn't put much stock in this case.
Perfect-Campaign9551@reddit
It kinda has protections built in for that. A few days ago I was using it at work and it started writing out the answer, then removed it with a box saying it was restricted due to copyright. There is an option an organization can turn on to help prevent copyrighted code from showing up, I guess.
sonobanana33@reddit
I'm sure patent trolls will just start to sue any company that uses or used AI at some point :D
DuhbCakes@reddit
It's wild to me that you state this as if 2022 was 3 decades ago. I know the bleeding edge of tech moves fast, but the industry as a whole does not. 2 years ago might as well have been last week.
UncleGrimm@reddit
If the concerns weren’t genuine, I would imagine OpenAI would be more willing to negotiate on liability for leaks. But I work for a pretty large company and they wouldn’t budge even for us, we ended up going with on-prem
engineered_academic@reddit
Yeah, no. I wouldn't trust anyone with anything, given the number of data breach notices I have been getting. Plus, there are privacy and regulatory concerns around LLMs.
wuzzelputz@reddit
2024 probably not the best year to trust Microsoft on cloud leaks 🥲
gowithflow192@reddit
Leakage can be gotten around by using a local model like Ollama.
engineered_academic@reddit
Sure, but if you have the knowledge and ability to set up and train your own model on your own data, you can deal with most of those concerns. These aren't the same people using GitHub Copilot and ChatGPT.
Riley_@reddit
I'd get that concern if someone is working on innovative technology, but I usually hear the fearmongering from people that are building boring CRUD apps.
JoeBidensLongFart@reddit
That's mostly fear-mongering. Those issues have been all but solved already.
engineered_academic@reddit
Hard doubt on that, unless you can quote case law or white papers.
JoeBidensLongFart@reddit
My source is that my employer is now allowing the use of Copilot, and they are the absolute most stodgy, conservative company in existence, one that takes forever to change anything whatsoever. They are only OK with things that are 100% risk-free.
engineered_academic@reddit
Companies do a lot of things that carry a lot of unknown risk. The number of privacy letters I get in the mail saying my personal information has been compromised suggests nobody really knows what they're doing when it comes to risk.
JoeBidensLongFart@reddit
They only care to an extent. They want to prevent the most egregious breaches, but are unwilling to spend enough money to get perfect security. It's frankly cheaper to pay the cost of the occasional massive fuckup than it is to pay for perfect security.
spicymato@reddit
Many businesses have privacy concerns.
I believe even Microsoft had a ban on using Copilot, their own AI chatbot, for a while. It's since been lifted, but they have various bits of guidance, such as using an internal one (not the generic web one) for internal-only content and being careful to not give any public AI private info, including private source code.
The trick to using it while maintaining privacy of information is to give it generic info or code which has had classes and methods renamed (if relevant), so that it can work on the problem without being exposed to internal details.
One way I've seen it phrased: treat it like an engineer from an external company.
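As a rough illustration of that renaming trick, with every name here invented for the example: the internal version stays private, and only a generic equivalent gets shared.

    # Internal version (kept private): the names alone leak product details.
    # def recalc_contoso_premium_surcharge(contoso_policy): ...

    # Sanitized version, safe to hand to an "external engineer":
    def apply_surcharge(total: float, rate: float) -> float:
        """Return total with a percentage surcharge applied."""
        if rate < 0:
            raise ValueError("rate must be non-negative")
        return total * (1 + rate)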
eddie_cat@reddit
That sounds incredibly inefficient
sonobanana33@reddit
They probably use a private instance.
transcendalist-usa@reddit
Copyright and legal concerns.
I'm effective enough without it, so I'm not terribly worried, but I'm sure it would make me more productive.
Wonderful-Habit-139@reddit
I'd have more trust in your efficiency without it than with it.
Sheldor5@reddit
you are in the vast majority
all the public hype is coming from a tiny slice of the whole development field
developers usually don't want to be in the public eye; that's why the public discourse is full of excited extroverts always trying the most recent bleeding-edge stuff (shit)
Schmittfried@reddit
Developers on the other hand are commonly contrarian edgelords.
sonobanana33@reddit
This comment is a prime example.
Schmittfried@reddit
Exactly.
eddie_cat@reddit
And they often don't grok sarcasm lol
KerryAnnCoder@reddit
I find it is extremely useful in exactly three scenarios:
1) When I have to program in a language where I am unfamiliar with the syntax.
I'm often tasked with writing PHP code on the legacy system. I was hired despite not knowing PHP (I am a TS/Node specialist). ChatGPT helps me convert TS snippets to another language.
2) The 50% of the time the autocomplete gets the code prediction right. When it's right, I hit tab. When it's wrong, I keep typing.
3) Writing unit tests. Doesn't always get it right but it usually gets close, and that saves time.
These uses of AI are great. They are automation tools.
What I have a problem with is the use of AI for creative works, as it allows wealth access to creativity without allowing creatives access to wealth.
PeterPriesth00d@reddit
I like using it when I need to Google something. It usually is easier to ask AI how or why and get exactly what I need vs trying to find a SO post of exactly what I’m looking for. And then I can ask quick follow up questions.
I don’t use it every day but it does come in handy when I do use it.
FoxRadiant814@reddit
Idk why my experience with it is so different from others', but I'm a very experienced dev in tons of languages, and LLMs are part of my day-to-day workflow. I see them as rarely getting anything significant wrong, and they enable me to write more code more quickly. In the time I would have spent writing just the code, I can now also get the documentation and unit tests done. So it improves the quality of my code too, just by accelerating my workflow. I can read, the code compiles, and I check my tests, so why should I care if it hallucinates? Still, 9/10 times, just chatting with the thing for a few minutes can reliably produce hundreds of lines of code my carpal-tunnel-ridden hands no longer have to write. I think from now on, people who don't use AI will just be too slow to keep up with those who do.
ghost-in-the-toaster@reddit
I’ve been using it for only a few weeks, and I’ve found it to be both helpful and frustrating. It can be helpful in summarizing documentation of libraries that are new to me, for example. For libraries I’m already familiar with I prefer to use the documentation directly. It can be frustrating because in full confidence it will tell you about features that simply don’t exist (hallucinations). I’ve used it for finding APIs and summarizing differences. I think it’s a helpful tool, but since I’m paying out of pocket for my current subscription I’ve been debating whether it’s worth the cost. I think it will get better with time, but I see it as an aid and not something that will do the work for me.
Regalme@reddit
I’d mess around with it more. AI is much more than just autocomplete. Multimodal really transforms problems that would take a take senior dev and allows a junior to just screenshot a proposal and it writes the code. Team reviews and it’s fine.
BertRenolds@reddit
I use it for boilerplate stuff, writing docs etc. It depends on what I'm doing.
Galuda@reddit
I've never struggled to do any of the stuff it supposedly increases productivity for. 90% of my job is designing things, unblocking teammates, determining architecture, translating business requirements, and reading and validating code. For me, LLMs take that last 10%, writing code, which is easy and fun to do, and turn it into reading and fixing code, which is harder and not as fun. Maybe they increase the productivity of writing code for some people in some circumstances? But I don't see it.
poralexc@reddit
Best case it's of limited use (good for knowledge-base chatbots), worst case it's a mind virus waging a DDOS attack against humans.
I think it generates a lot of time consuming wild goose chases that could all be avoided by just learning some fundamentals.
I've had a couple cases where direct reports were frustrated that their LLM code isn't working and they don't know what to do, then I find they're trying to do something super basic like formatting results in a table on the cli. 15 seconds of googling later and the first result is a format string tutorial with examples of exactly what they were attempting.
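For reference, the kind of CLI table they were fighting the LLM over is a couple of format-spec lines in Python (names made up here):

    rows = [("svc-auth", 42), ("svc-billing", 7)]

    # Format specs do the alignment: <20 left-aligns in 20 chars, >5 right-aligns in 5.
    print(f"{'name':<20}{'count':>5}")
    for name, count in rows:
        print(f"{name:<20}{count:>5}")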
Worse is when I hear things like "ChatGPT fixed our gradle build!"... like, you have no idea what gpt did to your build, just that it runs. Maybe it's also mining bitcoin, who knows?
ComputerBread@reddit
Stackoverflow made a survey: https://survey.stackoverflow.co/2024/ai#sentiment-and-usage-ai-select
seems like yes, you are part of the minority.
Now, I think you should try to understand why you don't want to use AI: fear it will replace you? too lazy to learn how to use it? not useful enough? enjoy doing everything by yourself?
Before GPT-4, I didn't use AI because it wasn't useful, but now GPT-4, Claude Sonnet... can be pretty helpful.
I use LLMs to:
Give it a try!
bhh32@reddit
I’ve tried it and it’s useless. I spent more time debugging its code than I could’ve solved the issue myself. It’s hyped for hype sake.
There are energy concerns as well, more than Bitcoin to be honest. No one is covering it though because it’s such a money maker. The water consumption alone is horrible.
UncleSkippy@reddit
I don't use it. I've tried putting it into my workflow, but I found it slowed me down. I can write queries faster than I can write a prompt, then verify the answer, and then work the answer into my code.
Not repeatedly coming up with the solution yourself will erode your ability to verify correctness of the answer provided by AI and understand edge cases for that particular answer.
PuruseeTheShakingCat@reddit
This has been my experience as well. There are essentially two main use cases for the tech that I've seen.
The first is glorified text completion, which has been built into Visual Studio for years and years. It's occasionally useful if it correctly guesses what you're going to do, but just the other day I was adding some fields to an object, and after naming one field "Infantry" it tried to auto-gen the next field as "Outfantry". As funny as that is, any actual person would know it's wrong.
The second is generation of code whole cloth. Even before I had grown to completely distrust AI on ethical grounds, I felt it wasn't actually a useful product for this, because every time I tried it I spent more time fixing things it got wrong, or rewording the prompt after it spat out something that didn't do what I wanted. And the more complex, in-depth, or context-sensitive the ask, the worse it did, even on tasks I felt were fairly straightforward. Eventually I just threw my hands up and disabled Copilot, because it's literally faster, more efficient, and less error-prone for me to do it myself. I've kept up with the space, and as far as I've seen it is no better now than it was before.
UncleSkippy@reddit
I think you really nailed it there. Same experience on my end. The generative AI lacks context and history with the codebase. At some point, the prompt becomes longer and/or more complex than the code you're trying to generate which is when people should ask themselves "What am I doing?"
prefabshangrila@reddit
I use Aider on small side projects (always in ask mode; I never let that thing inject edits into my code without a good audit). It's a tremendous help, to be honest. It reduces cognitive load and gets me started, or at least half of the way there, on most things.
At work, I mostly use the 4o mini model via the API as a replacement for Stack Overflow (it's just easier to ask a question and get a direct answer than to search for it). The 4o mini model is dirt cheap ($0.15 for 500k words or so).
I have yet to actually use a strong model for work stuff. Mostly because the bar for quality is higher at work, and I would rather write / think through that stuff myself, rather than have an AI write an implementation that may be sub-par.
ai_eat_ass_@reddit
You will be in the breadline first.
FromHereToWhere36@reddit
Nope, got onboarded last week, yet to use a prompt
dashingThroughSnow12@reddit
At my work, I was part of a pilot for the use of GitHub Copilot. Because of that, I got to see the statistics for how much we, the people who self-selected to pilot Copilot, used it. (We later rolled it out as an option for anyone, but my stats for that are limited.)
In the first group, a small minority used it heavily, a bigger minority never used it, and the biggest group used it infrequently, with the majority falling into the last two camps. I forget the numbers for moderate use compared to the other groups.
When we rolled out it to the broader company, people could opt-in to using it. Of those, a majority who opted-in never used it.
45678915@reddit
AI got me into development. Unfortunately, I've found that besides debugging and organization it is useless. They hype it up for profit; it's a trend.
Acceptable-Wind-2366@reddit
It's just a tool. Like IDEs, like OOP, etc.
If you ever hear folks drinking the Kool-Aid, though, maybe remind them of Fred Brooks's "No Silver Bullet".
True then (1986). True now. AI is interesting, but until it measurably improves productivity it deserves the same scepticism as a parlour trick.
Having said that, I use CoPilot every day to automate boilerplate tasks. It still manages to screw up occasionally but does generally save a bit of time. It seems like a tool that requires a seasoned hand - I can see it as dangerous if it's the only tool a junior dev learns to reach for.
RoughCap7233@reddit
I have never used it. I am not even tempted to try to be honest.
Reasons being:
I worry that my skills will deteriorate and I'll become dependent on it. If I get stuck, I'd much rather spend the time to research and find the solution.
Solving problems is one of the things I enjoy; handing that off to an AI just takes the enjoyment away.
I have also heard horror stories where the AI just invented stuff and you ended up having to check what it had done anyway. I might as well write it myself if that's the case.
txiao007@reddit
An AI code chatbot is a co-pilot; it is there to assist you. As long as you are able to deliver your tasks on time AND produce quality work/code, keep doing what you do.
Stevieflyineasy@reddit
Just like people refused to adopt the Internet or social media at first, people will do the same for AI. Don't be left behind OP
gojukebox@reddit
It honestly has made me a more effective programmer, mainly speed and code quality
Hot-Release6797@reddit
I was like that too.
I actively avoided it. I tried it a couple of times, and a lot of the code it generated just didn't work or was outright wrong. I have friends who had the same experience.
HOWEVER, there was one time I had to write a ton of tedious boilerplate for tests, so I tried AI for that. It did great! Some errors here and there, but it was a lot less time-consuming to clean those up than to write it all from scratch.
I use it all the time to write tests now.
Main_Steak_8605@reddit
What do you think about Cursor?
PlanetBet@reddit
It's an incredibly useful tool, I think as long as you wield it like a tool, and not as a replacement for a person, you'd be a fool to not use it. After all, what's the difference between using AI to help you with some annoying piece of code, and googling a result in stackoverflow?
Tough-Boat-2601@reddit
I’ve been using it and put 5 bugs into production in the last month because of bad SQL autocomplete. It’s great for documentation.
phoenixmatrix@reddit
Even purely as a fancy autocomplete, the benefits are pretty large, especially with a good UX like Cursor's.
It just lets you type way faster and do advanced search without leaving the editor, applying patches from those searches quickly.
A running joke is you can get really damn good with vim macros, or use a massive array of GPUs to do it for you.
My carbon footprint is skyrocketing!
bravopapa99@reddit
there are now two of us!
skauldron@reddit
And my axe!
drahgon@reddit
and my bow!
Shogobg@reddit
And my keyboard!
mgodave@reddit
Yeah, f that. I have zero interest in using AI for my dev work.
bravopapa99@reddit
I uninstalled Copilot after two weeks; the sheer bodaciousness of its code suggestions meant it was taking me longer to read them to make sure they weren't bullshit. I have 40 YOE, so I hope I know what I'm doing by now, but I see many younger people thinking it's amazing without really having the depth of experience to fact-check it or understand that it is a parrot: it makes shit up.
met0xff@reddit
On reddit it feels like almost everyone hates it, and if you're using any single line from Copilot you MUST DEFINITELY be an awful dev ;)
ghost_jamm@reddit
I have not and I will not be using generative AI. It just doesn't work. The AI has no idea if what it's telling you is correct, but even worse, it's not designed to give "correct" output. Its whole job is to use statistical modeling to guess which words should appear next to each other. It doesn't "care" if the output is right or not. It's essentially a bullshit artist.
bravopapa99@reddit
"bullshit artrist", ame brother, amen. I get so depressed at the number of people who just dont get it, in project manager positions. An "AI" is not a 10x developer, whatever the fuck that is too.
reddi7er@reddit
count me in three of us
Dx2TT@reddit
All of the people at my office who swear by it happen to be the worst, most useless, most flavor of the month devs. I'm sure its a random coincidence.
Mtsukino@reddit
I've seen AI completely make up libraries, so I don't blame you. I don't use it that often either.
Interesting_Long2029@reddit
I think this is disproportionately common with Python, but that's anecdotal.
Mtsukino@reddit
I've experienced it in C#, with it giving me strictly Java libraries lmao
notkraftman@reddit
so what? did you blindly trust it, reference a library it told you about and push the change to prod? or do you just ignore it and move on?
Mtsukino@reddit
Fuck no, I laughed at it and said this is garbage and a fake library.
notkraftman@reddit
Yeah, so if it gets something right, you can work faster. If it gets something wrong, you maybe wasted 20 seconds verifying that the thing is wrong. There's very little downside to using it when the data you're getting out of it can be easily verified.
cjthomp@reddit
The problem is that I have to use it very defensively and assume that every suggestion is an attempt to sabotage me.
notkraftman@reddit
I think you have to treat every suggestion as having a probability of being right based on the context of the question.
Wonderful-Habit-139@reddit
That is a tremendous waste of time, coupled with the 4-5 attempts at fixing it by reprompting before giving up and realizing you should've done it yourself in the first place.
notkraftman@reddit
It sounds like you're using it badly. I wrote an Electron app the other day, and I'd never used Electron. It got me going much faster than I would have gotten going without it.
Jordan51104@reddit
there are plenty of very simple electron apps for it to base its answers on
notkraftman@reddit
that's... my point?
FickleQuestion9495@reddit
Well if it saved you time in your specific situation then I guess everyone else is just totally clueless!
Mtsukino@reddit
Are you, like, upper management that's on the AI hype train or something?
notkraftman@reddit
Apparently I'm just in the minority of devs who don't expect intelligence from an LLM and have worked out how to use it effectively. Like, I don't agree with 99% of the use cases AI is being pushed into in the industry, but as a coding assistant it's incredibly powerful. It's like having a team of terrible junior devs who respond instantly: sometimes they come back with crap, but that's quick to verify, and the times they don't outweigh the times they do.
Mtsukino@reddit
I think I'd rather just work with junior Devs, personally. For me, it's more fulfilling watching them have that spark of understanding when they finally learn something like a new pattern or figure out a rather complex problem that you guide them on.
notkraftman@reddit
For sure, but I'm not in a position to hire 5 junior devs just to act as personal assistants for me.
ActuallyFullOfShit@reddit
The problem is that an experienced dev will mostly end up with "this isn't helpful" or "this isn't real" types of situations when using GPTs. When they actually need help, the GPT is probably not going to be the thing helping them.
Great for newbies who struggle to do basic things though, GPTs may be more helpful than not for them.
notkraftman@reddit
I think if you are expecting it to solve problems or come up with something new, then you're right, but that's not a good way to use it. It should be used like a better Google, one that can point you in the right direction to existing information.
gravity_kills_u@reddit
The majority of developers probably don’t want to use AI to do their coding. However in my social circle we are actively building some wild applications that incorporate AI as a major component. There are a few design patterns that no one seems to be talking about.
BloodSpawnDevil@reddit
I'm glad people are wasting their time on this shit so I don't have any good competition for the next 10 years as this, toxic workplaces, and terribly inefficient corporate coding environments turn everyone's brain to mush.
No idea how this way of working could possibly be better than references, reading books, and establishing momentum in your codebase by practicing a disciplined approach and continually refining it until patterns and organizations emerge that allow you to quickly comprehend large swaths of code.
I haven't even been motivated to ask AI anything and my job explicitly forbids using it for coding.
Empanatacion@reddit
I for one strongly encourage this Luddite streak in the other people applying for the same jobs as me.
I am easily twice as productive with the combination of copilot, Claude and ChatGPT.
I guess juniors that can't tell when it's screwing up shouldn't rely on it, but I'm not a junior.
Code completing twenty lines. Instantly finding the issue when I paste 1000 line stack traces. Noticing when I make minor logic errors. Doing all the boilerplate documenting. Answering "is there a better way to X?"
Our profession is undergoing the biggest shift in decades. Not mastering these tools is bringing a knife to a gunfight.
limeda1916@reddit
I had to sort by controversial to find the guy who thinks like I do.
Empanatacion@reddit
It's weird how reactionary everybody is being. Maybe it's just the reddit upvote phenomenon. I don't hear this from any of my coworkers.
Chromanoid@reddit
For what cases do you prefer Claude?
Empanatacion@reddit
OpenAI could learn from them: I use Claude about 80% of the time, almost entirely because the UI is better and does better code formatting.
I'll use ChatGPT when I want to ask it something it will need to search the Internet about. And I do think the actual AI is better on ChatGPT, so I'll use it for more complicated questions.
It seems like ChatGPT hallucinates more these days, so I'll also ask both AIs for links corroborating what they've told me if it's an answer that could be wrong without me knowing. But most of the time I'm having it do grunt work for me that is obvious when it messes up.
The more mundane reason I use ChatGPT is when I upload a spec to it and then ask it questions. We don't work on any earth shattering secrets, but I'll only tell Claude things I would be willing to post to stack overflow. We're allowed to post internal stuff to enterprise ChatGPT.
I'm really hoping copilot gets better integration into vscode with better context and features I'm maybe not even aware of, because the more help I can get without leaving the editor, the more I get done.
avitous@reddit
I'm probably worse even than you. Just can't bring myself to use any of the "AI" products, partly because my reaction to attempts to just ram it down my throat everywhere I look is to Just Say No, and for many of the same reasons: it's not only stealing the work of others, it's a profound level of cheating that in the long term stunts one's ability to learn to solve problems in a self reliant fashion, leaving people increasingly dependent on such crutches to manage even minimal levels of productivity. And this isn't even touching on the hallucination issues with all such tools.
There are so many ways such creations could be leveraged for good, but in most cases they won't. The primary driver behind corporations swarming all over this is more about dominance and control of their customers, using "AI" to throw up walls around them to more severely constrain their choices and reduce labor costs for customer support. Between these control issues and the problems noted above I feel like this frenzied push of AI everywhere will have some seriously bad consequences for human capability for the majority of us, and the few who will profit mightily from a "singularity" have no intention of sharing it with the rest of us unless it will leave them on top for good.
So if I use AI products I feel like I am somehow agreeing with all of this and sanctioning it, and mostly these days I just don't want to pollute my soul any more than it already has been.
I also have the luxury of being de facto "retired", and therefore don't have to justify my position to anybody. So this probably isn't very helpful, and I feel for anyone having to face this stuff in the workplace.
Strus@reddit
The people most excited about AI tools in software engineering are the most unbearable people I've ever met. They are on par with crypto bros.
I tried using ChatGPT, Claude and Copilot for my work. They are useless. I spend more time reading the code they spit out and fixing bugs in it than I would spend writing it on my own.
I have a strong feeling that the worse software engineer someone is, the more increase in productivity with AI they claim.
GameRoom@reddit
If you want to boycott ChatGPT for its energy usage, are you going to stop taking flights next? Like I'm sure that if you tallied up the energy usage you used / your carbon footprint or whatever, your personal usage of AI is not going to be some huge portion of that.
pan0ramic@reddit
Sure but you’re going to get left behind. It’s like if someone said “I don’t use stack overflow” or “I don’t use Google”.
djnattyp@reddit
It's more like someone claiming that Dr. Sbaitso is a real psychologist...
pan0ramic@reddit
That’s not that what I’m claiming at all.
Whatever yall want to believe is up to you but I use AI on a daily basis as a dev with 20 years of experience. It makes me a lot faster especially at the bounds of my knowledge. Finding help via AI is better, faster, and more accurate than trying to google/stack overflow.
I can’t be the only person that has this experience. I’m not some AI savant. Sometimes the anti AI makes me feel like I’m taking crazy pills
limeda1916@reddit
Upvoted. I have had the same experience. 10 years of experience for me. Knowing that I won't get stumped on a problem is a major anxiety reliever and allows me to approach problems in a much more productive way.
DogsAreAnimals@reddit
Replace "AI" with "google" in your post and see how that sounds.
tiny_cat@reddit (OP)
These seem fundamentally different to me. One is searching from existing sources while one is generating something new from sources I don’t exactly know about. But I see your point
Historical_Flow4296@reddit
But the AI is trained on those existing sources you're talking about. Sure, it will hallucinate, but if you put effort into your prompts by giving them detail or examples, it won't hallucinate that much.
It's so much quicker than Googling something. For example, I had to use a new AWS service that I'd never used before. I fed it the whole "getting started" doc and asked it for more advanced topics related to the task I was doing, and lo and behold, it produced nearly exactly what I was looking for. I didn't even need to fix what it produced, because I'd fed it code that was similar enough to company code.
Wonderful-Habit-139@reddit
"Sure, it will hallucinate" that's enough for me.
WheresTheSauce@reddit
It can occasionally be frustrating but it’s really not that difficult to tell when it is hallucinating
Wonderful-Habit-139@reddit
That actually sounds good, not gonna lie. I've seen a stream (a game jam) where the devs were not really experienced, and they asked for code in Godot to implement a feature in the game. The code obviously did not work and used many functions that didn't exist, but the main methodology actually looked sound, and they could just step through what the AI tried to do while substituting in actual Godot functions.
But again, these are not really devs, so they're excused for not thinking through the algorithm on their own from the get-go.
Historical_Flow4296@reddit
You’re too thick headed, can you spot the errors? If you can’t then sure you definitely shouldn’t be using AI
Wonderful-Habit-139@reddit
I don't know, man; if someone is skilled enough, they don't need AI either.
But generally, yeah, it's better for students not to use AI, or they'll end up "thick headed", per your words.
eddie_cat@reddit
I think you really overestimate how long it takes many of us to Google for documentation and find what we need
notkraftman@reddit
It's not really generating anything "new" though, is it? It's just generating a statistically probable answer based on all the information it knows. Like, if I ask it what the health benefits of carrots are, there's a wealth of very similar data that all says the same thing for it to draw from, so it's statistically extremely likely to give me the "same" answer each time. It might phrase it differently, but that's not new information; it's just rephrased.
AlexW1495@reddit
And yet, it still ended up suggesting putting glue on pizza and eating 2 rocks a day for good health.
I'd rather just use google, OH WAIT.
notkraftman@reddit
Yeah, because if something isn't sticking to something else, glue is statistically a good answer. I feel like people who struggle to use LLMs effectively are expecting them to have intelligence and then being surprised when they don't. They're not useful for that.
Wonderful-Habit-139@reddit
You forgot about the existence of hallucinations.
majinjoe@reddit
Accurate!! Lmao
Smyley12345@reddit
I suspect that early in your career you encountered devs who felt the same way about using Stack Overflow, or search engines, or GitHub. Part of what makes you special is that you know how to solve problems without these aids.
Now consider how much better the seniors around you back in the day would have been had they been early adopters of those tools, as opposed to stubbornly avoiding adaptation.
You like your hand saw and your hammer because you are very good at using them but at some point you won't be able to keep up with the battery powered electric saw and nail gun the competition is using.
DashinTheFields@reddit
Embrace new technology. Get to know it. Then decide if you don't need it.
Critical-Shop2501@reddit
I've been a developer since '93, and I use it as a tool. I have no concerns, as it makes me a little more productive and able to get the job done more easily. I still get to think and do the things I've been doing all this time, making effective use of the tools available to me. AI is no different. You're gonna get left behind.
pogogram@reddit
There is a large difference between having it do all your work for you and just copy pasting the results and using the tool available to you to help you go a little faster or to help start some documentation or write some initial tests.
AI should be treated like a forklift or other heavy tool. It can help you move more stuff but it wouldn’t be trusted to do the job by itself without causing a lot of problems.
ClammyHandedFreak@reddit
I don’t know how many people are doing what and don’t particularly care, but it’s a tool one can use to spitball ideas. I feel it’s kneecapping yourself to just have a blanket stance against it. I don’t use it to generate code or anything, but I certainly use it.
rudiXOR@reddit
No, a lot of devs actually don't want to use AI. And I can't understand why, it's very helpful and makes me much faster. Can't imagine how you can actually compete without it in the future.
DjangoPony84@reddit
I feel similar - I might use it for something like generating boilerplate test code or for rewording documentation that I'm writing but I want to ultimately be in control.
SympathyMotor4765@reddit
I only use AI for searching, as part of Bing search. Tbh, as a firmware engineer working on custom SoCs: a) we don't write much new code anyway, and b) any new code would be very platform-specific and involve more testing-based optimization anyway.
In my limited experience, AI is most useful in exactly the scenarios where it is most damaging, i.e. when a person has limited knowledge of the topic and uses AI to stitch together generic code into a functional output. That is likely far faster than learning from scratch, but it also likely includes a lot of edge-case issues and bugs.
SwiftSpear@reddit
I think it's completely sane to not want to use the AI autocomplete features. Especially if you're a "high speed" programmer.
I will say that the "explain this" feature in Copilot is basically just a universal good. It's not as good as pair programming with a legitimate expert in the area you're working in, but it can do a lot to help you see where you have unknown unknowns, and to help you understand an unfamiliar system more quickly.
The more expert you are in the domain you're working in, the more AI will feel like a deadweight all around though. You just don't need someone making half baked generic suggestions at that level any more.
thinkmatt@reddit
The type of model you use makes a big difference. I was using Copilot and it couldn't give me more than a couple of correct lines. I switched to Cursor with Claude and it is 10x better; I use it to write lots of scripts now. It is more expensive, but you can probably expense it to your company.
8ersgonna8@reddit
I only use it to generate data, like JSON or CSV files: grunt work that is error-prone if I do it manually. But never for code; for that I always use Google and Stack Overflow.
CouchPotatter@reddit
It is a tool; it's truly up to the dev. BUT, considering the market trends, I heard something that stuck with me at the last dev convention in Las Vegas: "The market won't replace software developers with AI; the market is going to replace software developers who don't work with AI with devs who have integrated AI into their workflows." Sadly, productivity is a huge driver, and personally AI allows me to develop anything 3x faster than before.
Bobbbbl@reddit
I work in the embedded space. In our company, the only time engineers use AI is as a search engine replacement. I'm practically the only one who uses it ... to write my emails. But this is more specific to the embedded space. There have been excellent code generators producing boilerplate code for at least 15 years. Better optimized than anything you can get from any LLM. For the rest, the limiting factor is not the speed at which I can pump out code. So with the introduction of LLMs, virtually nothing has changed for us. Or maybe the process simply started a lot earlier with code generators and the transformation was already completed.
SereneCalathea@reddit
I don't use AI when coding. I think tiny details are very important in programming, and are often the source of bugs, and I trust myself to get the subtle details right more than an AI currently. I can see myself using it to code when I don't care about the "correctness" of the program I write.
I think AI can be useful to discuss high-level approaches to solve hard problems with, just to see if you missed anything. But you do need to double check what it tells you - I personally found ChatGPT o1's reasoning skills to be extremely weak, and it spewed complete nonsense before I had to correct it.
centauriZ1@reddit
LLM is great for menial repetitive tasks where the solution is obvious to you - use it for this.
For anything else it's at best useless and at worst going to slow you down.
Aside: LLMs are also good for writing docs and asking questions about the docs or code base.
Aside: GitHub Copilot sucks. I find it faster to copy and paste into Claude than to use Copilot and then realize the answer is idiotic.
ForearmNeckDay@reddit
People talk about ChatGPT and Copilot too much when Claude is the MVP on the scene. I even pay for a sub out of my own pocket for work. This week it earned me a handful of political capital in my company, and in turn some political power for the company.
We and other vendors need to communicate with an archaic stock trading system over TCP. One of the functions just didn't work, and with Claude I managed to figure out from the byte streams that the actual network data mismatched the documentation, so I was able to extend the library to make it work.
Today, in a call, I handed the code over to the other vendors, because everyone had been stuck on this problem for weeks.
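To give a flavor of that kind of debugging without revealing anything real: with a wire capture you can unpack a frame under both the documented and the observed layouts and see which decodes to sane values. The field layouts below are entirely hypothetical.

    import struct

    # Hypothetical layouts: the docs might claim a big-endian u16 length
    # followed by a u32 id, while the captured bytes only decode to sane
    # values with the field widths swapped.
    DOCUMENTED_FMT = ">HI"
    OBSERVED_FMT = ">IH"

    def decode_header(frame: bytes, fmt: str) -> tuple:
        """Unpack the leading fields of a captured frame."""
        return struct.unpack_from(fmt, frame)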
centauriZ1@reddit
Claude really is better than most people know. I ditched ChatGPT because of the constant outages.
datsyuks_deke@reddit
I’ve been wondering how much better Claude is these days. I’ve been trying it on and off for a bit but haven’t used it since a couple months ago. Do you think it does a much better job than Chat GPT in all aspects?
ForearmNeckDay@reddit
The company has an enterprise subscription to ChatGPT; it couldn't solve this particular problem. I went back and forth between the two.
Also, Claude's recent UI update is really good. For a single prompt it can generate multiple files, with syntax highlighting for them.
It's not "much better" in all aspects, but so far I haven't hit anything where I needed to go back to ChatGPT. The current UX is much better tho.
They also don't cheap out with the free version: it uses the same model as the paid one, you just get a smaller context window and fewer available prompts, and it can refuse your prompt under heavy load.
datsyuks_deke@reddit
Awesome, thanks for explaining. I asked it a few questions just now, and I really like how it gave me an easy-to-read diagram on the right side. It definitely looks a lot more polished than the last time I used it. Really fast and efficient too.
SoulSkrix@reddit
I'm a young fart and I don't use the autocomplete on steroids like Copilot and so on despite having it available at my company. I do use ChatGPT to help with discovery of features or to think of how to use new technology features in more "standard" ways. Basically it makes up 50% of my Google time. That said, I'm extremely skeptical, and don't accept roughly half of what it says and have to refine it a lot. So I don't often bother using the output, more to inform my decisions.
Opposite-Somewhere58@reddit
Man you better be fucking solving cancer not doing fintech or some bullshit website if you're going to bitch about wasting energy running on copilot
Ok_Breadfruit5697@reddit
Some form of this post appears every day or so.
We get it. New technology is hard. Use it or don't, but don't be such a snob about it.
Brains make mistakes too, a lot of them, and so do the people posting on Stack Overflow.
People also thought that writing a novel on a typewriter instead of with a pen made you an amateur and a slave to the system.
Awkward_Affect4223@reddit
New technology is hard? That's not why I don't use AI. Judgey thing to say right before "don't be such a snob."
Also that typewriter analogy is hilarious and not even somewhat comparable beyond the public aspersion.
paulydee76@reddit
Writing code is easier than reading it; this is a well-known fact of software development. I really like to understand what I've written, so I prefer to write stuff I understand rather than try to understand what AI has written.
Also, I like writing code. I like coming up with solutions to problems. Why let AI do the work I enjoy? Why can't it do my ironing?
valmontvarjak@reddit
If you know what you're doing, it's a great tool.
wonkynonce@reddit
I don't even use autocomplete, I find it too distracting. Do your work, you'll be fine.
shayhtfc@reddit
I don't like it because it feels like dumbing down.
I will use it for trying to work out how to use some arcane tool or library though, or figuring out some weird bug, where it can come in very useful, and offer very helpful suggestions!
i.e. "What does this error in blahblah framework mean: DatabasePersistenceException: Character unknown at distribution register"
stephenjo2@reddit
I use Cursor every day with models like Claude 3.5 Sonnet and I think it's great for explaining code or quickly generating new code without typing too much.
northrupthebandgeek@reddit
You're indeed in the minority, but you're probably better for it.
I personally haven't found AI to be particularly useful yet for my work - mainly because I don't trust it to not hallucinate, so I'm spending at least as much time verifying the output as I would spend just doing the work myself.
grahambinns@reddit
Same. I’ve yet to find it massively helpful and have had it produce code which needed a lot of work to make production ready.
Cupcake7591@reddit
It should be a question of whether it’s useful and efficient, having feelings doesn’t seem like a good reason to reject it.
Dreadmaker@reddit
This. I find it kind of strange really that in a very technical world where people care about benchmarks and efficiency, people really do use “feelings” to decide whether to use AI more than most other tools. Almost every debate I’ve ever seen about it, people have opinions based on how ‘weird’ or ‘icky’ they find it, rather than whether it fits the job they’re trying to achieve.
I feel also as though many people consider it an all-or-nothing thing, when in reality it isn’t - it’s something to use for a few key jobs, not the majority of them (in my experience).
FletcherTheMouse@reddit
I work with PLCs and hardware, and we needed to convert a sensor reading (measured in L/min of AIR) into kg/h of another gas.
The math is not complicated, but it's not something we knew off the top of our heads (we're electrical engineers). So two of my colleagues set off to coax it out of ChatGPT, while I went to find the manual. They came up with a formula after about 20 minutes of fiddling. And...
It didn't work, and they spent the next two hours fighting with ChatGPT to correct itself (which it did... numerous times... incorrectly).
Turns out the sensor was special: it was built specifically for air, its range readings were dependent on the gas species being measured, and you simply couldn't apply any reference-density formula.
Which they would have known... had they read the manual... that I had printed out... and specifically asked them about.
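For the record, the naive conversion is only a couple of lines, which is exactly why ChatGPT's answers looked so plausible. A minimal sketch, assuming the sensor reports true volumetric flow (which, per the manual, this one didn't):

    def lpm_to_kgph(flow_lpm: float, density_kg_per_m3: float) -> float:
        """Naive volume-to-mass conversion: L/min -> m^3/h -> kg/h.

        Only valid if the sensor reports true volumetric flow. A sensor
        built for air, whose range readings shift with gas species,
        cannot be fixed by scaling with a reference density like this.
        """
        return flow_lpm * 60 / 1000 * density_kg_per_m3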
I don't have any moralistic qualm against the use of AI to create. However, without domain knowledge it's tempting to just blindly trust a confident voice, and I see this happening all too often. I mean hey, it looks right?
It's also great for getting some sort of feedback quickly; ANYTHING is better than nothing. But some things are difficult and take time to understand, and otherwise we're all just going to Dunning-Kruger ourselves.
brutal_cat_slayer@reddit
I wonder if uploading the entire manual as context to Gemini (Google's API) and then performing LLM queries would have worked.
Dreadmaker@reddit
So I mean to me, this is a bad use case for it, simple as that. When we’re talking about anything specific at all - like, say, a particular kind of sensor - that’s not something you should be relying on it for. I think that’s clear to anyone who uses it regularly, and I think a lot of the stigma against it comes from stories exactly like this that you shared. Not to reduce, but this basically reads to me like “hey, we tried to use a drill to hammer in a nail and it just kept not working, it was so weird!”.
The best use-cases for AI are ones that are generic and for which there are a million examples, preferably in the correct context - not specific ones.
The places I use it most are for generating unit tests. GitHub copilot in particular is using the context of my entire project, so it knows what my other unit tests look like, and it’ll save me 10-15 minutes every time I use it for that. It’s 95% right, I have to clean up a few edges, and I always manually review - and it’s still way faster than writing up a ton of mock data myself.
Another great use case is better error checking for SQL, which is famously terrible at giving you useful errors. Paste the problem SQL in and ask what the error is, and it’ll point it out correctly basically 100% of the time with good context, in my experience.
I think the issue is that a lot of people haven’t figured out the right use-cases yet, but have tried and failed to use it for something it should probably not have been used for, and decided that all ai is bad as a result. Sure, I’ve used it for things it sucked at too and gotten bad results - but that’s exactly how you learn to use it better, in my opinion. Just like any tool it takes practice to learn where best to apply it, and humanity as a whole hasn’t really figured that out yet, so it’s a mutual learning process where there aren’t a lot of trustworthy examples to use yet.
eddie_cat@reddit
Your sql example is what scares me about this. Normally, you struggle with things like that for a bit early on and then learn more about SQL and what things are actually doing and that is a non-issue. If you mask it by asking AI for the answer you're not growing your understanding
Dreadmaker@reddit
I disagree with that, actually. I see where it comes from, but keep in mind that at least ChatGPT, and presumably others, give you context and explain things. Asking it to identify what’s wrong with some sql is the same idea as asking your colleague next to you - it’ll show you what’s wrong and why. Surely there’s learning in that also, just the same as asking that colleague. It’s still you correcting the code manually, and so you’re still making the connections.
Just like asking a colleague, if you become dependent on it you won’t get better. Sure. But if you use it to explain things occasionally I see that as an aide to learning, not a hindrance, specifically because of the explanation.
ThyssenKrup@reddit
I don't really care about benchmarks or efficiency. I write software because I like making things.
Achrus@reddit
Verifiably correct software. Not being able to verify what chat bots say sure makes me feel “weird” and “icky” so maybe I’m just too emotional.
Forward_Recover_1135@reddit
You’re (allegedly) a software engineer. If you can’t read and understand the code suggested by an AI to accomplish something you’ve prompted it to do then you might just be in over your head in general.
Achrus@reddit
If I can read and understand the code suggested by a chat bot then why do I need the chat bot in the first place? If the code needed to accomplish a task was able to be picked up in the training set, then there’s already a pattern or package already out there that you could use.
At least that’s my perspective as a “Data Science Engineer,” quotes because I don’t feel like a real dev. I understand using the new era of chat bots as an alternative search engine but I’d rather just find the source myself.
The real problem I have with “LLMs,” the decoder only chat bots, is using them in pipelines without a human in the loop. I don’t want ChatGPT trying to do high school math when I can just code it. When my models are wrong, I say that’s an error, not a “hallucination.” With a verifiably correct workflow, I can figure out what went wrong instead of saying hehe AI hallucinates sometimes isn’t that funny?
Current_Working_6407@reddit
You can use LLMs to generate a small, targeted piece of code with strong tests around it. If you're asking it to make code you don't understand, then you're probably giving it too large of a problem to solve or need to improve the context you provide to the bot
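As a rough sketch of the scale that works well (a hypothetical toy example, not anyone's real code): one small pure function, pinned down by tests you actually read.

    def chunk(items: list, size: int) -> list:
        """Split items into consecutive chunks of at most `size` elements."""
        if size < 1:
            raise ValueError("size must be >= 1")
        return [items[i:i + size] for i in range(0, len(items), size)]

    # The tests pin the behaviour down no matter who (or what) wrote it.
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    assert chunk([], 3) == []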
Wonderful-Habit-139@reddit
How useful is it if we have to give it very small problems?
Current_Working_6407@reddit
It's useful in aggregate. If I have 100 small problems to solve on the way to a big problem, and I solve each of the tiny problems 10% faster, I've solved the whole thing 10% faster.
No LLM can currently keep the context of all 100 of those tiny problems at once, but they are very competent at the tiny or small-sized ones.
Wonderful-Habit-139@reddit
So we both agree that for big complex tasks it's not good. For small tasks, though, I still don't think it's as useful: if you understand the domain those 100 small problems belong to, the first 10 might take a bit longer, but the rest will get done faster manually than by going back and prompting ChatGPT (and it still makes errors on small tasks as well).
Also, there's a part I haven't thought about that much: there's a large amount of resources spent serving those prompts compared to the energy we'd spend solving them ourselves. I wonder if there's going to be a drastic change in pricing as the resource usage scales up.
Current_Working_6407@reddit
Well, if I have the competence to solve the big problem (presumably, since it was assigned to me and I'm getting paid to solve it), then I will probably have general competence in those 100 little problems, or at least the competence to figure out how to solve them, right?
A lot of it also has to do with how skillfully you can prompt an LLM to do your bidding. It's a skill you can hone with time.
Also yeah, the environmental impact is a thing. But so is using Google Sheets compared to a pencil and sheet of accounting paper, on my laptop that uses a Lithium Ion battery and electricity from a natural gas plant. That's a different convo that doesn't really negate the problem solving aspect alone
Wonderful-Habit-139@reddit
About prompting skills: I can see why a lot of people bring them up, because I can also see how badly people google things, for example. Same for prompts.
But in my experience, good googling skills lead to better results than good prompting skills. Sometimes it doesn't matter how good your prompting skills are; it's going to shit the bed. Like asking how many r's there are in "strawberry". Surely that's not a prompting issue.
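(To be fair, that particular question is a one-line deterministic program, so no amount of prompt crafting should ever be needed:)

    # Counting letters is deterministic; an LLM predicting tokens is not.
    assert "strawberry".count("r") == 3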
I see your point about the environmental impact. It makes sense that we're just comparing things on a relative scale. Thanks for that, I'll leave the environmental issues aside for a while now.
Current_Working_6407@reddit
I just think Googling and using an LLM are fundamentally different skills for different tools. Google is a search engine, and LLMs are a language model that is used to understand, generate, and edit text.
This is like saying "in my experience tennis skills lead to better results than soccer skills", if "being a software engineer" was analogous to "being good at sports".
Some things are "ungoogleable". I can't google, "Write a pure python function using type hints that takes [this data class] as an input and returns [this data class] as an output, subject to constraint A and in the context of file B", just as I can't use an LLM to get any equivalent result to queries like "Apache airflow docs", "Physically based cloud rendering paper".
Wonderful-Habit-139@reddit
I see what you mean, but I was thinking more along the lines that being able to google well means knowing how to write text that best exploits the engine. So you'd write things differently when googling vs. writing prompts, but they're still very similar skills.
Current_Working_6407@reddit
I think they are very different skills, and complement each other well
Wonderful-Habit-139@reddit
Understandable. Have a great day!
Current_Working_6407@reddit
you too!!
Schmittfried@reddit
Why would you not be able to verify it?
Dreadmaker@reddit
I mean that’s fair enough, I maybe could have expressed this differently. I suppose the thing I notice is that people who are often motivated by finding the best tool for the job tend to leave that typical thought process on the table when it comes to AI. There’s definitely a stigma there, whereas I feel the same as the person who I responded to - it’s a tool that we should be looking at neutrally, the same way you’d evaluate the best database to use for your workloads on a new project. People can have preferences there too, obviously, but there are just some cases where redis is the better choice than Postgres objectively, and vice versa - and I don’t really think a lot of people apply that same kind of use-case driven logic to AI yet - possibly because we just aren’t all used to thinking about it as a tool to be evaluated in a similar way.
To be fair it also isn’t used in place of something else - it’s an efficiency tool - so maybe that’s why people don’t quite have the same thought process around it yet.
tiny_cat@reddit (OP)
Sure I agree to a certain extent but our technical world doesn’t exist in a vacuum and I don’t think it’s bad to try to explore those feelings. Maybe there’s a valid reason someone feels “icky” just like it’s valid people want to be able to be more efficient at their jobs. It’s part of the reason I’m curious about what other people are doing and thinking about these new technologies.
ProbablyPuck@reddit
Calculators were a new technology once.
People who would otherwise have been quite proficient in arithmetic became less so. However, many more people could now perform advanced arithmetic with little error, so the net effect was more completed arithmetic.
The job of calculator went away. However, mathematicians were still very much needed for more complex work with the aid of a calculator.
Use it, and figure out how you will stay relevant when eventually a layperson will be capable of writing their own basic computations.
sandstoneyoke@reddit
The difference is that calculators didn’t make shit up or produce wrong answers regularly. I think using AI productively requires a much higher barrier to entry in order to understand whether the information it is providing you is any good or not. It’s much closer to the skill of being able to research and find reliable sources and identify unreliable sources, and that skill is harder to find in people than simply being able to type or punch in equations to a calculator.
Anyway, like all tools, you have to know how to use it in order to use it effectively. I just think it’s a uniquely dangerous tool in that many people who do use it don’t know how to use it effectively.
ProbablyPuck@reddit
Lol, I seem to remember studying quite a bit of numerical analysis to understand precisely how a calculator can lie to me.
I do understand why you claim they are different. Do you understand why I still argue that they are the same?
People still lacked the ability to research when they cited the same list of sources as Wikipedia. The difference is that people are wrong much faster now because the BS is curated and served up on a plate.
Lazy, error-prone people will continue to be lazy and error-prone, but they will be much faster at it. Whereas productive people will become more productive.
FletcherTheMouse@reddit
I'd also add that a calculator enables you to do something you know... quicker! but AI enables you to do something you THINK you know... that you think is quicker!
tiny_cat@reddit (OP)
This just made me realize that part of my hesitancy is simply not knowing how an LLM got to its conclusion. And plugging in the same question tomorrow might get me a different answer. I'm not saying it's not useful; that's just where my hesitations lie.
false79@reddit
My thoughts are: if you are not using it, your competition is (as a matter of fact, not feelings).
They will do the same work for the same price or cheaper.
If not today, then soon, and sooner than they could have without the AI assist. There are organizations porting their COBOL code to a modern-day language with AI. It just goes to show the point: you will have no job security.
putin_my_ass@reddit
I generally distrust it, because I'm familiar with how it works. Lol
I use it for menial stuff, like "take this list of CSV entries and convert it into a series of statements that push each row onto an array called rows, converting the CSV rows to JSON objects following this schema".
It works pretty well for that task and saves me from text-editing hell.
Own_Candidate9553@reddit
I like it for figuring out random command line tools. Like I have a log file, and I know some combination of grep and awk will get me the output I need, and I hate having to look up the args. You can usually give the LLM a bit of sample data and what you need and it will spit out a chain of commands. I know enough about the commands to check them.
And you do have to check it. I had one case where I needed to check ownership of S3 objects, and it didn't include the argument that makes the AWS CLI return the object owner. So I generally have to check each step manually. Because of stuff like that, I'm not convinced it's actually faster, just less tedious.
MLGPonyGod123@reddit
You trust AI to spit out exactly the data that was previously in the CSV? This is a recipe for disaster.
sage-longhorn@reddit
Because I don't trust it, I find it's most useless for menial tasks. With an efficient editor or a quick script I can knock out a CSV conversion relatively quickly and not have to check every row by hand to trust it's correct
With LLMs, I find they do well at sparking ideas for approaches to nuanced problems that are too specific to have lots of reddit or stack overflow discussion, but it's almost always less time for me to just do the actual leg work myself than to exhaustively check every little detail of any untrusted outputs
arjjov@reddit
Exactly brah, so many "developers" trusting LLMs to generate deterministic output is crazy
deZbrownT@reddit
Yeah, well, we live and we learn, that is just a learning curve everyone will get through. One way or another, more pain, less pain, but in the end, people will learn.
sage-longhorn@reddit
My time in software security has taught me differently. Without a systemic solution the majority of people will always take the "good enough to make it someone else's problem instead of mine" approach
Western_Objective209@reddit
I think that more bugs are making it into production because of generated code, and IMO they take more time to fix than the time you save generating the code
deZbrownT@reddit
That’s exactly what I meant when I wrote “with more or less pain.”
DepulseTheLasers@reddit
Honestly. Working in infosec teaches you how much paper trails of decisions and processes and run books matter.
turturtles@reddit
I worked with a couple of these developers, and they always claimed it made them 10x better and more productive. But they also had 10x as many bugs in their code. I came to the conclusion they were .08x devs and gen AI made them .8x devs.
ThatSituation9908@reddit
You can code a solution multiple ways, why does it have to be deterministic?
Wetmelon@reddit
Hmm. I suddenly have a usecase for AI code lol
ActuallyFullOfShit@reddit
Lol data conversion is the absolute last thing I'd trust an LLM to do. Simple python or....hoping the GPT didn't alter your information? Python every time here.
xland44@reddit
I ask the LLM to write the code for the data conversion, though, rather than having it convert the data itself. Then you can review the code and test it on a small subset
I find it's a faster workflow than writing from scratch
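For example, the kind of script I'd expect back looks something like this (a minimal sketch; the file names and target schema here are made up for illustration):

```python
import csv
import json

# Hypothetical input file and target schema, purely for illustration.
rows = []
with open("input.csv", newline="") as f:
    for record in csv.DictReader(f):
        # Map each CSV row onto the JSON shape we asked for.
        rows.append({
            "id": int(record["id"]),
            "name": record["name"],
            "email": record["email"],
        })

with open("output.json", "w") as f:
    json.dump(rows, f, indent=2)
```

Fifteen lines like that are easy to review and cheap to test on a few rows, which beats diffing data the model transformed by hand.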
ActuallyFullOfShit@reddit
That's much more reasonable
jivinturkey@reddit
What do you do when chat gpt decides to alter an entry in a row?
putin_my_ass@reddit
Well I'm not importing large datasets, just avoiding menial text editing tasks. If the data is big enough I'm just importing it into a staging table and normalising from there the old fashioned way.
spikej56@reddit
I use it for text editing and for simple shell scripting that I don't do often enough to remember the syntax. It works well enough for that. I've been disappointed by the SQL and CSS it has generated, so I avoid it for those tasks.
AbanaClara@reddit
I use it to make utility functions lol
hakazvaka@reddit
isn’t the example you used the exact example of what LLMs struggle with?
putin_my_ass@reddit
Is it? I've found chat gpt handles it quite easily.
billymcnilly@reddit
Same. I only want to use it for trash tasks that I don't regularly do and would rather not have take up space in my brain. I don't want to use it for my bread-and-butter work, because I want to train myself to know it inside and out
delventhalz@reddit
I seem to be one of the few developers who actually likes writing code. I have no interest in giving up that part of the job and just becoming a PR reviewer for an overconfident sloppy junior.
fantastic_fo@reddit
I’ve recently started using it as an alternative to google when I’m stuck on something code-related and it has been pretty useful.
It usually provides reasonable solutions with nice explanations and saves you the time of having to poke around and patch together different answers you’d typically find searching on your own.
The only times I find it isn’t too helpful are when I’m doing something with a high level of complexity or nuance. It can be difficult to even think of a proper query to ask the AI in those scenarios, and at best it may provide small pieces you can work with, but you’re still going to have to do the heavy lifting.
naked_number_one@reddit
I recently switched to a new programming language and the LLM has been a blessing. It really helps me study things faster and be very productive
TheGrooveTrain@reddit
It's useful, but it takes the fun out of it for me if I use it too much. I do development because I love the puzzle and the act of creation.
UnC0mfortablyNum@reddit
AI is going to create a new generation of bugs and lazy developers.
j_kerouac@reddit
I feel the same way. I feel like it probably helps people who are not very good programmers and can’t figure out how to solve problems on their own.
However, if you are actually competent as a programmer, I question what benefits it offers. When I write software, the difficult part is not usually “how do I do this?” It’s “how do I do this in the best way?” Or “how do I test this in all the right ways to verify functionality?”
That said, I haven’t played around with it that much.
IndependentMonth1337@reddit
It saves you time. If you know what you want, why not just let the AI generate it for you instead of you typing it out yourself?
tairar@reddit
The time it saves is immediately wasted again when I have to debug the bullshit it spits out. Besides, writing the sort of code that it provides me is one of the smallest parts of my job.
k8s-problem-solved@reddit
Exactly - that's what AI correctly applied gives you, time back to do other things. Up to you to use that time productively
WheresTheSauce@reddit
You can ask it how to do something in the best way though. I almost never ask it to write code for me, but rather bounce ideas off of it when I’m architecting something. It is extremely useful for that.
dMyst@reddit
I don’t like it for any complex coding. It is great for:
- generating test data
- acting as a peer to bounce ideas off of regarding design and architecture
- writing POCs for feature tasks
- generating code for small programs to reproduce issues
- breaking down advantages/disadvantages of different technologies
- insight into a lot of the undocumented parts of the Windows API (since a lot of the info is not centralized in MSDN and is just reverse engineered)
- task breakdown for estimations or delegation
and many other things. For actual code, I won’t use it, but it does improve workflow and bring value in other areas, the same way that searching things on Google brings value.
WheresTheSauce@reddit
This is how I use it and it has significantly increased my productivity. It is extremely nice to be able to have a back and forth about your concerns about a particular approach to something, and I’ve found this is the situation where it gives the best insights.
Schmittfried@reddit
Are you using GPT 4 for the design stuff? Whenever I tried 3.5 for that it was basically a yes man / behaving as if all options are equally valid in every scenario. It pretty much contradicted itself all the time even when I pointed that out.
dMyst@reddit
Yeah, GPT-4 and the new one. The new o1 seems better at this. I know it sometimes hallucinates and acts as a yes man, but I usually use it in a way where I already have a general design and my initial prompt is such that it will break it down into individual parts (or decisions) and present pros, cons, alternatives, and considerations for each part — I don’t ask it just for generic input as I would ask a human (“what are your thoughts?”)
Schmittfried@reddit
Ok, gotta try that.
Yes, I don’t ask open questions either. When asking about a few different options and their trade-offs specifically I basically got the same answers and recommendations for the same scenarios for each one of them even though they’re contradictory.
LarryLongfellow@reddit
I think it becomes less useful the more complex your work gets. It often suggests generic things I already know, and when I ask it to be more specific, it just runs in circles.
met0xff@reddit
Almost everyone here on reddit seems to hate it lol. I generally have copilot enabled and it generally spares me from looking up specific libs or similar that I don't use regularly.
Say I don't remember where the base64 encoding lib in language Y is, I just do a comment "base64 encode Blabla" and done.
I almost never write SQL but I can definitely imagine it being a time saver there if it knows all your columns and foreign keys and so on
Actually, the older I get, the more I despise the typing I've done for decades now. Especially the snippets I've typed a million times.
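For the base64 case, the flow in Python looks roughly like this: the comment is what I type, and a completion like the line under it is what I'd expect to get filled in (what any given tool actually suggests will vary):

```python
import base64

payload = b"Blabla"  # stand-in for whatever needs encoding

# base64 encode payload
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)  # QmxhYmxh
```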
thedeuceisloose@reddit
In this thread: massive conflations of LLMs, ML, AI, gen AI, and all-around just willful ignorance
markole@reddit
Yes, you are. Not using LLMs is like not using search engines 15 years ago. Granted, I also liked the job better before, but the job is to solve problems, and AI helps with that.
jrodbtllr138@reddit
In some instances, the risk of bad code outweighs the reward of fast code.
What is the worst case if there is a bug with the feature? Is it a weird UI bug? A security flaw? Does it brick the application?
Reframe from a coding mindset to the business impact mindset, and it’s easy to see how AI code hallucinations can be dangerous, so I think it’s still good to keep a cautious position and to not get complacent about using AI code.
For some changes, even the worst case isn’t that bad, in which case take that speed advantage if you want to.
thedifferenceisnt@reddit
I barely use copilot anymore. I thought it was great at first and still use it for unit tests but I find myself turning it off more often than not as it gets in the way of other IDE autocompletion/features.
Noberon_1@reddit
I don't use AI as I don't find it to be suitable for the programming tasks I generally work on.
The required behavior of an application can be very subjective based on who is using an application, especially when it comes to front-end tasks. For example, users might want all the buttons to be gray. If the application is marketed to a different set of users, they might want all the buttons to be dark blue. It's not possible to describe all the relevant subjective preferences people have in numeric terms that AI models can understand.
I also wouldn't trust existing AI models to give me objectively correct results, even if they had infinite processing power. AI models are also created by people who have their own biases, and they are trained on data which may also have biases. Nobody is unbiased 100% of the time.
Also, I've noticed that AI models often hallucinate and fail to generate working code while appearing to be correct. If the issues around hallucination and biases when creating the model are fixed, I could see the models being useful for specific backend programming use cases where the expected output can be objectively defined.
Dan8720@reddit
I was hesitant to begin with but I started using it and realized how powerful it can be.
I now use GitHub copilot all the time (it's not a replacement for being able to code but it saves so much time. I use it as a boilerplate factory/fancy autocomplete)
I use chat gpt and prompt engineering for loads of things now. Writing emails, presentations, cover letters. Once you get good at prompting it and editing it with back and forth convos it's honestly life changing.
I got it to write a will for me the other day. It would have taken me hours without it
randomInterest92@reddit
It's a tool that if you use it well will greatly improve your productivity. Also keep in mind that it's only gonna get better, meaning that the tool will eventually be in the standard kit of any developer just like IDEs or Google already are. With this in mind you should definitely learn how to use it or you'll get left behind and may not get the best jobs anymore. Kind of like Leetcode is almost entirely useless in itself but it's the best tool to get a very good job, so you should still learn it
Inside_Team9399@reddit
I finally started using copilot a couple of months ago and I was really surprised by how much I liked it. It's not good for doing anything complex and it doesn't really handle application logic very well, but it's great for cutting down boilerplate, scaffolding some functions, etc. Sometimes I turn it off when it annoys me, but 90% of the time its suggestions are exactly what I want. It has made me more efficient.
I fully believe that in 10 years we'll view using AI tools the same way that we see using an IDE now. You'll just be expected to use the tools to improve your productivity. There won't be feelings about whether or not you like it; it's just going to be a part of the job. Those that choose not to use it will be replaced by those who do.
Your feelings about energy consumption, IP stealing, etc. are all valid, but I don't think that's going to stop the train.
I wouldn't say you're quite in old fart territory yet, but in a few years you will be.
PartyParrotGames@reddit
Yeah, you're being an old fart. As an experienced engineer and team lead I can tell you it's becoming painfully obvious sometimes which engineers on a team aren't keeping up with the times, because they complete far fewer tasks in the same amount of time. So, you can avoid AI, but you'll just be far slower than people who learn and use the tooling well. You'll eventually be the 0.1x engineer on the team until let go by a manager who sees the performance difference between you and your peers.
bobaduk@reddit
I mostly use it for low-quality prototypes. For example, I'm building a tool to simulate the behaviour of another system. Part of that work involves parsing an insane XML doc. An LLM does a quick-and-dirty job of helping me write a parser that gives me vaguely correct data structures.
Now I can set that aside, build the other parts of the system with some better data structures, and then go back to do a properly written and tested parser, knowing that I have some sample code that proves it's feasible.
I very rarely ship code that an LLM has written, but it's a good way to spike something and see how a solution might work.
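To give a flavour of what I mean by a spike (a rough sketch; the element names here are invented, the real doc is far messier):

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    items: list[str]

def parse_orders(path: str) -> list[Order]:
    # Quick and dirty: no validation, no namespaces, no error handling.
    root = ET.parse(path).getroot()
    return [
        Order(
            order_id=node.get("id", ""),
            items=[item.text or "" for item in node.findall("item")],
        )
        for node in root.iter("order")
    ]
```

Good enough to prove the data structures are workable, nowhere near good enough to ship.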
Mediocre-Ebb9862@reddit
Soon enough refusing to use it would be, in many cases, like insisting on hand-writing assembly in 1995.
wesleyoldaker@reddit
I'm not personally against it, I just avoid like the plague any additional plugins, features, add-ons, and whatever else that's going to make my dev environment run any slower than it already is. So when Visual Studio (what most at my job use) introduced the CoPilot option, I immediately disabled it. But not cuz I'm against CoPilot specifically.
To be honest, I never even tried it. I wonder if it's any good...
That said, I know of a few cases of some of my fellow devs using gpt or something else to create stuff like static utility functions and classes, but that's usually the extent of it.
Fickle-Meaning-9407@reddit
It is only useful if the code that it writes can be more quickly understood than writing the code itself. Even so, I feel like I am less aware of corner cases or potential bugs when reading code than when writing it myself.
However, I think it's quite good if you need to explore documentation easier or write boilerplate code. It is also quite good at answering questions regarding tools with sparse documentation.
msamprz@reddit
You are not an old fart if you don't let it take over your actions, even if the feelings are there. I understand the mistrust or "general negative feelings" towards it, I really do experience it too, but I also put active, conscious effort into using it, because I don't think it'll go away and it's good to have a command of how to use it. It will definitely be one of those things that set apart "the old timers" from "the current gen". You don't need to learn out of fear, though; stay curious - that's what makes the difference.
I now have a ready custom boilerplate for specific projects (that I already have years of experience writing) in minutes whenever I start a new one (we're at a transformational stage right now at my company so it is common). I quickly verify things (and trust my own experience for that), and then make any changes and we're good to go.
I never ship anything I wouldn't write myself, so when it spits out something I don't understand or like, I start researching. If it seems solid (even if I have to make some tweaks), I go ahead with it.
I really take it to heart that it's my own personal associate, so I treat its work as such.
IndependentMonth1337@reddit
I use it mostly for autocomplete and as a rubber duck. Think it's really good for that.
hell_razer18@reddit
I let Codeium do the autocomplete, and for chat I use another tool. I wish they could do a proper code review, but the context will never be enough, I suppose.
ConsistentAide7995@reddit
I use Copilot extensively for testing. Usually not for actual development, but when writing unit tests, I just write out the name of the test and then Copilot fills it in. It's not perfect and I have to fill in details a lot, but it saves a ton of time. I highly recommend trying it.
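The pattern is roughly this: I write the `def test_...` line and the tool drafts the body. (A minimal sketch; `parse_duration` is a toy function I made up so the example runs.)

```python
import pytest

def parse_duration(s: str) -> int:
    """Toy function under test, defined here only so the sketch is runnable."""
    units = {"s": 1, "m": 60, "h": 3600}
    if not s or s[-1] not in units:
        raise ValueError(f"unknown duration: {s!r}")
    return int(s[:-1]) * units[s[-1]]

# I type the test names; the suggested bodies usually land close to this.
def test_parse_duration_accepts_minutes_suffix():
    assert parse_duration("5m") == 300

def test_parse_duration_rejects_unknown_suffix():
    with pytest.raises(ValueError):
        parse_duration("5q")
```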
physika5@reddit
I personally don’t use AI that much. I only use it for getting it to explain concepts to me or generating small code snippets. Even then, I need to cross reference its suggestions with actual sources, examples or documentation.
As for API/FE design, I find that doing it myself forces me to think things through properly. It also feels more natural to design from scratch after gathering requirements.
Not sure if my approach is ideal though.
Professional-Motor96@reddit
I’m the same. It feels like AI is useful for the most basic boilerplate, but it is often outdated (so it can’t use the latest library features), contains subtle bugs, etc.
Useful to get started but have to always double check it.
forbiddenknowledg3@reddit
It works well when you know exactly what you want but cbf typing it out.
Otherwise in my experience, it wastes my time.
eddie_cat@reddit
I agree. It makes me feel like a dinosaur, and like maybe something is wrong with me, but I hate the idea of it. As far as I can tell it's making people dumber, and the code it spits out lacks coherent meaning, so I expect a maintenance nightmare sometime in the not-too-distant future everywhere the devs are big on AI.
ianpaschal@reddit
I dislike it and don’t trust it but I have to admit I’m warming up.
But I think I've maybe found an appropriate role for it: a rubber duck with helpful ideas. I trust it as much as a talking rubber duck, but I've started using it in personal projects where I'm on the fence about two ways of doing something. I've been using ChatGPT, describing what approaches I'm considering and the pros and cons. Writing it out is already pretty helpful, but then the response… well, I begrudgingly admit it's super helpful. It summarizes my rambling problem for me, cleans up the pros and cons, and once or twice has come up with some alternative 3rd approaches which were decent.
What is really right? Up to me. Doing the implementation? Up to me. But, “Thank you for listening Mr. DuckGPT, and I’ll take your suggestions under consideration.”
eyes-are-fading-blue@reddit
It currently cannot solve the problems I am dealing with. In fact, people on our team use it extremely rarely.
brandonsredditrepo@reddit
i use it as a glorified search engine. It's faster than perusing the depths of reddit and Stack overflow
No_Future6959@reddit
At the end of the day, your 'feelings' are holding you back.
If you're skilled with prompts, AI gets things done faster. It's just a fact. It may not do it for you, but it can at least be used as a supplement.
If you and everyone else at your job are fine with your work pace, then who cares if you don't want to use it.
Regardless of your opinion on AI, its an extremely valuable tool.
AlexW1495@reddit
You are not, but will be told A LOT that you are by people whose actual skill set, if any, atrophies by the second.
bwainfweeze@reddit
Negative peer pressure.
It’s not true that high school never ends, but it’s not entirely true that it does.
ummaycoc@reddit
I don't use it because my time typing things up is part of a conversation and gives me time to adjust and consider what I really want. Also, there's no plugin for nano and I don't want to switch to another window.
ComradeWeebelo@reddit
It's not ethical. Who owns the code ChatGPT or any other LLM produces? It's not you, the developer. You didn't write it.
What happens if that code gets audited? Who's responsible? You or the model?
You have to think about these questions as they carry actual weight.
mattD4y@reddit
Another day, another post showing r/ExperiencedDevs is full of fools
zjz@reddit
this thread makes me feel better about using it a ton
ravigehlot@reddit
I use AI every day and I’m going to keep using it. Sure, it’s not perfect and it doesn’t always have the right answers, but it really helps point you in the right direction. It can suggest ideas you might not think of on your own and save you tons of time by pulling together info on topics that would usually take a bunch of Google searches. It’s pretty great at summarizing multiple sources into a clear document in just seconds.
Busy_Quality561@reddit
I go by the "work smarter, not harder" mantra. Gen AI is far from perfect, but damn if it doesn't make my job easier, and my productivity is higher. You 100% have to verify everything it tells you, of course. I find errors all the time, but when it's right, it's super fucking right. I've also learned many new ways of doing things.
astrologicrat@reddit
I can understand where you're coming from, but, as a piece of rapidly developing technology, it's a moving target. You may shoot yourself in the foot if you make a decision against using what could be a very powerful tool in a short time.
The AI we have today is considered useful to some and the models people are using were created at a time before an ungodly amount of resources was being thrown into their development. We know that LLMs improve with more data and compute so I expect whatever is coming down the pipeline to be impressive. There's also the lag effect of researchers taking 90 degree turns to specialize in the field, so I also expect there to be compounding effects as more effort is put into the field.
For now, you should be able to get by and say that you don't think it helps with your productivity if that's what you believe. At some point, you may not be able to give that answer depending on how things shape up.
foflexity@reddit
I was very reluctant to try it at all. Then I tried it, and I was impressed for half a second until it gave me a false answer with full, convincing confidence. I’ve since found some places it can make me more efficient… like copilot auto-completing comments/docs/error messages/assertions/etc., filling out API signatures magically from some online documentation it knows about, or repeating a line/block I already typed but with a different variable. But I don’t trust it to write actual code that needs proper logic.
moreVCAs@reddit
Value proposition is “here’s some more bullshit code for you to review”. No thanks 🤷♂️
oelarnes@reddit
I find it baffling that licenses have been taken super seriously by the tech community for decades, tons of care and thought has been poured into the exact language and ethics of code reuse, and now we’re just like, hey stealing is fine as long as you obfuscate it. No thanks.
scientific_thinker@reddit
Maybe but I am there with you.
I think most of us prefer to write code rather than read and debug code. AI tries to replace what I like about programming while making me focus on doing the things I don't like as much.
Maintenance costs dwarf all other software development costs. Well written code is the best way I know of to reduce those costs. I think well written code is still cheaper than all of the alternatives including AI.
chipstastegood@reddit
It’s a tool, like any other. Once you learn it and get used to it, you won’t think twice about it.
mxldevs@reddit
I'd have to check that the code is correct.
Having to read someone else's code is already bad enough, now I have to read code that might be generated in various styles?
JaguarOrdinary1570@reddit
There are many things I will happily use ChatGPT/Claude for. I mainly use them as a search engine, and they're fantastic for that.
But as for actually programming, I don't use any AI tools or want to use them. I've found that the best way to explain it is to say that I don't want to use them for the same reason a passionate writer likely wouldn't want to use chatGPT to write their next book.
They like writing. It's not like it's a checklist item for them, where they're thinking "ahh man books are so long, writing so many words is so tedious, I wish I had a word writing machine to do it for me." They're very interested in each word, that's why they do it. Similarly I'm very interested in programming. I try to think very carefully about what my program is doing, and how it's doing it. It's a skill I've spent a lot of time sharpening because I like doing it.
Ofc keeping with the writing example, not every writing job is interesting, or a passion project. Having a daily article quota at some clickfarm website is a shitty job. If I worked a job like that I'd be using ChatGPT for everything. And similarly, if I were like a salesforce developer or something, just doing API plumbing and writing data models, and my performance was judged purely by lines of code or number of Jira tickets closed or w/e, then I'd be all for using copilot. But I've tried pretty hard with my career to put myself in places where programming is interesting.
JaraxxusLegion@reddit
I don't want to use it either as I find my coding ability getting worse since I'm technically writing less code. However, unfortunately, its a race to the bottom. Not using it will result in significantly less output relative to my competitors which I can't afford.
kevko5212@reddit
I have worked with some people who use it and I often have to help them figure out why their code isn't working.
StandardOk42@reddit
have you given it a fair chance?
razzemmatazz@reddit
I use it like Intellicode+. The more I expect of it, the worse its output gets.
sobrietyincorporated@reddit
Friggin life saver for having to write bash scripts.
I'm a fullstack dev that got sucked into IaC ops. All the domain specific languages... friggin nightmare. But nobody in ops wants to learn to use actual code. So stuck with terraform, ansible, groovy...
I just write it in cdk, Java, node and translate it to the weird endless boilerplate they love.
Upside: all my stuff is now in devcontainers, with tons of scripts that automate all the stuff for the juniors. Cause man... skill levels have plummeted.
SaltyBawlz@reddit
I use it as a pair programmer. You can't just copy/paste it just like you wouldn't copy/paste stackoverflow. I would say you are being a bit of an old fart lol. It is incredibly useful for mundane tasks and discovering different options to solve problems.
dashingstag@reddit
Ye you getting old. Just know you can hold your preconceptions but the younger generations won’t.
The older generation also saw the personal computer the same way. You are now the older generation.
No-Economics-8239@reddit
I see it as similar to when blockchain became the new craze. Lots of companies wanted to jump on the bandwagon and discover how to either monetize it or leverage it for greater productivity. In the end, there were only some niche edge cases that were really improved by it.
Certainly, LLMs have more utility than blockchain. I see writing good prompts as being similar to crafting good Google searches. If the information is out there and well indexed, it can provide an answer. As to whether that answer is true or useful... well, sometimes it can be valuable.
I have some coworkers who love it and claim it saves them time. In some cases, this even appears to be true.
Like any tool, knowing how and when to use it can be more valuable than the tool itself. I remain skeptical that, in its current incarnations, it makes me more productive. Opinions vary on whether the fault lies with me, how I use it, or the tool itself.
uraurasecret@reddit
I don't use it for autocomplete because that distracts me. But I use it as Google or Stack Overflow very often.
bopbopitaliano@reddit
I use it very little to write code, but I've almost always got chatGPT open in a tab because I use it for learning and implementing unfamiliar things quickly. Especially because you can converse your way through a problem within a chat.
I frequently ask AI things like, "As a developer familiar with X, I want to know the core differences and similarities between implementing X vs Y in this context."
This way I just get the key info, often without even needing to open the docs.
No_Jackfruit_4305@reddit
It failed to read what I asked it to when generating unit tests. And its grasp of math and logic is weak.
What I hate most about it? When autocomplete was on, the delay before it gave the most basic iterative suggestions drove me insane. IDE autocomplete is miles ahead of Copilot.
Best-Dependent9732@reddit
I use it for boilerplate code, boilerplate test cases, simple tasks, and text formatting, and I sometimes ask it for a review; it’s actually great at covering normal test cases. I don’t trust it 100%: I don’t copy the code and blindly paste it, I check thoroughly and often find it making mistakes. The problem is that most tools today don’t go as deep as you want them to, despite you letting them know. Another problem is that the generated code is usually out of date; it’s always behind several versions of the latest tech, which is why I don’t use it for completing a task. Still, AI like ChatGPT is very helpful, and I can see why it can’t replace me.
xabrol@reddit
I understand what it is and how to effectively use it to accelerate my abilities and the rate at which I learn so I stopped having these reservations pretty quick. I learn SOOO much faster with AI than I do manually digging around google, documentation, etc. I find stuff SOO much faster.
I can quickly, with AI, figure out the difference between X and Y from Nuxt 2 to Nuxt 3 and learn the terms I need to know, then go look at it on some hidden sub-page of the Nuxt 3 docs where it confirms what I just learned.
Might have taken me 30 minutes just to find that page if AI didn't point me there faster.
It also helps me formulate business proposals for new client project prospects WAY faster, unbelievably so. And I can fine tune GPT in the cloud on our internal docs and stuff and then infer on it from the entire collection of our knowledge pool and find things so much faster.
I don't use it as a God that I take things from and paste it into the code base. I use it as a condensed optimized search engine. It helps me get to what I want to know unbelievably fast compared to pre AI.
pkmnrt@reddit
I don’t use AI during coding but I’ve found it’s useful when I’m stuck on a hard problem and I can ask it highly specific questions. Usually one of its suggestions ends up being the solution or something I hadn’t thought of that leads me to my own solution.
kkert@reddit
Use it for the things it's good for; don't use it where it gets in the way. Just like any other tool.
thetdotbearr@reddit
I don't use it for any actual kind of development, but I might ask it like.. "what's the idiomatic way to do XYZ in <language>", and that gives me a good jumping-off point to find the relevant APIs I need to do the thing, more often than not anyways
That's about it
godwink2@reddit
I think you’re in the minority. I think most devs know what the code needs to look like but previously would need to google some things to get the syntax/keywords right. Using AI to save this time is the best way. Another thing that’s helpful is getting a breakdown of a user story description.
dmikalova-mwp@reddit
I've tried it and found it wasn't that effective. It's an autocomplete that gets in the way even more and loves to introduce subtle bugs.
I don't seem to be the only one with this experience https://shenisha.substack.com/p/are-ai-coding-assistants-really-saving
koreth@reddit
I'm in kind of the opposite boat, but it has ended up sailing to the same place. I would absolutely love to get the same productivity boost from AI tools that other people are always talking about! But so far I just haven't. I use them. They are sometimes very helpful, but not enough to significantly impact my overall productivity.
A recent success I had was to modify an existing SQL script that populates a table to map 2-letter country codes to English country names. I copy-pasted that script and told ChatGPT to add a column for the 3-letter country code. Yes, I could have done that by hand, but it took me all of 30 seconds with ChatGPT and another minute to scan the output looking for obvious errors.
A recent semi-success was when I was working on some code to produce maps and one of the geometry libraries I was using was producing unexpected output. I pasted my code into a tool (maybe Claude, don't remember) and asked it why the output wasn't what I expected. It didn't give me a useful answer. Then I asked it to write code that would give the output I wanted. It hallucinated a library function. But it also explained its hallucinated solution, and the explanation included a bit of terminology I hadn't run across before. I Googled that term and once I learned what it meant, it became clear to me why my original code was wrong, and I was able to fix it on my own.
The key in that second example, I think, is that I was working in an unfamiliar domain and I was unaware that I lacked a specific bit of knowledge. I would never have known to Google the key bit of terminology because I'd never seen it before. Using AI tools to help learn new domains is great!
But cases like that aren't everyday events for me. Most of the time, I'm writing code in a language I know very well, implementing logic I already know how to write. My day-to-day productivity bottlenecks are usually more like, "The requirements are unclear here. What did the product owner mean by this?" And no AI tool can answer that.
Some of the examples people give of major productivity boosts make me kind of scratch my head, to be honest.
Generating boilerplate code? Sure, boilerplate exists. But if I find myself writing so much of it that generating it with an LLM would give me a double-digit productivity boost, I take it as a sign that the code is missing an abstraction or an opportunity for old-fashioned, deterministic code generation. Much better to make the boilerplate unnecessary than to generate more of it faster.
Use an LLM to write tests? Writing good tests is hard, often harder than writing the application code! If my tests are so predictable and repetitive that a machine could auto-generate them, I'm probably not treating them as exercises in good software engineering. Also, I've found that LLM-generated tests are often wrong in subtle ways that don't actually cause them to fail but cause them to verify the wrong thing. I have a sneaking suspicion that this happens a lot but that people take "it passes" to mean "it's correct."
Write SQL queries with ChatGPT? I can absolutely buy this as an occasional thing if someone only rarely needs to write SQL and the queries are fairly simple. In that case it's another example of "help me out in an unfamiliar domain." But SQL isn't that hard to learn, and once you hit decent proficiency, it takes less time to write a nontrivial query than it does to describe it in sufficient detail to an LLM. And a particularly hairy query can be a sign the data model isn't quite right, which I won't realize unless I'm thinking in detail about the problem.
The tools are improving, though. I'll keep trying them and hoping they start saving me tons of time.
abeuscher@reddit
I think the folks that are feeling this way are the folks who are not interacting with the toolset very much, and that you're in a self-reinforcing loop. I am also old. 25 YOE. My first computer was a Vic-20.
I was recently given a small consulting job interacting with AI and so I started messing around with it. I was given the ability to just buy all the services so I have worked with github copilot, Claude, ChatGPT, and Gemini so far.
Github Copilot I like as an autocomplete on steroids. It is so relieving to start typing in a sequence of data or something and have the pattern filled in beneath.
And writing code with Claude and some of the newer models of ChatGPT can produce decent results. You still have to know how to architect, debug, and reason. So basically that's maybe 70% of the job still intact? And you can absolutely drive yourself in a circle with AI the same way you can when you're caught in a Sisyphean bug-fixing loop. But that's just part of engineering, I think. Alternating between frustration and solutions is basically where we live, AI assisted or not.
Maybe give it a fair shake and forget what the implications are. If nothing else, your opinion will come out more nuanced at the end of it.
loumf@reddit
This podcast with Simon Willison and Gergely Orosz is aligned with what I think about it: https://newsletter.pragmaticengineer.com/p/ai-tools-for-software-engineers-simon-willison
It’s worth listening to.
Pudd1nPants@reddit
AI is probably like when IDEs started coming up. You can keep coding in notepad and still get your job done but eventually you will fall behind the people using completion and syntax highlighting because they get the job done faster. It's up to you when to make the switch
justAnotherNerd2015@reddit
I'm on a team of seven, and I think one developer uses it to assist with development. Otherwise, the rest of us do it 'the old fashioned' way. I found it mildly helpful for simple, repetitive tasks, but then again, part of being an engineer is developing those skills over many years of experience.
Personally, I think the concerns you listed are entirely valid reasons to avoid using AI as well.
HeyHeyJG@reddit
It's great for research, IMO. But everything needs to be validated. It's an incredibly confident liar.
jwsoju@reddit
Someone on my team used to reply to my comments on his PRs with, "I asked copilot..." After copilot was proved wrong a few times, I haven't seen that happen lately lol.
I think it can be useful because it can help save research time etc. But I wouldn't just take and use what it gives me without double checking first.
engineered_academic@reddit
Nah, I don't use it. The halting problem and the hallucinations are reasons enough. LLMs are not fact-based and people are treating them as if they are.
Grounds4TheSubstain@reddit
What does the halting problem have to do with LLMs?
engineered_academic@reddit
LLMs have no factual basis for the code they spit out.
Grounds4TheSubstain@reddit
... and what does that have to do with the halting problem?
engineered_academic@reddit
...if you can't see the applicability here I really don't know what to tell you.
Grounds4TheSubstain@reddit
You're just making up bullshit. Undecidability and LLMs have nothing to do with one another. Undecidability applies to exact analyses whereas LLMs are probabilistic. You have no idea what you are talking about.
engineered_academic@reddit
You proved my point thank you.
Current_Working_6407@reddit
They aren't fact-based, but neither are our brains. If coding is just text generation, having a tool to generate and manipulate text is very powerful. I think people overstate how amazing it is (and there is tons of hype), but the engineer writing simple unit tests, converting spreadsheets to JSON, and understanding the contours of an unfamiliar codebase using LLMs will be faster than one that does all that by hand.
Wonderful-Habit-139@reddit
You're the first one I've seen say the first sentence that you said in actual good faith. The reason I disagree is that unlike LLMs, humans don't pretend like they know and start saying stuff when they don't know the answer. They tell you that they don't know and that they're gonna have to do some more research.
After that, whenever that human falls into the same problem, they're then much faster at resolving the problem (and correct). LLMs don't have that benefit, and have that first issue that I mentioned of not admitting when they're not sure about what they're saying.
Current_Working_6407@reddit
This isn't true of all people, it's true of open minded engineers that "know what they don't know" because they have specific domain expertise.
I agree what you said is a drawback to using LLMs, but I tend to assign that work of critical thinking to myself, and outsource the text generation tasks to an LLM (sometimes, again it depends on the context).
If I'm writing a boilerplate unit test, I don't need to type every letter myself to be sure that the test is right. I can read a well written test, run the test, and critique and revise the test using my brain.
Wonderful-Habit-139@reddit
I agree with your first sentence because I actually just witnessed it IRL, where the person isn't very knowledgeable about something, but isn't willing to admit that they don't know so they ended up rambling about the topic and trying to guess things. But in that case you could make the decision to not work with that person, just like you don't want to work with LLMs because of their issues. And I definitely don't think LLMs are at the level of a junior.. I know a lot of juniors that are really competent and LLMs are not.
I also agree with your second paragraph, especially when I also saw another person that mentioned having a physical disability, so I guess that can speed things up for some people. It doesn't for me because I write a bit fast (like ThePrimeagen if you know him), and I've had a lot of really, really bad experiences with LLMs in some projects. And it's not about prompting skills because I'm very comfortable with googling the same things and finding some answers that will lead me to a solution.
Regarding your last paragraph, I've seen attempts at making LLMs automate writing tests as well as running them before emitting their solution, but that does seem like a Bogo Sort kind of algorithm xD reprompting many times and using a lot of resources in order to get the answer.
I will probably give LLMs another shot (or if there's a new way of making AIs other than LLMs) and see if they actually improve productivity or not. But for now they're not good for me.
Current_Working_6407@reddit
Fair, to each their own. I just think it's a bit absurd when people try a tool and don't immediately see its value, so then write it off and think that just bc they haven't seen value, other people must be lying or overstating their experience.
An LLM is not equivalent to a Junior bc it is not a human. But I more meant in terms of if you took the "text generation" part of a Junior's brain and put it into a box. It's pretty good for most things, but it can lack context. It was more of an analogy
Wonderful-Habit-139@reddit
I appreciate what you had to say. I've definitely given plenty of time to the current LLMs because it'd be a shame to waste opportunities to improve my productivity because of false assumptions.
engineered_academic@reddit
I strongly disagree, but I don't have to convince you. If you are saying LLM is a fancier autocomplete, sure.
Current_Working_6407@reddit
No worries, it's okay to disagree. I'm just saying if I have some task like writing the boiler plate for a test, I would rather have an LLM write a test based on examples and my guidance than sit and write tests myself keystroke by keystroke. It doesn't matter if it's 100% right, because then I can review the test with my own eyes and edit it if need be. It has personally saved me a ton of time.
I'm not going to say that it can do everything under the sun, but using something like Claude 3.5 makes my life easier. I trust myself enough to know I'm not just pretending to be more efficient to seem cool or something, I use it bc it gives me more time to focus on harder problems
Riley_@reddit
People can be wrong or even lie on the rest of the internet, too. ChatGPT gets me going much faster than a google search, even if I have to tweak the code or ideas that it gives me.
Chromanoid@reddit
Do developers really do so? I highly doubt that. As soon as you do a little more with LLMs and get pointed to non-existent APIs and libraries, it becomes very obvious that they are not fact-based.
engineered_academic@reddit
idk I know people who were saying ChatGPT was answering all their questions.
Higgsy420@reddit
Every developer should be using AI to supplement their workflow. It's a no brainer.
This is like asking "Should developers be using Google? I'm not so sure yet" in 2006. It's not even a question
jdlyga@reddit
There’s a lot of people who don’t want to use AI in development. But it’s going to be a smaller and smaller group of people over the years. It reminds me of the folks who didn’t want to use an IDE for development. I was in that camp for a while myself.
sonobanana33@reddit
chatgpt4 never has any clue how to approach the things I normally need to do. Yes, it's great at connecting to an SQL server and doing a query (while apparently letting SQL injection happen), but that is not what I do.
Wonderful-Habit-139@reddit
There's quite a big difference between the two. IDEs only give you suggestions that are correct and are suggested by a compiler that understands the language and the libs that are in use. LLMs are not always correct.
I'm saying that as someone that likes to try out new technologies no matter if people hype it up or shit on it. And I was also looking forward to using Copilot with good feelings. But it turned out to not be that great...
hellosakamoto@reddit
Yeah I used vim for Java 2, and once during a job interview, I was asked if I used any IDE..
Joaaayknows@reddit
I understand where you’re coming from, but to be honest if you’re the only one not using it you’re going to be left behind. Everyone else will become more proficient and more efficient as a result over time.
Moreover, if your manager picks up on the “I’m just not going to do it” attitude, they may single you out as the next layoff candidate because of your refusal to use a tool everyone is so excited about, simply out of principle.
I like to use this comparison - if you were an accountant in the mid-nineties and you refused to use excel, you were obsolete well before you realized it.
sonobanana33@reddit
If AI knows how to solve your problem, it means your problem has been solved millions of times on the internet and you're just a code monkey.
Most experienced developers are just older code monkeys who have no clue how to approach novel problems.
Normally my work involves doing something that doesn't yet exist and isn't the millionth variation of the same old thing, so AI is kinda useless for me personally.
I expect to be downvoted by all the triggered people who recognize themselves in my comment.
Grounds4TheSubstain@reddit
I'm really confused by the responses in this thread, but I guess the silver lining is that, in some abstract sense, we are all in competition with one another, and many of you aren't taking advantage of the biggest programming productivity tool that's been invented in my 27 years of experience. Copilot is good for boilerplate, chips in a few lines of code elsewhere here and there, and is hit-or-miss beyond that. But the real game changer is ChatGPT and its ability to generate customized examples / small code snippets / shell scripts, and explain basically any mainstream technology to you. I have ChatGPT open all day every day, inquiring whether language X's standard library gives me an easy way to do task Y, whether some language feature I've never used might be applicable to some problem I'm facing, and so on. It can even read academic Ph.D. research literature, explain parts of the paper to you, and provide skeletal prototype implementations! So, I am indeed baffled by the overwhelming negativity in this thread.
edc-abc-123@reddit
How do they offer it? I didn't like it until I got a license to copilot and installed it as an IDE plug-in. Now I use it all the time because it uses the context of the project.
not_napoleon@reddit
I have a coworker who uses spicy autocomplete. He says it saves him a lot of time, but he's not any more productive than the rest of the team. Personally, I find "typing code" is rarely my most time consuming task, so I haven't really been too enticed by it.
Paul__miner@reddit
I don't even use IDEs. Absolutely no interest in so-called AI (really just ML).
nospamkhanman@reddit
I use it like a competent intern when I need to look stuff up.
"Hey write me a query to look at the running config of a SRX and return me any lines that reference the host name of X or the IP address of Y."
roynoise@reddit
Yep, I actually enjoy solving the problem. Refactoring/extending after the fact is so much easier, as is explaining things to management or team mates.
I do sometimes use AI to speed things up if I've already solved the problem, and you still have to comb through and proofread the code if you want to be a professional about it.
Competence is just way more fun.
DigThatData@reddit
Probably not. AI adoption is definitely more common in tech than elsewhere, but across the board the vast majority of people still haven't even tried chatgpt.
It doesn't matter. Every engineer ultimately develops their own process and associated tooling. E.g. if you're just not into vim or emacs, that doesn't make you a bad developer. This is the same sort of thing.
Maybe you'll find small subsets of things it's useful for in the future.
Personally, I find a lot of the ways IDEs try to be "helpful" intrusive and distracting. I usually don't even have intellisense turned on. I'm currently using PyCharm to align with the development processes of my teammates, but I find I fight its features more than I leverage them.
The speed with which I write code isn't my biggest bottleneck: getting the design right is.
Copilot is fine, but I do the vast majority of my work without it (more to do with my broken pycharm installation, to be fair). I do still use AI in my development process though. I'll sometimes have it write boilerplate or delegate simple stuff to an LLM to build iteratively with my guidance, but mostly I use LLMs for brainstorming and as a kind of "instant answer" stackoverflow.
Other-Cover9031@reddit
idk, kinda sounds like someone refusing to use google when that first became a thing. It's a tool that makes us faster when used right; being stubborn about using a tool for emotional reasons doesn't bode well for career trajectory imo
keelanstuart@reddit
I don't usually use it to create code, but if I've got a bug I'm having trouble with, it's often good at finding them. It's also handy for other things though... psych stuff and philosophy... these things have been trained on the classics and professional journals.
ramenAtMidnight@reddit
Probably the minority if you don’t want to try it out. You said so yourself, you have no concrete evaluation of that tool and based your aversion on feelings alone. In my circle, around 90% senior engineers dropped copilot after a few days trial. Most junior engineers however found it to be great and kept using. I think all of us gave it a try though, since it’s company’s money.
powerkerb@reddit
Just used it to generate a script for my tech presentation. After the presentation, my boss said I rocked it. It's a modern tool, use it.
chescov77@reddit
I don't use it because its useless where I need it the most, which is debugging difficult bugs that require connecting logs with reports with user actions with code... I don't think there is a way to make the AI understand our environment. Maybe there is and I'm just being an old fart, just like you said you are!
oxleyca@reddit
I feel the same.
That being said, I’ve been using and enjoying Perplexity lately. It’s sometimes wrong like every LLM is, but every result comes with loads of citations pointing to where it got information from. That lets me dig in deeper elsewhere, while using it to source research material.
I’ve not personally found use cases for others.
Perfekt_Nerd@reddit
The local autocomplete from Jetbrains IDEs on my Mac just saves me a handful of characters. I no longer have to type the JSON field names on my struct fields, for example. I actually find it nice for those sorts of small things. Maybe save my wrists for a few more years.
abraham_linklater@reddit
I make it mop the floors and clean toilets. There's plenty of boring scut work that can be handed off to AI. It's also great for learning new technologies. I've even found it helpful for understanding complex (read: sloppy and overcomplicated) code.
I don't advocate for writing lazy prompts and pasting the resulting slop into your code base, but you're missing out if you don't use it at all.
Station_Sad@reddit
I haven’t incorporated it into my IDE yet, but I do use it heavily for things I would Google for, and I’ve noticed it helps keep me in the flow better.
For example if I am writing some business logic and stumble upon a side problem (say I need last calendar month in python), instead of searching for documentation of datetime or looking at similar stack overflow answers to piece together a solution, I just ask it for exactly what I need and it spits it out in seconds.
I did not have to context switch in my mind. I find it’s very good at these technical problems so I can remain focused on the actual product that I’m building.
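For the "last calendar month" example, the snippet it hands back is usually some variant of this (a sketch, not the one true answer):

```python
from datetime import date, timedelta

def last_calendar_month(today: date | None = None) -> tuple[int, int]:
    """Return (year, month) of the calendar month before the given date."""
    today = today or date.today()
    # The first of this month minus one day always lands in the previous month.
    prev = today.replace(day=1) - timedelta(days=1)
    return prev.year, prev.month

print(last_calendar_month(date(2024, 1, 15)))  # (2023, 12)
```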
abandonplanetearth@reddit
15 YoE here, still haven't typed a single message into any LLM and I have no intentions to any time soon. I have yet to be impressed with it and the impact it's making in dev teams.
penguinmandude@reddit
How can you reach such a strong verdict, that you're not impressed with it and it doesn't have an impact, when you literally have never tried it?
A key part of this profession is curiosity and rationality
andersonbnog@reddit
I can only see this refusal making sense when we also consider the huge impact of AI on the environment
abandonplanetearth@reddit
When good big things like React come along, the results speak for themselves before I even get my hands on it.
"AI" is here and my devs are literally getting worse in front of my eyes. Nothing about this is enticing.
Wonderful-Habit-139@reddit
I've tried it plenty of times and you're right. Don't let these people gaslight you. What you see happening to your devs sounds about right.
djnattyp@reddit
This "tool" is built to generate text off statistical mad libs that have no concept of truth or correctness beyond "in x% of cases this word usually comes next in the sequence." Code is required to be syntactically, semantically, and logically correct... so it doesn't apply.
ImSoCul@reddit
"this tool I've never tried sucks and I have no intention to try it out"
drahgon@reddit
If you are using it as anything other than a way better Google, you are using it completely wrong IMO. Just because it speaks English and makes well-structured sentences doesn't mean it isn't, at its heart, a search engine with no real understanding. It just connects concepts much better than Google.
Here's an example: tell it to come up with a new industry-disrupting coding pattern that has never been thought of before and give examples. It must not be based on another existing pattern and must solve a specific problem better than any pattern that currently exists.
I asked it this exact question. It came up with the Contextual State Pattern, which I then googled, and it had ripped it off of this article
https://erkanyasun.medium.com/the-contextual-state-pattern-adapting-behavior-based-on-rich-context-8f232b0cf5a4
skauldron@reddit
I've never seen it as a tool, so I refuse to use it.
To me, generative AI is just hype and marketing, and the feeling that it makes devs faster is just bias. Or – to paraphrase Primeagen – it makes 0.1x devs feel like they're 10x, when in fact they are actually 1x devs now.
But yeah, mine's not a popular opinion also.
Wonderful-Habit-139@reddit
I share the same opinion, with the difference that I've actually used it, and suffered through its hallucinations enough to make me hate it. And that Primeagen quote sounds like one I've seen in his recent videos x)
Push-Time@reddit
Yes I guess you are 😶
codemuncher@reddit
I am a nonverbal, spatial-visual thinker.
If I am going to use an LLM, I have to turn my solutions or ideas into English, then let the LLM turn that into code. I then have to read back the code to debug why it’s wrong, etc.
It’s often just simpler for me to just code what I’m thinking of.
Some small tasks work okay though.
raymondQADev@reddit
I basically use it when I get to a blocking point and I’ve run out of leads
cescquintero@reddit
I only use ChatGPT (not copilot nor supermaven or any other of those glorified autocompletions) to ask specific things that I'm unable to find searching the web. Sometimes I just can't come up with the correct words or phrases to drill the query so I get useless results. In that moment, I ask gpt.
coertan@reddit
Bro it is GARBAGE. Everyone I see using it is forgetting basic necessary skills, like the ability to write anything concisely in a professional manner, etc. I have yet to see a use in my role that isn't just outsourcing your basic ability to think.
penguinmandude@reddit
That’s kind of the point though - outsource tedious cognition tasks to free up your brain to do other, more complex things.
Like I don’t want to spend brain power coming up with some regex. I’d rather ask an LLM that gets it totally or at least close, get it working and move on to more important things
coertan@reddit
Yea, but I'm not talking about generating some complicated regex. I'm talking about the ability to concisely write up and communicate technical designs. When everyone's approach is just "feed the Zoom transcript into ChatGPT and tell it to make a concise summary", (a) you get actual garbage, and (b) people lose the brain paths necessary to do that type of work, which means they then can't do "the more complex things" because they have literally lost the ability to talk about those things in a meaningful way with others.
ConstantinSpecter@reddit
This line of thinking, while flawed, is as old as time.
Before we invented writing, humanity relied on oral traditions to pass down knowledge. Some people in ancient Greece (even Socrates) famously argued that writing would 'make people lazy' and 'weaken their ability to think deeply'. When calculators came around, people feared that basic arithmetic skills would vanish and we'd lose the ability to do even simple math in our heads. When computers started automating tasks, there was worry that humans would forget how to work without them.
It's not that your claim is wrong - quite the contrary. It's just that it won't matter as technology progresses.
Captator@reddit
And now we’re using text to speech models to re-ephemeralise knowledge 😂
coertan@reddit
I mean, being able to formulate a thought and then concisely express it DOES matter, no matter how far technology progresses - and using a calculator for simple math doesn't require the energy equivalent of a small car /shrug
cserepj@reddit
You had a problem.
You solved it with regex.
Now you have 2 problems.
You bring in LLM.
Solve your regex with LLM.
Now you have 3 problems.
criloz@reddit
I enjoy coding, and autopilot is just annoying; normal autocompletion is enough for me. I pay for ChatGPT and use it to ask questions that would otherwise take me more time to figure out with just Google, Reddit, Stack Overflow, or GitHub issues. I also use it to learn new things that were inaccessible in the past, because you'd need to dedicate a lot of time to reading, and most of what you'd read is full of repetitive, annoying filler. ChatGPT can trim that out, so I can get a quick idea and ask dumb questions.
razpeitia@reddit
I like to think for myself :)
swap357@reddit
No harm in getting a second opinion. Most of these models are trained on all of public GitHub, so they converge on an average, 50th-70th percentile solution. The worst they'll do is give you more confidence in your own code and design; you'll pick up a few tricks and see what kind of garbage code 50th-70th percentile programmers put out.
_nobody_else_@reddit
Because you can't afford to find yourself in a professional environment where your only answers are "I don't know" or "I have to ask the AI".
People like that will soon price themselves out of the work pool. There are no shortcuts in IT development.
Main_Can_7055@reddit
6 YOE. I use GitHub Copilot, and it's like the IDE's autocomplete feature. Once you know what to write for a function or statement, it's really nice to let Copilot do the typing for you.
I'm also a backend guy, and I use GPT when I can't figure out how to write a frontend component. It gives me the general structure of what I was aiming for, and I can tune it myself.
ViveIn@reddit
I use the shit out of it and love it. Feed it API docs and profit. You have to double-check all the output and often make edits, but it gets you pretty damn far. All the better if you decompose the problem and ask for small chunks of output.
Far_Dependent4327@reddit
I used it today to write an else-if clause in a language I don't know. That was kinda cool. Other than that, I pretend it doesn't exist and turn my brain off anytime anyone brings it up.
rad_pepper@reddit
Based on my experience, I treat all AI generated code as legacy tech debt with insidious errors until proven otherwise. It’s worse than having someone write the code and leave the company.
ProgrammerPlus@reddit
Do you also stop using Google? What happens if your coworkers get more productive than you due to AI usage and it impacts your performance review?
ifiwasyourboifriend@reddit
I think it's good to use in the same way you would use Google, but I think it stunts your problem-solving abilities tremendously if you use it to copy and paste code, for example. Good on you for not being reliant on AI; you're going to make yourself more competitive in the long run. I don't believe AI will ever replace GREAT software engineers, but it will absolutely weed out terrible programmers, or "nogrammers" (as I like to call people with little to no skill, only in pursuit of monetary gain).
aknosis@reddit
We're human so the more we rely on it the less we'll remember about the dumb idiosyncrasies of programming.
I autocomplete all the time out of habit but honestly delete a lot of the code after the fact.
My favorite use is actually the in-IDE experience of asking raw questions and getting some form of a response to help trigger my brain and keep me moving. This is typically better than context-switching to the browser, converting my thought/question into Google-fu, and then opening multiple tabs for SO, GitHub, documentation, etc.
ElectSamsepi0l@reddit
I was an early casualty, having to maintain a front-end codebase that got our lead fired.
No, you are not alone. Without the right understanding and guardrails, it's as lethal as poison to a codebase.
idreamgeek@reddit
I use it instead of Google now for coding issues. My main problem is when it starts hallucinating potential solutions and derails from the original question, but if you know how to query it and be succinct, it's a very useful tool.
timwaaagh@reddit
Well, I'm not allowed to use it for work. I would if I could, though. AI knows so much more than me; all I can do is correct it sometimes and refactor its output to make it somewhat maintainable.
adh1003@reddit
You probably are a minority, but that's depressing. It's because humans lean towards religious fervour in all things. It's a bit like arguing over which text editor or IDE is best.
iamasuitama@reddit
I'm with ya on this. Even if it would make me much faster (it won't), does that net me better pay? (It probably won't either.) It does seem to take all the fun out of programming, though. Isn't it about solving the puzzles? Overcoming? I mean, wtf is development if not being happy because, after being stuck on an error for 2 hours, you got a different error?
kaieon1@reddit
It's useful for bouncing ideas off of, instead of using a rubber duck. I always add "don't provide code" to the prompt, and it's going well.
rollingHack3r@reddit
Use it as a rubber duck, don’t take anything it says too seriously.
Urtehnoes@reddit
I use it for one thing only: paste in a 400-line JSON and say "make a class that parses this".
Then I throw away the 70% of it that is entirely unnecessary garbage code, and voila.
I really use it for nothing else lol.
It's a WSDL file at this point lol.
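For flavor, a minimal sketch of what that workflow tends to leave you with after the trimming step (the payload shape and field names here are invented): the model emits a class per nested object, and you keep only the fields you actually read.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical "keep 30%" result: one small class, only the fields used.
@dataclass
class Order:
    order_id: str
    total_cents: int
    status: str

    @classmethod
    def from_json(cls, payload: dict[str, Any]) -> "Order":
        return cls(
            order_id=payload["orderId"],
            total_cents=payload["total"]["cents"],
            status=payload["status"],
        )

order = Order.from_json({"orderId": "A1", "total": {"cents": 995}, "status": "paid"})
print(order)  # Order(order_id='A1', total_cents=995, status='paid')
```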
Zulban@reddit
It's a tool. If you find it completely useless in software development, you're using it wrong.
Plenty of people also have strong feelings that they don't want to write computer code at all, and so that's not their job.
Schmittfried@reddit
Given you can’t really name reasons, I think you‘re just being contrarian, which is quite common in our industry.
Impossible-Bake3866@reddit
You have to pretend to be on the bandwagon or you will get pushed out.
WellDevined@reddit
I suspect it's mostly beginner-to-mid-level devs who get speedups from AI. I would consider myself quite advanced, and I'm specialised mostly in a single language (TypeScript). I know most of the standard library and already have experience with the most commonly used third-party libs. When I want to build a feature or a new project, I mostly know what to do and can copy snippets from my own past code.
I briefly tried Copilot when it was all the hype. It made me even slower. Compared to the default autocompletion, it had a noticeable lag (100+ms), which was quite annoying, and it not being 100% reliable forced me to keep context-switching away from my thoughts of what I wanted to code to check where Copilot had hallucinated something weird. This prevented me from getting into a flow of translating my own thoughts into code.
Once you reach this level of proficiency, I don't see how any AI can really benefit me.
notkraftman@reddit
I've been coding for a long time, and I don't know most of the standard library; sometimes I even forget the syntax for a for loop, because to me it's not particularly important what language I'm using or what the implementation details are, just that I'm solving the problem in a way that is easy to understand and test. I definitely remember the feeling of mastery the first time I knew a language to that level, so I know where you're coming from, but after working with more frameworks and languages it becomes a lot less important, and AI just made that even more true.
WellDevined@reddit
I am not against autocompletion at all. Regular, non-AI autocompletion is super helpful for exactly these syntax cases (which I often don't remember either). It's also very predictable: once you've learned how it behaves, you can trust it, since it works deterministically to help with the syntax.
z500@reddit
I'm not even sure what I'd use it for. People say it's good for boilerplate, but my team maintains a component library and some web apps built with it and I almost never have to write boilerplate.
Rough_Priority_9294@reddit
I don't use it almost at all, unless I need to do boilerplate. But then, a lot of my work is R&D and not things you can easily find on the internet.
qpazza@reddit
Do you not like it for ethical reasons? Or because you haven't found a use for it in your work?
I've found it useful to bounce ideas. It's helped when I can't figure out a solution and it recommends an approach and provides code samples.
It's great at analyzing and summarizing data.
I think of it as a highly competent assistant
vansterdam_city@reddit
ChatGPT is just Google/StackOverflow summarized for you, and occasionally it can connect two or three queries into one answer.
That said, every SWE uses the hell out of Google/StackOverflow, so why would you be against using a more efficient form of it?
I'm gonna go with "you sound like an old fart" on this one, sorry dude.
ass_staring@reddit
You are being an old fart. It’s just another tool to be leveraged for productivity. It’s not that great yet at things where it needs a lot of contextual knowledge but it’s getting there.
-Hi-Reddit@reddit
As soon as you ask AI to write code nobody has written, or that is rarely written, it falls flat on its face. It's a smart predictor, but that's all it can do: predict based on existing data.
hellosakamoto@reddit
Go back about 15 years: using the internet to search for hints or documentation during coding tests in interviews could be considered cheating.
There's also a difference between knowing how to use something properly and choosing not to, versus not actually knowing something and deciding to stay away from it.
Due-Helicopter-8735@reddit
Most AI tools are currently not too helpful, though there is undoubtedly some very interesting research in progress. It’s good for brainstorming and polishing up documents.
ZakanrnEggeater@reddit
I'm not opposed to it in a world where it would be okay to put my clients' proprietary information in someone else's computer.
So far, though, that seems to be where the conversation stops.
I kinda dig the Google AI-driven results when I'm researching some specific syntax, technology, or technique, though.
jeremyckahn@reddit
I use ChatGPT for the odd question here and there as necessary, but I mostly don't feel any need for it. I'm plenty fast with Neovim and coc.nvim (for autocomplete and automated refactoring). Producing code is not what takes up significant time in my day, it's designing solutions that are appropriate for my constraints. AI isn't very good at that.
Jumpy_Fuel_1060@reddit
AI isn't perfect. However, I would equate abstaining from it on principle with being Amish. Nothing against the Amish, but you are choosing that route. The future is coming whether you like it or not.
sandstoneyoke@reddit
I don’t use it at all, but we are definitely in the minority
ocxricci@reddit
I had a negative experience with Copilot; its code suggestions annoy me and break my flow of thinking (shut up, Copilot!). So I gave up on AI for now, but I'm willing to test it again in the near future.
Jamese03@reddit
It's a tool. If I have a list of 20 IDs I need to query against, I can use AI to format them into the query I need in ~5 seconds, or I can spend 2 minutes writing it myself.
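For the curious, a sketch of the chore being delegated (table name and IDs invented); the LLM does this transform in one shot from a pasted list:

```python
# Turn a pasted, newline-separated list of IDs into an IN clause.
raw = """1001
1002
1003"""

ids = ", ".join(line.strip() for line in raw.splitlines() if line.strip())
print(f"SELECT * FROM orders WHERE id IN ({ids});")
# -> SELECT * FROM orders WHERE id IN (1001, 1002, 1003);
```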
WhatIsTheScope@reddit
I really only use it as a jumping-off point. People are usually too busy, annoyed, or unfamiliar with the topic to help with my work, so I go to ChatGPT to help me get started on my design. I tend to follow what's already in our production code for patterns and standards of how we implement things. It's really not a huge part of my work, honestly, and occasionally I'll use it for errors and debugging if I'm super stuck.
steveoc64@reddit
You are indeed in the silent minority
It’s like you went to bed one night, and woke up in a scene from “the invasion of the body snatchers”
budd222@reddit
It's amazing for things like: build me a Material UI table or whatever, here are all my columns, go. It saves a lot of time over me typing endlessly.
midKnightBrown59@reddit
I see it like using an interpreted language versus a compiled one: you get a speed boost.
That boost might make you more productive and provide business value.
repo_code@reddit
It's a good metaphor.
Interpreted dynamic languages are great for quick prototyping and scripting simple tasks. They're terrible for large systems where you want the compiler to help you statically reason about the correctness of changes, especially wide-scale ones.
The interpreted language doesn't scale well, and I'd guess that AI assistance doesn't either. It can do easy things, small things, local things. The cost/benefit is good, up to a point.
midKnightBrown59@reddit
Great elaboration.
invictus08@reddit
I do not want to use Copilot. I used it (and Tabnine) back before it was made cool by some finance-org CEO, and I found that it gets in my way more than it helps me.
ChatGPT, on the other hand, lets me explore new ideas very quickly; it's the perfect partner for pair programming. I don't need to share my company's IP with it, I just use it to explore ideas, and in that way it has boosted my productivity a lot. Plus, my ADHD brain cannot go through large documentation for projects. This helps me cut through the BS and blaze through the necessities. I love it. Of course, big disclaimer: validate everything these LLMs generate. In short, learn how to use the tools at your disposal if you need them and you're golden.
notkraftman@reddit
How does it get in your way? Like, it doesn't do anything unless you hit tab?
darkrose3333@reddit
I won't lie, the environmental aspect of it kills me. I admit AI has its uses, but none of those uses are worth consuming 2% of the world's energy so we can generate half assed code...
x2network@reddit
Don't stay stubborn on this one.. I used AI yesterday to write a JSX script for Adobe Illustrator; in 10 mins I had converted 1000 icons.. I had never touched this before.. the other day, SQL for a table of over 100 fields.. the automation was quicker than my typing.. think of it as a faster keyboard. You still need to prompt it correctly, which you will, since you understand code.. start with small stuff. It's not going away… 🤷♂️
DespoticLlama@reddit
Using copilot myself, though wish I had more control over it.
For main code files I want it to work as a better code completion and not try to do multiple lines, which it invariably screws up. Also the stop and read, usually followed by dismiss, really breaks my flow.
When writing my tests though I've found it really useful, once it works out what you are doing it is pretty good at picking the next test case, which as we know is mostly repetitive code with new assertions...
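Something like the following is the shape it tends to continue well: the same skeleton, new inputs and assertions each case (the function under test and the cases are invented for illustration):

```python
import pytest

def parse_int(raw: str) -> int:
    # Stand-in for the code under test; invented for this sketch.
    return int(raw.strip())

# Repetitive test code with new assertions: the pattern completion
# models pick up on after a case or two.
@pytest.mark.parametrize("raw, expected", [
    ("42", 42),
    ("  7 ", 7),
    ("-3", -3),
])
def test_parse_int(raw, expected):
    assert parse_int(raw) == expected
```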
I am also working on an old code base where testing wasn't a thing at the start (think the typical start up monolith from the late 00's) so creating good tests before refactoring is a must nowadays...
Antilock049@reddit
I've not found it useful so far. It's more of a pain in the ass to get it to do what I want than to just do what I want.
Maybe that improves with time, right now I'd rather be hands on.
blizzacane85@reddit
Al can be used to sell women's shoes, or Al can score 4 touchdowns in a single game for Polk High
marmot1101@reddit
I was reluctant until one of our leaders made an argument for using AI that I hadn't really thought of: "AI isn't going to take your job. It's just a tool, but a powerful one. If you don't learn how to effectively use it you'll be left behind." So I gave it a go, and now it's part of my personal and professional workflows. I don't just gank syntax unless I'm using it for something super simple ("what's the Ruby equivalent of map" or similar), but I ask questions like: "is there a way to keep the publisher publishing WAL data, but disassociate the table from the publication slot? Would removing the table from the publication immediately cease WAL publication, or would what's already waiting to be processed continue to be sent?"
It spit out a reasonably accurate answer I could have gotten other ways. I could have googled, refined the query, and repeated until I got useful information. I could have asked our TAM and waited around for a maybe-accurate answer. But I asked ChatGPT and got a reasonably correct answer in 30s. That clawed back quite a bit of productivity, and that's just one of the 4-5 questions I ask it daily, not counting follow-ups. And I'm not opening a browser, so there's no temptation to type too-long comments on Reddit like I'm doing now.
freshhorsemanure@reddit
I disabled Copilot; it just seems like a distraction for the majority of tasks I do.
ThyssenKrup@reddit
No you're not. I absolutely refuse to use it in any form
ImSoCul@reddit
I'll be blunt: you're being a Luddite/"old fart" (though that's dismissive of old farts; most of the ones I work with are keen learners eagerly exploring the technology).
It's one thing to test a tool thoroughly and decide it isn't a good fit or doesn't meet some bar; it's a whole separate thing to refuse to try it based on "feelings" and hypotheticals.
No, people don't need to incorporate AI into every facet of software development. Yes, there are immediate benefits to using it today, and substantial improvements to processes, especially menial ones most wouldn't want to do anyway.
If you came back and said "generating 8 lines of code consumes the equivalent of 1 car on the road for a month" (totally made-up figure), I'd be willing to concede somewhat, but right now you're basically saying "the vibes are off, man".
wiriux@reddit
Do not resist change. AI is here to stay :)
CarolynTheRed@reddit
I just have never found a compelling use case where I trust it.
jwezorek@reddit
I use it to generate boilerplate, and I use it in lieu of reading library documentation.
For any topic that is well covered on the internet, these systems are great at essentially generating personalized sample code for you. If there is some computational geometry function for which StackOverflow threads contain a dozen implementations, but none are quite what you want (wrong language, wrong function signatures, etc.), AI will be great at putting it all together and giving you just what you want. This saves time whether you use the output as-is or tinker with it by hand.
However, for any problem that is not well covered on the internet, these systems are a waste of time. They can't reason. Hallucinations surface to the extent that what you are asking them to generate is not well covered on the internet. For example, ask ChatGPT 3.5 to generate a function that returns the intersection of two ranges of angles, and it will not even realize that the intersection may be two distinct ranges when one input range wraps all the way around and intersects the other range at the front and the back. It will just hallucinate some code, because this question isn't well covered on the internet.
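To make the wrap-around pitfall concrete, here's a minimal sketch of what a correct version has to handle, assuming degrees and half-open ranges (the conventions and names are this sketch's, not anything ChatGPT produced):

```python
def segments(start, end):
    # Split an angular range [start, end) in degrees into non-wrapping
    # segments on [0, 360); a range crossing 0 yields two pieces.
    start, end = start % 360, end % 360
    if start < end:
        return [(start, end)]
    return [(start, 360), (0, end)]

def intersect_angle_ranges(a, b):
    # The result can be TWO disjoint arcs when an input wraps around 0,
    # the exact case described above that the model hallucinates past.
    out = []
    for s1, e1 in segments(*a):
        for s2, e2 in segments(*b):
            lo, hi = max(s1, s2), min(e1, e2)
            if lo < hi:
                out.append((lo, hi))
    return out

print(intersect_angle_ranges((350, 20), (10, 355)))
# -> [(350, 355), (10, 20)]: two disjoint arcs, not one.
```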
In my opinion, it's better to think of these systems as adapting code from the internet to your particular use case, rather than as artificial intelligence solving a programming challenge by reasoning about the problem.
Uncreativite@reddit
I haven’t found it to be useful for anything besides quick research on how to do things within a given framework, and generating the beginning of a unit test that I can build off of. It has been occasionally helpful for bouncing ideas off of, to determine how my code can be improved. But the code it generates has been incorrect way too much for me to ever use it as is.
I’ve also found it useful for writing SQL queries where I’m trying to find something, for dev or test purposes. But certainly not for anything going into production.
daishi55@reddit
Yeah the people who can’t or won’t adjust are going to get left behind, from the productivity gap alone to say nothing of the ramp-up/flexibility gap.
intertubeluber@reddit
Have you calculated the impact of using AI for development vs. not? How does that compare to other sources of energy consumption in your life? If you aren't sure of the answers to those questions, it makes me think this is more an emotionally driven decision that you haven't fully processed.
goblinsteve@reddit
You are being an old fart. Now, I will say, I 100% do not use AI-generated code, but when everyone is busy it can be a great tool to 'talk' ideas out with.
It can also do some great auto-formatting for me, so I don't have to write out a million lines when I'm using a legacy Progress database that replicates into a SQL database where, due to naming conventions, the columns don't match up.
So when I have a table with 209 columns, the Progress DB has cust-num while the SQL database has cust_num, and the hokey ODBC driver I have to use can't handle dynamically named fields, I get to copy a list of fields into ChatGPT to format for me, so I can avoid doing this by hand 209 times:
repldb.customer.cust_num = srcdb.customer.cust-num
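For what it's worth, that particular transform is also a short script, if you'd rather not paste 209 columns into a chat window; a sketch assuming dash-to-underscore is the only rename (column list abbreviated):

```python
# Generate the 209 mapping lines from the shared column list.
columns = ["cust-num", "cust-name", "order-date"]  # ...209 of these

for col in columns:
    sql_col = col.replace("-", "_")  # Progress cust-num -> SQL cust_num
    print(f"repldb.customer.{sql_col} = srcdb.customer.{col}")
```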
SearingSerum60@reddit
You could do this particular example with multiple cursors in a text editor, but I get your point. I personally have found it useful recently for stuff like, "convert this simple HTML into Markdown"
sawser@reddit
I use it only for what I used to use Google + code ranch for.
Recently, I had to look up how to connect a Nexus server to Active Directory, and I didn't know how to format the user.
And instead of digging through all the forums and posts and docs, it compiled it into one place and I was able to send my questions to the security team.
I think of it as Wikipedia++
metaphorm@reddit
I'm kinda in between. I find it very useful for generating short code snippets. I find it poisonous for trying to write code that's larger than a snippet.
I'm also skeptical about AI-powered "code analysis" tools. My company has started using a few of them and they don't have a great track record of correctly identifying e.g. real security issues vs. false positives. On the other hand, not using tools like this at all has its own set of downsides, and the older generation (deterministic rather than LLM powered) analysis tools have their own set of flaws and limitations.
pancakeQueue@reddit
I haven't started using it at all, frankly because a lot of the work is steeped in internal semantics and business logic that AI wouldn't help with. Well, until they upload all our internal docs into a vector database.
I know at some point I will probably incorporate it into my workflow, but for now there's no push for ever-greater productivity, so why rush?
I don't use it for personal projects just because I'm using those to learn.
Dreadmaker@reddit
In my experience, just like anything else, AI should be thought of as a tool for certain jobs. You aren't going to use a drill to hammer in a nail, right? But if the right screw comes along, it just makes sense to use the drill.
My top two uses for AI in coding have been generating regular expressions for me, and also generating unit tests for me. Both tend to be correct or 95% of the way there on the first try, and they can be really big time savers.
I’m not a big fan of using it to write emails or anything like that where communication can be a differentiator - but in places where you’re just writing ‘dumb’ code that’s very formulaic and has established patterns through the rest of your code base, yeah, it can be very very helpful.
I really believe the most useful way to think of it is as a tool, because that’s what it is. It is not a tool for everything. But it is absolutely a time-saving tool in some instances, and it’s only a good thing to get comfortable with it in those cases, IMO.
kcadstech@reddit
The main thing I use it for is writing my unit tests. Autocomplete is nice for some things, but most of the time I make sure I am using my brain to come up with the actual code because a) I want to stay sharp and b) it may not even be accurate
readynext1@reddit
If you are, we are minorities together
Material_Policy6327@reddit
I'm an MLE; I've been in AI/ML nearly 10 years. I use it now and again for some stuff on the coding side of the job, but not as much as you might think. Lots of the LLMs are never up to date, or still hallucinate too easily on code suggestions. Good ol' Google and Stack Overflow are still my main thing.
Careful_Ad_9077@reddit
I did my AI research long before LLMs took off, so my research back then wasn't poisoned by anti-AI talking points.
I see you quoting lots of anti-AI points in your post, so I don't even feel like I can engage without someone sending me death threats.
So, dunno, you do you, guy.
OhjelmoijaHiisi@reddit
Baffling. Whats the thought process behind sharing such an unhelpful, uncooperative, and unconstructive comment?
E3K@reddit
If you know how to use it, it becomes irreplaceable. I can't overstate how much more productive I am compared with a couple of years ago.
siqniz@reddit
I'm still not convinced either. People use it, but I think it comes down to the dev. If you can't actually dev, it's bad for you, since ChatGPT isn't 100% right and you won't be able to understand the answer. Having said that, I had an issue with a React popup, and ChatGPT did give me a code snippet I was able to fix it with.
Askee123@reddit
It RARELY gives me useful code, but I like asking it where things are/rubber ducking issues
Vitrio85@reddit
I use it a little to produce boilerplate, or when I want to try different things. Or when I'm doing code review and see something that I know from experience is wrong but don't want to think through: I use the AI, basically telling it in natural language how to fix/improve the code.
ClydePossumfoot@reddit
You are now competing against those that find ways to work AI into their workflows.
If there are significant efficiency gains for those that do vs those that don’t, we’re definitely gonna find out soon.
JustUrAvgLetDown@reddit
It’s not a competition between you and ai. It’s a competition between which devs can be most efficient while using ai
Inside_Dimension5308@reddit
You are not in the minority right now, but soon you will be.
Sooner or later it will be forced down your throat.
niiniel@reddit
I very rarely use it and when I do it feels either useless or like it would have been better in the long run for me to do it myself. Some of the AI code looks fine at first glance and might even work without issues but I'm pretty sure will be harder for me to understand in a couple of months than something that I would write myself. And when I see an AI generated email I immediately don't feel like replying, honestly just send me the bullet points or what you gave the AI rather than this word salad.
StatusAnxiety6@reddit
The direct question... are you in the minority? I'd guess yes, from what I see.
Deep-Chain-7272@reddit
I think you should use it if it makes you a better programmer.
Personally, I use it when I am stumped, but I let myself struggle a bit. It is a better Google or Stack Overflow. That's it.
I really dislike the plugins that try to auto-complete/AI generate your code, though.
kiriloman@reddit
I hope you are in the minority, because not following the advancements of the industry and not using them in your favor doesn't sound right. You don't have to use it everywhere, but there are a lot of places where you can. That said, people who use it everywhere and for everything will see their skills decline heavily.