Copilot Induced Crash: how AI-assisted code introduces new types of bugs
Posted by klaasvanschelven@reddit | programming | View on Reddit | 148 comments
ZorbaTHut@reddit
. . . Two hours? Really?
That's the hardest bug you had to deal with in all of 2024?
Kwinten@reddit
That's fucking hilarious. I praise the heavens if the average bug doesn't take me upwards of a 5 day work week to figure out.
pyabo@reddit
Nice. My most difficult bug ever was 5 days. Multithreading C++ issue. I went over the code line by line from the entry point until I found the unprotected alias. It was like 30,000 lines of code. Pretty sure I could do it with a linter in 30 seconds now.
dcoleyoung@reddit
There was a fun thread on SO about the worst bugs of all time. One was tied to the CPU clock, so it was nearly irreproducible. It was a months-long investigation.
pyabo@reddit
Oh man, that reminds me of a very smart dude I met once who worked for Microsoft. He was the one reviewing the core dumps that got sent in with crash reports. He said he could tell, just by looking at the raw instructions being executed when the fault happened, whether an application-layer bug or an OS-level bug caused the crash. That was his job... finding the ones that were Microsoft's fault and then sending them to the right place. Without having access to or looking at the raw source code.
WriteCodeBroh@reddit
“Well, we traced the error to this failed request to this API we don’t own. Let me go check their logs. wtf does this error mean? Better reach out to their team. 2 days later when they answer Oh, that error is actually due to a failure from some garbage data in this esoteric queue. Let’s go look at those logs…”
drcforbin@reddit
Not all of us work on hard problems and difficult code. Some people's code can be pretty much replaced by AI, and the hardest part of their year is when they have to debug it.
nanotree@reddit
Yeah, I look at what AI can do and think to myself "how could any developer believe this could simply replace them?" Then I hear stories like this one and think "it's no wonder there are so many developers whining about having imposter syndrome these days." I mean, I hate to be like that, but if an AI can truly replace you, then maybe there is something to the feelings of imposter syndrome after all?
But really, I can only think of a few times in the past 6 years I've been doing this that an AI could have completed a task I've had without a lot of hand holding.
drcforbin@reddit
That's what I learned from this article, that there are developers that can, in fact, mostly be replaced by LLMs.
ZorbaTHut@reddit
While you're right, that's also a situation where you probably save a lot more than two hours by using AI.
drcforbin@reddit
That's clearly the case here
eracodes@reddit
I know right?! If someone told me they could guarantee that my hardest-to-fix bug of 2025 would only take 2 hours to solve, I would suspect them of being some kind of coding siren luring me onto the rocks.
thatpaulbloke@reddit
The fix for the bug only takes two hours. The fixes for each of the five new bugs that the first fix introduced will take two hours each. As for the four new bugs each that all of those fixes introduced, they go into the backlog and I open the gin.
account312@reddit
That's job security
eracodes@reddit
Just a handful of iterations more and we're out of time before the heat death of the universe.
NiteShdw@reddit
Omg... There's a bug you know is in your uncommitted code, you know you used AI to write some of it, and you don't think to check the AI written code?
Vimda@reddit
> Complains about AI
> Uses shitty AI hero images
mfw
loptr@reddit
Some people have more complex thoughts than "HURRDURR AI BAD".
OP is clearly not against AI as a concept, and even specifically points out its usefulness.
Not every post about an issue arising from AI is about shitting on AI or claiming it's useless, even though some people seem to live in a filter bubble where that's the case.
And there is virtually no connection between using AI generated images and using AI generated code for anyone who has matured beyond the "AAAAH AI" knee jerk reaction stage. OP could literally advocate for punishing developers using AI generated code with the death penalty while celebrating designers who use AI generated art without any hypocrisy. The only thing they have in common is originating from an LLM, but there are very few relevant intrinsic properties shared between them.
axonxorz@reddit
Personally, I don't like that it's made extremely precise technical topics completely undiscoverable in web searches.
I just love having to suffix my searches with
before:2021
. I love knowledge past that point being unfindable. So yeah, I push back on it in general when it's used outside of relevant avenues, because it seems like it's worming its way into useful ones. I say this as someone who finds Copilot and JetBrains' version to be quite useful.
Zopieux@reddit
Of all the AI hero images I've seen on this sub, this one is by far the least terrible.
It's definitely plagiarizing some artist's style by design and not bringing any value, but the contrast isn't too bad and the end result is pretty "clean" compared to the usual slop.
zxyzyxz@reddit
Images don't cause actual production level harm since they're static while code gets executed and can perform actions. One is way worse to use AI for than the other.
Vimda@reddit
Tell that to the artists AI steals from
zxyzyxz@reddit
It's been a few years now since image generators came out and the only news story in all those years is some concept artists getting fired, one out of literally thousands of companies. If artists were actually being harmed, you'd have thought artists being fired would be more widespread. It's the same fearmongering as devs thinking they'll be replaced with AI, turns out real world work is much more complex than whatever AI can shit out.
carrottread@reddit
A lot of artists are not full-time employees but just sell their art through various web marketplaces like shutterstock. And they experienced huge reduction of income since people who previously bought images now get them from AI generators.
zxyzyxz@reddit
Do you have a source on this?
Dustin-@reddit
I guess we'll find out in time as we get more employment data from artists, but anecdotally I'm seeing a lot of people in digital graphics work and copywriting talking about losing jobs and saying they were laid off because of AI.
zxyzyxz@reddit
That sounds to me just like programmers getting laid off due to AI, it's more like an excuse for companies to lay off than to admit they overhired during the pandemic.
Vimda@reddit
Stability is literally getting sued for it https://www.loeb.com/en/insights/publications/2023/11/andersen-v-stability-ai-ltd
zxyzyxz@reddit
Sure but most of the claims are being dismissed in most other lawsuits regarding AI and copyright. See OpenAI's recent lawsuits.
EveryQuantityEver@reddit
They still cause extreme amounts of power usage.
sweetno@reddit
How did you detect that it's AI? I'm curious, my AI-detection-fu is not on par.
avidvaulter@reddit
It's the literal first sentence below the image in the article.
myhf@reddit
The main clue in this case is that the subject matter is based on keywords from the article, but the picture itself is not being used to depict or communicate anything.
In general:
- Continuous lines made of unrelated objects, without an artistic reason.
- Embellishments that match the art style perfectly but don't have any physical or narrative reason to exist.
- Shading based on "gaussian splatting".
- Portraits where the eyes are perfectly level and perfectly centered.
sweetno@reddit
I see. Somehow I still love this. It's a bit too extreme for the article's contents, indeed, but I could see a real artist using the same idea.
stackPeek@reddit
Have been seeing too many blog posts like this in the tech scene. So tired of it.
Hefty-Distance837@reddit
Very ironic
klaasvanschelven@reddit (OP)
maybe there's an analogy in here somewhere about how the kinds of bugs that LLM-assisted coding introduce are similar to the artifacts of generative AI in images.
You wouldn't expect a human to draw a six-fingered person with inverted thumbs, any more than you would expect them to botch an import statement like the one in the article.
Ok-Yogurt2360@reddit
This is basically the problem I'm most wary about when I hear people talking about LLMs as a sort of abstraction. A lot of people tend to trust tests as a tool to tell them when things go wrong. But testing is often focused on the high-risk parts of your code base. You don't always test for unlikely and, at the same time, low-impact problems.
The problem with AI is that it is hard to predict where problems will pop up. You would need a completely different way of testing with AI written code compared to human written code.
wRAR_@reddit
Typical blogspam
klaasvanschelven@reddit (OP)
One person's "shitty AI hero image" is another's "evocative illustration of the accompanying text".
Also, I don't think I said any and all use of AI is a net negative, though I'm certainly aware that AI-created (or assisted) illustrations are triggering to some.
Dwedit@reddit
Use WinMerge on the "Before" and "After" versions of your document. See what changed and actually read it.
Big-Boy-Turnip@reddit
It seems like the equivalent of hopping into a car with full self-driving and forgetting to check whether the seatbelt was on. Can you really blame the AI for that?
Misleading imports sound like an easily overlooked situation, but just maybe the application of AI here was not the greatest. Why not just use boilerplate?
usrlibshare@reddit
If it is marketed as an AI assistant to developers, no.
If it's sold as an autonomous AI software developer, then who should be blamed when things go wahoonie-shaped? The AI? The people who sold it? The people who bought into the promise?
I know who will definitely NOT take the blame: the devs who were told, by people whose primary job skill is wearing a tie, that AI will replace them 😎
Plank_With_A_Nail_In@reddit
No matter what you buy, you still have to use your brain.
"If I buy thing "A" I should be able to just turn my brain off" is the dumbest reasoning ever.
Media_Browser@reddit
Reminds me of the time a guy reversed into me and blamed the cameras for not picking me up.
EveryQuantityEver@reddit
But that's how they're selling it.
Here-Is-TheEnd@reddit
And yet we’re hearing from the Zuckmeister that AI will replace mid level engineers this year.
People in trusted positions are peddling this and other people are buying it despite being warned.
WhyIsSocialMedia@reddit
Whether it's this year or in a few years, it does seem like it will likely happen.
I don't see how anyone can not think that's a serious likelihood at the current rate. Had you told me this would have been possible in 2015, I'd think you're crazy with no idea what you're on about. Pretty much all the modern applications of machine learning were thought to be lifetimes away not very long ago.
It's pretty clear that the training phase is actually encoding vastly more information than we thought. So we might actually already have half of it, and it's just inference and alignment issues that need to be overcome (this bug seems like an alignment issue - I bet the model came up with this a bunch of times and since humans find it difficult to see, it was unknowingly reinforced).
Big-Boy-Turnip@reddit
You're at the wheel of a full self-driving car, too. If it crashes into a tree, it's 100% on you. AI isn't blame free, but it doesn't undo your burden, either...
usrlibshare@reddit
If I have to sit at the wheel, and in fact if it has a wheel at all, it isn't a full self driving car.
Big-Boy-Turnip@reddit
That's the entire point I'm making. There's a wheel. I never said it's 100% autonomous. Has Microsoft said Copilot is?
usrlibshare@reddit
Have I mentioned copilot anywhere? No, I have not.
klaasvanschelven@reddit (OP)
Luckily the consequences for me were much less dire than that... [but the victim-blaming is quite similar to the more tragic cases](https://www.boredpanda.com/family-blames-elon-musk-after-son-loses-life-car-accident-tesla-driving-in-autopilot/).
The "application of AI" here is that Copilot is simply turned on (which I still think is a net positive), providing suggestions that easily go unchecked all throughout the code whenever you stop typing for half a second.
If you propose that any suggestion by Copilot should be checked letter-for-letter, the value of LLM assistance would drop below 0.
Shad_Amethyst@reddit
That's my issue with the current iteration of coding assistants. I write actual proofs in parts of my code, so I would definitely need to proofread the AI, making its contribution often negative.
I would love to have it roast me in code reviews, just as a way to get a first batch of feedback before proper human review. But so far I have never seen this being implemented.
fishling@reddit
It's a feature in Copilot integration with VS Code that I'm investigating right now, designed for exactly that use case: get a "code review" from the AI.
Calazon2@reddit
Have you tried using it that way for code reviews? It doesn't need to be an official implemented feature for you to use it that way. I've dabbled in using it like that, and want to do more of it.
f10101@reddit
On the latter, I often just Ctrl-C/Ctrl-V the code into ChatGPT and ask it if there are any issues (or ask what the code does). I have set up my background prompt in ChatGPT to say that I'm an experienced dev and to be concise, which cuts down the noise in the response.
It's not perfect, obviously, but it finds a lot of the stupid mistakes, and areas you've over-engineered, etc.
I haven't thrown O1 at this task yet, but I suspect it might be needed if you're proof-reading dense algorithms as you imply.
mallardtheduck@reddit
LLM generated code should be no less well-reviewed than code written by another human. Particularly a junior developer with limited experience with your codebase.
If you feel that performing detailed code reviews is as much or more work than writing the code yourself, it's quite reasonable to conclude that the LLM doesn't provide value to you. For human developers, reviewing their code helps teach them, so there's value even when it is onerous, but LLMs don't learn that way.
klaasvanschelven@reddit (OP)
What would you say is the proportion of your reviewing time spent on import statements? I know for me it's very close to 0.
Also: I have never in my life seen a line of code like the one in the article introduced by a human. Which is why I wouldn't look for it.
fishling@reddit
Maybe this is an experience thing: now that you have this experience, you will start reviewing the import statements.
I always review changes to the import statements because it shows me what dependencies are being added or removed and, as you've seen, if any aliases are being used.
And yes, I have caught a few problems by doing this, where a change introduces coupling that it shouldn't, and even an issue similar to what the article describes (although it predates both AI and even Git).
So congrats, now you've learned to not skip past imports and you are a better developer for it. :-)
mallardtheduck@reddit
Depends what kind of import statements we're talking about. Stuff provided by default with the language/OS/platform, or from well-regarded, popular third parties, probably doesn't need reviewing. Stuff downloaded from "some guy's github" needs to be reviewed properly.
klaasvanschelven@reddit (OP)
So... you're saying you wouldn't catch this: it's an import from the framework the whole application was built on, after all (no new requirements are introduced)
Halkcyon@reddit
The bug wasn't the import statement, you inherited from the wrong class in the first place. That would be caught in review.
klaasvanschelven@reddit (OP)
I did not? As proven by the fact that that line never changed...
The problem is that the import statement changed the meaning of the usage location in the way you refer to
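To make that concrete, the shape of the problem is roughly this. A minimal sketch, assuming a Django project and using the class names mentioned elsewhere in this thread; the article has the exact diff, and the test class name here is made up:

```python
# Sketch of the pattern; class names from the thread, test class is hypothetical.

# What a reviewer expects the import to be:
#   from django.test import TransactionTestCase

# What the suggested completion amounts to: the lighter-weight TestCase is
# aliased *as* TransactionTestCase, so everything below silently changes meaning.
from django.test import TestCase as TransactionTestCase


class PaymentFlowTests(TransactionTestCase):  # hypothetical test class
    """This line never changed, which is why review attention never landed here."""
```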
Ok-Yogurt2360@reddit
Or boilerplate code that is normally generated with a non LLM based tool. Or a prettier commit.
renatoathaydes@reddit
I find it quite scary that a professional programmer would think otherwise. Of course you should check, it's you who are committing the code, not the AI. It's your code if you accepted it, just like it was when your IDE auto-completed code for you using Intellisense.
BeefEX@reddit
Yes, that's exactly what many experienced programmers are proposing, and one of the main reasons why they don't use AI day to day.
klaasvanschelven@reddit (OP)
Many other experienced programmers do use AI; which is why the article is an attempt to investigate the pros and cons in detail given an actual example, rather than just close the discussion with a blanket "don't do that".
recycled_ideas@reddit
Except it doesn't.
Copilot code is at best junior level. You have to review everything it does because aside from being stupid it's sometimes straight up insane.
If it's not worth it if you have to review it, it's not worth it, period.
That said, there actually are things it's good at, things that are faster to review and fix than to type.
But if you're using it unreviewed, you're not just inexperienced, you're a fool.
Shad_Amethyst@reddit
More like, forgetting to check that the oil line was actually hooked to the engine.
You're criticizing someone's past decision while you have hindsight. As OP showed, the tests went first through an extension class, so it's quite reasonable to have started looking in the wrong places first.
AFresh1984@reddit
Or like, understand the code you're having the LLM write for you?
Big-Boy-Turnip@reddit
Perhaps I'm not using AI similarly; I often ask Copilot to help with refactoring code blocks one at a time. I wouldn't want it to touch my imports or headers in the first place.
lachlanhunt@reddit
That's not a hard-to-find or hard-to-fix bug. It's someone who didn't review what the AI wrote for them to understand whether it was doing what it was supposed to do. If 2 hours is the hardest-to-fix bug of the year, then you're not trying hard enough.
hpxvzhjfgb@reddit
people who blame copilot for bugs or decreases in code quality are doing so because they are not willing to take responsibility for their work.
Sability@reddit
Of course they aren't taking responsibility for their work, they're using an LLM to do it for them
hpxvzhjfgb@reddit
well, using an llm is fine, as long as you actually read all of the output and verify that it generated something that does exactly what you would have written by hand. if you use it like that, as just a time-saving device, then you won't have any issues (I used copilot like that for all of last year and I have never had any bugs or decreases in code quality because of it).
but clearly, people who complain about bugs or decreases in code quality are not doing that, otherwise it would be equivalent to saying that they themselves are writing more bugs and lower quality code, and people who are not willing to take responsibility are obviously not going to admit to that.
PiotrDz@reddit
This is worse than writing code yourself. Understanding someone else's code is harder than understanding code you wrote yourself.
batweenerpopemobile@reddit
understanding other people's code instead of trying to rewrite everything is a valuable skill to learn
Botahamec@reddit
Sure, but when I review other people's code, I usually don't need to check for something like
import true as false
hpxvzhjfgb@reddit
I agree that reading code is harder than writing it in the case where you are, say, trying to contribute to an established codebase for the first time. but if it's a codebase that you wrote yourself, and the code you are reading is at most 5-10 lines long, and it is surrounded by hundreds more lines of context that you already understand, then it isn't hard to read code.
ImNotTheMonster@reddit
Adding to this, dear God, at least I would expect someone to do a code review. So at the end of the day it should be reviewed anyway, whether written by Copilot, copy-pasted from Stack Overflow, or anything else.
PiotrDz@reddit
I trust juniors more than LLMs. Juniors at least have some sort of goal comprehension. LLMs are just statistics.
EveryQuantityEver@reddit
I mean, that's the reason they're using things like Copilot in the first place
klaasvanschelven@reddit (OP)
On the contrary: thinking about the failure modes of my tools such that I can use them more effectively is taking responsibility.
hpxvzhjfgb@reddit
true
ndusart@reddit
Damn, I just spent two hours debugging why TransactionTestCase does not work. But Klaas told me it wasn't problematic anymore if followed by importing TestCase from unittest. So I couldn't challenge this at first. I lost two hours of my life for this :( Who am I gonna trust for my code copy-paste now? 😨
syklemil@reddit
This feels like something that should be caught by a typechecker, or something like a linter warning about shadowing variables.
But I guess
from has_foo_and_bar import Foo as Bar
isn't really something a human would come up with, or if they did, they'd have a very specific reason for doing it.
JanEric1@reddit
If the two classes are compatible then a type checker won't catch that.
A linter could probably implement a rule like that (if it can inspect the imported code), but I don't think there is any that does so currently.
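A tiny two-file sketch of why the type checker stays quiet; module and class names here (framework.py, TestCase, TransactionTestCase, MyTests) are all made up for illustration. Because the aliased class is still a subclass of the expected base, every usage continues to type-check:

```python
# framework.py -- made-up stand-in for the real framework module
class TestCase:
    def setUp(self) -> None: ...


class TransactionTestCase(TestCase):
    # In the real framework this would add per-test transactions;
    # structurally it is still just a TestCase.
    def setUp(self) -> None: ...


# tests.py -- the file the completion touched
from framework import TestCase as TransactionTestCase  # the suspect alias


class MyTests(TransactionTestCase):
    # Still a perfectly valid TestCase subclass, so mypy/pyright have
    # nothing to object to; only the runtime behaviour would differ.
    ...
```

mypy and pyright simply resolve the alias to whatever class it points at, so as long as that class fits how it's used, nothing gets flagged.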
larsga@reddit
Probably this is only the tip of the iceberg as far as weird LLM bugs go, so linter developers will probably be busy coming up with checks for them all.
syklemil@reddit
… while the typechecker programs for Python should have information on which symbols a module exposes available. So that's why I mentioned them first, and then the linter. One has the information, the other is the one we expect to tell us about stuff like this.
Unfortunately the tooling for Python is kinda fractured here and we wind up with running multiple tools that don't share data.
This code is pretty niche, but it might become less niche with the proliferation of LLM tooling.
Qwertycrackers@reddit
I don't think it's possible to write a linter for "AI generated misleading code", which is really what this is. And I think the current LLM wave of AI products is very likely to write code of this type, because of the way it works by synthesizing patterns. It will tend to output common patterns in uncommon circumstances and generate tricky sections like this.
syklemil@reddit
No, we're not gonna solve that general case any more than we're gonna solve the general case of "a human wrote bad code". But this specific problem is recognizable by a pretty simple check as long as it has access to the names/symbols in the module.
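Something like this rough sketch, for instance. It is not an existing ruff or flake8 rule, and the file/function names are made up; it also cheats by importing the module to get its names, where a real tool would use its own symbol table:

```python
# lint_alias_shadow.py -- rough sketch of the check described above.
# Flags "from mod import X as Y" when mod itself also defines a name Y.
import ast
import importlib


def find_shadowing_aliases(source: str) -> list[str]:
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            try:
                module = importlib.import_module(node.module)
            except ImportError:
                continue  # can't inspect a module that isn't installed
            for alias in node.names:
                if alias.asname and hasattr(module, alias.asname):
                    warnings.append(
                        f"line {node.lineno}: {node.module}.{alias.name} imported "
                        f"as {alias.asname!r}, but {node.module} also defines "
                        f"{alias.asname!r}"
                    )
    return warnings


if __name__ == "__main__":
    # os.path really does define basename, so this aliased import gets flagged.
    print(find_shadowing_aliases("from os.path import join as basename\n"))
```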
Qwertycrackers@reddit
Agreed. I'm working off an intuition that weird patterns like this are unlikely to satisfy a check that says "flag any rename imports which shadow a variable from that module (or perhaps just any shadowing through rename imports)".
That approach leads to writing and maintaining endless "weird AI code" lints to cover strange stuff Copilot does, all the while Copilot thinks of new and creative ways to write bad code. If you're going to deal with stuff like this the whole thing seems like a loss, and it's the reason I rejected Copilot when I tried it.
lookmeat@reddit
So we make our linters better, and the AI will learn.
Remember, the AI is meant to give out code that passes its instructions. In other words, the AI is optimizing for code that will make it to production, but it doesn't care if it'll be rolled back. Yeah, we can change the way we score the data, but it would be a much bigger undertaking (there just isn't as much source material, and the feedback cycle takes that much longer). And even then: what about code that will be a bug in 3 months? 1 year? 5 years? At which point do we need to make the AI think outside of the code and propose damage control, policy, processes, etc? That is far far faaar away.
A better linter will just result in an AI that works around it. And by the nature of the programs the AI will always be smarter and win. We'll always have these issues.
sparr@reddit
Whether code gets to production or gets rolled back as a bug in a day, week, month, or year... You're describing varying levels of human programmer experience / seniority / skill.
lookmeat@reddit
Yup, basically I am arguing that we won't get anywhere interesting until the machine is able to replace the junior eng. And junior engs are a loss leader: they cost a lot for what they get you, but are worth it because they will eventually become mid-level engs (or they'll bring in new mid-levels as recommendations). And the other thing: we are very, very, very far away from junior level. We only see what AI does well, never the things it's mediocre at.
hjd_thd@reddit
If by "interesting" youmean "terrifying", sure
lookmeat@reddit
Go and look at old predictions of the Internet, it's amazing how even in the 19th century they could get things like "the feed" right, but wouldn't realize the effects of propaganda, or social media.
When we get there it'll be nothing like we imagine. The things we fear will not be as bad or terrible as we imagined, and it'll turn out that the really scary things are things we struggle to imagine nowadays.
When we get there it will not be a straight path, and those curves will be the interesting part.
nerd4code@reddit
“May you live in interesting times” isn’t the compliment you thought it was, maybe
EveryQuantityEver@reddit
Prove it.
hjd_thd@reddit
Anthropogenic climate change was first proposed as a possibility in the early 1800s. We still got oil companies funding disinformation campaigns trying to deny it.
Its not the AI I'm scared of, it's what C-suite fuckers would do with it.
dvidsilva@reddit
if you have linting enabled, it's so dumb that copilot doesn't try running the output or something
i heard them speak about it at the Universe conference, but it seems like things like that are very hard because copilot outputs slop and is not necessarily aware of how it integrates into the project
Kered13@reddit
I don't believe there is any shadowing here.
TransactionTestCase
was never actually imported, so could not be shadowed.
syklemil@reddit
Hence "something like". Importing a name from a module as another name from that module isn't exactly shadowing, but it's the closest comparison for the kind of linting message that I could think of.
oorza@reddit
Without additional qualifiers, because of all the mess it can cause, this should be a lint error that needs to be disabled with explanation every time it has to happen. It does have to happen, but rarely.
phillipcarter2@reddit
Semi-related, but it really is a shame that Python (one of the worst packaging, SDK, and typechecking ecosystems) is also the lingua franca of AI in general.
klaasvanschelven@reddit (OP)
The types are fine, since the 2 mentioned classes are part of a hierarchy. A linter wouldn't catch this, because (for normal use) this is perfectly idiomatic.
syklemil@reddit
Yeah, hence "feels like something that should be caught". If you'd gotten a
Warning: Importing foo.Foo as Foo2, but foo contains a different Foo2
it'd likely cut down on the digging. After all, the usual way to use
as
imports is to create distinctions in naming where they might otherwise be clobbered, not to intentionally shadow another part of the API.
klaasvanschelven@reddit (OP)
You're right that this could be added to a linter in principle... But because no human would do this in the first place, it's unlikely to happen
TheOldTubaroo@reddit
I'm not sure I agree that no human would do this.
This particular case with these particular names, maybe not. But if you have a large module that has many names inside it, and a less experienced developer who isn't aware of everything in the module, I could easily see them doing an alias import and accidentally shadowing an existing name in that module.
It feels plausible enough that having a linter rule for it might be a decent idea even without accounting for LLMs.
syklemil@reddit
Yes, that's why I wrote that in the second paragraph in my original comment.
klaasvanschelven@reddit (OP)
Indeed :-)
Man_of_Math@reddit
This is why we should be using LLMs to conduct code reviews. They can be pedantic and thorough. Of course they don’t replace human reviews, but they are a counter balance to code generation nonsense
syklemil@reddit
LLMs have no actual understanding of what they're reviewing and you can tell them they're wrong about anything.
Not to mention unusual user behaviour like this is catchable by regular tools as long as they have the information available. Involving an LLM for something a linter can do seems like a massive excess, and sounds like it won't reach anything like the speeds we expect from tools like
ruff.
chaos-consultant@reddit
An absolute nothing-burger of a post.
The way I use copilot is essentially by hoping that it generates exactly the code that I was going to write. I want it to be like autocompletion on steroids that is nearly able to read my mind.
When it doesn't generate the code that I was already going to write, then that's not code I'm going to use, because blindly accepting something that a magical parrot generates is going to lead to bugs exactly like this.
mouse_8b@reddit
Well yeah, if that's all you're doing, then don't use AI.
Why is there no gap between "generating perfect code" and "blindly accepting suggestions"?
TarMil@reddit
The fact that this is the only reasonable use case is why I don't use it at all. It's not worth the energy consumption to generate the code I was going to write anyway.
OMGItsCheezWTF@reddit
Does it really take you much energy to backspace the answer out if you don't like it?
Houndie@reddit
It's even less effort than that, because you have to explicitly tab to accept the code. If you don't like it, just keep typing.
EveryQuantityEver@reddit
The power used to generate that code still was wasted.
Houndie@reddit
Oh that kind of energy, yeah that's fair.
TarMil@reddit
I'm talking about the amount of electricity and cooling used by LLMs. Even if you use one running locally, it needed huge amounts for training.
TheRNGuy@reddit
It's not electricity and cooling in your house.
hjd_thd@reddit
My house is on the same planet though.
aleenaelyn@reddit
The LLM is trained whether you use it or not. Copying data is free.
Halkcyon@reddit
It wastes time. It breaks your focus. Both are bad things for developers.
OMGItsCheezWTF@reddit
Yeah true, I haven't ever tried to use copilot or anything like it, it's explicitly disabled at a corporate policy level for our IDEs (we use the Jetbrains suite) - I already have a few options turned off for code completion in general because they were annoying.
Halkcyon@reddit
Yeah, if I wanted code generation, I'd use a snippets extension that covers the use-case for my language. I tried using AI tools (both in-editor and chat) for a month, but I found it just wasted my time since I had to engineer the prompt and still rewrite most of the code it spit out.
darthcoder@reddit
The non copyright shit is a big reason I refuse to use it, in general.
As an autocomplete assistant, sure. Chatbot helper for my IDE? OK.
Code generator? No.
EveryQuantityEver@reddit
If its the code you were going to write, just write the code yourself, instead of burning a forest to do it.
shevy-java@reddit
So Skynet 3.0 is becoming more human-like. Now they add bugs. And humans have to remove the bugs.
Trip-Trip-Trip@reddit
Just gave it another try as a sceptic. It successfully extracted the 3 useful lines of code from ~200 lines of nonsense I gave it to test; I was very impressed with that. Then I asked it to review actual working code and Copilot started to lambaste me for using things it's certain are bugs or missing aliases, while completely hallucinating the basics of the language.
WhyIsSocialMedia@reddit
Almost like it's for assisting you, and not currently at a level where you can just always trust it without bothering yourself (at least without spending a ton of money on a reasoning model).
Lothrazar@reddit
Are people actually using that trash service (ai) at their job? for real products? oh my god
WhyIsSocialMedia@reddit
Do you also get upset at people using existing completion, lifting, etc?
rsclient@reddit
Yes, many people are. And there are too many kids on lawns, boomer.
snb@reddit
It's copilot, not autopilot.
colablizzard@reddit
I had a case where I asked copilot to come up with a CLI command to checkout a branch (I know, I know, I was learning to use Copilot, so was forcing it to come up with even trivial stuff).
The command didn't work, and I simply couldn't figure out why such a simple command wouldn't work. And since I had the command in front of me and it looked fine, it was harder.
Until it finally clicked: The branch name had nouns in it and copilot FIXED a spelling mistake in the branch name that a human had made. Which was crazy when all I asked it to do was to "git command to checkout branch named X".
i_invented_the_ipod@reddit
The "no human would ever write this" problem has come up for me a couple of times when evaluating LLM-written code. Most of the time, you get something that doesn't build at all, but sometimes you get forced typecasts or type aliasing like this, and it's just...VERY confusing.
StarkAndRobotic@reddit
THIS IS HOW JUDGEMENT DAY BEGINS
0x0ddba11@reddit
from skynet import T800
StarkAndRobotic@reddit
PACKAGE NOT FOUND
vinciblechunk@reddit
Nobody said the hunter-killers needed to be anatomically correct
klaasvanschelven@reddit (OP)
Given my own experience debugging it is indeed more likely that judgement day begins with someone mumbling "that's interesting" than the "Nooooooo" typically seen in movies.
StarkAndRobotic@reddit
THEY SHOULD CALL IT ARTIFICIAL STUPIDITY. NOT INTELLIGENCE. IT IS LIKE NATURAL STUPIDITY. BUT ARTIFICIAL.
pnedito@reddit
'They' should learn how to type with both cases, and stop yelling!
GalacticHitch42@reddit
“copilot introduced this line”
There’s the error.
dethb0y@reddit
That's absolutely fascinating, thank you for the writeup.