Dialectics, AI, skepticism
Posted by messedupwindows123@reddit | ExperiencedDevs | 30 comments
I've always been very skeptical of tools like Copilot. At first, I would say things like "a good type system, with a good IDE, gets you most of the way there". I would also point out that the code generated by these tools is often wrong/bad.
But I think I figured out what really bugs me about these tools. For me, writing software contains a super important feedback loop. I have ideas, and I write them down. Then, by engaging with the written representation of my ideas (and by looking at how the computer reacts to these written representations), my own ideas can evolve. And then I write down the new ideas. And I run/inspect the code, and this feedback loop continues.
So, for example, I might have an elegant idea, and this idea doesn't pass the type-checker. And the type-errors will raise really interesting conceptual issues that I may have initially overlooked. And, by harmonizing these two things (my own ideas, and compiler-feedback), I'll get to something I'm really happy with.
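A minimal sketch of that kind of moment, in TypeScript with strict checks on (the domain and names here are invented purely for illustration): a case added to a union mid-design makes an existing function stop compiling, and the type error is really a design question.

```typescript
type PaymentState =
  | { kind: "pending" }
  | { kind: "settled"; settledAt: Date }
  | { kind: "refunded"; reason: string }; // added later, as the idea evolved

function describe(state: PaymentState): string {
  switch (state.kind) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled at ${state.settledAt.toISOString()}`;
  }
  // Compile error: not all code paths return a string, because "refunded"
  // is unhandled. The type error surfaces a conceptual gap that was easy
  // to overlook: what *should* the system say about refunded payments?
}
```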
So, loosely speaking, the ideas do not stand on their own (they have to be validated by being materialized into code). And the code does not stand on its own (you have to actually fully understand it, and you have to contextualize this understanding in a knowledge of how your ideas evolved over time). So the "code" and "concepts" are really intertwined. Sort of in the way that the sun and the moon orbit one another.
When you lean heavily on code-generation tools, this feedback loop gets broken. And I can't help but think that it will be really easy to wind up in nasty situations where you're stuck at a "local maximum" and don't have the tools to break out and find the solution you're really hunting for.
Does this resonate with anyone else? Are you able to maintain this feedback-loop while using these new tools?
BeerInMyButt@reddit
As a counterpoint, I find AI kind of gets me out of the weeds a lot of the time, especially early in a project. I'm able to engage with my ideas more directly without getting bogged down in the implementation details. The feedback loop you describe is maybe healthier than it's ever been for me. Maybe it's just because I'm distractible and tend to fall into rabbit holes. When I want to make an idea concrete, I'm now less likely to spend an hour attempting to grok some random library that is just a means to an end. I still do that sometimes, but at least now I don't have quite as many half-built sketches of ideas that I abandoned because I ran out of steam before the brain lightning stopped.
archetech@reddit
That makes a lot of sense, and I think it's a significant risk of using current AI. You need to find a way to stay engaged and be the human in the loop, both to prevent issues and to build your own expertise.
geeeffwhy@reddit
i think the dialectic is exactly where generative AI is at its finest.
i have an idea, i try to express it as clearly as possible. the computer responds. i respond to the way this diverges from or exposes gaps in my thinking, i adjust my instruction, and we repeat, onward towards synthesis.
Empanatacion@reddit
This is too abstract to engage with.
Copilot figures out what I'm in the middle of typing and finishes it for me. That saves me a ton of time. It figures out syntax I'd have to look up. It can crank out boilerplate.
I don't ask it to implement a whole feature. Folks get really defensive about this stuff.
Camel_Sensitive@reddit
You’re engaging in a feedback loop when you do this. It sounds like OP just needs more practice.
Virtually all AI detractors are people that would have had trouble adjusting to the existence of stackoverflow back in the day. The wheel turns.
sonobanana33@reddit
You never heard a daily stream of "stackoverflow will replace you" though :D
Regular-Active-9877@reddit
I think people who don't use copilot regularly miss this point entirely. On an existing codebase, it can be very accurate at predicting exactly what I would have written. When the prediction is wrong, I keep typing until it becomes right.
Copilot isn't about suggesting what I should write, it's about predicting what I would write.
The iteration that the OP alludes to still follows, it just involves fewer keystrokes.
You could also use AI for building whole features in new codebases. I do this all the time for prototypes. In those instances, you may well restrict the design possibilities by following the AI... but for a prototype, this usually doesn't matter, as you're probably less concerned with implementation details.
AvidStressEnjoyer@reddit
Copilot is amazing - I've found that it is great for menial, laborious tasks and makes you slower for most others. If this is not the case for you, I would argue that you are writing code that isn't novel, and you should be looking for a new job, because an LLM is able to autocomplete work that was done many times in its training set.
Things that I've found it good at: menial, laborious work like basic-bitch test cases.
Empanatacion@reddit
I'm stealing "basic-bitch test cases" 😆
Antique-Echidna-1600@reddit
Cursor and Copilot are great on a preexisting code base.
That said, leaning on them does create a weakness when it comes to leetcode.
DraxFP@reddit
You can adjust your feedback-loop mechanism to include the new AI tools. Use them as a discussion tool, not as a one-shot "do it for me" tool. Read the response and talk back to the AI with pointers, adjustments, or questions. You can steer the discussion toward working out conceptual ideas or dive deeper into the code. I often like to just use the ChatGPT or Claude website rather than any IDE integrations. I copy-paste some code snippets sometimes, but I like to keep the discussion separate.
InfiniteMonorail@reddit
You're concerned that AI is an idea generator and you'll never have an idea again? And you'll be limited by the ideas the AI was trained on?
You're still allowed to think and use AI at the same time.
nath1as@reddit
99% of the time, we're writing the same code over and over with minimal variation. AI, for now, is a compressed, context-dependent database that can generate this same code with appropriate variations.
messedupwindows123@reddit (OP)
a pretty bleak view of our (very artistic) profession
PotentialCopy56@reddit
Very artistic?? 😂 God I hate people who write "artistic" code. Aka over-engineered to feel smart.
messedupwindows123@reddit (OP)
there's an art to finding the simplest solution/description of a problem(!!!)
PotentialCopy56@reddit
(!!!) cool story. Man, you take this way too seriously 🤓
chris_thoughtcatch@reddit
Copilot did break the loop you're mentioning, and I didn't use it for more than a few weeks, but ChatGPT does not. It is a consultant. It does not force you to copy/paste (I don't). It can help with indecisiveness and naming, you can bounce your ideas off it, and it can often answer quick syntax questions faster than reaching for the docs. It outshines Google in almost every programming-related query. Just write the code yourself and you will be fine. It's just a tool; it doesn't have to be a paradigm shift.
OverEggplant3405@reddit
The sun and the moon do not orbit one another.
LetterBoxSnatch@reddit
The type-checker can be wrong, too. Not in the types that it's explicitly given to work with, but in how the pieces fit together to make a coherent and correct program.
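A minimal sketch of that failure mode (TypeScript; the function and names are invented for illustration):

```typescript
// Both parameters have the same type, so the checker cannot tell them apart.
function scheduleRetry(delayMs: number, maxAttempts: number): void {
  console.log(`up to ${maxAttempts} attempts, ${delayMs}ms apart`);
}

// Type-checks cleanly, but the pieces fit together wrong:
// this retries up to 5000 times with a 3ms delay.
scheduleRetry(3, 5000);
```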
The corrective loop with AI tools is similar, it's just a lot more verbose. If your existing code (and types) are correct, the generated code will be closer to correct, much the same way that having more correct types naturally flows into more correct programs downstream. Whether this makes you more productive depends on whether receiving large chunks of information at each iteration speeds you up or just gives you additional problems to sift through. ...sorta like type checking, actually.

The difference is that with type checking, the thing you are generating is early feedback with respect to incorrectness, while with AI tooling the early feedback is with respect to possible next steps based on prior steps. The engagement loop is similar, but with roles reversed. You are "correctness checking" the "compiler's" output.
messedupwindows123@reddit (OP)
this is fascinating - the point about inversion
RobertKerans@reddit
It also highlights why it's not some general-purpose silver-bullet tech: it has specific sweet spots. It's fuzzy, which is something that is traditionally very difficult for computers. Having both together gives you precision when you need precision and guessing when you want guessing, which [in the hands of someone experienced enough to ignore a lot of the guesses] is very useful. But you absolutely don't want to rely on the guesses.
fortunatefaileur@reddit
I think your meta-skills matter more here; we did this exact thing three days ago: https://www.reddit.com/r/ExperiencedDevs/s/nLRNkaM5md
Obviously trivial things:
This leads to some nearly trivial things:
Also important to remember, but not obvious to some from the above: the fact that we think of ourselves as valuable doesn't mean that capitalism or technology won't take our careers from us. Plan ahead for the gravy train ending.
Adept_Carpet@reddit
It's important to think about the social element here.
Originally, programmers were thought of as a kind of computer secretary or typist. A prior generation of programmers demanded their fair value, and so we have a nice job today. But we have to continue that effort: demonstrate the difference between a system made by an intern with a chatbot and one made by a professional, and educate people about our value.
There are administrative assistants and clerks and such out there who are the most valuable people at multimillion-dollar companies and who make $40k/yr. Lots of them.
fortunatefaileur@reddit
Absolutely, thanks for being more explicit and clear than I was.
There’s been a period where a lot of programmers felt like an exalted uber-class, but that’s not historical or inherent, and we should prepare for it to change.
DeterminedQuokka@reddit
The problem is that you are thinking about AI as code without human intervention. That's not a thing. Or at least not an intelligent thing. Copilot is a slightly dumber version of you for you to pair with. So it writes a test, and you fix it and lint it and do all that other stuff.
dvogel@reddit
Ding ding ding! Yep! This is what I try to tell younger engineers. The journey through Sr SWE is essentially filling in a circle with experiences until, Monte Carlo style, the shape of your skills approximates a fully competent engineer. You've reached that stage when the probability of you making little to no progress on a random new endeavor drops near zero. You can automate writing code and get wins quicker and maybe even more consistently but your circle will fill in much more slowly because every failure is also a dot in that circle. By jumping to success you're foregoing the opportunity to collect experiences that will inform how you can approach those random new endeavors in the future.
messedupwindows123@reddit (OP)
another dialectical relationship - failure and success
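The Monte Carlo metaphor above, as a minimal TypeScript sketch (purely illustrative): random samples, hits and misses alike, gradually fill in a circle until its shape is unmistakable.

```typescript
// Estimate pi by scattering random dots; every dot, inside or outside the
// circle, sharpens the picture, just as every success or failure does.
function estimatePi(samples: number): number {
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    const x = Math.random() * 2 - 1; // random point in [-1, 1] x [-1, 1]
    const y = Math.random() * 2 - 1;
    if (x * x + y * y <= 1) inside++; // landed inside the unit circle?
  }
  // inside/samples approximates (circle area)/(square area) = pi/4
  return (4 * inside) / samples;
}

console.log(estimatePi(1_000_000)); // ~3.14159 with enough samples
```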
daishi55@reddit
You can get some excellent feedback loops going with generative tools. You just need to be good at expressing your ideas, breaking down problems into constituent parts, and knowing when to exit the loop and do the rest yourself. If you can do those things, code generation can greatly enhance productivity.
PanZilly@reddit
It does resonate. I too like to validate my ideas myself.
I tend to only ask Copilot for explanations of concepts, boilerplate, and unit tests. I also tend to validate what it tells me or the code it writes, for example by checking the actual docs.