We are going to be the last generation of developers to write code by hand, so let's have fun doing it.
Posted by ninetofivedev@reddit | ExperiencedDevs | 57 comments
No tokens were consumed during the composition of this post
That's a quote by Dr. Erik Meijer, a Dutch computer scientist known for his work writing compilers for Haskell, C#, VB, Dart, Hack, and others.
I always find it interesting when you talk with AI skeptics who think they're part of this opposition to the non-skeptics.
We were all AI skeptics at some point. And in some regard, most of us are still skeptical to some degree.
However, at some point, you start to realize that AI ability is on this curve. And the arguments I had a year ago are no longer valid today. And the arguments I have today will not be valid in 4 months.
And instead I realized I needed to catch up. I couldn't put my head in the sand and ignore this, because it's coming. And I still have about 20 years left in my career.
SSDD. In this industry you have to adapt. It used to be knowing a new language and tech stack. Then it was knowing distributed systems. Then it was new frameworks. Then it was distributed systems running in the cloud that are automatically orchestrated via YAML files.
Today, the latest trend is being able to effectively build workflows that orchestrate agents. Understanding context windows. Understanding when to reset context windows. Understanding when to fill context windows with a ton of information about the problem you're trying to solve.
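The context-management pattern described above can be sketched in a few lines. This is purely illustrative: `count_tokens`, `summarize`, and `AgentContext` are hypothetical stand-ins, not any real agent framework's API, and a real workflow would use the model's own tokenizer and an LLM-generated summary.

```python
# Hypothetical sketch: keep an agent's context window under a token budget
# by compacting the history into a summary instead of silently truncating.

def count_tokens(text: str) -> int:
    # Crude proxy; a real workflow would use the model's tokenizer.
    return len(text.split())

def summarize(messages: list[str]) -> str:
    # Stand-in for an LLM-generated summary of the conversation so far.
    return "SUMMARY: " + " | ".join(m[:20] for m in messages)

class AgentContext:
    def __init__(self, budget: int = 1000):
        self.budget = budget
        self.messages: list[str] = []

    def add(self, message: str) -> None:
        self.messages.append(message)
        # When the window fills up, reset it rather than letting it overflow.
        if sum(count_tokens(m) for m in self.messages) > self.budget:
            self.reset()

    def reset(self) -> None:
        # Replace the full history with a compact summary, freeing the window
        # for detailed information about the current sub-problem.
        self.messages = [summarize(self.messages)]
```

The design choice being debated in the thread is exactly this: knowing *when* to reset versus when to pack the window with detail is the judgment call, not the code.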
There is still skill in effectively writing software. Even if it's different than how you wrote software for the past 30 years.
dystopiadattopia@reddit
Just wait for a year until companies start hiring developers to fix all of their brilliant predictive text generated code
upsidedownshaggy@reddit
If LinkedIn and Reddit posts are anything to go by (yeah, I know 99% of them are made-up stories), that's already happening. I've been seeing a lot of posts by contractors who are being brought into small/medium-sized companies without proper software departments, hired specifically to clean up their AI-slopified code bases.
ninetofivedev@reddit (OP)
I don't think that day is coming the way you see it coming.
I think smaller and smaller teams are going to be more and more efficient, and big companies are going to shrink while the number of companies grow.
dystopiadattopia@reddit
All I know is that the day is coming when the number of people outsourcing original thought to a mindless text generator, or the point at which reality becomes indistinguishable from AI-generated images, will lead to some sort of societal crisis that will be extremely difficult to extricate ourselves from.
I know a lot of lazy and/or easily dazzled developers and uncritical executives love to think that AI is jUsT aNoThEr ToOl, but it's not. A hammer is a tool. It doesn't design the building it's being used to build. AI is the first technology that purports to replace original human thought. Even worse, too many people are perfectly willing to believe that whatever the machine spits out is the best possible answer and blithely paste it into their PRs. It's already abundantly clear that AI often produces garbage, but those who let their skills and thinking abilities atrophy by simply entering prompts and believing themselves engineers refuse to recognize that - or are no longer capable of doing so.
I have personally tested this. I had an intricate problem for which I had to develop an original algorithm. I created what I considered the most efficient approach. Out of curiosity, I gave the AI the same problem. Its solution took 2½ minutes to run. Mine took 5 seconds. I was not impressed. I shudder to think of how many junior devs who turned in ChatGPT-written essays in school would have simply accepted the AI's answer as authoritative. Moreover, the approach I took required the kind of "out of the box" thinking AI is incapable of, which helped me learn new skills and new ways of solving problems and made me a better developer - skills today's "prompt 'engineers'" lack and will never develop because they think the AI is all-knowing.
Certainly AI can be a tool, but it can't be a panacea, and that's my root concern. Too many people are abdicating original, creative, sentient human thought to a glorified predictive text generator, which will never be as good as human thought. Of course AI would be more useful if it ever achieves sentience, but that would multiply our problems by orders of magnitude rather than solve them.
So yes, AI is easy and more efficient on the surface, but it produces middling results on the whole at the cost of human thought. The easy, convenient shortcut is rarely the best course of action.
Icy_Computer@reddit
The clearer the actual cost of running and training these models becomes, the more apparent it is that we won't be using them the way most of us are at the moment. $200/month for a Claude subscription that boosts productivity up to 10% is a no-brainer. When that turns into $200+/day in API credits we'll see a lot of organizations pull back.
I feel like in the end there's not going to be a market for LLMs as a service and developers who want an AI workflow will be using something like Claude Code with specialized, open sourced, local models.
ninetofivedev@reddit (OP)
It just depends on the math. Can a team of 7 + ai accomplish what used to require a team of 8?
mechkbfan@reddit
The more I use AI, the more I realize I'll be writing code by hand forever
moh_kohn@reddit
It's all about context, right? Some code has to be right to the tiniest detail and run in a dazzling array of environments for decades to come. Other code just needs to work on one computer, right now. Most code is somewhere in between.
I am unconvinced by claims that full-agentic workflows are producing the same quality at a higher productivity. It seems very likely that the cost of the tech debt will become apparent over longer time spans.
On the other hand we might just accept the decline in software quality, we have been doing so for over a decade.
mechkbfan@reddit
Agreed on all points
My comment was just a knee jerk reaction to these silly AI posts
moh_kohn@reddit
Oh I meant to be agreeing with you :)
Crazy-Platypus6395@reddit
Can we just stop using the "AI" misnomer? It's just a very fancy search engine that compiles words together based on semantics and sentiment. The "intelligence" part is still not well understood.
madwolfa@reddit
What makes you think the human brain doesn't do the same?
punio4@reddit
What makes you think that it does?
madwolfa@reddit
Nothing, I'm not nearly qualified enough to make that determination. That's why I'm not acting dismissive.
Crazy-Platypus6395@reddit
So you're defending a point you're not willing or ready to argue yourself? Lol ok. Why even post?
madwolfa@reddit
No, I'm just saying I don't know shit and neither does OP.
Crazy-Platypus6395@reddit
I'm the OP; you don't need to get defensive :D I work on ML all day as a job. Pretty much all ML is a black box if the models are large enough. I know plenty. You can definitely analyze what you call a black box. Believe it or not, we used to do this in our AI/ML class to prove it (although on smaller nets). Good luck out there.
ninetofivedev@reddit (OP)
Technically I'm the OP, but yeah, not what was meant.
punio4@reddit
The OP wasn't acting dismissive. He stated two facts.
It's you who implied that transformer models are similar to how the human brain operates.
madwolfa@reddit
Here's another fact: researchers themselves admit they don't fully understand how LLMs arrive at their outputs. It's a black box. That alone makes the way it processes things a lot more similar to a human brain than you'd think.
Defending the "fancy search engine" take is the actual definition of a reductive argument.
Crazy-Platypus6395@reddit
Researchers don't understand gradient descent and how it arrives at optimal solutions based on statistical modeling? Ok
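For what it's worth, the mechanics being referenced here are completely understood; what's debated is the behavior that emerges at scale. A minimal, illustrative sketch of gradient descent on a one-dimensional quadratic:

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The update rule itself is fully understood math; the "black box" debate
# is about what emerges when the same rule trains billions of parameters.

def grad(x: float) -> float:
    # Analytic derivative of (x - 3)^2
    return 2.0 * (x - 3.0)

def descend(x0: float, lr: float = 0.1, steps: int = 100) -> float:
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill along the gradient
    return x
```

Starting from x = 0, the iterate converges geometrically toward the minimum at x = 3.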
Crazy-Platypus6395@reddit
It does, but the problem isn't knowledge bases; it's actual intelligent use and reasoning that's extremely lacking.
sklz0@reddit
The fact that composing words is not thinking. People with severe global aphasia can still play chess and solve math problems, for example.
madwolfa@reddit
Reading some Reddit comments proves your point.
SpritaniumRELOADED@reddit
This comes up frequently in "AI art" debates
"It's not making art, it's just stealing from other artists! It's looking at how people in the past conveyed principles of framing, composition, color, shading, and it's just doing the same thing in a slightly different way!"
Like, yeah? That's what artists have been doing for as long as art has existed
madwolfa@reddit
That's literally what learning is.
EmberQuill@reddit
Your edit reeks of so much condescension that it makes me not even want to reply to the actual subject of your post.
Since the self-styled "thought leaders" have completely outsourced their brains to AI at this point, we're all thought leaders now.
punio4@reddit
Same reply I give to Jehova's witnesses:
"Thanks, I'm not interested"
JandersOf86@reddit
As someone learning to program in his 40s and (hopefully) build my github portfolio and (hopefully) get into my first software engineering job (hopefully) soon, your post gives me hope. The idea of adding AI use on top of also learning programming is fucking daunting, to say the least.
I love coding. I wish I would've gotten into it when I was younger, instead of focusing so much on women, weed, and video games. I still want to learn to code rather than depend upon an agent to do it for me. I've seen a lot of posts that have definitely punched my motivation in the gut re: the "necessity" of learning to use agents so that I'm not "left behind". The idea, though, leaves a very sour taste in my mouth.
So thank you for this.
punio4@reddit
My recommendation would be to use LLMs for learning, while it's still accessible and somewhat useful. The future is unknown, and the only thing you can bet on is yourself. Especially true if the alternative is giving the reins to your future to a billionaire fascist class.
JandersOf86@reddit
Yeah, the only reason I use ChatGPT anymore is to organize projects for coding, as far as learning paths go, but otherwise I've started getting back into reading technical books on coding by actual humans, trying to figure out the skeleton/architecture on my own. Sadly, this is proving difficult, probably just due to my inexperience, but I'm working at it.
Seriously, can't thank you enough for that wheresyoured article. Great read.
1One2Twenty2Two@reddit
What do you have to catch up to? Writing prompts? Writing skills for Claude?
AI is an enabler. If you're not strong technically, you're not going to be able to leverage it.
ninetofivedev@reddit (OP)
You're talking to a target audience that uses git every day and most have never bothered to learn it.
1One2Twenty2Two@reddit
You still didn't answer my question. Where is the complexity in learning how to use AI?
ninetofivedev@reddit (OP)
Well, right now, people are tirelessly working on figuring out how to build agent orchestrators that actually work.
So if you've figured that out, there is probably a bag waiting for you.
madwolfa@reddit
Claude Code works like a charm for me.
ninetofivedev@reddit (OP)
More proof that most of us are not building anything innovative, we're just adopting what others have built.
punio4@reddit
There is no complexity, only churn due to the inherent nondeterminism and ever-changing models.
Jmc_da_boss@reddit
I would say most of us have learned to use it at a fairly deep level actually.
ninetofivedev@reddit (OP)
No. Some of us have. Definitely not most.
1One2Twenty2Two@reddit
My dude, what kind of gatekeeping is that? Who cares if someone uses a GUI or not in order to create branches and push 4 commits a day?
ninetofivedev@reddit (OP)
The same people who blow away and reclone their repo locally whenever their GUI chokes because the author of the PR they're supposed to be reviewing did a rebase instead of a merge.
Acrobatic-Ice-5877@reddit
This is absolutely true. Just look at people in the SaaS or side project subreddits for proof. You’ll see vibe coders say they developed an app, they got users, but they’re reporting bugs and they don’t know how to fix them.
earlgreyyuzu@reddit
Does having autocomplete on still count as writing by hand?
xegos@reddit
No. In addition, if you did not file your TPS report together with your punch card deck, it will not count.
punio4@reddit
> are not thought leaders in this industry
I don't need a thought leader, or a tool to offload my thinking to, TYVM.
hondacivic1996@reddit
We'll see about that.
ninetofivedev@reddit (OP)
This is true. We will in fact see. But it's Pascal's Wager.
That's what helped me get over the curve. Simply asking myself "What if I'm wrong?"
hondacivic1996@reddit
If I am wrong, I will spend two hours learning how to use agents effectively. The idea of "falling behind" is just ridiculous, there is very little to it. Some minor tooling, .md files etc.
As software engineers, we learn ten times more complex stuff every week.
ninetofivedev@reddit (OP)
Churning out another CRUD app, or what?
Seriously, what are these extremely complex things you're working on, because I know what most engineers in the industry work on, and it's not that complex.
hondacivic1996@reddit
It's not necessarily that what we're working on is super complicated; it's just that agentic coding with AI is a trivial "skill".
Graybie@reddit
I mean, there is non-CRUD software out there - engineering, analysis, modeling, animation, gaming, etc.
ClideLennon@reddit
I should believe in AI so I don't go to hell? WTF?
Ok_Individual_5050@reddit
Pascal's wager is a great analogy in that it also doesn't hold up to scrutiny. I've decided to pray to Christian god on my death bed. I arrive in the afterlife to find that Thor really prefers the atheists to the Christians.
jbokwxguy@reddit
This assumes that the curve is an exponential or even linear growth.
It seems to me that it is not and LLMs are only getting incrementally better and may be hitting an upper limit. And most of the current progress is due to the harnesses being strapped around them.
Furthermore it assumes that they are economically viable for most companies. Current evidence supports that it's not, given the rate limiting and subscription changes happening across the providers.
ninetofivedev@reddit (OP)
That upper limit seems to keep moving, because this exact thing was true a year ago and that limit is far beyond what it was then.
bystanderInnen@reddit
No, the bubble is bursting soon!!!!!111 compute is going up and its just a fancy autocomplete!!111 /s