AI doom and gloom vs. actual developer experience

Posted by Any_Rip_388@reddit | ExperiencedDevs | 195 comments

Saw a NY Times headline this morning that prompted this post, and it's something I've been thinking about a lot lately. Sorry in advance for the paywall; it's another article with an AI researcher scared at the rate of progress in AI, claiming it's going to replace developers by 2027/2028, etc.

Personally, I've gone through a range of emotions since 2022 when ChatGPT came out, from total doom and gloom to, currently, being quite sceptical of the tools, and I say this as someone who uses them daily. I've come to the conclusion that LLMs are effectively just the next iteration of the search engine and better autocomplete. They often let me retrieve the information I'm looking for faster than Googling, they're a great rubber duck, etc. Maybe I'm naive, but I fail to see how LLMs will get much better from here, having already consumed all of the publicly available data on the internet. It seems like we've logarithmically capped out on LLM progress until the next AI architecture breakthrough.

Agent mode is cool for toy apps and personal projects; I used it recently to create a basic JS web app as someone who is not a frontend developer. But the key thing here is that quality was an afterthought for me; I just needed something that was 90% of the way there, quickly. Regarding my day job, toy apps are not enterprise-grade applications. I approach agent mode with a huge degree of scepticism at work, where things like cloud costs, performance, and security are very important and minor mistakes can be costly, both to the company and to my reputation.

So, I've been thinking a lot lately: where is the disconnect between AI doomers and developers who are sceptical of the tools? Is every AI doom comment by a CEO/researcher just more marketing BS to please investors? On the other side of the coin, you do have some people like the GitHub CEO (seems like a great guy, as far as CEOs go) claiming that developers will be more in demand in the future, and that learning to code will be even more essential, because the volume of software and lines of code being maintained will increase exponentially. I tend to agree with this opinion.

There seems to be this huge emphasis on productivity gains from using LLMs, but how is that going to affect the quality of tech products? I think relying too heavily on AI is going to seriously decrease the quality of a product. At the end of the day, tech is all about products, and it feels like the age-old adage of 'quality over quantity' rings true here. Additionally, behind every tech product are thousands, or hundreds of thousands, of human decisions, and I can't imagine delegating those decisions to a system that can't critically think, can't assume responsibility, etc. Anyone working in the field knows that coding is only a fraction of a developer's job.

Lastly, stepping outside of tech: other industries still rely heavily on Excel, and some, such as banking and healthcare, still do literal paperwork (pretty sure email was supposed to kill paperwork 30 years ago). At the end of the day, I'm comforted by the fact that the world really doesn't change as quickly as Silicon Valley would have you think.