Large scale refactoring with LLM, any experience?
Posted by The_StrategyGuy@reddit | ExperiencedDevs | 102 comments
I am working on a critical (the company depends on it), large (100k+ files), busy (100s of developers committing daily) and old (10+ years) codebase.
Given the conditions, I believe we are doing a stellar job at keeping the whole codebase somewhat manageable. Linting and conventions are in place and respected.
But of course it is not "clean code".
Developer velocity is low, testing is difficult and cumbersome.
Dependencies between our components are very tight, and everything depends on everything else.
My team and I have a clear mandate to make the situation better.
A lot of tooling to manage the overall complexity has been built, but I believe we have reached a plateau where extra tooling will not make the situation any better. If anything it will increase cognitive load on developers.
I am starting to think that tackling the overall complexity of the codebase itself is the way forward.
Dependencies are needed, but we are not doing a stellar job at isolation and at keeping dependencies at a minimum.
This shows up as huge files holding multiple critical, busy classes, creating dependencies that exist for syntactic reasons but not semantic ones.
I don't think it is feasible to manually address those problems. Also my team doesn't have the right business context.
Moreover, none of the changes we should make are justifiable from a business perspective.
The solutions we see as somewhat feasible are two:
1. Somehow force/convince the other teams to handle their complexity. We already tried this and it failed.
2. Figure out a way to do it ourselves.
Only 2 is an acceptable solution, given that 1 already failed and the limited social capital we can deploy.
Approaching this manually is unfeasible, so naturally I am leaning toward using LLMs for this kind of refactoring.
The idea is to avoid updating the architecture and simply put us in a better position to eventually make architectural improvements.
I would like some sort of pipeline where we feed the codebase and the problem on one side (this file is too big, move this class), and get a PR on the other side.
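Very roughly, the shape I have in mind is something like the sketch below. The `run-agent` call is a placeholder for whatever coding agent we would end up using, and the `make lint` / `make test` targets are assumptions about our repo; only `git` and the GitHub CLI are real tools here.

```python
# Hypothetical refactoring pipeline: feed (codebase, problem) in, get a PR out.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run a command and abort the pipeline if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def refactor_to_pr(task: str, branch: str) -> None:
    run(["git", "checkout", "-b", branch])

    # Placeholder: drive whatever coding agent we pick with a narrowly scoped task,
    # e.g. "Split billing.py: move class Invoicer into its own file."
    run(["run-agent", "--task", task])

    # Deterministic gates: the diff only becomes a PR if lint and tests pass.
    run(["make", "lint"])
    run(["make", "test"])

    run(["git", "add", "-A"])
    run(["git", "commit", "-m", f"refactor: {task}"])
    run(["git", "push", "-u", "origin", branch])
    run(["gh", "pr", "create", "--fill", "--label", "automated-refactor"])

if __name__ == "__main__":
    refactor_to_pr(task=sys.argv[1], branch=sys.argv[2])
```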
I did try a quite challenging refactoring, and the AI failed. Not terribly, but not something that I can sell just yet.
I am here asking the community if you have tried something similar.
mamaBiskothu@reddit
As expected every comment is either negative or gently telling you no.
My suggestion is try augment or ampcode. Try to refactor one module with it. See what it does. Decide for yourself.
Cute_Activity7527@reddit
The point is many ppl tried and all failed. You first need a well-tested codebase that you can verify against during the refactor. 90% of codebases fail at this first step. It only gets worse from there.
I've not heard of a single successful AI refactor in the last 2 years that was not a human rewrite in disguise.
mamaBiskothu@reddit
The best coding agents do a pretty good job of writing tests too. I'm only advocating it because I use Augment every day to do almost all my coding work, and that includes test coverage. No, the tests aren't stupid. They are as good as any semi-competent engineer's work.
I still look at the code, I still prompt it individually and work on the prompt. But I am 5x faster still and it's a viable approach. If it doesn't work for you, I think it's a you problem.
that_young_man@reddit
Dude, for everyone here who does software for a living it’s clear that you are simply lying. Just stop
Subject-Turnover-388@reddit
It's so funny they look us right in the face and claim that incrementally tinkering with prompts is faster & smoother than just writing the damn code themselves.
Pretty sure a solid number of them are bots desperately trying to promote the AI bubble.
Cute_Activity7527@reddit
Who said it does not work for small tasks? I also generate tons of unit and integration tests, a whole bunch of boilerplate, one-tap completion of whole sections of code.
But that’s not rewriting huge monolith application with AI.
scapescene@reddit
Doing this in ampcode will probably bankrupt the company
mamaBiskothu@reddit
That wasn't the question though. Will it work? If we give credence to the losers here, it won't.
scapescene@reddit
I actually share your opinion, and I really didn't expect to go through the entire thread without finding a single comment with actual applicable advice. I think it can be done, it's just much more complex than throwing ampcode at the problem. I would give advice myself but I signed an NDA, so I can't.
mamaBiskothu@reddit
Sometimes I question myself: am I delusional? All the work I'm doing looks solid, but the entire programmer internet seems to be insisting that this is all bullshit. At this point I think it's becoming clear that most engineers have, without realizing it, been useless and counterproductive, solving problems they themselves create, and this new era is revealing that.
Subject-Turnover-388@reddit
Of course, why wouldn't you ask the infinite bad idea generator to refactor your extremely complex, niche codebase.
Are you stupid?
ImNateDogg@reddit
AI doesn't have enough context to fully understand and grasp the size of that codebase well enough that I would trust it with a refactor. It could suggest ideas, plans, etc. But if you're currently managing fairly well, I think you'll run into a disaster if you let AI just do what it wants.
The_StrategyGuy@reddit (OP)
For sure not the whole codebase!
But splitting a file that has 2 classes into 2 files with 1 class each? Yeah we should be able to manage.
Introduce dependency injection touching only 2 or 3 files and removing dependencies from a concrete class? Yeah it should be able to manage.
I am not talking about deep architectural work, I am trying to get us out of the low value added grinding that is needed.
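To make it concrete, here is a rough sketch (Python and all the names are purely illustrative, not our actual stack) of the kind of mechanical change I mean: a file holding two classes, where the one being moved out also gets its concrete dependency replaced with an injected interface.

```python
# Before (billing.py, hypothetical): two classes in one file, and Invoicer
# reaches into a concrete EmailGateway, so testing Invoicer drags in the
# email stack and its imports.
class EmailGateway:
    def send(self, to: str, body: str) -> None: ...

class Invoicer:
    def close_invoice(self, invoice_id: str) -> None:
        EmailGateway().send("billing@example.com", f"closed {invoice_id}")


# After (invoicer.py, hypothetical): the class lives in its own file and the
# dependency is injected behind a small interface, so tests can pass a fake.
from typing import Protocol

class Notifier(Protocol):
    def send(self, to: str, body: str) -> None: ...

class Invoicer:  # redefined here only to show the "after" state
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def close_invoice(self, invoice_id: str) -> None:
        self._notifier.send("billing@example.com", f"closed {invoice_id}")
```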
Possible_Cow169@reddit
You have entirely too much faith in AI. Especially when you can do what you described with a simple algorithm and ripgrep.
ooplease@reddit
Okay, but your IDE has a non-ai tool built-in to it that can do a lot of that deterministically, so you don't have to babysit it
ched_21h@reddit
this
ashultz@reddit
I've watched it step by step set up to split the file and then delete the original file, which a human would have just copy-pasted from and then edited. Then it started to fill in the new files from scratch, losing all the original specialized knowledge. I killed the session and git reverted.
There are things they can do and can't do, but the division won't be what you expect. This means you have to babysit them 110% of the time.
The_StrategyGuy@reddit (OP)
Yeah this is a bit where I am not that comfortable.
Of course there are tests, lint, and eventually manual review. But the work is worth doing if it will require less, much less, supervision than a manual refactoring.
I am seriously considering forbidding the AI from copying code from one file to another itself, and instead providing it with tools that do the copy and paste.
That way the copy-paste operation is strictly deterministic, implemented in code.
VRT303@reddit
You can turn an IntelliJ IDE into an MCP server and expose its functions to something like Claude.
ashultz@reddit
I'm not sure if any existing agents know how to use copy paste effectively, I'd be very surprised. There isn't any data for them to learn from, they only see the results.
You also can't prevent them from doing things, you can only nudge. Saying "don't X" often just places "X" into the context.
The_StrategyGuy@reddit (OP)
The general idea would be to provide a tool that does the copy-paste, be very assertive in the prompt, and use another tool to verify that the copied section is actually bit-for-bit the same, or else refuse the diff and start from scratch.
I could prevent them from writing to files directly and force them onto the tooling, or gate "manual" file updates so they are only allowed for small modifications.
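Something along these lines, assuming the class boundaries can be located reliably. Python's `ast` module is used here just for illustration; the real tool would depend on our language.

```python
import ast
from pathlib import Path

def move_class(src: Path, dst: Path, class_name: str) -> None:
    """Move class_name from src to dst by verbatim copy, never by re-generation."""
    source = src.read_text()
    node = next(
        n for n in ast.parse(source).body
        if isinstance(n, ast.ClassDef) and n.name == class_name
    )
    # get_source_segment returns the exact original text of the class,
    # so the moved code is byte-for-byte identical, not an LLM paraphrase.
    segment = ast.get_source_segment(source, node)
    if segment is None or segment not in source:
        raise RuntimeError(f"could not isolate {class_name} verbatim, refusing")

    prefix = dst.read_text() + "\n\n" if dst.exists() else ""
    dst.write_text(prefix + segment + "\n")
    src.write_text(source.replace(segment, "", 1))

    # Gate: the text added to dst must match what was removed from src exactly.
    assert segment in dst.read_text(), "verification failed, revert the change"
```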
No_Indication_1238@reddit
Are you looking for advice or stroking your ego?
Perfect-Campaign9551@reddit
"But splitting a file that has 2 classes into 2 files with 1 class each" doesn't remove cognitive load.
The_StrategyGuy@reddit (OP)
Not strictly but I believe it is more nuanced.
If the file is 10k lines of code, it may help.
If the file contains other critical classes, moving one out may help by removing imports that the moved class does not need, which helps break circular dependencies.
Similarly, testing can be simpler, because you are not importing dependencies that you don't need.
Etc...
Will it be a 10x productivity gain? No.
Will it help, especially if you do this work for 1000s of files? Maybe.
Perfect-Campaign9551@reddit
If you do find files that happen to have a lot of unrelated classes jammed together yes it can help to break those out. But don't just assume breaking into separate files becomes an easy improvement to work on the code. The design of how the classes interact is more important for that.
The_StrategyGuy@reddit (OP)
Absolutely thanks for your perspective
yubario@reddit
What? Yes it does. It makes it far easier to understand what each class does instead of mixing multiple responsibilities in the same file.
If this wasn't true why not just put everything in one gigantic file with no classes at all? ffs
Perfect-Campaign9551@reddit
No, it does not. Because that second class probably needs to still be looked at. It's entirely based on how cohesive the design is. If you still have to open both files to work on a feature, that doesn't decrease cognitive load. At all.
Telos06@reddit
If they are that intertwined, why are they two separate classes? Once you have good encapsulation and separation of concerns, splitting the classes into separate files absolutely makes sense.
Perfect-Campaign9551@reddit
Which is the goal that OP really should strive for. Not just pulling classes into separate files. You have to change the actual architecture so you can split them at responsibility boundaries. That's what will help make things better.
Telos06@reddit
OP states the dependencies are "not for semantic reasons", so if we take them at their word, the classes do have clear separation of responsibilities already.
yubario@reddit
If that was the case, then there would be no purpose for the strategy pattern or chain of command pattern, which is basically the bread and butter for the vast majority of major applications.
yubario@reddit
You're basically saying there is no point to splitting functions or refactoring classes into singular responsibilities.
Splitting behaviors **DOES** in fact reduce cognitive load.
If I am wanting to check the code for sending emails, I should not care about the authorization code or load balancing code. Even though they're all together in a sense in split files, I can debug much faster only looking at the code I am looking at.
For you to even think it doesn't help at all is just pointing to two things, either you are not an experienced dev or you're just flat out trolling.
Perfect-Campaign9551@reddit
Did you read what I said? I said it depends on the cohesiveness of why those 2 classes are in the same file. They might be related to the same functionality.
You are talking about a different situation where 2 classes are unrelated and are in the same file. Which is not the same thing as what I suspect OP is seeing.
yubario@reddit
Even if the entire class does one singular purpose overall it is still better to split large files.
Example:
Let's say you have a Calculator class and there are several operations (think strategy pattern). I could put every single behavior in one gigantic file and still be 100% compliant with the Single Responsibility Principle because, like you're suggesting, if code always belongs together what's the point of splitting?
Well, the point of splitting is that even though the class does one single thing, with multiple strategies I can have one file dedicated to multiplication, one to subtraction, etc without having to scan through unnecessary code.
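Sketching that out (hypothetical names, and the separate "files" are just marked with comments here):

```python
from typing import Protocol

# operations/base.py (hypothetical layout) - the shared contract
class Operation(Protocol):
    def apply(self, a: float, b: float) -> float: ...

# operations/multiply.py - only multiplication lives here
class Multiply:
    def apply(self, a: float, b: float) -> float:
        return a * b

# operations/subtract.py - only subtraction lives here
class Subtract:
    def apply(self, a: float, b: float) -> float:
        return a - b

# calculator.py - still one responsibility overall; the strategies just live
# in small files instead of one giant one.
class Calculator:
    def __init__(self, operations: dict[str, Operation]) -> None:
        self._operations = operations

    def compute(self, op: str, a: float, b: float) -> float:
        return self._operations[op].apply(a, b)

calc = Calculator({"mul": Multiply(), "sub": Subtract()})
assert calc.compute("mul", 6, 7) == 42
```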
Perfect-Campaign9551@reddit
As I said, if you still have to open the multiple files to work on a part, that doesn't reduce cognitive load. It's still the same amount of code just spread out more.
if you are saying " well now you don't have to scroll past 5k lines of stuff you don't care about" ok sure. But whether or not you consider that a cognitive load problem? Debatable. The cognitive load comes with the architecture, how the code communicates with each other. File structure doesn't affect that.
yubario@reddit
Why would I have to open multiple files to fix something when I know exactly which file contains the code?
Perfect-Campaign9551@reddit
Exactly my point to OP. If you have a good architecture, then sure having things in a separate file can work with that.
You are talking about strategy pattern. Wow guess what that pattern works along super good with separate files.
but if you don't have a good arch, then no amount of file splitting is going to make your code easier to use/work with. It's not an automatic "I split the files so our cognitive load is less".
Vfn@reddit
It's very good at refactoring, actually. I would start with writing tests for each unit you change, if you don't have that already. Then just go at it one module at a time, one unit at a time.
Don't make large changes in one pull request, and keep track of where you are in the process at all times.
The_StrategyGuy@reddit (OP)
Do you actually have experience with creating this kind of pipeline? Any other tips to share?
Vfn@reddit
We did a little PoC at our company migrating an app from Python to Go; those were our findings during the retro.
Large scale projects I don’t, I don’t think we have had enough time in the field with these new workflows to really say whether it’s a good choice or not in the long run. But it looks promising here, imo.
The_StrategyGuy@reddit (OP)
Thanks a lot! Appreciated your insights
k3liutZu@reddit
Yes. At this level it works. Bear in mind hallucinations and check the work at every step.
The_StrategyGuy@reddit (OP)
Do you have experience? Any tips for pushing this kind of work in production?
realdevtest@reddit
Just use the refactoring tools of your IDE for refactoring. You're just asking for trouble using an LLM to do that.
ArriePotter@reddit
Validate. Use intense unit testing that covers every possible case. Make sure that you write the unit tests personally.
Unit tests don't have to be efficient or well structured, they just have to get the job done. Make sure that the unit test output is identical before and after the refactor
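One cheap way to get the "identical before and after" check is a characterization (golden-master) test: snapshot what the current code does for a spread of inputs, commit the snapshot, and require the refactored code to reproduce it exactly. A rough sketch, with `legacy_function` standing in for whatever unit you're refactoring:

```python
import json
from pathlib import Path

def legacy_function(order_total: float, country: str) -> dict:
    # Stand-in for the real unit under refactor.
    return {"total": order_total, "tax": round(order_total * 0.2, 2), "country": country}

SNAPSHOT = Path("tests/snapshots/legacy_function.json")
CASES = [(100.0, "DE"), (0.0, "US"), (19.99, "JP")]

def test_refactor_preserves_behaviour():
    actual = [legacy_function(total, country) for total, country in CASES]
    if not SNAPSHOT.exists():
        # First run (before the refactor): record what the code does today.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(actual, indent=2))
    expected = json.loads(SNAPSHOT.read_text())
    assert actual == expected, "refactor changed observable behaviour"
```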
sebf@reddit
That. Most of those problems are due to lack of tests. With proper tests that describes the business rules, using AI would be possible, although most likely a bad idea.
MushroomNo7507@reddit
Yeah exactly that’s the core problem. AI doesn’t fail because it’s bad at coding, it fails because it lacks context. I’ve actually built a tool that solves this specific issue by giving AI all the surrounding context first like feedback, docs, and architecture info so it understands the system before generating anything. From that it can create proper requirements, epics and tasks that guide the changes in a structured way. If you’re interested I can share access for free if you drop a comment or DM me.
Simple_Horse_550@reddit
Do not use an LLM to manipulate anything; I tried this and it failed. You can tell it to create a plan for changes and use it as a way to get a second opinion, but don't let it run them…
clusty1@reddit
I tried for the first time a few weeks back:
The job was to replace all class instantiations following a simple pattern with a factory function.
There were a few hundred instances.
It took about 2 days: the first day for me to install and learn the CLI AI agent and do some initial refactoring, and the second day for me to fix all the fuckups.
All in all I don't find I saved a ton of time, and it wound up costing the company $250 in compute credits. I would consider it a fail.
smontesi@reddit
If you had 50 dedicated developers they wouldn't be able to complete this in 5 years.
No shot LLMs can do it today...
Best approach is to put rules in place for "new things" and have management agree on a % of time to be used on reworking things as you go (current task involves invoicing? Then factor in a +20% so that some interface can be refactored and some tests added).
If you want to use LLMs I suggest you use them to write tests right now and for some smaller refactoring tasks, as mentioned, on things a developer is working on for a given task, so you can have proper code review.
As for the context, I don't think it's a problem today with tool use etc, but I have never touched a codebase with "100k+ files" (and likely never will), so all bets are off there. It should be easy to test, though... Try asking for information about some implementation and to add tests, see if it does something useful... If it doesn't, well, could be context, could be something else!
Or... Just wait a couple of years... Improvements are slow and incremental, but it won't take long for LLMs to be clearly above the average developer.
RemindMe! 3 years
RemindMeBot@reddit
I will be messaging you in 3 years on 2028-10-13 20:31:20 UTC to remind you of this link
Knoch@reddit
If there is a pattern you want to repeat, you can refactor a piece of code including tests, then use it as a reference to the AI and have it repeat the pattern on many other files.
Firm_Bit@reddit
Spotify or discord or some other big tech company did this a few years ago. They migrated their test suite or the actual code or something. The article said it took like 3 years of prep and then 6 months of execution.
Idk where you are but I doubt your team could pull this off.
telewebb@reddit
Adding LLM to the equation is equivalent to each dev getting their own junior engineer that will enthusiastically write code without asking any questions, without seeing if it runs, and will not remember a single thing you tell them from one session to the other. If that sounds like it would be an improvement to your team, then go for it.
MushroomNo7507@reddit
I’ve been experimenting with something similar and large scale refactoring with LLMs still hits a wall pretty fast. They can handle local improvements but once dependencies and architecture come into play things fall apart because the AI doesn’t really understand the full system context.
That’s exactly why I built a tool that connects all the relevant context first like feedback, docs, architecture notes or API inputs and from there generates the requirements, epics and user stories that define what the system should be. That structure can then be used to guide and validate AI driven refactors in a controlled way instead of hoping the model figures it out.
If you’re exploring this direction I can share access to my setup for free if you drop a comment or DM me. It’s basically my attempt at making context first AI development work for complex and evolving codebases.
Ok_Individual_5050@reddit
I've tried this. Even getting it to do a survey of existing components in a codebase is painful, even though that should be right in its wheelhouse.
lllama@reddit
Without fixing this, don't bother changing anything else.
aeroverra@reddit
Why would you want to spaghettify your code even more?
LogicRaven_@reddit
This project is already high risk. Who should be involved in a go/no-go decision for this project in your org?
Well, something will eventually break. In your place, I would work on improving all these 3 points:
- You need to understand what could break and how to detect that quickly.
- You need to be able to articulate the business value - time to market, stability or else - and get that acknowledged within the org before you break something.
- You might want to test deterministic tools for the refactoring and see how much you can tackle with those.
VRT303@reddit
...what are 100 devs doing daily in one repo??
Get business to dial down on features, and make 50% of the devs do only maintenance and separation of concerns, dependency inversion/injection, whatever is needed to make it stable.
Once it's stable you might first use some safe refactoring tools, depending on the language (PHP has Rector for example).
After all that you might throw in AI
rolim91@reddit
AirBnb has done it before.
https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b
Krom2040@reddit
My assumption is that your time would be better spent using AI to analyze the behavior of existing code, and then using your insight into the problems with the current code and the business domain to try to come up with a more effective model for a replacement system.
rayfrankenstein@reddit
Let’s refactor OP’s post:
wwww4all@reddit
Large scale refactoring always fails. Large scale refactoring with LLMs will always fail.
Fresh-String6226@reddit
I work with AI all day on large codebases and I’m a big advocate for it at my company. When used responsibly it’s recently become a huge benefit to us.
This is not what I would consider as responsible usage, though. I would not attempt a large scale refactor with current generation AI. Maybe in a year or two it’ll be possible, but today you’re just likely to introduce more bugs.
Where it could be useful today: use AI to add another validation phase prior to making changes (Codex’s validation is great for example), use it to allow people to make changes to the code they own. But do not attempt to offload a large scale refactor to a broken codebase, it’s way too early for AI to do that.
30thnight@reddit
Start with tests before anything
account22222221@reddit
It was a disaster, and we were convinced that AI, in its current state, remains snake oil for anything other than small jobs.
dinosaurkiller@reddit
May God have mercy on your soul.
Intelligent_Water_79@reddit
You need to define manageable pieces and, for each piece, iterate. First find the anomalous code not adhering to the pattern, get AI to refactor that, and test it. Then adjust the design of that piece with AI. Then test. This is all gonna take a very long time.
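Something like this could build the work queue of anomalous files to iterate over (Python only as an example, and the thresholds are guesses you'd tune for your codebase):

```python
import ast
from pathlib import Path

def find_candidates(root: Path, max_lines: int = 1000) -> list[tuple[Path, str]]:
    """Flag files with more than one top-level class, or that are simply huge."""
    candidates = []
    for path in root.rglob("*.py"):
        source = path.read_text(errors="ignore")
        try:
            classes = [n.name for n in ast.parse(source).body if isinstance(n, ast.ClassDef)]
        except SyntaxError:
            continue  # skip files the parser can't handle
        if len(classes) > 1:
            candidates.append((path, f"holds {len(classes)} classes: {', '.join(classes)}"))
        elif source.count("\n") + 1 > max_lines:
            candidates.append((path, f"over {max_lines} lines"))
    return candidates

if __name__ == "__main__":
    for path, reason in find_candidates(Path(".")):
        print(f"{path}: {reason}")
```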
Kietzell@reddit
AI will turn that code base into dumpster fire that neither AI or anyone can stop 🔥🗑️
danielt1263@reddit
Even if you figure out a way to refactor all the code yourselves, it will quickly fall apart if all those other hundreds of developers are not on board.
My recommendation is to first do no harm. Code that hasn't been touched in years and doesn't have any bugs in it should generally be left alone. Find out what files are touched the most often because that is where your focus should be. It will also be far easier to justify the changes from a business perspective if you are working on improving code that currently has a lot of churn anyway. If a file is edited often because it contains a lot of code/classes, then break that code up into separate files, then see where the churn shifts to after the breakup. If you find that several files are always changed together, that's a good area for refactoring. In general, find ways to apply the open/closed principle to the files that have a lot of churn to reduce it.
Obviously, the book Working Effectively with Legacy Code should be your bible here.
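A quick way to get that churn ranking is to count how often each file shows up in recent history; a rough sketch wrapping plain `git log` (adjust the window to taste):

```python
import subprocess
from collections import Counter

def churn(since: str = "1 year ago", top: int = 20) -> list[tuple[str, int]]:
    """Rank files by how many commits touched them since `since`."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in log.splitlines() if line.strip())
    return counts.most_common(top)

if __name__ == "__main__":
    for path, touches in churn():
        print(f"{touches:5d}  {path}")
```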
steveoc64@reddit
42 yoe here, so no doubt I will get a tonne of downvotes for my opinion on this one as being “out of date thinking”
I have been through at least a dozen major multi-year modernisations of very large critical systems over those years.
All of the ones that were highly successful involved building a completely new system from scratch, with fresh new thinking, in parallel to running the old system. Use a fresh new set of requirements, the latest tools, and accept that much of the legacy system scope is no longer even relevant, so ditch it.
Strangler pattern is OK up to a point, but fails where it has to mimic functionality that is no longer the best approach, or worse, has to interact with the same old data models that were always fundamentally broken in the old system. You end up dragging across the same operational bugs and mess of dependencies that are out of control.
The worst ones were cases where management had some deeply emotional attachment to the existing system, and really wanted to keep it alive through incremental improvements. Just don’t !
All systems reach a point in their life where it’s best to just lock them into maintenance mode and push on with something better. Some systems reach a point where they can’t even be maintained properly, are obviously broken, and need to be killed asap. Hanging on too long can actually destroy the company.
It’s extremely rare for any large systems to be so well thought out the first time around, that they are worth keeping up to date till the end of time itself (ie - some operating systems, languages, compilers, database engines, etc). Problem is, your top management probably thinks “their system” is in the same ultra-brilliant category as the Unix OS, SQL database engines, etc .. and should live forever.
It’s not even a question of whether LLMs can help with this refactor, it’s whether the system is worth investing any more resources into keeping it alive.
changhc@reddit
See what airbnb did: https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b
tl;dr: it's not any sort of one-click magic, though llms are not useless either. Be specific and careful.
The_StrategyGuy@reddit (OP)
Thanks for the link
Beginning_Basis9799@reddit
Where it starts to really crumble:
- Test coverage is low.
- It's any language with loose type definitions.
- The code has anti-patterns or is just bad code.
- The code base exceeds 2 million words.
AI struggles massively here because, as an example, the documentation it has ingested is based on best standards.
A lazy developer puts in an array called UserIds, and the LLM will assume it's only user IDs (1=>1), but it's actually (1=>array()). Or just a random node setup.
Overall LLMs are great for new things, but for older projects it's just painful. I keep it to smaller parts because it's just not trustworthy.
jamesbunda007@reddit
I'm fond of the Strangler Fig pattern for that.
It's slow af and requires a lot of effort and buy-in from pretty much everyone, from C-level to jr. devs, but in my experience it's the only way to make this work.
This allows the new system to co-exist with the legacy one and you can compare metrics and KPIs along the way. Once there's enough confidence that the new system is working as expected, you can kill that part of the old system.
Depending on the size of the organization, there can be multiple strangler figs at once, but doing one at a time it's the safest approach.
That's the best I can do without further context.
Good luck.
jabz_ali@reddit
I have a pretty large code base at work, and have been experimenting with Copilot. First step is to create a copilot-instructions.md file (other LLMs may differ) to help the LLM understand the existing structure. The second step is to ensure that you have adequate test coverage at least for the critical components in your code. Set up VS Code to use Edit mode and feed it a few files at a time, ask it to refactor a small feature and ensure that it also writes appropriate unit tests, you can then review the output and accept it or clarify the requirements further. An LLM will not be able to refactor your whole project in one go but can certainly make incremental improvements.
Perfect-Campaign9551@reddit
Um yes, you can start to minimize dependencies. Strangler Fig pattern
Resident_Car_7733@reddit
It's quite funny how this fellow stated his exact problem (everything depends on everything) and instead of trying to fix that he just wants to destroy everything with AI.
My man, please tackle your actual problem. You can reduce coupling/dependencies by adding more APIs / interfaces.
beachandbyte@reddit
I work on very large code bases, and combining repomix with million-token context windows is usually plenty good for refactoring. It's still up to you to manage the includes/ignores so the relevant context is there.
dashingThroughSnow12@reddit
At work we have a 35K LOC component written in Go that is about 10 - 12 years old.
I had a task to lint the codebase (the why is irrelevant). For fun I decided to turn on the agentic mode, told it the command (golangci-lint) to find linter issues, gave it singular files at a time, and told it to fix my linter issues.
It was a comical disaster.
I’d not trust it anywhere near what you describe.
In a situation like you describe, the only approach is incremental sanity. We have a PHP monolith that is 19 years old. You eat an elephant one bite at a time. You build little kingdoms of sanity one plot after another. You have some sane structure for net new code. You slowly move some things out to their own services. Or make new ones. You encourage small refactors. It took a decade to get into this. It may take a decade to get out.
You gotta find that freeing. You aren't responsible for fixing this. You are responsible to be like a Boy Scout and leave the place cleaner than you found it.
stewcelliott@reddit
We recently had to refactor our test projects to drop a test library which has gone to a commercial license. Trying to refactor a single project in its entirety with one prompt just didn't work, but going file by file produced satisfactory results. I think the key takeaway is that LLMs cannot handle large scale work without a very good prompt or plenty of supporting context via an agents.md file or something like that. But this is definitely something I'd file under Attempt At Your Own Risk right now.
Perfect-Campaign9551@reddit
All of these open source libraries going commercial makes me realize that an old "insult" to companies was "not invented here" syndrome, where the company wanted to build every part in-house. And it was touted as a bad thing.
Well, is it so bad now? Now you can see how libraries can be pulled out from under you? I'm starting to think "not invented here" syndrome might be a good thing at least for long term projects. LOL.
maxip89@reddit
You let AI refactor something the company depends on?
Did you update your resume? Because when this goes south, your job will go first.
Logical_Angle2935@reddit
I was working on a small refactoring project which was a little more complex than a search-and-replace of class A with class B. The LLM did "okay" for a while, but got worse as it progressed. At one point it ran into a compiler error and its solution was to delete 5000 lines of code.
Not recommended.
flavius-as@reddit
To refactor successfully such an application, you need a good architecture in the first place.
A good architecture is in AI terms "context management".
cjbooms@reddit
I have recent experience using Copilot CLI, with claude-sonnet-3.5 as the model. This was a refactor of an application that streams events from DynamoDB using KCL 1, to using the latest AWS SDKs including KCL 3. The code base is not large, but the application is critical and moves thousands of events per second where data loss is not acceptable. Given the changes in the AWS SDK, it's essentially a rewrite of the event consumption logic of the application, everything from config to connection settings. Critical, as we have up to 100 active streams and the change has to be transparent. But crucially, this is mostly a transition from one version of public libraries to a later version... easy, right?
Pros:
- immediate progress understanding the code base and switching code to the new AWS libs
- allowed me to progress via automated refactorings during meetings and other low-focus time, with me acting as code reviewer
- impressive refactoring speed for targeted tasks such as generating Jackson module transformations to fill in for the lack of POJOs in the latest AWS SDK
- fantastic for generating unit tests for behaviour that is changing, which can be pulled out as separate PRs and merged prior to the change
- great at critiquing changes in a new session: "I have made changes to do x on this branch, but I think there is data loss, find where it is happening"
Cons:
- confidently assures changes are safe for production when they are not
- rewrites failing tests to pass, which would result in data loss or other issues if deployed
- repeatedly claimed to have migrated all config options, despite missing critical ones such as STS roles, batch sizes, timeouts, retries, or the logic that detects failures
- lies about features in public libraries; assured me multiple times that AWS had deprecated support for DynamoDB streams in their latest SDK and that we must pipe streams through Kinesis
- long-run context windows fail after a while; the model forgets the immediate task and reverts to prompts from hours previous, regressing progress
In conclusion, I can't imagine going back to not having this as a tool; it's a definite productivity booster. But holy shit is it a dangerous tool in inexperienced hands. Be very cautious about using it unsupervised, especially if your code base diverges significantly from the code bases the models were trained on.
Keep the tasks small and targeted. Small changes (context windows) tested and merged one at a time. Restart the context
OkLettuce338@reddit
Don’t touch that with AI my god
dudevan@reddit
In b4 the LLM misses a critical line in a function somewhere and OP wastes days trying to find it.
justUseAnSvm@reddit
Yes, I've done this, and built a system for doing this.
This is just what worked for us. You can scale LLMs, but there are some tricky things to it. IMO, it's important to remember that LLM solutions are built for one developer, with manual overview, then manual merging by one team. Once you go outside of this use case, you'll find LLMs can do automatic changes, but the infrastructure isn't really built to do that.
ekronatm@reddit
Tricky but probably fun work. Not sure if AI would "know" more about the code than your dev or business.
I would ask everyone with a stake, devs or business, to list their top 5 issues. Try to sort out duplicates, and then have everyone give points or something to the favorites. After that, categorize them into ongoing refactoring (things you can progressively move over to) and larger refactors that need to happen in one go. And sort them by impact or number of points and see where it gets you.
But, perhaps most important, how would you know if a refactor has succeeded without tests or a clear definition of success? Ideally it should be something better than "I think this is better".
fkukHMS@reddit
LLMs are excellent at recommending and implementing "textbook" solutions. Their ability to provide value decreases sharply as you stray from that sweet spot. In your scenario it seems as though most of the constraints and limitations are due to legacy concerns and past mistakes - this is NOT something AI will do a good job out-of-the-box. I think it is probably possible to leverage AI for the "heavy lifting" of completing repetitive tasks, updating boilerplate code etc - but you (HUMANS) will need to break down the bigger tasks into small, well-defined, testable steps which can then be fed into agents for execution.
lordnacho666@reddit
Make sure tests are working. Lots of them. Make sure it doesn't change your fit. Make sure it doesn't just short circuit the tests. Then YOLO it on thinking mode.
supercargo@reddit
I would suggest using LLM tools to assist with understanding and analysis more than performing the refactor. They also might be helpful for authoring regression suites. I've had the best results when asking these tools to catalog findings in structured formats (e.g. CSV for tabular data, Graphviz dot for graphical data).
k3liutZu@reddit
It fails on large scopes. Not enough context. You need to go step by step.
SolarNachoes@reddit
AI sucks at this.
However, if you walk it step by step through a refactor exercise and save that set of prompts to a readme-style file, you can then refer to that readme prompt as context and make similar refactorings efficient.
But as for refactoring a large codebase you need to take a domain driven approach and find boundaries in the code which can be broken off into its own module and decoupled.
Once the boundaries and modules are created then you can start to cleanup individual modules.
The only thing that couples modules together is the data. So focus on the data access patterns and workflows.
Also recognize if this is a skill set that you have. Not everyone does.
noiseboy87@reddit
Do not. Even with Cursor Max, if it's big enough, a codebase will far exceed the 2 million token limit on context, which it wouldn't need if everything wasn't so tightly coupled... but it will need it for what you propose.
Plus, fuck me but it's still really bad at most things.
Also - if your team doesn't have the right business context, how the hell you gonna make sure it all works again afterwards?
I'd say a better 1st pass might be to use it to write tons and tons of tests. Use code coverage reports, and rope in as many humans as possible to verify.
Only then would I even consider just letting it loose. And I'd be making the changes versioned, incremental, and preferably feature flagged.
Godspeed, it sounds awful!
Potential-Music-5451@reddit
This won’t work. Your best bet is a manual refactor using agentic tooling to speed up the code writing (especially tests) and review process. But it is incapable of doing what you want from end to end.
TopSwagCode@reddit
Test it out. My best recommendation is to make a good markdown file that describes your code base, with folder structure and context.
Perhaps also run the AI only in the folder you know is going to change. E.g. if it's a monorepo and only the frontend is changing, only open the AI in that folder with that context. No need for it to know about the entire solution then.
helloWorldcamelCase@reddit
In my experience, AI is good at adding flesh to well structured code, but quite poor at structuring the broken code.
aj_rock@reddit
In our experience, LLM-assisted refactors and migrations shine where you can effectively apply the strangler pattern, so that you can isolate and well-test components. But those are also easy to refactor manually. LLMs would struggle for the same reasons your devs do.
Never hurts to let some Codex-like tool scan through it though (as long as you have good tools which won't share the data); you might be missing some quick wins, but you're in a tough spot with or without AI assistance.