Background-Quote3581@reddit
Here's my formula, which has worked like a charm for me for decades now:
Break the task down into subtasks until the subtasks are absolutely manageable, cannot be broken down any further in a meaningful way, and the effort required for each is immediately apparent.
Consider what could possibly go wrong in the absolute worst-case scenario for each subtask and add those extra costs.
Round each number up to the next higher Fibonacci number.
Sum everything up and multiply by 2.
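The steps above are mechanical enough to sketch in code. This is a minimal illustration, not anyone's real tool; the helper names and the "cost in days" unit are assumptions:

```python
def next_fibonacci(n):
    """Smallest Fibonacci number >= n."""
    if n <= 1:
        return 1
    a, b = 1, 2
    while b < n:
        a, b = b, a + b
    return b

def estimate(subtask_worst_case_costs):
    """Round each subtask's worst-case cost up to a Fibonacci
    number, sum them, then multiply the total by 2."""
    return 2 * sum(next_fibonacci(c) for c in subtask_worst_case_costs)

# Three subtasks with worst-case costs of 3, 4, and 1 days:
# rounded up to 3, 5, and 1, summed to 9, doubled to 18.
print(estimate([3, 4, 1]))  # -> 18
```

The Fibonacci rounding deliberately adds more padding to larger subtasks, which matches the intuition that big estimates hide more uncertainty than small ones.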
Plank_With_A_Nail_In@reddit
Take your first estimate and multiply by 4; that's worked for me for the last 30 years. No one really seems to care what the estimate is, just that you meet it.
ilfaitquandmemebeau@reddit
I guess it depends on the company.
My experience is that you must give an estimate that is fairly short. It doesn't need to be realistic, but if you try to give a longer timeframe you're in for a lot of meetings, calls, requests for justifications, PowerPoint presentations, etc.
It's much smoother to give an unrealistically short estimate that makes them happy, then declare a delay later.
QuickQuirk@reddit
Best advice I ever had was "Think back to how long it took last time to do something similar. That's probably an accurate estimate"
We usually go, "Last time it took me 3 months, but that went wrong, this happened, I dropped that, those specs changed. That won't happen again, I know better now. And really, it only took 1 week when you ignore all that", then say, "Just one week, it's easy."
Hungry_Importance918@reddit
Yeah same here. We usually double the estimate and it's still optimistic half the time.
steinmas@reddit
My first comp sci teacher in college said pretty much the same. Take the longest you think it could possibly take, then double it.
bastardoperator@reddit
I don’t even like the term estimate; it’s just guessing. Ship when you’re ready. Under-commit, over-deliver.
fucklockjaw@reddit
Idk, you must be lucky, and with 3 decades of experience your words carry some weight, but I've never told my lead or someone higher up that something would take 3 times longer than my real estimate and had it be okay.
Of course I don't tell them the real estimate, just the 3x or 4x estimate.
shagieIsMe@reddit
The original source for this was the movie Star Trek III, when Kirk asks, "Mr. Scott. Have you always multiplied your repair estimates by a factor of four?" To which Scotty replies, "Certainly, Sir. How else can I keep my reputation as a miracle worker?"
https://wiki.c2.com/?ScottyFactor
https://www.youtube.com/shorts/U2UB4jdwqZw
Plank_With_A_Nail_In@reddit
I didn't actually know this consciously, but I have watched the film, so maybe I remembered it subconsciously.
fbpw131@reddit
I was x3, but I like yours better.
BuriedStPatrick@reddit
Oh man, I want to live in your world.
dead_alchemy@reddit
Really appreciate a post mortem that isn't a thinly veiled brag.
levodelellis@reddit (OP)
Haha, thanks
levodelellis@reddit (OP)
I'll bite, how am I bragging?
neutronbob@reddit
Joel Spolsky's 2000 essay on why you should never rewrite a project is probably his most famous essay and covers many of the same points as well as several others.
ZirePhiinix@reddit
Rewriting software is fine. Throwing away old code is the bad part.
Refactoring is rewriting. You just need to know what kinds of problems you're solving before you start solving them.
ValuableKooky4551@reddit
With AI, writing something from scratch is often relatively easy; understanding the details of the existing code (both by humans and by AI) seems to be substantially harder.
If we're heading toward a situation where it's used more and more, maybe we'll have to get used to rewriting a feature from scratch more often instead of refactoring it.
turtleship_2006@reddit
Using AI to start from scratch solves absolutely none of the problems with starting from scratch in general.
If anything, the fact that the code was generated for you probably means you understand it less, which puts you in a worse situation.
I'm not a blind hater of AI, I use it for some boilerplate on new projects, but it would not help in this case.
ZirePhiinix@reddit
The problem has never been creating a feature from scratch. The problem has always been discarding institutional knowledge by discarding old code, crippling your software.
If you spent 5 years fixing all your product's bugs and then rewrite from scratch, you've literally thrown away all that work.
Just because you created something new doesn't mean it's better. You can easily create even worse software.
rastaman1994@reddit
The actual problem in my experience is test coverage at the right level for that 'institutional knowledge'. If your coverage is good, you can start benefitting from AI in a legacy code base.
Our legacy component has improved in code quality because we focus more on testing at the use-case level. I've come to really hate low-level unit tests; most of the time they test such trivial stuff without providing any extra feeling of safety. Getting to the point where use-case testing is easy takes a while, because you need good in-memory implementations for your infrastructure, etc. Combine that with good steering files and skills, and you can just let the AI do its thing and it will closely resemble your style.
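The "use-case level with in-memory infrastructure" idea can be sketched like this. All names here are hypothetical illustrations, not code from the commenter's project:

```python
# In-memory stand-in for a real persistence layer, so the
# use-case test needs no database or network.
class InMemoryOrderRepo:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def get(self, order_id):
        return self._orders.get(order_id)

def place_order(repo, order_id, items):
    """The use case under test: reject empty orders, persist the rest."""
    if not items:
        raise ValueError("order must contain at least one item")
    repo.save(order_id, {"items": items, "status": "placed"})

# Use-case-level test: exercises the whole behavior end to end
# rather than asserting on one method in isolation.
repo = InMemoryOrderRepo()
place_order(repo, "o1", ["widget"])
assert repo.get("o1")["status"] == "placed"
```

Tests at this level pin down the observable behavior (the institutional knowledge) while leaving the internals free to be restructured, which is what makes them useful guardrails for AI-assisted changes.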
ValuableKooky4551@reddit
Yes, but it will be a lot cheaper to make. It may be where we're headed, at least for some part of the work. And that part will grow over time.
ZirePhiinix@reddit
It isn't cheaper to make if doing nothing gives you a better result.
levodelellis@reddit (OP)
I have a six-month reflection that references that essay. I don't think my articles are good, so I rarely bring them up unless it's relevant.
I'd like to see more articles about rewrites and how they can go wrong. I knew I wouldn't hit 1.0 in a year, but I didn't know I wouldn't have a debugger by the end of the year (too many things took priority over the GUI).
gnufan@reddit
I think that hairy-code point is very true in bespoke enterprise software. I've spent a lot of time asking "why do they do this?", only to try cutting it out and find, in testing or in production, that the weird edge case was business logic of a sort. The hard part is that some of that hairiness is obsolete, or deals with other bugs that are now fixed, or with cases that no longer exist, and unless it is properly documented it is hard to tell the difference.
I think the "order of magnitude" rule probably applies. If the rewrite isn't going to bring a clear and significant improvement (maybe not a strict order of magnitude, but something concrete), it is too easy for the vagaries of the process and its risks to predominate.
TexZK@reddit
To be honest, rewriting and boilerplate are where LLMs seem to stand out, IMHO. Once you've figured out the new architecture, interfaces, and toolchains, those much-maligned statistical tools can really shine and save a lot of work. Your mileage may vary, of course, but for glue code and an overall rewrite, I'm finding them really useful.
xxkvetter@reddit
Didn't read too closely but I wonder if this is an example of the second system effect.
levodelellis@reddit (OP)
Nah. (I'm the author.) The prototype wasn't 'successful'. I should have stopped working on it sooner than I did. The reason why the code is bigger this time around is because I'm implementing everything and not writing todo(); everywhere
KokopelliOnABike@reddit
My good PM would take any estimate I gave and would double it plus ten percent. I started doing that in my head before giving them my estimate and they would still double plus 10... They were normally right.
saf_e@reddit
On my last project, the manager doubled the devs' estimates to add a safety margin and cover QA effort. When top management asked why our estimates were always so big compared to similar tasks done on other projects, the reply was: because we almost always finish the job within our estimates, and they almost always don't.
neutronbob@reddit
Hofstadter's law: It always takes longer than you expect, even when you take into account Hofstadter's law.
teknikly-correct@reddit
Most estimates are political, as in: How much of the real time can I expose in an estimate right now?
jesuslop@reddit
"Two weeks" -The Money Pit.
levodelellis@reddit (OP)
The estimation was for fun, thankfully. I know some people want a programming playlist. I don't have one, but during development I listened to a lot of Ella Boh and ILY Ghoul. Both are relatively unknown.