Those with PR pipelines that take longer than 1h...
Posted by Budget-Length2666@reddit | ExperiencedDevs | 105 comments
I have been working on some quite large CI pipelines in the web/frontend space and have the impression that they tend to get slow quickly because:
- old JS-bundlers like webpack are slow
- type-aware linting is very slow
- e2e tests are slow and flaky
- unit tests are either slow with something that is browser based, or flaky because of OOM in jest
So, if your CI pipelines take a long time - why? What is your stack? What are the bottlenecks?
Round_Head_6248@reddit
I told our frontend team they should modularize their shit, but they didn’t. I guess they’ll have to live with 1+ hour build times for their mess.
lordnacho666@reddit
Remember things can run in parallel. Multiple images can build at the same time, test suites can be split up.
ForgetTheRuralJuror@reddit
We have a custom pipeline with a warmed up docker container and 10 parallel test runners, still takes 45 minutes 🥴
loosed-moose@reddit
Fixing this should be a roadmap priority
1mbdb@reddit
Making this a priority for an engineering leader is exhausting. You have to fight the leadership and sales heads because they promised something to the client and now that's top priority. The leadership team will prioritize features that investors are pushing for. The engineering head is left alone protecting the system and slowly becomes a villain in the eyes of the other leaders.
I have felt and seen this enough times to say that it is easier said than done; I would even say it's almost impossible sometimes.
No offense to you in any way, just sharing my experience.
Round_Head_6248@reddit
You can fight for your future, or live peacefully in an ever growing trash heap.
ForgetTheRuralJuror@reddit
We all know that (except the key decision makers). Luckily a load of engineers are quitting and we can't backfill because new hires don't last long either. Execs are getting the message and we've roadmapped some direly needed fixes for our huge tech debt
mattk1017@reddit
Well, if he works at a place like I do, where developers comment out tests to deliver features by a deadline, then I doubt they would prioritize something like this. I hope his job has a better engineering culture than mine... :(
Kenneth_Parcel@reddit
How?!? Seriously, is it the volume of tests? How long specific tests take? Something else?
ForgetTheRuralJuror@reddit
✅ custom testing framework that isn't parallel (because many tests rely on the order they're run in)
✅ cross-service tests
✅ huge data fixtures
A few other worse things that might be too specific to relay on Reddit.
ZarrenR@reddit
Oh gods, tests that rely on one another is such a big red flag.
BrownBearPDX@reddit
They're called 'unit' for a reason.
brophylicious@reddit
Yeah, like "that test is an absolute unit"
Imaginary-Poetry-943@reddit
For real!!
ings0c@reddit
😭 whyyyy
firestell@reddit
At my job our unit tests take 50-60 mins to run. There are over 10,000 of them.
Not sure if this is slow or not, but those are the numbers.
Kind_You2637@reddit
It is not unheard of. Large projects can run into a situation where one machine cannot handle the test execution in reasonable time. Majority of test runners offer sharding, and some codebases even need additional tooling on top of that to split the load across a farm of runners.
Even in smaller codebases it can be beneficial to employ sharding to cut down, say, a 10-minute execution time.
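A minimal sketch of the idea in TypeScript, assuming nothing about a specific runner: each CI machine takes a stable slice of the sorted test-file list (real runners such as Jest 28+ and Playwright expose the same thing directly via a `--shard` flag).

```ts
// Hypothetical helper: deterministically split test files across CI runners.
// shardIndex is 0-based; shardTotal is the number of parallel machines.
function filesForShard(allTestFiles: string[], shardIndex: number, shardTotal: number): string[] {
  // Sort first so every runner agrees on the ordering.
  const sorted = [...allTestFiles].sort();
  return sorted.filter((_, i) => i % shardTotal === shardIndex);
}

// Example: runner 2 of 4 gets every file whose position modulo 4 equals 2.
const mine = filesForShard(['a.test.ts', 'b.test.ts', 'c.test.ts', 'd.test.ts', 'e.test.ts'], 2, 4);
console.log(mine); // ['c.test.ts']
```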
SnakeSeer@reddit
My company's is longer. The answer is "mother of all monoliths" with near-100% test coverage that must run every time. Each individual test takes about five minutes to run due to the amount of data involved, and there are thousands of them.
KronktheKronk@reddit
That's absurd
ForgetTheRuralJuror@reddit
My company rolled their own everything, and for some reason it doesn't work as well as the industry standards 🤔
Nyefan@reddit
Sometimes the core tech stack really just sucks that much. eslint dependency cycle detection takes 12 minutes on CI in our main nextjs application, after migrating that job to a ramdisk. We can't parallelize that, nor are we willing to allow code with dependency cycles to enter main since that will cause nextjs to crash.
Of course, we can do it at the same time as our 20 minute nextjs build...
God I hate working with javascript and python applications. The industry somehow standardized on the slowest and most error-prone languages and build systems and then stacked even slower fake type systems on top of the house of cards. It makes me yearn for the days of java and tomcat on a regular basis, especially because maven is still the only build system that got packaging right.
zeorin@reddit
Madge is much quicker than ESLint at detecting dependency cycles.
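For reference, madge can be wired into CI as a standalone cycle check; a minimal sketch using its programmatic API (the entry point and tsconfig path are assumptions about the project layout):

```ts
// check-cycles.ts - a small CI gate built on madge.
import madge from 'madge';

async function main(): Promise<void> {
  // Build the module graph from the app entry point (path is illustrative).
  const graph = await madge('src/index.ts', { tsConfig: 'tsconfig.json' });
  const cycles = graph.circular(); // each entry is one cycle of module paths
  if (cycles.length > 0) {
    console.error(`Found ${cycles.length} dependency cycle(s):`);
    for (const cycle of cycles) console.error('  ' + cycle.join(' -> '));
    process.exit(1);
  }
  console.log('No circular dependencies found.');
}

main();
```

The CLI equivalent is roughly `madge --circular --extensions ts,tsx src/`.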
donniedarko5555@reddit
Also what in the monolith application is this.
There's no way an IC's PR should trigger this many tests; even a whole team merging an epic into development might not touch half of these.
CzackNorys@reddit
Our pipelines take about 10 minutes, which seems like a very long time to me.
Most of that time is installing dependencies.
I run tests and deploy to dev in parallel, as I don't want failed tests to get in the way of fast feedback.
What I need to set up is an intermediate image that has the main dependencies already installed, which should speed up builds to under 1 minute.
crixx93@reddit
We have a massive mono repo / micro frontend (Single SPA) Angular project. All of our tests are executed in parallel, but the bundling is taking around 40 mins to complete
loosed-moose@reddit
The S in SPA stands for Single
luciensadi@reddit
It still makes sense to call out a single SPA vs multiple SPAs backed by the same services/monolith.
loosed-moose@reddit
Bit of a stretch
luciensadi@reddit
Sure! I just find it's better to give people the benefit of the doubt about ambiguous meanings instead of jumping in to correct them based on an assumption.
joyousvoyage@reddit
especially on a sub-reddit like this.
as a dev, i hate when people think they're correct, correct me - and they're way off base.
when i was a younger dev, i would do this and i got burned so many times because i didn't have all of the context. i imagine other engineers have developed the same feelings while progressing through their career.
Budget-Length2666@reddit (OP)
single spa is a framework for microfrontends https://single-spa.js.org/
faze_fazebook@reddit
And make sure you have a single test runner, not one that takes 5 seconds to initialize just to run tests that take 3ms.
Money-Maintenance-90@reddit
Similar thing here. But we actually broke the monolith into a couple of services, so I am not pulling my hair out. But the tests, even in parallel, take over 10 minutes to complete on each runner.
PoopsCodeAllTheTime@reddit
10 mins is awesome. I had to wait for some cron job that ran once per day, and people were pinging me the day-after merge about a broken test (that was already fixed) because the runner had some delay on getting the latest commit. Was great, got fired in 90 days from that garbage fire.
nevon@reddit
Deploying cloud infrastructure as part of end-to-end tests. The system under test was literally responsible for provisioning other systems, safely orchestrating and evaluating deployments, and so on, so it was kinda unavoidable that the longest running test would end up taking 40 minutes or so.
The test duration wasn't the really bad part, but flakiness where you weren't always sure if the tests failed because you broke something or because someone sneezed in us-east-1 while the moon was in retrograde.
Alpheus2@reddit
Everything you mentioned can be sped up. There's no reason why your pipeline should be slower than your slowest, smallest component. But sometimes achieving that speed is harder than moving the bar and saying "this is fine, we can't do it faster than this."
That's when the work truly starts: overcoming that temptation and doing something about it instead.
thewritingwallah@reddit
if your PRs take longer than a couple of minutes on average to review and merge (after your pipelines have passed), then you may want to look at how you can improve in that area.
Treating async PRs as central to your quality process is one of the most common sources of dysfunction, wasted time, and missed opportunities to do real improvements in our industry.
Two developers pairing on the problem in real time will discover issues faster, solve them faster and will typically find a better solution than if either were working in isolation. You also get the benefit of knowledge sharing, and if people work together like this, a PR becomes a mere formality.
In teams I've worked on, PRs do happen, but if people pair they're allowed to self-merge, so in reality the "review" is already done; once the pipeline passes, usually within 30 seconds, the PR is merged.
When you've worked this way, going back to a flow where people waste time context switching and debating in async review processes feels as painful as pulling teeth.
jdanjou@reddit
Totally feel this pain. 1h+ pipelines are a turnaround killer.
One pattern that has helped many teams is using a merge queue with batched merges and a two-step CI.
The trick: PRs are validated once in a queue, merged only when all checks pass, and you batch related changes together so you don't re-run the same long jobs multiple times.
It doesn't magically make pipelines faster, but it prevents redundant work, reduces flakiness exposure, and stabilizes merges.
danintexas@reddit
2 hour pipeline for our mono repo. Don't get me wrong the product is MASSIVE. But it literally will be the death of me.
Budget-Length2666@reddit (OP)
FAANG?
danintexas@reddit
No. Large bank
kevin7254@reddit
We have 6+ hours. For some reason people usually make bigger PRs rather than splitting them up. Wonder why
danintexas@reddit
Right? Then all the devs are frustrated because nobody wants to spend an hour going through a giant-ass 100+ file PR, which so many people put up just to actually get some work done. We are all sick of waiting on some flaky test that fails 10% of the time in a 2 hour pipeline.
faze_fazebook@reddit
From my experience with web tooling, nx is another culprit. Unless you have a giant and modular codebase, it probably does more harm than good.
The idea that you, for example, only need to run unit tests on the parts of the code that changed is good on paper... but when that means a unit test runner like Jest needs to be initialized over and over for every "project", it defeats the purpose.
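One hedged workaround (assuming plain Jest rather than nx executors) is a single root config that lists every package as a Jest project, so one invocation shares one worker pool instead of cold-starting a runner per project; a sketch, with the glob being an assumption about the package layout:

```ts
// jest.config.ts at the monorepo root - a sketch, not a drop-in.
import type { Config } from 'jest';

const config: Config = {
  // Each package keeps its own local jest config; Jest discovers them all
  // and runs them inside one process with a shared worker pool.
  projects: ['<rootDir>/packages/*'],
};

export default config;
```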
titpetric@reddit
A shitty test suite will do it. I assume with big monoliths there is really no other way, someone doing matrix/e2e in serial would do it rather quickly 🤣
Overtesting. Serial testing. Parallelism can usually lead to big gains if pipeline running time is prohibitive, as can a bigger CI/CD runner CPU/RAM-wise; before looking at the code and tests, separate them by test type: unit, integration.
Maybe tests only need to be deferred: test the area containing the code change, test the build, and leave the full-scope test suite to a daily cron job. Either way, there has to be an objective reason to justify a slow pipeline, like a long build process you can't shorten or a comprehensive test suite you can't cut down based on the scope of the change.
Always fun running the CI for a readme change...
EnvironmentalRace383@reddit
Bottleneck is dogshit runners and cheap ass company not willing to fork over the cash for high speed local build cache.
That plus building a massive project with TSan, ASan debug, FIPS compliance, etc etc.
Takes an hour to do all of this on my 9950x home system. Incremental builds after the first are seconds at most
NotGoodSoftwareMaker@reddit
~ 8 mins for the whole thing.
Deployment to all servers is tricky, depends on where you land on the deploy queue but usually you will be merged and live within 30 mins
Solonotix@reddit
Problem #1: the authors of the GitLab pipeline (at work) gave it 30+ stages that cannot run in parallel. Most of them are named in the pattern
<action>-<environment>-<type>
which means none of the build/deploy steps can be run in parallel by design.
Problem #2: There are 5 environments + a local (Docker Compose) step for validating the build via integration tests.
Problem #3: Every environment has a manual, push-button step to prevent unintended deployments, because no one trusts the deployment process to not fuck up.
That about sums it up off the top of my head. My company has a really bad habit of always making "The One Pipeline To Rule Them All"™
ShodoDeka@reddit
Our bar is pretty high.
It’s a well known database product that runs on millions of servers the world over. 20ish million lines of code, it takes close to 24 hours to run all validation (in a multi million dollar/year test lab). 🧪 We have around 600 active developers.
At the PR level we run as much as we can in 8 hours (an algorithm picks the tests that the PR is most likely to break). If you break a test optimised out of running on the PR, your change gets rolled back by the system.
So it basically takes at least a day to get a PR in, but given the level of scrutiny most teams review at, the automation is usually done before the manual reviewers are.
Keep in mind this is a product that earns somewhere between 3 and 4 billion dollars a year and is critical to millions of applications.
Steinrikur@reddit
We have yocto builds that used to be built from scratch nightly on a single server. Hours for qtwebengine alone - total 6-8h.
We parallelized and used a lot of caching, and are now under 15 minutes.
30thnight@reddit
This is frontend specific but outside of E2E tests, there’s very little reason why a build pipeline should take longer than 20 minutes on a web project.
CI: Use faster machines. You get a pretty noticeable bump on Github Actions by using the ARM M1 Mac runners. Test it out but understand that this approach is expensive. This should be treated as more of a last resort for a web project.
E2E: Playwright > Cypress. Its design is better suited for avoiding scenarios that lead to flakes. It also has OOTB support for running parallel tests (Cypress doesn't); see the config sketch after this list.
Bundlers: use Vite for new projects. Old projects can use Rspack as a faster drop in replacement for Webpack.
Linter: we have a lot of rust based linters like Oxlint and Biome. Choose Oxlint if you lean on custom eslint rules.
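To illustrate the Playwright point above, a minimal playwright.config.ts sketch with parallelism, retries, and CI sharding; SHARD_INDEX/SHARD_TOTAL are hypothetical variables a CI matrix would set, and the worker/retry counts are only examples.

```ts
// playwright.config.ts - a sketch, not a drop-in; tune the numbers to your runners.
import { defineConfig } from '@playwright/test';

const current = Number(process.env.SHARD_INDEX ?? '1'); // hypothetical CI env vars
const total = Number(process.env.SHARD_TOTAL ?? '1');

export default defineConfig({
  fullyParallel: true,                      // run test files in parallel within the worker pool
  workers: process.env.CI ? 4 : undefined,  // fixed worker count on CI, Playwright default locally
  retries: process.env.CI ? 1 : 0,          // one retry softens flakes without hiding them
  shard: total > 1 ? { current, total } : null, // split the suite across CI machines
});
```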
oofy-gang@reddit
I don’t think this is an issue unless you have massive monorepos, supposing everything is implemented somewhat reasonably. In that case, you just need to make sure you're only targeting the relevant pieces. Don’t lint the entire repo if you can statically prove that nothing was affected in 80% of the repo.
No PR pipeline should take an hour actually running… but can probably forgive it if it gets stuck in queue.
Brilliant_Law2545@reddit
Mono repo has nothing to do with it. Slow builds is always a tooling issue nothing else.
oofy-gang@reddit
I don’t get why you would say “mono repo has nothing to do with it” and then say “slow builds is (sic) always a tooling issue”.
Obviously, if things are slow, it’s a tooling issue. Also obviously, you can get away with slow tooling when your repo is small. What’s the difference between 1 minute and 4 minutes? 3 minutes. What’s the difference between 15 minutes and 60 minutes? 45 minutes.
Monorepos tend to be the most common place where these massive repos come up, and where there is the largest space for optimization (as I already mentioned above, due to unnecessary CI checks on decoupled portions of the repo).
Xenasis@reddit
You're fundamentally misunderstanding the issue. Mono repo does not and should not mean mono build. Repo size doesn't have anything to do with it. It's the scope of what you're building (and tooling, like E2E tests etc).
oofy-gang@reddit
Jesus Christ, did you even read what I wrote?
_Jiot_@reddit
Well yes but tooling issues are more common in more complex repos, like monorepos
Brilliant_Law2545@reddit
My product is a few repos due to acquisitions and SOA but it’s actually extremely lean. CI/CD is super stable and sub 15 mins. It requires a lot of effort but we have great test coverage and amazing developer ergonomics. If you are stuck with hours of flaky build times your team and company is likely garbage and you should leave if you can
forbiddenknowledg3@reddit
Always the frontend js shit that is slow 🤔
someonesaymoney@reddit
Bro, some PR pipelines within silicon design related repos can take like 8 hours lmao.
greenstake@reddit
I'm a little shocked reading this thread.
I try to keep my CI pipelines under 2 minutes. That includes formatting, linting, static analysis, build for prod, docker image, push to registry, and trigger deploy.
Basic feedback to the developer on formatting and linting is usually about 15 seconds from the time they push their commit.
RaktPipasu@reddit
Woah!!! This seems really good. Can you share some details on how you achieve such great timings?
greenstake@reddit
What is your tech stack? Mine is usually Python and TypeScript.
ZennerBlue@reddit
Work for a bank. Code takes 4 mins to build. Tests take 20. SonarQube takes another 20 synchronous. Then Veracode. 90+ mins for a small code change. That’s the snapshot build. Then release build goes and does the same thing.
shavnir@reddit
Most of the stuff I work with is in the 2-4hr range, but a lot of that ties back to the tech stack and scope of the application / packaging. It isn't uncommon for the build stage artifact to weigh in at over a handful of GB, even after compression.
Horror-Primary7739@reddit
So this is my take: are you a billion dollar company like Netflix? Is your web app used by hundreds of millions daily? Can your code topple entire economies? Then have a full-fledged testing pipeline, no matter how long it takes.
Do you sell artisanal sparkling water? Maybe your testing pipeline is too robust.
_a__w_@reddit
Last time I checked, the Apache Hadoop full CI pipeline took 24 hours. But it uses Apache Yetus to make sure that only the relevant bits are actually tested during a PR.
FredeJ@reddit
Sitting here waiting 6 hours for my pipeline to finish 🥲
However, it's slightly different. It’s for QA validation of a medical hardware product, with hardware in the loop, where some tests simply take a long time. The test I'm waiting for took 84 hours to do manually - not actual work, mostly waiting around.
But I’ve been thinking about moving some of these long running tests into a later stage, so I can fail the build earlier.
FeliusSeptimus@reddit
Y'all are making me feel bad about being annoyed at our 6 minute build/deploy time. Not bad enough to go slice another 30 seconds out of it, but bad.
solstheman1992@reddit
Our pipeline takes like 8-10 hours. Our code base has like millions and millions of files and there are multiple fail safes because failures are extremely costly…
But on a more relatable note:
1. How are you handling external dependencies? Are you downloading pre-packaged ones or building them again in your pipeline?
2. I’m surprised the linter is the thing that takes so long. Do you run the linter against everything in one go? If you have multiple repos, I would imagine you can run the linter on individual repo contents, expose typedefs in your webpack files, and then downstream dependencies can focus on linting just their files.
grawies@reddit
Flaky tests are my #1 source of frustration with slow CI pipelines, especially when the pipeline is slow or needs to be retriggered manually. Commonly end-to-end tests, as you listed.
I suppose it comes naturally, when web apps effectively use a browser as a sort of extra complicated VM.
plshelpmebuddah@reddit
Same for me. I've worked with a lot of complex diff e2e tests that guard really critical business logic changes, and man... Those get very flaky if you don't continually maintain them.
jonmitz@reddit
I used testmo to address our flaky tests. My company eventually adopted it and used it across all teams. Getting people to deal with flaky tests was our issue, once there was monitoring and metrics in place the problem generally solved itself
lppedd@reddit
Took me a while to get e2e tests right. I spent an insane amount of time on Selenium wait conditions, especially to avoid hardcoded sleeps, and finally no more flakiness and decent CI times.
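A minimal sketch of that explicit-wait pattern with the selenium-webdriver JS bindings (the URL and selector are made up for illustration):

```ts
import { Builder, By, until } from 'selenium-webdriver';

async function submitLoginForm(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.test/login'); // illustrative URL
    // Explicit wait: poll until the element exists instead of sleeping a fixed time.
    const button = await driver.wait(until.elementLocated(By.css('#submit')), 10_000);
    // Also wait until it is actually enabled before interacting with it.
    await driver.wait(until.elementIsEnabled(button), 10_000);
    await button.click();
  } finally {
    await driver.quit();
  }
}

submitLoginForm();
```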
Alphasite@reddit
Anything that involves infrastructure is slow. Your pipeline looks like: lint/units (2-5 mins), integration (60 mins), build ova/ami/vhd/oci/etc (1-2h), deploy/test on gcp/aws/azure/vsphere (4h?), publish staging (quick except Azure, where there’s manual review).
There’s no way to make it fast. Similar story when I worked on a k8s distro, ci took hours to days for full approval since it’s too expensive to run for every commit and changes get batched.
QuirkyFail5440@reddit
I don't even know. It's a giant pile of homebrew custom crap that really bored, really smart, architects engineered like a decade ago.
Attempts to improve it have been tried, and failed, so many times that I think everyone just accepted it. Management is not giving more resources to fix it, and in their defense, our attempts seemingly didn't help.
Now we all just waste half a day waiting for crap to build.
Even my local IDE can't do things like rename refactoring without hanging for ten minutes.
My local build takes four hours. We do a partial build in 35 minutes.
I just kick back, post on Reddit and contemplate how my life went so wrong. Things that should take me 15 minutes, end up taking 8 hours.
Xanchush@reddit
Wait till you work at Microsoft, they have something disastrous.
aradil@reddit
I added some static analysis to my Android CI pipeline and my build went from 3 minutes to 12.
I need to do something to fix that for sure. Even if it’s just paying for more hardware.
grizzlybair2@reddit
We don't have one over an hour right now but thinking back, it's been slow end to end tests or just running all forms of tests (unit, component, acceptance, contract, performance, end to end, probably some other nonsense test types, etc.)
jl2352@reddit
I work on a monorepo that we only recently moved to. There is lots to clean up.
The main issues are:
Finally there is an unusual internal blocker. I’m currently working on removing it, which should allow us to remove a lot of the caching needs (especially in the docker images). It’ll also help us unpick the projects building other projects. It’ll also allow us to have one simple cache across the monorepo for the short to medium term future.
(I would add that our caching needs would be pretty trivial to setup on Github. If you take anything from this comment, it’s that Gitlab sucks.)
Fashathus@reddit
Rookie numbers. Have fun with an fpga build your software depends on that takes 12+ hours.
In all seriousness though you should look at pre building and caching things and adding as much parallelization as possible
Shazvox@reddit
Well. Our IT department decided to have a monorepo.
Turns out that building the entire repo for every little thing takes ages.
FibbedPrimeDirective@reddit
I don't work in webdev, but work in a huge monorepo with hundreds of thousands of tests that need to run and pass for a PR to land.
The initial batch of tests that makes the PR eligible can take 2-4 hours to pass (all run in parallel).
Once the PR is eligible, it's placed in a batched PR queue where overnight tests (even more costly, numerous, and heavy tests that can take even longer, 6+ hours) are run on it too to make sure it doesn't break anything. These are also run in parallel.
The repo we work on does regular releases and has hundreds of thousands of users that have their livelihood tied to it, so it's essential we do not break it.
Even if it may sound painful to develop in a repo like this, it is much preferred for us to run a massive amount of tests so we do not break things for users (which we still do regularly, despite this setup). I also personally like this way of developing because it's safe and careful, and we're rewarded for doing things right even if it takes time.
Idea-Aggressive@reddit
Take some time to modernise the process. I've done that in several projects. You'll spend a day, two or a week of work and gain years.
Watchful1@reddit
A couple years ago the pipelines for our C# mono-repo took 2+ hours and failed easily 50% of the time due to flaky tests. My company invested heavily in multiple different ways to fix it.
Now most smaller applications run build and test in 10 minutes or less. If you touch shared code it can take up to 45 minutes to run everything, but it's far more reliable.
Also another separate team spent a bunch of time redesigning our local dev environments into using docker so that most of it can be shared between local and pipeline builds. Now if something in the pipeline fails, it tells you the exact command you can run locally to build/test to reproduce.
It took a lot of complaining about productivity and then lots of leadership buy in and resources, multiple staff lead teams for a year+.
KronktheKronk@reddit
You're running too much of the e2e suite. That's got to be the lion's share of the time, right? The slow pack and translation tools should still only be tens of seconds... Right?
Ghworg@reddit
Build itself takes 20 minutes, but there is another 15 minutes added by mandatory security scanners. Then the test runs take 30-45 mins, and that must be done on specific hardware which cannot be virtualized, and we have a limited number of devices so we can't parallelize too much.
Add in the overhead of packaging and uploads/downloads between stages and we have 1h30 average.
There are lots of things we could do to improve this, but none of them are quick fixes and it's a real struggle to allocate resources to improve it. We could componentize our build rather than having a single monolithic one. We could develop pipeline variants that don't comply with the company security policies and use those just for PRs, while all main/release builds use the official pipelines.
Things we have done: 1. Split tests into those that actually require the custom hardware and the generic unit tests that can be run on any VM. 2. Exclude some of the longer running tests from PR runs and only have them on main/release. Some risk, but it's been generally okay.
It's a constant struggle and every time we gain something, someone goes and adds another feature that adds even more time to the run, so most of the time we are working hard just to maintain our position.
Dependent-Guitar-473@reddit
I work at a bank and we have the same issue... 56 minutes each run, because we have around 15,000 unit tests and hundreds of component, integration, and e2e tests... they all take so much time... I hate it so much... and the worst is the flakiness of the e2e tests... by now, whenever e2e tests fail, I run the pipeline one more time to make sure it is actually a failing test before I start debugging.
AromaticStrike9@reddit
I made a flutter pipeline faster by splitting some of the jobs up so they could run simultaneously and adding some caching so it wasn't downloading libraries every time. Unfortunately, a lot of FE linting and tests are just crazy slow and we hit a point of diminishing returns on how fast we could make it without investing a ton of time.
For our backend, we ended up moving some of our slow integration tests out of the PR pipeline into a scheduled job. We would get alerts in Slack if one of them failed. Not a great idea if you do continuous deployment, but we were only deploying weekly.
AppointmentDry9660@reddit
I like having the automated tests run in the background. I have caught some strange, awfully built tests this way, e.g. odd date handling. I may not have found them without running this way, instead (potentially) playing Russian roulette with tests run at prod deployment.
PickleLips64151@reddit
You can designate CPU resources for Jest testing.
Experiment with the values. I cut my testing time in half by bumping it from 50% to 75%.
This is all dependent upon your system and the number of tests, so YMMV.
It's worth taking a look at to knock some time off your pipeline.
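A sketch of what that looks like in config form; the 75% figure is just the value mentioned above, and workerIdleMemoryLimit is an extra, optional knob that can help if Jest workers are hitting OOM:

```ts
// jest.config.ts - illustrative values only; measure on your own hardware.
import type { Config } from 'jest';

const config: Config = {
  // Fraction of available cores Jest may use; the CLI equivalent is --maxWorkers=75%.
  maxWorkers: '75%',
  // Restart a worker once its idle memory grows past this threshold, which can
  // tame the OOM-driven flakiness mentioned in the original post.
  workerIdleMemoryLimit: '512MB',
};

export default config;
```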
engineered_academic@reddit
I had a client who had a build that took 15 minutes on GHA, I got them to sub-30 seconds on Buildkite. Most common are lack of caching, lack of parallelization, loading docker containers like the kitchen sink. Most pipelines are long because the business is ok with them being long - but they don't have to be. Clients call me in when they have a CI or observability problem that is causing major pains. If it isn't causing pain there is nothing to fix, in the eyes of the business.
al2o3cr@reddit
Former client had a CI pipeline for their monolith that used 80+ c2.8xlarge instances in parallel for every build. Still took 45+ minutes because they had hundreds of thousands of CSV-driven tests (they were in a heavily regulated industry)
NibblesIndexus@reddit
E2e sharded, 10 minutes each, and we can rerun per shard (hosted on the CI machine to test before deploy; it's great to catch issues in the PR instead of in the deploy). Build image is another 5-10 min, and unit + integration also run in parallel with e2e. The whole thing takes 20 minutes and has for over a year.
Flakiness pops up now and again and is always related to poorly testable code (i.e. can't reliably detect state changes in the DOM) or bad test code (not waiting for the correct condition). I have never been faced with unfixable flakiness. Takes a while to undo the fuckery left by previous developers, but it's achievable in my experience.
NibblesIndexus@reddit
Oh and caching, big time speedup for builds. Especially packages. I'm toying around with an msbuild build server to try to get intermediate build results cached too, and so shave a few more minutes off the pipeline by only compiling changes since the last build.
slyiscoming@reddit
This is old but I worked on a project a few years ago with a massive Angular App, and more than 100 modules.
Builds were running on a very large server but the clock speed was < 3 Ghz.
This had the build taking about 45 minutes, which caused a lot of frustration with management. During an argument after a release was delayed several hours because of some issues with the build, I explained that the build was a single-threaded process and we could get an immediate reduction in build time with higher clock speeds.
The next morning there was a desktop from BestBuy on my desk. I installed a build agent, a few dependencies and we had 15 minute build times by lunch.
Webpack and single-threaded builds were the big issue here, and we solved it with a proof of concept followed by a custom-built server running the highest-clock-speed CPU we could find and a ton of RAM.
boreddissident@reddit
e2e tests are gonna be slow. break them up and run them in parallel.
Everything else you listed is a solvable problem. I'm anti-unit-test on frontend except for functionality that can be tested independent of the browser. Maybe I've been on teams that have done it wrong, but it just doesn't seem to catch enough problems to be worth the extra effort.
BERLAUR@reddit
If it's more than 15 minutes you might be doing it wrong. If it's more than 5 minutes I would already start thinking about optimizing things. We mostly run Python; it has a lot of the same problems that you describe.
Without knowing your specific situation, here are some tips that might help:
A 1-hour, flaky CI pipeline loses a lot of its value and the team is going to hate it. You can gain much goodwill by fixing it ;)
chipstastegood@reddit
I run security scanners and they can take a long time to complete
awkward@reddit
I have to say I've never seen a 1 hour javascript build, but I recently wrangled one from 20 minutes to 10 minutes.
The most reproducible thing I did was encode the stuff that reduces tree-shaking time into the linter: no unused imports, and no imports of full libraries that allow component-level imports. Checking for and eliminating circular dependencies is good too, but you probably don't want to run that rule locally because it's slow.
Favoring unit and integration tests over end to end tests can be helpful as well.
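A sketch of those lint rules in legacy .eslintrc form; lodash is only an example of a "full library" to block, and import/no-cycle comes from eslint-plugin-import:

```js
// .eslintrc.cjs - a sketch; adjust the blocked packages to your own dependencies.
module.exports = {
  plugins: ['import'],
  rules: {
    // Unused imports are reported by the core no-unused-vars rule.
    'no-unused-vars': 'error',
    // Block whole-library imports where per-module imports exist (lodash as an example).
    'no-restricted-imports': ['error', {
      patterns: [{ group: ['lodash'], message: 'Import lodash/<fn> instead of the whole package.' }],
    }],
    // Circular-dependency check - useful in CI, but slow, so consider keeping it out of local runs.
    'import/no-cycle': 'error',
  },
};
```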
Heavy_Thought_2966@reddit
I’m in Java and maven land and worked with scala and sbt a while ago. In my experience most of the time it comes down to a combo of poor module structure/dependency management, and poorly written unit tests.
On the former, if you don’t have someone watching out, you end up with a dozen or more modules that all need to build sequentially. If each takes a couple mins, that’s a 30+ min build. Most well maintained module setups allow a lot of parallel builds. My current project has 20 modules with 5 teams working in there, but it builds in 5mins because I’m the benevolent dictator of dependencies.
People can also get silly with testing, like spinning up actual services, initialising guice, or hitting external APIs. Stuff that can eat up 10-30 seconds each. You don’t notice each one, but you build up a few of those and it’s so slow. I once shaved 5 mins off our build time by finding that some code was initialising an in-memory cache of 1000 things by hitting an external service for each one in a constructor, and it did this for each test method.
KitchenDir3ctor@reddit
Loading millions of records into the db based on CSV scripts (and some other formats for the really large ones).
We should have used subsetting and synthetic test data, in hindsight.
vvf@reddit
Parallelize, cache artifacts, and try to upgrade/fix the underlying repo. Often a little bit of TLC can slash huge chunks out of that runtime.
largic@reddit
I cut an hour off an old Angular build because it was using the ahead-of-time compilation option on every PR build.
It was a version of Angular from 2017, so the AOT build option massively slowed the build down. AOT is very important for prod builds, but not for PR builds.