How to tackle adding integration and end-to-end testing to a large UI project with 20+ microservices?
Posted by abl4k@reddit | ExperiencedDevs | 11 comments
Hi all, software engineer with ~10 YoE, primarily with the same team, currently in a senior/lead developer role for my application team. We have a very large Angular application and quite a few Spring microservices (current count is about 20), and we've started to have an issue with defects not being caught during the testing process. The application has no integration or end-to-end testing of any kind, only unit tests, and we are wholly reliant on manual business testing.

This has always been an issue, as business is not always available for quick testing turnaround. But as management has pushed us to move faster to meet some long-term deadlines, and as CVEs pop up that are prod deployment blockers for our quality gates, defects have slipped past, especially when there's no defined way for us to handle, say, a Spring Boot dependency with a critical CVE that has to be resolved across 20 services within a 30-day business SLA. We've even had cases of our business team giving approval after testing, only for defects to be found in production, because business was not aware which pages in the application depend on which microservices.

I've been given the task of adding integration testing for our microservices and end-to-end testing for our application. The problem is I can't even fathom how to start tackling it. Testing has never been my strength, but I am familiar with some tools/frameworks we can use. I've gotten Playwright stood up for our UI, but the business requirements of each page are not well defined, and I'm not sure how keen business is on moving to BDD in order to define Cucumber tests.

Has anyone else had to add more robust testing to a large-scale application?
gfivksiausuwjtjtnv@reddit
I haven’t experienced that kind of churn-related bug, but idk if it’s because of architecture or something else
Usually I would have microservices emit events which are picked up by a BFF. Meaning no shared state (BFF maintains its own).
Code search might be the way to go honestly. Anything that matches the URL of a service. Monorepo or GitHub code search or just clone every repo honestly. String regex or LLM search.
Integration testing comprehensive enough to hit every path is going to take aaaages to run and will slow down CI/CD - if it runs on dev machines it’s going to balloon out to some ridiculous inconvenience
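The code-search idea above can be sketched in a few lines. This is a hypothetical example: it assumes your frontend reaches services through an `/api/<service>/` gateway convention, which you would adjust to match your actual routing scheme.

```typescript
// Hypothetical sketch: scan frontend source text for backend service base
// paths to build a page -> service dependency map. The "/api/<service>/"
// URL convention is an assumption; adapt the regex to your gateway.

const SERVICE_URL = /\/api\/([a-z-]+)\//g;

function servicesReferencedIn(source: string): string[] {
  const found = new Set<string>();
  for (const match of source.matchAll(SERVICE_URL)) {
    found.add(match[1]);
  }
  return [...found].sort();
}

// Example: run over each component's source file contents.
const componentSource = `
  this.http.get('/api/orders/v1/open');
  this.http.post('/api/billing/v1/invoices', payload);
`;
console.log(servicesReferencedIn(componentSource)); // ["billing", "orders"]
```

Running this over every repo (or a monorepo checkout) gives you a first-cut map of which UI code depends on which service, which is exactly the information the business testers were missing.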
DeepHomeostasis@reddit
The requirements gap you mentioned is probably the harder problem here. Playwright tests will lock in whatever behavior the app currently has, regardless of whether that's correct. Before scaling across 20 services, document the top 3-5 user flows as expected behavior so the tests have a target.
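One lightweight way to capture those flows, sketched below, is to record each one as data that both the Playwright specs and a page-to-service map can be driven from. All flow, page, and service names here are invented examples, not anything from the OP's system.

```typescript
// Hypothetical sketch: pin the top user flows as data before writing
// Playwright specs against them. Names below are invented.

interface CriticalFlow {
  name: string;
  pages: string[];     // routes the flow touches
  services: string[];  // microservices those pages depend on
  expected: string;    // one-line statement of correct behavior
}

const criticalFlows: CriticalFlow[] = [
  {
    name: "submit-order",
    pages: ["/cart", "/checkout", "/confirmation"],
    services: ["cart-service", "order-service", "billing-service"],
    expected: "an order placed from a non-empty cart appears on /confirmation",
  },
];

// The services list doubles as the page -> service map business testers
// lacked: given a service with a CVE patch, list the flows to retest.
function flowsDependingOn(service: string): string[] {
  return criticalFlows
    .filter((f) => f.services.includes(service))
    .map((f) => f.name);
}

console.log(flowsDependingOn("billing-service")); // ["submit-order"]
```

A Playwright spec file can then iterate the registry, so adding a flow to the data automatically adds it to the retest checklist.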
Ninja-Sneaky@reddit
So far in my not so long journey the cool stuff I see being done by the experienced devs:
Linting literally everything, even READMEs; mandatory prek (or pre-commit) hooks whose configs are actively updated
Validation contracts (or their tool-specific equivalents) in templates and anywhere else possible
Scans triggered very early in the dev CI/CD cycle
A lot of stuff gets cleaned up very early, which leaves room and clarity for the actual tests and fixes
UnintentionallyEmpty@reddit
If you're completely clueless, just do a PoC first.
Try to write one or two end-to-end tests and one or two integration tests. You can't add tests to the entire application in one go, so don't even try. Just try to write one or two tests and see what problems you encounter. Get a feeling for where you'd want integration tests and where you'd want e2e tests.
Writing tests is just programming. If you have 10 YoE you know how to do it.
Doesn't matter. Just assume that whatever the code does now is correct and write a test against it.
_Atomfinger_@reddit
First of all: Using paragraphs hasn't killed anyone.
I would look into contract testing before jumping straight to E2E tests. They're a pain to maintain and, imho, should be reserved for critical flows only.
Then I'd look into how to think differently about testing. Testing UI flow does not need integrations. Testing integrations does not need a UI.
By splitting tests into categories, you can build a robust test portfolio that you then harden.
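To illustrate what contract testing buys you, here is a minimal hand-rolled sketch of a consumer-pinned response shape. Real projects would normally use a tool such as Pact or Spring Cloud Contract instead, and the field names here are invented.

```typescript
// Minimal illustration of the contract-testing idea: the consumer pins
// the response shape it relies on, and the provider's CI verifies real
// responses against it, with no UI involved. Field names are invented.

type FieldType = "string" | "number" | "boolean";

interface Contract {
  [field: string]: FieldType;
}

// What the UI (the consumer) actually depends on from the order service:
const orderSummaryContract: Contract = {
  orderId: "string",
  total: "number",
  paid: "boolean",
};

// Returns a list of violations; an empty list means the body honors the
// contract. A dependency bump that changes serialization fails here,
// long before manual business testing.
function contractViolations(body: Record<string, unknown>, contract: Contract): string[] {
  const violations: string[] = [];
  for (const [field, type] of Object.entries(contract)) {
    if (!(field in body)) violations.push(`missing field: ${field}`);
    else if (typeof body[field] !== type) violations.push(`wrong type for ${field}`);
  }
  return violations;
}

console.log(contractViolations({ orderId: "A-1", total: 20, paid: true }, orderSummaryContract)); // []
console.log(contractViolations({ orderId: "A-1" }, orderSummaryContract));
// ["missing field: total", "missing field: paid"]
```

The point is that a contract check runs per service in CI, so a breaking change in one of 20 services is caught without spinning up the whole system.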
Objectdotuser@reddit
Wow, I totally agree with your suggestions
*presses enter to create a new paragraph*
Aghhhhhh--------!!!!!
*dying noises*
Itz_Naj@reddit
You already alluded to the answer: there's no traceability between requirements, functional specification, design specification, and implementation.
You start by declining priority bug fixes where the requirements aren't clear and declarative, treating them as enhancements instead.
You start insisting on clear requirements and traceability on any new features or enhancements and agree a minimum standard.
You start treating missing documentation and test coverage as tech debt, beginning with contract tests between microservices to ensure predictable inter- and intra-service communication.
You get buy-in from the team. As a team you present the proposal to business and your management to do this and dedicate time to it, and at least walk away with a clean conscience while it burns around you.
BusEquivalent9605@reddit
Cypress. LLMs are pretty good at reading your Angular forms code and generating basic Cypress tests.
Cypress will produce screen recordings of the automation, so you can watch what happened and see what went wrong.
originalchronoguy@reddit
This is an ideal use case for an MCP setup that tracks UI/UX flow on the front end and coordinates with the backend to compare.
Angular sends a PUT/POST payload. The MCP tracks that and hands off what it expects to a backend agent, which traces what the API is actually taking in and what is being written to the data store, then compares the two and sends a coordinated response to shared memory. MCP can also follow things like: when you click a button and call an API, the observable/BehaviorSubject stores data and flushes it across whatever methods the data flows through. It can then tell you things like: the API returns a JSON array, but your method parsePayload(obj) in /middleware/data.ts at line 52 is trying to use an object when the enum/type is a string, and database row 52's column shows malformed JSON.
Just saying.
Also look at Angular Compodoc when you do Angular builds. In CI/CD, you can generate artifacts there that your testing can use.
throwaway_0x90@reddit
Okay,
Step#1, map out which pages depend on which microservices. Tell management proper testing cannot be developed if you don't know what's connected to what, and which components' availability determines which outcomes.
Step#2, I wouldn't spend too much time creating UI tests if nobody knows how the UI should look or what it should do. Focus on those microservices as APIs and understand at least the happy-path of what they should do given known inputs.
Step#3, design docs. Get the team onboard with the most important and well-known components that should be automated. Start with just basic smoke-tests. Decide whatever tool you want to use. Regardless of tool choice, this will be a helpful read: https://www.postman.com/api-platform/api-testing/
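The happy-path API smoke test from Step#2 can be sketched as below. The HTTP client is injected so the same check can run against a stub before a real test environment exists; the base URL and endpoint paths are invented placeholders.

```typescript
// Sketch of a happy-path smoke test that treats each microservice as an
// HTTP endpoint. The fetcher is injected so the logic is testable without
// a network; paths and URLs below are invented.

type Fetcher = (url: string) => Promise<{ status: number }>;

// Returns the paths that did NOT respond 200 so CI can fail on them.
async function smokeCheck(baseUrl: string, paths: string[], fetcher: Fetcher): Promise<string[]> {
  const failures: string[] = [];
  for (const path of paths) {
    const res = await fetcher(baseUrl + path).catch(() => ({ status: 0 }));
    if (res.status !== 200) failures.push(path);
  }
  return failures;
}

// Usage with a stub standing in for the real services:
const stub: Fetcher = async (url) =>
  url.includes("/health") ? { status: 200 } : { status: 503 };

smokeCheck("http://test-env", ["/orders/health", "/billing/health", "/orders/v1/open"], stub)
  .then((failures) => console.log(failures)); // ["/orders/v1/open"]
```

In a real pipeline you would swap the stub for `fetch` against the test environment and run one such check per service after every deployment.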
TempleBarIsOverrated@reddit
There's quite a bit you can do here, but it's going to take time, so better prepare management for that news, as they sound a bit... unfamiliar with how quality is made.
First a few questions to get an idea about your situation:
* Are the ones doing the manual testing available to write out some acceptance criteria in the tickets? E.g. "I should be able to click this button when I'm a user of X, but not otherwise", or "the new form should show X fields only to users of type Y".
* Do you want to do testing with the UI in mind, or are you happy testing the backend as a starting point?
* Do you have a testing environment?
* Are you in a position to easily spin up new instances of your services?
* Can you seed them with data?
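If the answers to the last two questions are yes, seeding can look like the sketch below, with the persistence client injected so the same fixtures work against an in-memory fake locally and a real store in CI. The entity names are invented.

```typescript
// Sketch of seeding a test environment with deterministic data before an
// integration run. The store is injected so local runs can use an
// in-memory fake; entity names are invented.

interface UserStore {
  insert(user: { id: string; type: string }): Promise<void>;
  count(): Promise<number>;
}

// Deterministic fixtures: tests can rely on exactly these users existing,
// e.g. "the new form shows X fields only to users of type Y".
async function seedUsers(store: UserStore): Promise<number> {
  const fixtures = [
    { id: "u-1", type: "X" },
    { id: "u-2", type: "Y" },
  ];
  for (const u of fixtures) await store.insert(u);
  return store.count();
}

// In-memory fake for local runs:
function inMemoryStore(): UserStore {
  const rows: { id: string; type: string }[] = [];
  return {
    async insert(user) { rows.push(user); },
    async count() { return rows.length; },
  };
}

seedUsers(inMemoryStore()).then((n) => console.log(n)); // 2
```

Seeding in code (rather than by hand) is what makes the acceptance criteria from the first question repeatable: every run starts from the same known state.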