gRPC: Client side vs Server side load balancing, which one to choose?
Posted by Constant_Fun_5643@reddit | Python | 5 comments
Hello everyone,
My setup: Two FastAPI apps calling gRPC ML services (layout analysis + table detection). I need to scale both services.
Question: For GPU-based ML inference over gRPC, does NGINX load balancing significantly hurt performance vs client-side load balancing?
Main concerns:
- Losing HTTP/2 multiplexing benefits
- Extra latency (though probably negligible vs 2-5s processing time)
- Need priority handling for time-critical clients
Current thinking: NGINX seems simpler operationally, but want to make sure I'm not shooting myself in the foot performance-wise.
Experience with gRPC + NGINX? Client-side LB worth the complexity for this use case?
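For reference, client-side balancing in grpcio mostly comes down to pointing the channel at a name that resolves to every replica and enabling the built-in round_robin policy. A minimal sketch, assuming a hypothetical `layout-analysis.internal` DNS record that returns all backend addresses:

```python
import grpc

# Hypothetical DNS name that resolves to every layout-analysis replica.
target = "dns:///layout-analysis.internal:50051"

channel = grpc.insecure_channel(
    target,
    options=[
        # Spread RPCs across all resolved addresses instead of pinning
        # every call to the first one (pick_first is the default policy).
        ("grpc.lb_policy_name", "round_robin"),
    ],
)
```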
jaerie@reddit
How would you ever do prioritization between clients if you're balancing on the client side?
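(For illustration: wherever the balancing happens, per-request priority usually has to be read on the server side. A minimal sketch of a grpcio server interceptor that inspects a hypothetical `x-priority` metadata key; the actual queueing/routing decision is left out:)

```python
import grpc

class PriorityInterceptor(grpc.ServerInterceptor):
    """Reads a hypothetical 'x-priority' metadata key on each incoming RPC."""

    def intercept_service(self, continuation, handler_call_details):
        # invocation_metadata is a sequence of (key, value) pairs.
        metadata = dict(handler_call_details.invocation_metadata)
        priority = metadata.get("x-priority", "normal")
        # A real deployment would feed this into separate queues or
        # worker pools; here it is only read and logged.
        print(f"{handler_call_details.method} priority={priority}")
        return continuation(handler_call_details)

# Attach when building the server, e.g.:
# server = grpc.server(executor, interceptors=[PriorityInterceptor()])
```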
notkairyssdal@reddit
why would you lose http/2 multiplexing benefits?
the extra hop should only add 10-15ms in the same region
gdchinacat@reddit
Do you trust your clients to load balance responsibly?
teerre@reddit
If you care about performance, you have to benchmark, not ask reddit
"GPU-inference" can mean a million things, but presumably the bulk of work is, you know, in the GPU, not nginx, so unless you have a huge supply of GPUs but somehow only a small server for you load balancer, it's unlikely the latter will be bottleneck