Choosing a Python Logging Library in 2026 (Comparison)
Posted by finallyanonymous@reddit | Python | View on Reddit | 28 comments
I just published a comparison of Python logging libraries for 2026: stdlib, structlog, Loguru, and a couple of others that still show up in search results (Logbook, picologging).
The short version: stdlib + python-json-logger is the safe default. structlog is faster (~2x in benchmarks) and has the best OTel and framework integration story. Loguru is the easiest to set up but needs an extra indirection layer for OpenTelemetry.
Curious what people here are actually using in production and whether the OTel integration story (or lack of it) is actually influencing choices.
nicholashairs@reddit
Maintainer of python-json-logger here 👋
When I took over maintenance of the library, one of the things I added was the ability to use different high-performance JSON encoders - specifically orjson and msgspec. Did you use these in your tests, or did you just use the standard library JSON encoder?
I'd be curious to know the results if you did (I've not had the time to set up my own benchmarking).
finallyanonymous@reddit (OP)
I used stdlib json, but just retested with orjson/msgspec and got a 30-35% speedup.
I should mention this in the article.
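For illustration, the encoder swap being discussed can be sketched without the library at all: a JSON formatter parameterized by a `dumps` callable, so orjson's `orjson.dumps` or msgspec's `msgspec.json.encode` could be dropped in where installed. This is a stdlib-only sketch of the pattern, not python-json-logger's actual implementation:

```python
import io
import json
import logging

class DumpsFormatter(logging.Formatter):
    """Minimal JSON formatter; `dumps` is pluggable so a faster encoder
    (e.g. orjson.dumps) can be swapped in where available."""
    def __init__(self, dumps=json.dumps):
        super().__init__()
        self.dumps = dumps

    def format(self, record):
        payload = {
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        out = self.dumps(payload)
        # orjson returns bytes; normalize to str for stream handlers
        return out.decode() if isinstance(out, bytes) else out

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(DumpsFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("hello %s", "world")
print(stream.getvalue().strip())
```

The maintained fork also ships dedicated formatter classes per encoder, so in practice you would use those rather than rolling your own.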
nicholashairs@reddit
Ooh cool didn't know you had a guide specifically for it 👀
It's really well written, somewhat jealous comparing it to the docs 😅😅😅
Good to see the speedup. I couldn't see it in the article (I may have just missed it), but what extra context fields did you use?
I suspect the speed difference between the two comes down to when they need to fall back to the default function vs types they natively support.
andrewprograms@reddit
Thank you both for your contributions and benchmarking!
Ketty_took@reddit
you’re going too deep into reversing bmp itself. server signal depends on consistent device fingerprint and real session flow, not just one payload. if ihg setup isn’t close to a real user, it won’t validate anyway. focus on fingerprint and request sequence first, then dig into signal generation.
barseghyanartur@reddit
structlog
jsabater76@reddit
Do all of them support async logging?
saucealgerienne@reddit
been running structlog for about a year on a fastapi project and context binding is honestly what made it stick. once you start binding at the request level and every downstream log is automatically tagged with request_id, user_id, whatever, it's hard to go back to manually passing that around or fighting with LoggerAdapter.
loguru was tempting, the API is much nicer to write. but hit the OTel wall a few weeks in and ended up reverting. the indirection wrapper approach just felt like extra work for something the lib wasn't really designed for.
TheseTradition3191@reddit
structlog's context binding is the one feature that actually matters for API or proxy type work. you can do it in stdlib with LoggerAdapter but it's clunky and easy to break once you add real async concurrency.
structlog lets you bind once at the request start and every log call in that context automatically gets the fields:
makes debugging concurrent requests actually doable. without it you're grepping through interleaved lines trying to reconstruct what happened to request X vs Y. switched to structlog specifically for this two years ago and haven't looked back at stdlib for anything with real concurrency
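The bind-once pattern described above can be approximated in stdlib terms with a `contextvars.ContextVar` plus a `logging.Filter` that stamps the bound fields onto every record. A sketch of the idea (not structlog's implementation; structlog exposes this via its contextvars helpers):

```python
import contextvars
import io
import logging

# per-request fields live in a ContextVar; a Filter copies them onto
# each record so every downstream log call is tagged automatically
request_ctx = contextvars.ContextVar("request_ctx", default={})

def bind(**fields):
    request_ctx.set({**request_ctx.get(), **fields})

class ContextFilter(logging.Filter):
    def filter(self, record):
        for key, value in request_ctx.get().items():
            setattr(record, key, value)
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(request_id)s %(message)s"))
handler.addFilter(ContextFilter())
log = logging.getLogger("api")
log.addHandler(handler)
log.setLevel(logging.INFO)

bind(request_id="req-42")      # once, at request start
log.info("fetching user")      # tagged automatically
log.info("rendering response")
print(stream.getvalue())
```

Because `ContextVar` is isolated per asyncio task, concurrent requests each see their own bound fields, which is exactly what makes interleaved logs greppable by request.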
Ok-Plankton-4703@reddit
Been using structlog at work for past year and the performance difference is really noticeable when you're processing tons of flight data. The OTel integration saved us so much headache when we had to trace issues across different microservices - everything just connects naturally without extra config hell.
stdlib is fine for smaller stuff but once you need structured logging with proper context, structlog just makes everything cleaner. The learning curve isn't that bad either, took maybe a week to get comfortable with it.
FarRub2855@reddit
I'm usually on the client-facing side but whenever engineering can trace issues quickly across microservices it saves us from some very awkward calls. Definitely a massive win if the tooling actually makes that painless for you guys.
Vivid_TV@reddit
Hi OP , Would you know the monospace font used on the website for the code sections? It looks great.
amroamroamro@reddit
f12 to inspect page says: martianmono
https://github.com/evilmartians/mono
finallyanonymous@reddit (OP)
It's Martian Mono: https://fonts.google.com/specimen/Martian+Mono
Vivid_TV@reddit
Thank you.
Black_Magic100@reddit
Love logiru but lack of native Hotel support is painful
Spleeeee@reddit
I also had issues with loguru when staying at a Hilton. It worked perfectly fine tho when I was at an Airbnb.
Black_Magic100@reddit
Lol autocorrect. Keeping because it's funny
ExoticMandibles@reddit
I just wrote one, but it's not for conventional "enterprise logging" with logging levels and such. It's more designed for debug-print style logging. It's high performance, and pushes most of the formatting work off to a worker thread (if you want), so the actual amount of time you spend making a logging call is small.
You can find it as part of my "big" library. Tutorial here:
https://github.com/larryhastings/big#the-big-log
Note: I have big plans for... a rewrite, sigh. Which will completely change up the external extension interfaces, as well as changing some of the logging interfaces. Sorry! I'm trying to finish it, but another sleeping project woke up and took a giant bite out of my schedule.
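The off-thread idea described above has a stdlib analogue worth knowing: `QueueHandler`/`QueueListener`, where the calling thread only enqueues the record and a worker thread does the handler-side formatting and I/O. (This is not how big's logger works internally, just the closest stdlib pattern; note that stock `QueueHandler.prepare` still merges the message in the caller, and overriding `prepare()` defers even that.)

```python
import io
import logging
import logging.handlers
import queue

# hot path: the logging call just enqueues the record;
# a QueueListener worker thread handles formatting and I/O
log_queue = queue.Queue(-1)
stream = io.StringIO()
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

listener = logging.handlers.QueueListener(log_queue, target)
listener.start()

log = logging.getLogger("worker-demo")
log.addHandler(logging.handlers.QueueHandler(log_queue))
log.setLevel(logging.DEBUG)

log.info("cheap call on the hot path")
listener.stop()  # flushes the queue and joins the worker thread
print(stream.getvalue().strip())
```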
Ketty_took@reddit
Coming at this from scraping pipelines (~1M URLs/day), logging ends up looking a bit different than typical web service setups.

Stdlib + python-json-logger as a base is the right call imo. pretty much every library already logs through stdlib, so going against that just creates friction.

We run structlog in prod. the processor pipeline is really where it shines — being able to attach stuff like worker_id, target_domain, retry_count to every log line pays off quickly. perf is fine, but honestly the bigger win is just cleaner, more consistent fields.

OTel only really matters if you're already doing tracing. if not, I'd just pick whatever feels easiest to work with.

One thing that surprised us at higher volumes: the processor chain itself can become a bottleneck. doing sampling and dedup before serialization saved way more CPU than switching logging libraries ever did.
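The sample-and-dedup-before-serialization point can be sketched as a stdlib `logging.Filter` attached to the logger, so repeated messages are dropped before any formatter or encoder runs. A minimal sketch, with made-up names and a naive unbounded counter (a real pipeline would cap or age the counts):

```python
import io
import logging

class DedupSampleFilter(logging.Filter):
    """Drop repeats of the same message beyond `keep` occurrences,
    before any formatting/serialization cost is paid."""
    def __init__(self, keep=2):
        super().__init__()
        self.keep = keep
        self.counts = {}

    def filter(self, record):
        # key on the unformatted template, so "retrying %s" dedups
        # regardless of which URL was being retried
        key = (record.name, record.levelno, record.msg)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        return n <= self.keep

stream = io.StringIO()
handler = logging.StreamHandler(stream)
log = logging.getLogger("scraper")
log.addHandler(handler)
log.addFilter(DedupSampleFilter(keep=2))
log.setLevel(logging.INFO)

for _ in range(10):
    log.info("retrying %s", "example.com")  # only the first 2 survive

print(len(stream.getvalue().strip().splitlines()))
```

Because the filter sits on the logger, rejected records never reach the handler, which is where the serialization cost lives.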
knobbyknee@reddit
Standard library logs for when you need sysadmins to manage your logs.
aminoy77@reddit
Using Loguru in a CLI agent project and the OTel thing hasn't been an issue yet — but it's local tooling so no observability stack to worry about.

For anything going to production I'd probably go structlog based on your benchmarks alone. 2x on logging adds up fast in high-throughput services.

What was picologging's story? Dropped it from the comparison or just not worth mentioning?
finallyanonymous@reddit (OP)
Picologging does not work on the latest Python so I couldn't test its claims.
ac130kz@reddit
structlog and loguru, unfortunately, both require quite a bit of massaging to make them work as intended, especially if one's goal is to integrate with some web framework. My choice is probably stdlib; my only complaint is the somewhat dated configuration interface.
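For context, the configuration interface in question is `logging.config.dictConfig`, which covers formatters, handlers, and loggers in one (admittedly verbose) dict. A small sketch with hypothetical names; the stream swap at the end is only there to capture output for demonstration:

```python
import io
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "plain": {"format": "%(levelname)s:%(name)s:%(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "plain",
            "stream": "ext://sys.stderr",
        },
    },
    "loggers": {
        "app": {"handlers": ["console"], "level": "INFO"},
    },
})

# redirect the configured handler into a StringIO just to show the output
stream = io.StringIO()
logging.getLogger("app").handlers[0].setStream(stream)
logging.getLogger("app").info("configured via dictConfig")
print(stream.getvalue().strip())
```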
groosha@reddit
Honestly once you find your favorite structlog config, you can just copy paste it to other projects. And LLMs know this library quite well.
totheendandbackagain@reddit
Useful! Tx
thicket@reddit
Thanks for putting this together- it’s really useful.
My stdlib logging nightmare was an inherited multiprocess app with 3+ layers of runtime-determined config files, meaning there wasn’t a clear code path to follow and I could never trace which of several configurations was being used by which executable. Logs would somewhat randomly eat errors and I never teased apart what configurations were being layered on top of what. It was less the logger’s fault than it was the fault of too much configurability, but I don’t miss that one at all.
aloobhujiyaay@reddit
This is super helpful, especially for beginners who jump straight to fancy libs