Sunday Daily Thread: What's everyone working on this week?
Posted by AutoModerator@reddit | Python | View on Reddit | 39 comments
Weekly Thread: What's Everyone Working On This Week? 🛠️
Hello r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
How it Works:
- Show & Tell: Share your current projects, completed works, or future ideas.
- Discuss: Get feedback, find collaborators, or just chat about your project.
- Inspire: Your project might inspire someone else, just as you might get inspired here.
Guidelines:
- Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
- Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
Example Shares:
- Machine Learning Model: Working on an ML model to predict stock prices. Just cracked a 90% accuracy rate!
- Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
- Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟
AlSweigart@reddit
Redoing my PyCon US tutorial ("Python for Absolute Beginners"). The last dry run pointed out multiple problems, and I need to break down the topics even further since it's a tutorial for people who haven't coded before.
PyCon US 2026 is in Long Beach, CA (in the LA area) May 13 to May 19.
sszz01@reddit
I built a tool that turns a Sentry URL into a failing pytest. Want some feedback before going further.
What My Project Does
Paste a Sentry URL, it pulls the stack trace and frame locals, generates a failing pytest that reproduces the exact crash, runs it in an isolated Docker sandbox against your current branch, and tells you whether the bug still reproduces or your branch already fixed it.
The part that I think actually matters: most tools that generate a test from a stack trace give you something that might just error on import or fail for a completely different reason than the original crash.
This captures the frame locals at the exact crash frame - the actual variable state from production at the moment it broke - and replays it. So you know the test is hitting the same thing, not just something that looks similar.
Works with raw Python tracebacks too, not just Sentry. Paste from Datadog, Rollbar, CloudWatch, a Slack message, it'll work, just lower fidelity because those tools don't always capture frame locals the way Sentry does.
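The frame-locals capture can be sketched with nothing but the standard traceback machinery. This is just the general idea, not the tool's actual code, and `compute_ratio` is a made-up example function:

```python
def capture_crash_frame_locals(exc):
    """Walk to the innermost traceback frame and snapshot its locals."""
    tb = exc.__traceback__
    while tb.tb_next is not None:
        tb = tb.tb_next
    # Copy so later frame activity can't mutate the snapshot
    return dict(tb.tb_frame.f_locals)

def compute_ratio(numerator, divisor):
    scaled = numerator * 10
    return scaled / divisor  # crashes when divisor == 0

try:
    compute_ratio(4, 0)
except ZeroDivisionError as exc:
    state = capture_crash_frame_locals(exc)

# state == {'numerator': 4, 'divisor': 0, 'scaled': 40} - enough variable
# state to template a pytest that replays the exact call.
```

Sentry stores roughly this same per-frame `vars` data in its event payload, which is why the generated test can reproduce the original state rather than guessing at inputs.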
Target Audience
Backend engineers who deal with production incidents.
Comparison
You can get partway there with Claude Code + Sentry MCP: it'll read the trace and write you a test. The problem is there's no guarantee that test actually reproduces the original crash rather than just erroring somewhere else; my tool runs the test and gives you a verdict either way - it reproduces or it doesn't. You can also write the repro manually, which takes 30-45 minutes depending on how gnarly the state is.
Before I keep building, there are a few questions I'm genuinely trying to answer - mainly whether I'm solving a problem people actually have before I commit more time to it. Thanks!
TLDR: Building a tool that takes a Sentry URL and spits out a failing pytest that actually reproduces the crash. Trying to kill the 30-45 min manual repro step. Want to know if this is a real pain point or just me.
More-Information6707@reddit
Viewing UTC time on an Android phone is too much of a hassle (don't know about PCs/laptops; I don't own or use any): the system doesn't give you a default way to see it, like a widget. You always have to add the UTC offset to your local time in your head, or unlock your phone, open Chrome, and search for the UTC time, which takes time and effort. This, interestingly, marked my first utility script for myself, and honestly anyone can use it to save themselves a bit of time.
So I came up with the idea of automating this task using PYTHON + BASH and TERMUX: an ongoing notification shows the current UTC time and updates itself every minute, with a delay calculated as [60 - seconds_at_which_script_executed] to always keep up with UTC time.
I used no AI for this, and I don't use AI for any of my projects. GitHub: https://github.com/astral-gg/android-utc-notifier.git
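The minute-alignment trick above can be sketched in a few lines. This is an illustration of the delay rule described in the post, not the repo's actual script (which also handles the Termux notification side):

```python
from datetime import datetime, timezone

def seconds_until_next_minute(now):
    # The delay rule from the post: 60 - seconds_at_which_script_executed,
    # so every update lands exactly on a minute boundary.
    return 60 - now.second

def utc_banner(now):
    # The text the notification would display.
    return now.strftime("UTC %H:%M")

# A real loop would sleep(seconds_until_next_minute(now)) between updates
# and hand the banner to the notification command.
now = datetime(2024, 5, 1, 12, 30, 45, tzinfo=timezone.utc)
```

With `now` at 45 seconds past the minute, `seconds_until_next_minute(now)` is 15, so the next refresh fires at 12:31:00 sharp.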
Eastern-Surround7763@reddit
working on Kreuzcrawl this week! It is a high-performance web crawling engine with Python bindings. It was designed to reliably extract structured data, operating natively across multiple languages without enforcing a specific runtime. For more details, see: https://github.com/kreuzberg-dev/kreuzcrawl.
The MCP server is integrated from the start, enabling web-crawling AI agents as a primary use case. Streaming crawl events allow real-time progress tracking. Batch operations handle hundreds of URLs concurrently and tolerate partial failures. Browser rendering supports JavaScript-heavy SPAs and includes WAF detection. It's part of the Kreuzberg org: https://kreuzberg.dev/
Academic-Yam3478@reddit
**Hermes CLI** - AI-powered git commit message generator (local LLMs)
**What My Project Does:**
Hermes analyzes your staged git changes and generates
conventional commit messages using locally running AI models
via Ollama. No API keys, no rate limits, runs 100% on your machine.
Features:
- 3 commit suggestions with arrow-key selection
- 174x faster responses with intelligent diff caching
- Daemon mode: watches your repo, auto-suggests as you stage
- Learns your commit style from git history
- Edit messages in-place before committing
- Supports Mistral, Phi4, Gemma4
**Target Audience:**
Developers who:
- Use git daily and want consistent commit messages
- Prefer local AI over cloud APIs (privacy, no costs)
- Want to follow conventional commits without memorizing format
GitHub-Repository
Substantial-Cost-429@reddit
been working on caliber, an open source tool for AI agent config management. syncs agent setups across environments so you stop manually wiring things every time you switch machines or deploy. just hit 700 GitHub stars which was a nice surprise
https://github.com/caliber-ai-org/ai-setup
also looking for feedback on what features to build next if anyone works with Claude Code, Cursor or Codex
giulioprocopio@reddit
I made a Python package that implements an `overload` decorator. It allows declaring multiple `def`s with the same name but different arguments in the same scope and then dispatch calls depending on the input.
Obviously don't use this, but I think it's a fun implementation :)
https://github.com/giulioprocopio/python-overload/
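The dispatch idea can be sketched with annotations and a registry. This is one way to implement it, not necessarily how the package does it (the package's real implementation has to handle same-scope redefinition tricks):

```python
import inspect

_registry = {}

def overload(func):
    """Register func keyed by its parameter annotations; return a
    dispatcher that picks an implementation by the runtime arg types."""
    key = func.__qualname__
    sig = inspect.signature(func)
    types = tuple(p.annotation for p in sig.parameters.values())
    _registry.setdefault(key, {})[types] = func

    def dispatcher(*args):
        for sig_types, impl in _registry[key].items():
            if len(sig_types) == len(args) and all(
                isinstance(arg, t) for arg, t in zip(args, sig_types)
            ):
                return impl(*args)
        raise TypeError(f"no overload of {key} matches those argument types")

    return dispatcher

@overload
def describe(x: int):
    return "an int"

@overload
def describe(x: str):
    return "a string"
```

Each decoration rebinds `describe` to a fresh dispatcher, but all dispatchers share the registry, so `describe(3)` and `describe("hi")` hit different bodies.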
gud_ni8@reddit
Hey everyone! I've been working on a Python project called Klix and wanted to share it here.
What it does:
Klix makes building interactive CLI apps in Python way easier. It's command-first, has typed session state, built-in prompts, and supports rich terminal outputs like tables, panels, and syntax highlighting. You can start small and scale to full-featured workflow tools without messy glue code.
Who it's for:
Why it's different:
Most Python CLIs end up juggling multiple libraries for commands, prompts, outputs, and state, which quickly gets messy. Klix bundles all of that into one framework so everything stays structured and easy to maintain.
Here's the GitHub if you want to check it out: GitHub
Documentation: Docs
venkatcodestuff@reddit
FYI, i copied it from my GitHub README file
GigaBookLM is a local-first, telemetry-free private alternative to proprietary third-party AI-based research assistants. The idea is to turn documents into researchable assets that contain as much information as the original does, but in a more lightweight and reusable form.
Features:
How to use this: Well, quite frankly, this is still a work in progress (WIP), so I'm still figuring out how GigaBookLM can be used.
What Have I Built So Far:
- PDF text extraction
- Basic onboarding
- Basic tools for the project to work
I gotta be honest here: I definitely need some help, so if you wish, please DM me here.
DanceStrong396@reddit
Built a Python batch runner on top of the Automatic1111 API.
It reads a prompts file, runs the batch automatically, saves images plus metadata, and lets me replay exact outputs later from saved seeds/settings.
Main pain point was generating something good, closing the tab, and then not being able to reproduce it exactly later. Saving replayable metadata next to each output fixed that.
Walkthrough: https://youtu.be/D4nsUA2E2UU
Code: https://github.com/automatikalabs/sd-batch-factory
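The replay trick boils down to writing a metadata sidecar next to every image. A minimal sketch of that pattern (illustrative only - the repo's actual file layout and field names may differ):

```python
import json
from pathlib import Path

def save_with_metadata(image_bytes, params, out_dir):
    """Write the image plus a JSON sidecar holding seed/settings,
    so the exact generation can be replayed later."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    stem = f"seed_{params['seed']}"
    (out_dir / f"{stem}.png").write_bytes(image_bytes)
    sidecar = out_dir / f"{stem}.json"
    sidecar.write_text(json.dumps(params, indent=2, sort_keys=True))
    return sidecar

def load_replay_params(sidecar):
    """Read a sidecar back; feed the dict to the generation API again
    to reproduce the image exactly."""
    return json.loads(Path(sidecar).read_text())
```

As long as the sidecar carries every setting the API needs (seed, steps, sampler, prompt), reproducing an old favorite is just load-and-resubmit.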
littlenekoterra@reddit
I'm experimenting with the interpreter pool executor. Finally had a real use case for parallelism, and what do you know, Python's just in time for me to play with it!
programmer-ke@reddit
Nice. How is it going?
Looking forward to trying it out myself on a real-world problem.
littlenekoterra@reddit
Tbh it ended up not being so hard; I'm sure I'll run into a wall sooner or later. Surely.
hy-token@reddit
I'm working on a single-file graph database for AI memory. It's written in Rust for performance, but I'm keeping it zero-dependency for Python users.
I've just published the first version to PyPI yesterday!
Now I'm focusing on setting up the CI/CD pipeline and polishing the documentation for its official open-source launch this Tuesday.
hy-token@reddit
Updating: My first PyPI package Liel hit 250 downloads in no time. Probably just mirror bots doing their job, but it's still a cool milestone for my first-ever release. Thanks, bots!
Western_Win4674@reddit
I have created this CLI tool:
Dep Age - Cross-language dependency age analyzer for answering: how old and risky are your dependencies?
Why Build This
npm outdated only works for npm - no cross-language view.
Features
Quick Start
What It Shows
Where It's Useful
Note: This is my first Python CLI project, and I'd love feedback from the community.
Repo: Dep Age on GitHub
MIT licensed, open-source, contributions encouraged.
Icy-Property-2147@reddit
doing nothing
mimoo01@reddit
Working on a pattern-based Python DSA reference. Each pattern has a visual, a walk-through of a LeetCode problem, and a practice table with LC numbers.
https://github.com/maryamtb/rook/tree/main/community-notes/dsa-python
BFS, backtracking, 1D DP (open as issues if anyone wants to contribute)
Part of the community-notes folder for rook, a macOS notes app launching Friday.
AlgonikHQ@reddit
I built a fully automated football edge detection bot from scratch and open-sourced the whole thing on GitHub. I've been building automated trading systems for the past year as part of a long-term plan toward financial independence by 45.
Forex bots, crypto DCA bots, Solana snipers.
But I wanted to add something different to the stack, a football stats bot that could find genuine statistical edges in upcoming fixtures and post alerts automatically, 24/7, with zero manual input.
This is the breakdown of what I built, how it works, and where to find the code.
What it does
StatiqFC scans upcoming fixtures across 6 leagues (Premier League, Bundesliga, Serie A, Ligue 1, La Liga, and the Champions League) and scores every fixture across a 6-layer engine before deciding whether to post an alert.
No human picks. No gut feel. Pure data.
The scoring engine
Every fixture gets scored across 6 layers:
An alert only fires if a fixture scores 4 or more out of 6. Everything below that threshold gets a skip card posted instead so followers know the bot scanned it and moved on.
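The threshold rule is simple enough to sketch. The layer names below are illustrative placeholders - the bot's actual six layers aren't named in this post:

```python
def evaluate_fixture(layer_results, threshold=4):
    """layer_results maps layer name -> pass/fail for one fixture.
    Fires an alert at `threshold` or more passing layers, otherwise
    posts a skip card so followers know the fixture was scanned."""
    score = sum(1 for passed in layer_results.values() if passed)
    verdict = "alert" if score >= threshold else "skip card"
    return verdict, f"{score}/{len(layer_results)}"

# Hypothetical fixture: four of six layers pass, so an alert fires.
fixture = {
    "form": True, "xg": True, "head_to_head": False,
    "standings": True, "odds_value": True, "availability": False,
}
```

Fixtures scoring exactly one below the threshold are the interesting edge cases; that's what the near-miss log described later captures.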
Markets covered
The data stack - all free
Total cost to run: £0 in data fees. Runs on a Hetzner VPS at around £4/month.
The tech stack
00:00 BST - Nightly data refresh
Fixtures pulled for next 7 days
Team form recalculated
xG scraped from Understat
Standings updated via API-Football
07:00 BST - Morning fixture digest
All today's fixtures grouped by league
2 hrs before KO - Edge scan fires per fixture
6-layer score calculated
4+/6 = alert posted
Below threshold = skip card posted
Post match - Result logged
P&L updated
Running ROI recalculated
21:00 BST - End of day summary
Near-miss log (fixtures scoring 3/6)
Daily ROI update
Sunday - Weekly stats digest
BTTS leaders, CS leaders across all leagues
Transparency by design
Everything is paper staked from day one: £25 standard, £10 builder single.
Every pick is timestamped and logged.
Wins and losses both posted publicly.
No deletions. No cherry-picking.
The near-miss log goes to a private dashboard: every fixture that scored exactly 3/6, with the layer that failed. That's how I tune the thresholds over time without guessing.
The goal is to validate the edge publicly over 50+ selections before considering any commercial use. If the data doesn't support it, I'll know exactly which layer to fix.
Why build it this way
Most tipster services are black boxes. You get a pick, a result, and no idea how it was generated. I wanted the opposite: every decision traceable, every threshold documented, every data source named.
If someone wants to clone this, tweak the thresholds, add their own leagues or markets, they can. The whole thing is on GitHub.
Where to find the code
Full source code, README and data source documentation on GitHub.
Happy to answer any questions on the build. Still early stages: form data is populating, xG data builds over time, and odds confirmation gets sharper as the API budget is used efficiently.
But the architecture is solid and the scoring engine is live.
⚠️ Paper portfolio only. Not financial advice. 18+, please gamble responsibly.
Vitalic7@reddit
https://shipfolio.app
Shipfolio is built for the devs who ship across five side projects at once.
cogSciAlt@reddit
Working on some legacy code at work. Absolutely zero unit tests, and unhandled failures galore. Hoping to find some systematic way to start making improvements.
me_myself_ai@reddit
Been there! Random unsolicited advice:
Some time invested in context-specific debug tools never hurts, IMO. Stuff like functions for reformatting stack traces, `.pretty_print()` methods for any relevant objects that print out YAML of just their relevant attributes, etc. Hopefully you're a whiz at `pdb` and don't need this as much, but it's my crutch!
Otherwise there's nothing to it but to plant some testing seeds, friend. By the end of the summer you'll have a gorgeous garden of pytest cases to enjoy the fruits of! You can even start off with store-bought (i.e. chatbot-written), but nothing beats a good handmade unit test 🌱
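A cheap way to get those `.pretty_print()` helpers is a tiny mixin. This is just one illustrative shape (JSON stands in for YAML so it needs no third-party dependency; the `Order` class is a made-up example):

```python
import json

class DebugMixin:
    """Each subclass lists the attributes worth dumping when debugging."""
    _debug_attrs = ()

    def pretty_print(self):
        # Dump only the attributes you actually care about, not __dict__.
        data = {name: getattr(self, name, None) for name in self._debug_attrs}
        text = json.dumps({type(self).__name__: data}, indent=2, default=str)
        print(text)
        return text

class Order(DebugMixin):
    _debug_attrs = ("order_id", "status")

    def __init__(self, order_id, status):
        self.order_id = order_id
        self.status = status
```

Calling `Order(7, "open").pretty_print()` then prints a readable two-field snapshot instead of a wall of repr noise, which is exactly the point when you're knee-deep in legacy code.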
Best of luck, regardless
cogSciAlt@reddit
Never have I heard of `pdb` - you are a life saver. Blessed!
bachkhois@reddit
I built `journald-send`, a library for writing logs to journald. It is low-level, talking to journald using its native protocol. It is intended to be used by other logging frameworks - like the standard library's `logging`, `logbook`, and `structlog` - to write the logs to journald. I also made handlers for those frameworks:
- chameleon-log: integrating logbook with journald
- structlog-journald
journald-send is written in Rust, with the target of supporting Python 3.14 and its free-threaded mode.
InfinitelyTall@reddit
I've been working on my first CLI tool to scan, fix, and sandbox vulnerable packages in Python projects. It acts as a wrapper for known vulnerability scan tools such as pip-audit and datadog. It gives you an easy command set for CI/CD pipelines and Dockerfiles, and hides the complexity of running multiple vulnerability scanners. The sandbox feature was the most useful for me, and I think it's an interesting idea for further development.
I built it with AI help, but I've been tightening the rough edges myself and trying to keep the output practical instead of flashy.
Check it out or give feedback; the repo is here: https://github.com/Artemooon/snake-guard
jcubic@reddit
I've published my first Pip package. I'm mostly doing JavaScript stuff.
The project is called Horavox:
https://pypi.org/project/horavox/
After installation, it introduces the command `vox`, which is a speaking clock. I created the tool mostly for myself: I wanted a clock that would tell me the current time. It can run in the background and speak the time at a given interval. It uses local AI models from Hugging Face.
I use it like this:
That says the time every 30 minutes from 9 AM to 1 AM.
Most of the code was written by AI (Claude Opus 4.6). I don't see a reason to create new projects by hand just to prove something.
RollCharacter1601@reddit
fake winreg on linux / macos for testing winreg functions on a fake registry
Python's `winreg` module is only available on Windows. If your code reads or writes the Windows registry, you can't test it on Linux or macOS - and you can't run it in CI on Ubuntu runners.
fake_winreg solves this by providing a complete fake registry that works everywhere Python runs. Just replace `import winreg` with `import fake_winreg as winreg` and your tests work on any platform. Test your code before it hits the real registry!
fake_winreg provides a drop-in replacement for Python's built-in `winreg` module, enabling testing of Windows-registry-dependent code on Linux and macOS without a Windows environment.
Key capabilities:
- `winreg` API functions (`OpenKey`, `SetValueEx`, `EnumKey`, etc.) with matching signatures and error behavior
- .reg files (Registry Editor Version 4.0/5.0 format)
- .db, .json, and .reg via CLI or Python API
- real `winreg` behavior
I hope that is useful for someone.
Source :Â https://pypi.org/project/fake-winreg/
Github :Â https://github.com/bitranox/fake_winreg
Target Audience :Â Developers
Comparison: can't find any comparable project.
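The general shape of the drop-in idea can be sketched with a dict-backed toy. To be clear, this is NOT fake_winreg's implementation - just an illustration of faking a `winreg`-style API so registry code runs anywhere:

```python
# Toy in-memory "registry": path string -> {value_name: (value, type)}
_hives = {}

def CreateKey(root, subkey):
    """Mimics winreg.CreateKey: ensure the key exists, return a handle
    (here just the joined path string)."""
    path = f"{root}\\{subkey}"
    _hives.setdefault(path, {})
    return path

def SetValueEx(key, value_name, reserved, value_type, value):
    """Same argument order as winreg.SetValueEx; `reserved` is ignored,
    as it is in the real API."""
    _hives[key][value_name] = (value, value_type)

def QueryValueEx(key, value_name):
    """Like winreg.QueryValueEx, returns (value, type) or raises."""
    try:
        return _hives[key][value_name]
    except KeyError:
        raise FileNotFoundError(value_name)

key = CreateKey("HKEY_CURRENT_USER", r"Software\Demo")
SetValueEx(key, "Version", 0, 1, "1.0")
```

A library like fake_winreg does the much harder part: matching the real module's handle objects, error codes, and edge-case behavior closely enough that tests transfer to Windows unchanged.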
jason810496@reddit
I'm working on Agent Hooks: local permission dialogs for Claude Code/Codex, plus a FastAPI-like hook framework.
One pain point in multi-session AI coding is the permission flow: the prompt shows up in another session, so you have to break focus and go find the right screen just to approve it. Agent Hooks brings those permission requests back to a local macOS dialog on the desktop you're already using.
https://www.zhu424.dev/agent-hooks/latest/
efalk@reddit
Drop-in replacement for cgi.FieldStorage: https://github.com/efalk/fieldstorage.
Effective-Total-2312@reddit
Not a lot of time, but this weekend I'm working on some new fixes and features for my side project Pymetrica, a tool with codebase-level metrics: https://github.com/JuanJFarina/pymetrica
At my current job, we're looking to start using this specifically as a counter-measure to AI-generated code, both as a PR check and as a pre-commit hook that AI agents can also leverage to understand whether their changes are good enough.
graduallydecember@reddit
I've been working on a sans-IO BACnet (building protocol) library written in Rust, with a lightweight Python interface (https://github.com/yujia21/libbacnet). For now, it is only a BACnet/IP client with a few supported services.
A few Python BACnet libraries exist, but the key idea of a sans-IO implementation is that the codec portion can be reused by other libraries, allowing different higher-level clients for different use cases to share the same base and bring their own I/O logic (for example, a client using trio or anyio instead of asyncio) - which currently isn't the design of the existing libraries.
Furthermore, the main difference from the most popular existing library, bacpypes3, is the use of async context managers for the client, which handle cleanup on exit without needing an explicit call to a close function.
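For readers unfamiliar with the sans-IO pattern: the codec never touches a socket, it only turns bytes into events and requests into bytes. A toy newline-framed sketch (not libbacnet's actual API - BACnet framing is far more involved):

```python
class LineCodec:
    """Sans-IO codec: the caller owns the transport, so asyncio, trio,
    or plain blocking sockets can all reuse this same class."""

    def __init__(self):
        self._buffer = b""

    def receive_data(self, data):
        """Feed raw bytes from any transport; return complete frames."""
        self._buffer += data
        events = []
        while b"\n" in self._buffer:
            frame, self._buffer = self._buffer.split(b"\n", 1)
            events.append(frame.decode())
        return events

    def send_request(self, name):
        """Encode a request; the caller writes the bytes itself."""
        return name.encode() + b"\n"
```

Because partial reads just accumulate in the buffer until a frame completes, the I/O layer can hand over bytes in whatever chunks its event loop produces.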
Using a rust backend for CPU bound tasks in libbacnet means encoding/decoding is about 30 times faster than bacpypes3. Although of course in BACnet the bottleneck is generally network latency and not encoding/decoding speed!
Would love to have any feedback!
Outrageous_Ranger812@reddit
Built a tool that yells at you (in CI) when you forget to update .env.example
Outrageous_Ranger812@reddit
We've all been there.
You add a new feature, it needs a new environment variable, you add it to .env,
you deploy... and then someone else clones the repo and has no idea why it's broken
because .env.example is three features behind.
I built a small tool called envsniff that scans your JS/TS/Python/Go code for
every `process.env.X`, `os.environ.get("X")`, `os.Getenv("X")` call, and checks
it against your .env.example. If something's missing it exits 1 so your CI fails.
It also generates/updates .env.example for you:
And there's a GitHub Action if you want it baked into your PR workflow.
(Please consider dropping a star ⭐ on my GitHub - envsniff. This will motivate me to do more open-source projects.)
Feedback welcome especially around JS/TS edge cases.
Outrageous_Ranger812@reddit
GitHub: envsniff
Every project I've worked on has the same problem: someone adds `os.environ.get("NEW_SECRET_KEY")`
somewhere, forgets to update .env.example, and the next dev gets a confusing KeyError at runtime.
I built envsniff to fix this.
What it does:
- Scans Python, JS, Go, Dockerfile, and Shell files for env var usage (AST-based, not regex guessing)
- Generates or updates .env.example automatically
- Detects "new" vars not yet documented and "stale" vars no longer used
- Optional AI descriptions via Anthropic, OpenAI, Gemini, or Ollama
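The AST-based scanning (as opposed to regex guessing) looks roughly like this for the Python side. This is an illustrative sketch, not envsniff's actual scanner:

```python
import ast

def find_env_vars(source):
    """Collect names passed to os.environ.get("X") / os.getenv("X")
    by walking the parsed AST. Dynamic names are skipped on purpose -
    only string literals are reliable to document."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        is_environ_get = (
            isinstance(func, ast.Attribute) and func.attr == "get"
            and isinstance(func.value, ast.Attribute)
            and func.value.attr == "environ"
        )
        is_getenv = isinstance(func, ast.Attribute) and func.attr == "getenv"
        if (is_environ_get or is_getenv) and node.args:
            arg = node.args[0]
            if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                names.add(arg.value)
    return names
```

Comparing that set against the keys already listed in .env.example gives the "new" and "stale" buckets; a non-empty "new" bucket is what would make CI exit 1.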
Usage:
GitHub Action (drop-in):
Privacy note: when using AI, default values are stripped from code snippets
before sending to the provider, so no secrets leak.
Would love feedback, especially on the shell plugin and edge cases you've hit
with env var management.
Outrageous_Ranger812@reddit
Where to install?
Both are live. pip is the primary install path; the npm wrapper delegates to the
Python binary so JS teams can use it without caring about pip.
Outrageous_Ranger812@reddit
Qn: Does it work with monorepos or just single-language projects?
Outrageous_Ranger812@reddit
A: Works great with monorepos. It scans all supported file types in a directory tree in one pass - so a repo with a Python backend, a JS frontend, and a Go service in subdirectories all get scanned together. You can also use `--exclude` to skip directories (e.g. `node_modules`, `vendor`, `dist`).
Lanky_Independent402@reddit
Working on an API integration tool for our ticketing system at work - basically trying to automate all the repetitive stuff that comes through in daily tickets. Been wrestling with authentication headers for the past few days, but I finally got a breakthrough yesterday.
Also messing around with a computer vision library for nail art designs on weekends, trying to detect color patterns and suggest complementary shades. Still pretty rough, but the edge detection is getting better.