Showcase Thread
Posted by AutoModerator@reddit | Python | View on Reddit | 37 comments
Post all of your code/projects/showcases/AI slop here.
Recycles once a month.
probello@reddit
par-storygen v0.4.0 — Update: TTS voices, story export, relationship tracking, and more
GitHub: https://github.com/paulrobello/par-storygen PyPI: https://pypi.org/project/par-storygen/
AffectionateWar5927@reddit
Repo -> https://github.com/ArnabChatterjee20k/domdistill
Most scrapers treat all content as equal weight, and the LLM ends up paying attention to every piece of text equally.
Scraping is unsolved. Not because it's hard to fetch HTML, but because pages are chaos and LLMs aren't free.
Throwing a full page at an LLM works. It's also expensive and lazy.
I wanted something smarter. So I asked: what do humans actually pay attention to on a page?
Not just metadata. Not just content. The relationship between the two. I wanted a distillation-based approach on the DOM.
TheseTradition3191@reddit
nice angle. the relationship between structure and content is the useful signal.
one thing that pairs well is text density scoring before the llm sees anything:
high density = signal. low density = markup soup. lets you prune before you even reason about dom relationships, so the distillation step runs on cleaner inputs.
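As a rough sketch of the idea, text density can be scored with nothing but the stdlib HTML parser (this is an illustration of the heuristic, not code from either project; the `text_density` helper is hypothetical):

```python
from html.parser import HTMLParser

class DensityScorer(HTMLParser):
    """Accumulate visible text length and tag count for an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.text_chars = 0
        self.tag_count = 0

    def handle_starttag(self, tag, attrs):
        self.tag_count += 1

    def handle_data(self, data):
        self.text_chars += len(data.strip())

def text_density(fragment: str) -> float:
    """Chars of visible text per tag; high = content, low = markup soup."""
    scorer = DensityScorer()
    scorer.feed(fragment)
    return scorer.text_chars / max(scorer.tag_count, 1)

# Prune low-density fragments before any LLM call.
article = "<p>Long paragraph of actual readable content goes here.</p>"
navbar = "<ul><li><a href='/'>Home</a></li><li><a href='/about'>About</a></li></ul>"
print(text_density(article) > text_density(navbar))  # True: nav markup scores lower
```

A real pipeline would score per-subtree rather than per-fragment, but the pruning decision is the same comparison.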
AffectionateWar5927@reddit
Yep, I thought about that at some point: using the model as well as a code-based regression per chunk. The thing is, I believe that most of the time a developer may not follow proper semantics. What if the dense node itself is not relevant, or a combination of dense + shallow is the better combo? I'm focusing on finding better chunk combinations from each split.
Atamakit@reddit
EcoSound Monitor. Open source wildlife compliance platform for wind farms
GitHub: https://github.com/okalangkenneth/ecosound-monitor
Processes field audio recordings from wind turbine sites, identifies bird and bat species using real ML models (BirdNET + BatDetect2), and generates regulatory PDF compliance reports.
Tested with a real recording, 5 European species correctly identified (Robin, Chaffinch, Blue Tit, Blackbird, Great Tit) at 78–92% confidence.
Stack: FastAPI · birdnetlib · BatDetect2 · React 18 · Docker · GitHub Actions CI
One command: docker compose up --build
MIT licensed, contributions welcome.
Ryuchido@reddit
NeoShell – Control your Windows PC from your phone via Wi-Fi (no Python required)
I built a Windows utility that runs a local web server (compiled Python with PyInstaller). Download the folder, run NeoShell.exe, scan the QR code with your phone, and you can control your PC from your phone's browser.
Tech stack: Python (FastAPI) backend compiled to .exe + static HTML/CSS/JS frontend + PWA for mobile install.
Why FastAPI? It's lightweight, async by default, and gives other developers freedom to extend or modify the server without being locked into Flask-specific patterns.
Installation: Just download and run – no Python installation required. Works offline after download.
Compared to TeamViewer/Unified Remote – NeoShell is free, open-source, no Python needed.
GitHub: https://github.com/rud1x/NeoShell
Would love any feedback!
ErrorArtistic2230@reddit
Evalkit-bench — pytest-style regression tracking for LLM prompts
What My Project Does
evalkit-bench is a CLI tool for writing and running structured test suites against LLM outputs. You define cases in YAML (prompt, expected output, scorer), run one command, and get a terminal table of pass/fails, an HTML report, and an automatic regression diff against your last run.
The regression tracking is the core idea: every run is saved as JSON, and the tool diffs against the previous run for the same suite — telling you exactly which cases regressed or improved. So when you tweak a prompt or swap models, you know immediately what broke.
Three scorer types: exact match/regex, LLM-as-judge (configurable rubric), and semantic similarity via sentence-transformers.
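The regression diff described above boils down to comparing two pass/fail records. A minimal sketch, assuming a hypothetical `{case_id: passed}` shape (evalkit-bench's actual JSON schema may differ):

```python
def diff_runs(previous: dict, current: dict) -> dict:
    """Bucket cases into regressed / improved / new between two runs.

    (Hypothetical record shape; not evalkit-bench's real schema.)
    """
    regressed = [c for c, ok in current.items()
                 if not ok and previous.get(c) is True]
    improved = [c for c, ok in current.items()
                if ok and previous.get(c) is False]
    new_cases = [c for c in current if c not in previous]
    return {"regressed": regressed, "improved": improved, "new": new_cases}

prev = {"greeting": True, "summary": True, "refusal": False}
curr = {"greeting": True, "summary": False, "refusal": True, "tone": True}
print(diff_runs(prev, curr))
# {'regressed': ['summary'], 'improved': ['refusal'], 'new': ['tone']}
```

The same bucketing works whether the trigger was a prompt tweak or a model swap, which is why saving every run as JSON pays off.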
Target Audience
Developers and ML engineers who are iterating on prompts or comparing models in production or pre-production contexts. Not a toy — it has CI-friendly exit codes, 44 tests (all run without API calls), and a proper HTML report you can attach to a PR or send to a teammate.
Comparison
GitHub: github.com/Arman176001/evalkit | v0.1.2, MIT
sheik66@reddit
In my free time I'm building the Python library Protolink. It's a lightweight alternative to langchain/langgraph, focused more on agents communicating with each other (A2A) than on chaining calls.
Also supports both structured flows and autonomous agents, and avoids a lot of the abstraction/boilerplate.
Check it out here: https://github.com/nMaroulis/protolink
Motivation: I wanted a simpler and more comprehensible way to build and deploy AI agents with Python, and it's also really interesting to experiment with custom LLM inference loops.
MYGRA1N@reddit
Built a small Python TUI to configure Claude Code's status line. Toggle fields, pick a theme, hit Enter. Pure Python, no dependencies.
https://github.com/jsubroto/claude-code-statusline
dhyanais@reddit
Gordon’s Sun Clock – real-time solar dial using Skyfield
I built a solar-based clock that visualises the actual position of the Sun, Moon, planets and stars for a given location.
Instead of fixed hours, the dial follows the Sun’s path, so you can see solar noon, day length and seasonal changes directly — as a more natural representation of daily rhythms.
Tech:
Repo:
https://github.com/gaxmann/gordonssunclock
Junior-Form7665@reddit
Gordon’s Sun Clock – solar-based clock with real ephemerides
I’ve been working on a solar-based clock that visualises the actual position of the Sun, Moon, planets and stars for a given location.
Instead of fixed hours, the dial follows the Sun’s path across the sky, so you can see the progression of the day, solar noon and the changing length of daylight over the year.
On the technical side:
The project is here:
https://github.com/gaxmann/gordonssunclock
drodri@reddit
We're introducing conan-py-build: a PEP 517 build backend that brings Conan's C/C++ dependency management directly into the Python wheel build.
If you maintain a Python package with native C/C++ extensions, you've likely had to manage those dependencies outside the wheel build, through system packages, vendored source trees, FetchContent, or a separate native package manager step. conan-py-build pulls that dependency layer inside pip wheel, so resolving C/C++ libraries is no longer a separate step before the Python build.
A few things you get with this backend that uses Conan as part of the wheel build for native C/C++ dependencies:
• A large catalog of C/C++ recipes from Conan Center
• Binary caching across builds and CI runs
• Profiles and lockfiles for reproducible wheels
• Conan-managed runtime libraries deployed alongside the extension
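For a rough idea of what adoption looks like, a PEP 517 backend is selected in pyproject.toml along these lines (the exact requirement and backend strings below are assumptions; check the project's documentation for the real ones):

```toml
# pyproject.toml -- hypothetical sketch; consult the conan-py-build docs
# for the actual backend string and any Conan-specific settings.
[build-system]
requires = ["conan-py-build"]
build-backend = "conan_py_build"
```

With the backend in place, `pip wheel .` resolves the C/C++ dependency layer as part of the wheel build instead of as a separate step.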
The project is in beta and under active development. Maintainers have a long experience developing and supporting Conan. Try it on a project, open an issue if something doesn't work, and tell us what you'd like to see.
Repo: https://github.com/conan-io/conan-py-build (MIT license)
Blog: https://blog.conan.io/cpp/conan/python/2026/05/05/Introducing-conan-py-build.html
Documentation: https://conan-py-build.conan.io/
xubylele@reddit
I built a VS Code extension to level up the Jinja2 development experience.
It features natural, smooth syntax highlighting, a built-in way to inspect Jinja2 variables directly in your file, and several other improvements that make working with Jinja2 noticeably better.
Check it out on the Repository and the VS Code Marketplace.
End0rphinJunkie@reddit
The variable inspection alone makes this worth installing. Writing complex jinja templates without it usually just turns into a massive headache of print debugging.
xubylele@reddit
That's right—the idea came to me while I was working at my previous job, where the only way to know if a template would work was to create the document over and over again, going through the ordeal of generating multiple datasets to test different use cases.
niqqaficent25@reddit
I made this Python CLI (lockdiff) that parses diff of package lockfiles.
Lockfile diffs are unreadable once you have a few hundred transitive deps. lockdiff parses uv.lock and package-lock.json and prints just what changed — added, removed, or version-bumped. Stdlib only. MIT.
pipx install git+https://github.com/Basliel25/lockdiff
Feedback and collaborations very much welcome.
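Once the lockfiles are parsed into `{package: version}` maps, the comparison itself is small. A sketch of that step (my own illustration, not lockdiff's actual code; the `diff_locks` helper is hypothetical):

```python
def diff_locks(old: dict, new: dict) -> dict:
    """Compare {package: version} maps from two lockfile snapshots.

    The real CLI first parses uv.lock / package-lock.json to build these maps.
    """
    added = {p: v for p, v in new.items() if p not in old}
    removed = {p: v for p, v in old.items() if p not in new}
    bumped = {p: (old[p], new[p]) for p in old.keys() & new.keys()
              if old[p] != new[p]}
    return {"added": added, "removed": removed, "bumped": bumped}

old = {"requests": "2.31.0", "idna": "3.6"}
new = {"requests": "2.32.3", "certifi": "2024.2.2"}
print(diff_locks(old, new))
```

The hard part a real tool handles is normalizing the two very different lockfile formats into those maps in the first place.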
andreabarbato@reddit
I’ve been iterating on this algorithm for quite a while. The original goal was to beat numpy.sort 100% of the time; that turned out to be unrealistic, but this implementation is already often faster on a wide range of inputs.
Most of the code was AI-assisted, so if you spot bugs or suspicious benchmark behavior, please open an issue or PR instead of silently judging. Constructive feedback is very welcome.
https://github.com/RAZZULLIX/super_fast_sort/
Codemageddon@reddit
Hi everyone. Today I released the first beta of an async Kubernetes client for Python, built on top of Pydantic v2 and inspired by kube.rs. Why I decided to build it:
* got tired of writing `# type: ignore` every time I used kubernetes-asyncio
* got tired of endlessly digging around to figure out what shape kubernetes-asyncio expects for a given piece of a resource spec
* limited built-in support for working with custom resources, which is critical when writing controllers
**What's there now:**
* Strictly typed API and resource models
* Support for multiple Kubernetes versions simultaneously
* Typed models covering the entire Kubernetes spec
* Full custom resource support — just write a Pydantic model for the resource you need, and you can work with it the same way you'd work with a built-in
* `aiohttp` and `httpx` as the underlying HTTP clients
* Support for `asyncio` and `trio`
* Thanks to Pydantic v2, Kubex is dramatically faster than kubernetes-asyncio, uses much less memory, and makes fewer heap allocations (see benchmarks)
**Links:**
Docs: [https://kubex.codemageddon.me/0.1.0-beta.1/](https://kubex.codemageddon.me/0.1.0-beta.1/)
GitHub: [https://github.com/codemageddon/kubex](https://github.com/codemageddon/kubex)
**Code example:**
```python
from kubex.api import Api
from kubex.client import create_client
from kubex.k8s.v1_35.core.v1.pod import Pod

async with await create_client() as client:
    pod_api: Api[Pod] = Api(Pod, client=client, namespace="default")
    pods = await pod_api.list()
    for pod in pods.items:
        print(pod.metadata.name, pod.status.phase)
```
---
The library is currently in early beta, meaning the public API surface may still change — but it's unlikely to change much, at least for the core functionality.
Maleficent-Emu-4549@reddit
opensmith – local-first LangSmith alternative for Python
Built opensmith: a local-first LLM pipeline tracer.
No cloud, no account, no Docker.
pip install opensmith
@trace decorator + autopatch for OpenAI, Anthropic, LiteLLM, Qdrant, ChromaDB, Pinecone. Traces are stored locally in SQLite. Dashboard at localhost:7823 with live WebSocket updates, charts, search, and filters.
Async support, tags, console mode, opensmith.json config.
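The trace-decorator-into-SQLite idea can be sketched in a few lines (this is a minimal illustration, not opensmith's actual API or schema; the table layout here is invented):

```python
import functools
import json
import sqlite3
import time

# In-memory DB for the sketch; a real tracer persists to a local SQLite file.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE traces (name TEXT, ms REAL, result TEXT)")

def trace(fn):
    """Record each call's name, duration, and JSON result (hypothetical schema)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        db.execute("INSERT INTO traces VALUES (?, ?, ?)",
                   (fn.__name__, elapsed_ms, json.dumps(result)))
        return result
    return wrapper

@trace
def answer(prompt: str) -> str:
    return f"echo: {prompt}"

answer("hello")
rows = db.execute("SELECT name, result FROM traces").fetchall()
print(rows)  # [('answer', '"echo: hello"')]
```

The autopatch feature described above presumably wraps library client methods with a decorator of this shape instead of requiring manual annotation.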
GitHub: github.com/shivnathtathe/opensmith
Would love feedback from Python devs building LLM apps!
Ok_Issue_6675@reddit
We have built a package for WakeWord/Hotword for Python:
https://github.com/frymanofer/Python_WakeWordDetection
We recently added:
1. Speaker identification and isolation.
2. Very powerful and fast Text to Speech.
Pytrithon@reddit
Introduction
I have already introduced Pytrithon three times on Reddit.
See:
https://www.reddit.com/r/Python/comments/1q8dwsm/pytrithon_v119_graphical_petri_net_inspired_agent/
https://www.reddit.com/r/Python/comments/1nr3qvm/pytrithon_graphical_petrinet_inspired_agent/
https://www.reddit.com/r/Python/comments/1mx9w5r/graphical_petrinet_inspired_agent_oriented/
What My Project Does
Pytrithon is a graphical Petri net inspired agent oriented programming language based on Python.
It allows writing code as a two dimensional graph of interconnected elements and separates data as Places and code as Transitions. Inter Agent communication and GUI widgets are first class components of the language. Through the Monipulator, Agents can be monitored and manipulated.
Target Audience
The target audience is both experienced and novice programmers who want to try something new.
Why I Built It
I realized the power of Petri net inspired programming and the joy of having a more expressive way to specify control flow.
Comparison
To my knowledge, no other visual programming language embeds actual code into its graphs.
How To Explore
To run all included example Agents you need at least Python 3.10 installed. To install all dependencies, run the 'install' script. Then you can start up a Nexus with a Monipulator by running the 'pytrithon' script, where you can start Agents by opening them with 'ctrl-o' twice and hitting the 'Open Agent' button. You can also directly specify which Agents to run through the command line by starting a Nexus, Monipulator, and Agents in one single command: 'python nexus -m \ \'.
Recommended example Agents to run are: 'basic', 'prodcons', 'address', 'kata', 'calculator', 'kniffel', 'guess', 'yahtzeeserver' + multiple 'yahtzee', 'pokerserver' + multiple 'poker', 'chatserver' + multiple 'chat', 'image', 'jobapplic', and 'nethods'. As a proof of concept, I created a whole Pygame game, TMWOTY2, choreographed by 6 Agents running as their own processes, and it runs at a solid 60 frames per second. To start or open TMWOTY2 in the Monipulator, run the 'tmwoty2' or 'edittmwoty2' script. Your focus should be on the 'workbench' folder, which contains all Agents and their respective Python modules; the 'Pytrithon' folder is just the backstage where the magic happens.
What Is New
Since my last post I have added a distributed Yahtzee game which you should try out. In order to set up a server on a reachable machine and connect other machines, you need to do the following:
On the machine meant to be the server, run 'python nexus yahtzeeserver' first. Then on the machines meant to be the clients through which users play, run 'python nexus -x \ yahtzee'. The clients probe the interconnected Nexi for a server and start with a lobby mask where you can select your name and start a game with all players signed up.
GitHub Link
https://github.com/JochenSimon/pytrithon
-------------------------------
This is the fourth post about Pytrithon on Reddit. There is a plethora of example Agents to view and run included in the repository.
Please check it out and send feedback to the E-Mail address stated in the Monipulator About blurb.
Adventurous_Sky_433@reddit
**tcs-macro-pulse** — open-source macro data pipeline for FRED + GDACS
A small (\~1.4K LOC, MIT) pure-Python toolkit for fetching public financial/macro data:
📊 **L1 Macro (FRED):** 10 indicators (Fed Funds, CPI, unemployment, 10Y/2Y treasury, VIX, S&P, HY spreads) + built-in yield-curve-spread helper
🌪️ **L2 Events (GDACS):** Natural disaster RSS parser with severity scoring
💬 **L3 Sentiment:** Lightweight keyword-based news sentiment (en + vi). Optional `[nlp]` extra for FinBERT.
```python
from tcs_macro_pulse.fetchers.fred import FREDFetcher

fred = FREDFetcher()
spread = fred.yield_curve_spread()  # → -47.2 bps (recession indicator)
```
Adventurous_Sky_433@reddit
GitHub: https://github.com/TCS-PLATFORM-OFFICIAL/tcs-macro-pulse
Looking for feedback on additional public sources to add (IMF? ECB?) and testing patterns for time-sensitive financial data. Cheers!
yehors@reddit
I have added ability to scrape .onion websites to https://github.com/BitingSnakes/silkworm with async API
FrenchFries505@reddit
https://github.com/AniruthKarthik/qrtunnel
share or receive files instantly via QR code with smart LAN + tunnel routing, zero logins, and simple security
probello@reddit
Parllama -- a Textual TUI for managing and chatting with LLMs (showcase of what you can build with Textual + Rich)
Repo: https://github.com/paulrobello/parllama
If anyone is building TUIs with Textual and wants to compare notes on architecture, happy to discuss.
asphyxia-a@reddit
I recently built simple-tls, a TLS library designed to have an API almost identical to Python's built-in ssl module, but with support for modern, advanced features that the standard library doesn't cover yet.
Key features: read(), write(), and contexts similar to the native ssl module; mypy typing; and clean dataclasses for easy extension parsing.
You can check out the source code and examples here:
https://github.com/asphyxiaxx/simple-tls/
Any feedback is appreciated.
MORPHOICES@reddit
I’ve been working on a system to turn what you already know into a structured digital product — without juggling a bunch of disconnected tools.
What I kept running into wasn’t a lack of effort.
It was that nothing actually held together.
You try things. They work for a bit. Then you switch, restart, or lose momentum.
So instead of adding more tools (or even more AI on top), I started focusing on how everything connects:
idea → offer → workflow → validation → iteration
The AI part is there, but more as infrastructure — not the main thing.
Still early, but that’s the direction I’ve been exploring.
dangerousdotnet@reddit
pyhaul is a lightweight Python library that provides safe, resumable HTTP downloads around all popular Python HTTP libraries. Pure Python, zero required dependencies, automatic byte-range request negotiation, crash-safe atomic file handling, plus it handles all the weird HTTP protocol edge cases correctly so you never end up with a partial or corrupt file on disk.
How pyhaul works:
* requests, httpx, aiohttp, urllib3, and niquests are fully supported today, in both sync and true async modes.
* pyhaul never creates, configures, or closes sessions.
* Native errors pass through untouched: httpx.ReadTimeout stays httpx.ReadTimeout, so you should be able to drop it into your existing codebase.
How to use pyhaul
pyhaul has zero required dependencies. Pick an HTTP client extra that matches what you already use. The entire API surface fits in one function: haul() (or haul_async() for async code). Pass a URL, your HTTP client, and a destination path. haul() either returns a CompleteHaul (meaning the full file was downloaded and is present on disk at dest), or it throws either a PartialHaulError (an error the library knows is retryable, with a nested native error inside it) or some other kind of (probably non-retryable) error.
What happens on interruption
If the download is interrupted — network drop, process kill, Ctrl-C — two sidecar files remain on disk:
* big.zip.part — the bytes downloaded so far
* big.zip.part.ctrl — a binary checkpoint with the cursor position, ETag, and block-level hashes
The destination file (big.zip) does not exist at this point. There is no state where a partially-written file sits at the final path.
Resume
To resume, call haul() again with the same arguments. pyhaul reads the checkpoint and negotiates an HTTP Range request for the tail of the object. When the checkpoint holds a strong ETag, pyhaul also sends If-Range with that validator (we differentiate between weak or missing validators exactly the way the HTTP spec requires). Assuming validation doesn't fail, pyhaul then appends from where it left off. If the remote file changed between attempts, pyhaul detects the ETag mismatch and restarts from byte 0 — no silent corruption.
Add retry logic
One haul() = one HTTP request. When the stream ends early, pyhaul raises PartialHaulError and saves progress. Bring your own retry logic, async processing loops, rate limiting, etc. You can add tenacity around it like you would your own stuff. HaulState is an optional mutable bag updated in-place throughout the download — useful for progress reporting, painting a TUI or GUI, or deciding whether to adapt retries.
Track progress
Pass an optional on_progress function to get called after each chunk lands on disk.
Ok-Bother-8872@reddit
FMQL: Working with a lot of frontmatter markdown files (Obsidian vaults, Jekyll sites, agentic skills)? FMQL treats them as a schemaless graph/document database, with Cypher-like syntax for the CLI and Django-style field__op=value predicates in Python.
Pure Python framework + CLI. Plugin architecture for search backends; basic text scan built-in, plus fmql-semantic (hybrid dense vectors + BM25 with reranking).
MIT, pip install fmql. Semantic backend: pip install fmql-semantic.
Nikolay_Lysenko@reddit
A package that takes YAML files as inputs and renders 2D floor plans in PDF and PNG. In addition to the basic elements (such as walls, windows, and doors), the tool can also draw special symbols for electricity and lighting as well as supporting info (dimension arrows, text boxes, etc).
[GitHub](https://github.com/Nikolay-Lysenko/renovation)
**What My Project Does**
The project is a wrapper around the well-known `matplotlib` library. This library is very versatile, and I have added some functionality on top of it:
* It is a standalone CLI app, not a library, so programming skills are not required of the user, but familiarity with YAML is essential.
* Patches used in engineering floor plans are added.
* The management of inter-dependent floor plans is simplified with anchors and inheritance of element collections.
**Target Audience**
I see the target audience as people who do not like drag-and-drop GUIs and prefer text-based control instead. A config-based interface simplifies fine-grained control and allows versioning projects with a VCS like Git. Last but not least, it's easy to generate configs with AI agents.
**Comparison**
In the Python world, I cannot find any mature alternatives. You may want to look at [this repo](https://github.com/luzpaz/floor-planner).
However, there are lots of commercial drawing tools that are way more advanced. Even 3D modeling software is widely available. To name a few, there are SketchUp and Fusion 360.
My tool is both free and sufficient for most non-professional tasks: a happy medium for DIY enthusiasts who want to draw renovation plans themselves.
**Links**
[GitHub](https://github.com/Nikolay-Lysenko/renovation)
[PyPI](https://pypi.org/project/renovation/)
PretendPop4647@reddit
What My Project Does:
I’m building Briefly AI, a Python CLI that turns long content into concise AI briefs from the terminal.
It supports local text/files, URLs, PDFs, YouTube videos, and piped input. It extracts the content first, then generates a brief. URLs use extraction with fallback, PDFs use pdfplumber, and YouTube tries captions first with transcription fallback.
Target Audience: Developers, students, researchers, or anyone who reads a lot of long content.
It is still early-stage, but already useful in my and my friends' daily workflows.
Comparison: It is similar to AI summarizer tools, but focused on terminal workflow and flexible input handling, not just a single prompt/API call.
Repo: https://github.com/Rahat-Kabir/briefly-ai
If you find it useful, a star would mean a lot. Happy to hear what input type I should add next.
RealDevDom@reddit
For everyone using Python with an AI copilot, I've built specfact cli, an OSS validation tool: https://github.com/nold-ai/specfact-cli
The CLI runs locally in pretty much any environment, sends no data anywhere, and can be hooked into your development process via slash prompts.
It's still in beta and free of charge.
bert_plasschaert@reddit
Interactive Github banner, Add your name to my profile!
I've created an interactive Banner for my Github README homepage.
Fully powered by Python in Github Actions so you can easily add the system to your own profile.
Use the link under the banner to open an issue and your username will be graffiti-tagged onto the banner. The banner is fully light- and dark-mode compatible, so it will look great on every device!
Try it out: https://github.com/BertPlasschaert
I'd really appreciate stress-tests and any feedback or suggestions.
Or read a more detailed write-up on what issues I had to solve along the way:
https://github.com/BertPlasschaert/TaggableBanner/blob/master/writeup/writeup.md
If you liked the idea or learned something new, consider giving it a star! 🌟
No AI was used during this project
Input-X@reddit
A local multi-agent framework where your AI agents keep their memory, work together, and never ask you to re-explain context
https://github.com/AIOSAI/AIPass
bnyhil31@reddit
Aevum — open-source context kernel for AI agents (Apache-2.0)
Sits between your agent and the data it accesses. Every operation is policy-governed and recorded in a tamper-evident sigchain (Ed25519 + SHA3-256). Any past session is deterministically replayable.
Built around the governance and liability questions that kept coming up while researching AI agents — who's accountable, how do you prove what happened, how do you satisfy a regulator. This is a best current answer, not a final one.
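The tamper-evident part of a sigchain can be sketched with a plain SHA3-256 hash chain (my own illustration of the concept; Aevum's real sigchain additionally Ed25519-signs every link, and its record format is surely different):

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> None:
    """Link each entry to its predecessor's SHA3-256 digest (sketch only)."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "digest": hashlib.sha3_256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks all later digests."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": prev, "record": entry["record"]},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["digest"] != hashlib.sha3_256(payload.encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True

chain: list = []
append_entry(chain, {"op": "read", "path": "/data/a.csv"})
append_entry(chain, {"op": "write", "path": "/data/b.csv"})
print(verify(chain))                  # True
chain[0]["record"]["op"] = "delete"   # tamper with history
print(verify(chain))                  # False
```

The signatures are what turn "tamper-evident to the operator" into "provable to a regulator": a hash chain alone shows the log is self-consistent, while signed links attribute each entry to a key holder.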
Docs: https://aevum.build/?utm_source=reddit&utm_medium=post GitHub: https://github.com/aevum-labs/aevum
jftuga@reddit
https://github.com/jftuga/withpy
Batteries-included Swiss-army CLI using only the Python standard library and no other dependencies. Still very alpha.
Since this only uses the standard lib, I can still have the source broken up into multiple files and then have my build.py create a single-file artifact a la the SQLite Amalgamation technique.
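The amalgamation step can be sketched in a few lines (an illustration of the technique only, not jftuga's actual build.py; the `amalgamate` helper is hypothetical and ignores details like `__main__` guards and relative imports):

```python
from pathlib import Path
import tempfile

def amalgamate(sources: list[Path], out: Path) -> None:
    """Concatenate modules into one file, hoisting top-level imports and
    deduplicating them (a sketch of the SQLite-Amalgamation-style idea)."""
    imports, body = [], []
    for src in sources:
        for line in src.read_text().splitlines():
            # Only unindented lines start with these prefixes, so this
            # naturally skips imports inside functions.
            if line.startswith(("import ", "from ")):
                if line not in imports:
                    imports.append(line)
            else:
                body.append(line)
    out.write_text("\n".join(imports + body) + "\n")

# Demo with two throwaway modules.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.py").write_text("import os\ndef a():\n    return os.name\n")
(tmp / "b.py").write_text("import os\nimport sys\ndef b():\n    return sys.platform\n")
amalgamate([tmp / "a.py", tmp / "b.py"], tmp / "bundle.py")
print((tmp / "bundle.py").read_text().count("import os"))  # 1 (deduplicated)
```

A production version also has to order modules so definitions precede uses, which is the main thing a hand-rolled build script ends up managing.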