Persistent Memory For AI
Posted by PlayfulLingonberry73@reddit | LocalLLaMA | 5 comments
I have been using persistent memory for a couple of months and it has been a great experience. Thought I would share it here with all of you. Please feel free to check it out and share your feedback.
There are multiple ways to use it:
- Standalone
- MCP Server
- Clustered Server
https://github.com/yantrikos/yantrikdb-server
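For the MCP Server mode, a client such as Claude Desktop would register it in its MCP configuration. The snippet below is a hypothetical sketch only: the launch command, package name, and `YANTRIKDB_DATA_DIR` environment variable are assumptions for illustration, not the project's documented settings; check the repo's README for the actual values.

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "npx",
      "args": ["-y", "yantrikdb-server"],
      "env": {
        "YANTRIKDB_DATA_DIR": "~/.yantrikdb"
      }
    }
  }
}
```

In this pattern the client spawns the server process over stdio and the memory store persists across chat sessions in the configured data directory.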
https://i.redd.it/dc5y8s4saevg1.gif
koushd@reddit
Did you ask it to recall whether this exact concept was posted by 50 other people every day for the past 2 years, including one by Milla Jovovich?
PlayfulLingonberry73@reddit (OP)
Well, I did my research and filed a patent for it, so this is not something I am taking lightly. I respect where you are coming from, with all the AI slop out there. But I could not find a decent solution, so I made my own.
koushd@reddit
yeah there's no patent
PlayfulLingonberry73@reddit (OP)
True, I will be hearing from the patent office next year, I guess. You can read the whitepaper if you want. https://zenodo.org/records/18793952
For some reason I like this debate. Debate always opens up new findings and possibilities.
It is definitely true that many folks are working on this. The same goes for models and other apps. So I will keep iterating and improving mine. My ultimate goal is to make a Jarvis-like companion. The dream is what keeps me going in my hobby world.
PlayfulLingonberry73@reddit (OP)
The main reason I posted here is that this is meant to help folks like us who run LLMs locally and want coherent behavior regardless of the model.