I made a system that detects what you're asking about and makes the AI think like an expert in that field before answering. The difference is astonishing.
Posted by BLOCK__HEAD4243@reddit | LocalLLaMA | 8 comments
So this one took a while...
First problem: you ask an AI something and, unless you have a lengthy conversation to back it up or a very detailed question, you get mush. It's not that the AI is dumb; it just has to be shown how to think correctly. OK, but how do you do that?
Well, you make it think like an expert in whatever you need answers on. OK, so how do you do that?
- TL;DR: you build massive prompts, grounded in a lot of research, that direct the AI to think like an expert. I'm not talking about "here's a bunch of info on this subject"... you delve into the minds and established thought patterns of those experts. Take an engineer. I don't need to feed an LLM a bunch of formulas; the LLM knows how to find those and can run them. No, teach the LLM that an engineer starts here, then gets here, and finally arrives at this (for this specific problem), then just add about a thousand rules and bingo. All of the info already exists, but if the LLM is fired off shotgun style, God knows what's coming back. That's all I can give without revealing any secret sauce. OK... so then you do that for as many domains as you want.
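OP keeps the actual prompts private, so this is purely an illustration of the idea: encode the *order of reasoning* an expert follows, plus hard rules, as data and render it into a system prompt. Every step, rule, and name below is invented for the example.

```python
# Hypothetical sketch (not OP's actual prompts): a "methodology prompt"
# captures how an expert reasons, not a dump of reference facts.
ENGINEERING_METHODOLOGY = {
    "reasoning_steps": [
        "Restate the problem and list knowns/unknowns with units.",
        "Identify the governing principles before touching any formula.",
        "Check boundary conditions and failure modes.",
        "Only then compute, and sanity-check the result's magnitude.",
    ],
    "rules": [
        "If a required parameter is missing, ask a follow-up instead of guessing.",
        "State every assumption explicitly.",
    ],
}

def build_system_prompt(methodology: dict) -> str:
    """Render a methodology spec into the prompt injected above the question."""
    steps = "\n".join(f"{i + 1}. {s}"
                      for i, s in enumerate(methodology["reasoning_steps"]))
    rules = "\n".join(f"- {r}" for r in methodology["rules"])
    return f"Reason in this order:\n{steps}\n\nHard rules:\n{rules}"
```

A real version would be far longer ("about a thousand rules"), but the shape stays the same: ordered steps first, constraints second.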
But I'm lazy and don't want to have to select a domain manually every time.
Now comes the (seemingly) simple task of building a classifier: take what the user (me) says, match it to a prompt, and inject that prompt on top of the question. Without giving away too much, the classifier isn't a word parser or matcher. It's more complicated than that, especially for edge cases, but I digress.
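OP says the real classifier is more sophisticated than keyword matching, so treat this as a simplified stand-in for the routing step only: score the question against a short description of each domain and prepend the winner's expert prompt. All domain names, descriptions, and prompt stubs are made up.

```python
import math
from collections import Counter

# Toy router: bag-of-words cosine similarity between the question and a
# per-domain description. (OP's actual classifier is not a word matcher.)
DOMAIN_DESCRIPTIONS = {
    "engineering": "stress load beam material tolerance circuit voltage design",
    "medicine": "symptom diagnosis dosage patient treatment drug interaction",
    "law": "contract liability statute clause jurisdiction precedent",
}

DOMAIN_PROMPTS = {  # placeholders for the long methodology prompts
    "engineering": "Think like a senior engineer: define constraints first...",
    "medicine": "Think like a clinician: rule out red flags first...",
    "law": "Think like a lawyer: identify the governing jurisdiction first...",
}

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(question: str) -> str:
    """Pick the best-matching domain and inject its prompt above the question."""
    scores = {d: _cosine(_vec(question), _vec(desc))
              for d, desc in DOMAIN_DESCRIPTIONS.items()}
    best = max(scores, key=scores.get)
    return f"{DOMAIN_PROMPTS[best]}\n\nUser question: {question}"
```

The injected prompt always sits above the question, so the model sees "how to think" before "what to answer."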
What if your question combines multiple domains?
Basically, take the classifier, stack it for the number of domains you have, and add a percentage match value to each. The highest two win. I haven't had a case that demands more than two yet. Attachments? Yes, it factors in attachments (to a certain degree) as well.
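The "highest two win" step can be sketched like this: normalize the raw per-domain match scores into percentage shares and keep the top two that clear a minimum share. The cutoff value here is invented, not something OP stated.

```python
def top_domains(scores: dict, max_domains: int = 2, floor: float = 0.15):
    """Normalize raw match scores to percentage shares and keep the top
    two domains above a minimum share. (The 0.15 floor is a guess.)"""
    total = sum(scores.values())
    if total == 0:
        return []
    pct = {d: s / total for d, s in scores.items()}
    ranked = sorted(pct.items(), key=lambda kv: kv[1], reverse=True)
    return [(d, round(p, 2)) for d, p in ranked[:max_domains] if p >= floor]
```

With two domains surviving, you would inject both methodology prompts, weighted or simply concatenated.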
Put all that together and you start getting some really well-thought-out questions, especially because somewhere in there are sets of rules that block the AIs from defaulting to made-up data and make them ask follow-ups instead. Answer the follow-ups and it starts getting deadly accurate.
Oh, and one last thing... I didn't make one AI do this. I made five different companies' flagships do this, then debate each other... for multiple rounds. That engine is a whole different story.
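OP calls the debate engine "a whole different story," so here is only a very rough sketch of what a multi-round debate loop can look like. `models` maps a name to any callable that takes a prompt and returns an answer; in practice each entry would wrap a provider's API client (all plumbing here is hypothetical).

```python
def debate(question: str, models: dict, rounds: int = 2) -> dict:
    """Each round, every model sees the question plus everyone's previous
    answers and revises its own. Returns the final answer per model."""
    # Round 1: independent answers.
    answers = {name: model(question) for name, model in models.items()}
    # Later rounds: revise in light of the others.
    for _ in range(rounds - 1):
        transcript = "\n".join(f"{n}: {a}" for n, a in answers.items())
        revised = {}
        for name, model in models.items():
            prompt = (f"Question: {question}\n"
                      f"Answers so far:\n{transcript}\n"
                      f"Critique them and give your revised answer.")
            revised[name] = model(prompt)
        answers = revised
    return answers
```

A synthesis step (merging the final answers into one) would sit after the loop; OP mentions having one but doesn't describe it.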
Any questions, or if you want to use it, feel free to ask!
denoflore_ai_guy@reddit
This person invented system prompts and is acting like they found the Ark of the Covenant... wtf is going on today.
BLOCK__HEAD4243@reddit (OP)
If it were that simple...
breezewalk@reddit
Sounds neat! Wondering if all of that would create noticeable latency, especially as context grows and gets used up in long-horizon tasks. Would love to try it out if you're comfortable sharing!
BLOCK__HEAD4243@reddit (OP)
Sooo… it did. I cut about 80% out. A single AI is just as fast as any without it, but going up to 5 and then adding a synthesis engine does make for a 1-3 minute wait. But we're talking multiple debate rounds, and honestly, compared to the time you'd spend getting any agent up to speed, it saves 10x that.
erkinalp@reddit
Isn't this the default behaviour of DeepSeek v3.x models?
BLOCK__HEAD4243@reddit (OP)
Not really… DeepSeek is a strong model but it’s still one model giving you one answer based on however it decides to reason about your question. What I’m describing is an external system that detects the domain before the model sees the question, then injects a structured methodology prompt that was built from actual expert reasoning patterns, not the model’s own interpretation of how to think about it. The model doesn’t decide how to reason. The system tells it. Big difference in output consistency.
aldergr0ve@reddit
Yo dawg, I heard you like AI, so we put AI in your AI so you can prompt while you prompt
BLOCK__HEAD4243@reddit (OP)
LOL