Choosing the right model
Posted by Bowdenzug@reddit | LocalLLaMA | View on Reddit | 4 comments
I need your opinion/help. I'm looking for a self-hosted LLM that's strong at tool calling and also has solid logical reasoning/understanding (it should be somewhat familiar with tax/invoicing and legal issues). I currently have 48 GB of VRAM available. I was thinking about using Llama 3.1 70B Instruct AWQ. I would describe everything in detail in the system prompt: what it should do and how, what rules there are, etc. I've already tested a few models, like Llama 3.1 8B Instruct, but it's quite poor at keeping the context needed for tool calling. Qwen3 32B works quite well but unfortunately fails at tool calling with vLLM's OpenAI-compatible API and LangChain's ChatOpenAI. Thanks in advance :)
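For context, tool calling over vLLM's OpenAI-compatible endpoint is just the standard OpenAI `tools` payload (LangChain's ChatOpenAI builds the same thing under the hood). A minimal stdlib sketch of what such a request body looks like — the `lookup_invoice` tool and the model name are made-up examples, not anything from this thread:

```python
import json

def build_tools_payload():
    # One hypothetical tool in OpenAI function-calling format;
    # the name and parameters here are illustrative only.
    return [{
        "type": "function",
        "function": {
            "name": "lookup_invoice",
            "description": "Fetch an invoice by its number.",
            "parameters": {
                "type": "object",
                "properties": {
                    "invoice_number": {"type": "string"},
                },
                "required": ["invoice_number"],
            },
        },
    }]

def build_request(user_message):
    # Body for POST /v1/chat/completions on a vLLM OpenAI-compatible server.
    return {
        "model": "meta-llama/Llama-3.1-70B-Instruct",  # whatever model is loaded
        "messages": [{"role": "user", "content": user_message}],
        "tools": build_tools_payload(),
        "tool_choice": "auto",
    }

if __name__ == "__main__":
    print(json.dumps(build_request("Find invoice 2024-117"), indent=2))
```

If I remember right, vLLM also needs tool parsing enabled at launch (something like `--enable-auto-tool-choice` plus a `--tool-call-parser` matching the model family), otherwise the model's tool-call text never gets turned into structured `tool_calls` — worth checking against the vLLM docs for your model.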
FullOf_Bad_Ideas@reddit
Try Seed OSS 36B Instruct, GLM 4.5 Air and Mistral Small
oktay50000@reddit
Mistral is a beast, especially the 24B at Q8 is crazy
Bowdenzug@reddit (OP)
tried Mistral Small 3.2 24B Instruct... tool calling seems to be broken with vLLM.
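For reference, when tool calling works, the OpenAI-style response carries the call under `message.tool_calls` with the arguments as a JSON string. A minimal stdlib sketch of parsing that — the response dict here is a made-up example of the shape, not real model output:

```python
import json

# Made-up example of the shape a working OpenAI-compatible
# chat completion returns when the model calls a tool.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "lookup_invoice",
                    "arguments": "{\"invoice_number\": \"2024-117\"}",
                },
            }],
        },
        "finish_reason": "tool_calls",
    }],
}

def extract_tool_calls(resp):
    msg = resp["choices"][0]["message"]
    # Arguments arrive as a JSON string and must be decoded.
    return [(c["function"]["name"], json.loads(c["function"]["arguments"]))
            for c in msg.get("tool_calls") or []]

print(extract_tool_calls(response))
# -> [('lookup_invoice', {'invoice_number': '2024-117'})]
```

If the model's tool-call text instead shows up in `content` as plain text, that usually means the server isn't parsing the model's tool-call format rather than the model being unable to call tools.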
SlowFail2433@reddit
GPT OSS with CPU offloading (it will slow things down a bit)