Best sub-14B LLM for long text summaries?

Posted by GreenTreeAndBlueSky@reddit | LocalLLaMA | 16 comments

Speed is not important (it can run overnight if it really needs to) but accuracy really matters to me. I was wondering if there were good 1M, 512K, or even 256K context models that I might not be aware of.

I know Qwen3 4B Instruct has 256K native context, but I'm afraid it might not be accurate enough and might hallucinate quite a bit due to its size.
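
For reference, here's a minimal sketch of the kind of overnight run I have in mind, assuming the Hugging Face repo id `Qwen/Qwen3-4B-Instruct-2507` and the standard transformers API (the file name and prompt are just placeholders):

```python
# Minimal sketch: overnight long-document summarization with a ~4B model.
# Assumes the Hugging Face repo id "Qwen/Qwen3-4B-Instruct-2507" and that the
# document fits inside the model's native 256K-token context window.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B-Instruct-2507"  # assumed repo id for the 256K-context instruct model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

with open("long_document.txt") as f:  # hypothetical input file
    document = f.read()

messages = [
    {"role": "user", "content": f"Summarize the following document accurately:\n\n{document}"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding (do_sample=False) to reduce the chance of the summary drifting.
output = model.generate(inputs, max_new_tokens=2048, do_sample=False)
summary = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(summary)
```

If the document ends up longer than the model's context window, chunking it into sections, summarizing each, and then summarizing the summaries is the usual fallback, at some cost in global coherence.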