5060ti and 64gb ram - what is my best option for local coding?

Posted by bonesoftheancients@reddit | LocalLLaMA | View on Reddit | 13 comments

I compiled llama.cpp forks for turboquant and rotorquant and am now trying models. What are the best models for local coding that will run on my setup at a usable speed? And what should I realistically expect, coming from using Gemini and Claude online for coding?
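For anyone wanting a rough back-of-envelope on what fits: the sketch below estimates weight size and KV-cache size for a quantized model. The numbers (14B parameters, ~4.5 bits/weight as a Q4_K_M-style quant, 48 layers, 8 KV heads, head dim 128, 8k context) are illustrative assumptions, not tied to any specific model, and it assumes the 16 GB variant of the 5060 Ti; real GGUF files carry per-tensor overheads this ignores.

```python
# Rough VRAM-fit estimate for a quantized model (back-of-envelope only).

def model_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for params_b billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: int = 2) -> float:
    """K and V caches: 2 tensors per layer, fp16 (2 bytes) by default."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Hypothetical 14B model at ~4.5 bits/weight plus an 8k fp16 KV cache:
weights = model_size_gb(14, 4.5)                                  # ~7.9 GB
kv = kv_cache_gb(layers=48, kv_heads=8, head_dim=128, ctx=8192)   # ~1.6 GB
print(f"weights {weights:.1f} GB + KV {kv:.1f} GB = {weights + kv:.1f} GB")
```

Under those assumptions, weights plus cache land around 9.5 GB, comfortably inside a 16 GB card with room for context growth; anything much past ~30B at 4-bit would spill into system RAM and slow down noticeably.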