Anyone using Tesla P40 for local LLMs (30B models)?

Posted by ScarredPinguin@reddit | LocalLLaMA | 15 comments

Hey guys, is anyone here using a Tesla P40 with newer models like Qwen / Mixtral / Llama?

RTX 3090 prices are still very high, while a P40 goes for around $250, so I'm considering it as a budget option.

Mainly trying to get a sense of real-world usability: whether a 30B-class quant actually fits in the 24 GB of VRAM, and what generation speeds look like in practice (rough memory math below).
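
For the memory side, here's the back-of-the-envelope estimate I'm working from. The figures are assumptions, not measurements: ~4.8 bits/weight for a Q4_K_M quant and the layer/head counts of a typical 32B model (e.g. Qwen2.5-32B), so treat it as a sketch:

```python
# Rough VRAM estimate for a quantized ~32B model on a 24 GB card (P40).
# All figures below are approximations/assumptions, not measured values.

params = 32e9                # ~32B parameters (Qwen-32B-class model)
bits_per_weight = 4.8        # Q4_K_M averages roughly 4.8 bits/weight
weights_gb = params * bits_per_weight / 8 / 1e9

# KV cache: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes (FP16) per token
layers, kv_heads, head_dim = 64, 8, 128   # typical GQA config for a 32B model
ctx = 4096                                # context length
kv_gb = 2 * layers * kv_heads * head_dim * 2 * ctx / 1e9

print(f"weights ~{weights_gb:.1f} GB, KV cache ~{kv_gb:.1f} GB, "
      f"total ~{weights_gb + kv_gb:.1f} GB of 24 GB")
```

By that math the weights plus a 4k-context KV cache land around 20 GB, so it should fit. The bigger unknown to me is speed, since the P40's FP16 throughput is famously weak and backends end up doing most of the compute in FP32 on it.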

Thank you!