Is running local LLMs actually cheaper in the long run?

Posted by HealthySkirt6910@reddit | LocalLLaMA | 28 comments

Been experimenting with running models locally recently.

Honestly, though, the costs (GPU, setup, time spent tinkering) are adding up faster than I expected.

For those who've been running local models longer term: does it actually get cheaper over time, or not really?
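One way to frame this is as a break-even calculation: local hardware is a fixed upfront cost plus electricity, while an API is pure per-token spend, so there's a usage level below which local never pays off. Here's a minimal sketch; every number in it (GPU price, wattage, electricity rate, token volume, API price) is a made-up placeholder, not real pricing.

```python
# Hypothetical break-even sketch: months until a local GPU pays for
# itself versus paying per-token for a hosted API. All numbers used
# below are illustrative assumptions, not actual prices.

def breakeven_months(gpu_cost, power_watts, kwh_price, hours_per_day,
                     tokens_per_month, api_price_per_mtok):
    """Months until upfront hardware cost is recovered vs API spend."""
    # Monthly electricity cost for the GPU while it's under load.
    monthly_power = (power_watts / 1000) * kwh_price * hours_per_day * 30
    # What the same token volume would cost through an API.
    monthly_api = (tokens_per_month / 1_000_000) * api_price_per_mtok
    monthly_savings = monthly_api - monthly_power
    if monthly_savings <= 0:
        return None  # at this usage level, local never breaks even
    return gpu_cost / monthly_savings

# Example assumptions: $1500 GPU, 300 W under load 4 h/day at
# $0.15/kWh, 20M tokens/month at a hypothetical $2 per million tokens.
months = breakeven_months(1500, 300, 0.15, 4, 20_000_000, 2.0)
```

With those made-up numbers it lands somewhere around three and a half years, which matches the intuition in the thread: the math only favors local if your token volume is high and sustained, and it ignores your own setup time entirely.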