Running Local LLMs / No Experience
Posted by ValkyrieEgy@reddit | LocalLLaMA | View on Reddit | 9 comments
I want to run local models to help with my workflow: producing articles, UGC videos, and image-to-video generation for children's cartoon videos and informative reels/TikToks, plus vibe coding and building mini apps as workflow solutions (e.g. commercial and ticketing systems and other scenarios).
Which models could I run, or should I go with multiple models? And what's the easiest way to get them running on my Windows PC?
SPECS:
Ryzen 7 5700
RTX 4060 Ti 16GB
32GB DDR4 RAM
Academic-Map268@reddit
LTX 2B and Wan 1.3B are likely your best options.
jacek2023@reddit
Gemma 26B and Qwen 35B
Academic-Map268@reddit
Congrats, you commented without reading the post
jacek2023@reddit
I replied about LLMs; image-to-video requires ComfyUI, not LLMs.
ValkyrieEgy@reddit (OP)
So they could only do written content and content plans, right?
Long_comment_san@reddit
How are you going to set them up and run them if you're too lazy to even start by searching for something this basic?
ValkyrieEgy@reddit (OP)
So many ideas and so many opinions; not to mention that reaching out to someone who has tried it on similar hardware is a faster way to transfer knowledge.
I already ran Wan2GP before through Pinokio.
It's basic to you because you're experienced, but for someone starting fresh it's hazy.
EatTFM@reddit
Probably download and install LM Studio, then pull Gemma4 26B and Qwen 35B. Both models will run reasonably fast on your GPU.
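For context on the LM Studio suggestion: LM Studio can expose a local OpenAI-compatible HTTP server (by default at `http://localhost:1234/v1`, started from its developer/server tab). A minimal sketch, assuming that default port and a model already loaded, using only the Python standard library:

```python
import json
import urllib.request

# Assumed default LM Studio local server endpoint; the server must be
# started inside LM Studio before this script will connect.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,  # LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Draft a 3-point outline for a kids' cartoon episode."))
```

Because the endpoint follows the OpenAI chat-completions shape, the official `openai` client also works by pointing its `base_url` at the local server.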
PretendAppointment47@reddit
I ran a semantic-similarity model to compare CVs against job requirements on a Ryzen 5 with 16GB and no graphics card. Since the model was being loaded on every invocation, I exposed it as an API using FastAPI.
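The load-once-then-serve pattern that comment describes can be sketched with just the standard library. The commenter used FastAPI and a real semantic model; here a hypothetical bag-of-words cosine similarity stands in for the embedding model, which in the real setup would be loaded once at module import rather than per request:

```python
import json
import math
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

# In the real setup, an embedding model would be loaded ONCE here at
# import time. The bag-of-words scorer below is a placeholder so the
# serving pattern stays runnable without extra packages.
def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects JSON like {"cv": "...", "job": "..."}
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        body = json.dumps({"score": similarity(data["cv"], data["job"])})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Model is in memory before the first request arrives.
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

The point of wrapping the model in a server is exactly what the comment says: the expensive load happens once at startup, and each request only pays for inference.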