LLM and Terminology Learning Recommendations for my specs and needs?
Posted by juasjuasie@reddit | LocalLLaMA | View on Reddit | 1 comments
GPU: RTX 4070 Super
VRAM: 12GB
RAM: 64GB DDR5 4000 MT/s
CPU: Intel® Core™ i5-13400F (13th Gen, 16 threads)
Needs: writing relatively long novels/stories, good memory of earlier events in the generated text, and support for the configurations commonly found in chatbot frontends like TavernAI
With the release of Gemma4 and the news that Google is optimizing DRAM usage, I was really interested in finally moving away from server-side inference. However, my computer really struggled to run the base Gemma4 26B in Ollama.
I'd like to hear suggestions, as well as a place to look up the abbreviations I see attached to models and have a hard time wrapping my head around: A4B, E2B, FP8, etc.
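(For context on why the 26B struggled on 12GB of VRAM, here is a rough back-of-envelope sketch, not from the thread: it only estimates the memory needed for the weights themselves at a given precision, using the standard bytes-per-parameter arithmetic. The function name and the exact quant labels are just illustrative; real usage is higher once the KV cache and runtime overhead are added, which is why Ollama ends up spilling layers into system RAM.)

```python
# Rough estimate of VRAM needed just to hold a model's weights
# at a given precision. KV cache and runtime overhead come on top.

def approx_weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB required for the weights alone."""
    total_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return total_bytes / (1024 ** 3)

for label, bits in [("FP16", 16), ("FP8", 8), ("Q4 (4-bit quant)", 4)]:
    print(f"26B @ {label}: ~{approx_weight_vram_gib(26, bits):.1f} GiB")

# Approximate output:
# 26B @ FP16: ~48.4 GiB
# 26B @ FP8: ~24.2 GiB
# 26B @ Q4 (4-bit quant): ~12.1 GiB
```

Even at a 4-bit quant, a 26B model is roughly the size of the whole 12GB card before any context is loaded, so some of it has to live in system RAM.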
MelodicRecognition7@reddit
https://old.reddit.com/r/LocalLLaMA/comments/1rqo2s0/can_i_run_this_model_on_my_hardware/?
Well, these are such basic things that you could ask even the smallest Gemma4 and get a correct answer. Or just google it and read the answer from the Gemini free tier.