Using Llama to analyze scientific texts - I am failing

Posted by RollLikeRick@reddit | LocalLLaMA

So I am trying to analyze 15 scientific texts at once using a local LLM.

I wrote this post two days ago: https://www.reddit.com/r/LocalLLaMA/comments/1gu77hn/i_just_tried_llama70binstructggufiq2_xs_and_am/

There, people basically told me not to use a Q2 quant but at least a Q4. I tried Q4_K_S, Q4_K_M, Q4_K_L, and Q5_K_S, but the best I got was this: the model checked two of the sources and gave only rudimentary info.
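
For scale, this is how I roughly estimated whether everything even fits in one context window (assumptions: plain-text copies of the papers sit in a papers/ folder, and English prose averages ~4 characters per token):

```python
from pathlib import Path

# Rough sanity check: do all 15 papers fit into one context window?
# Assumptions: plain-text copies live in papers/, and English prose
# averages roughly 4 characters per token.
total_chars = sum(len(p.read_text(encoding="utf-8"))
                  for p in Path("papers").glob("*.txt"))
print(f"~{total_chars // 4} tokens of input")
```

Llama 3.1 nominally supports a 128k context, but the effective limit is whatever the server was started with (e.g. llama-server -c 8192), and the answer needs room in there too.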

Is using the Llama 3.1 model doomed from the beginning, and should I use a different one?
Or is the task just too big to be run locally on a consumer machine?
If I really wanted to use AI, should I analyze each paper individually?
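
To make that last question concrete, here is a minimal sketch of the per-paper route I'm considering, assuming a local llama.cpp server with its OpenAI-compatible endpoint (folder, port, prompt, and model name are placeholders, not something I've settled on):

```python
from pathlib import Path
from openai import OpenAI

# Talk to a local llama.cpp server via its OpenAI-compatible API,
# e.g. started with: llama-server -m Llama-3.1-70B-Instruct-Q4_K_M.gguf -c 16384
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

PROMPT = "Summarize the research question, methods, and key findings of this paper."

for paper in sorted(Path("papers").glob("*.txt")):  # plain-text copies of the papers
    reply = client.chat.completions.create(
        model="local",  # llama-server serves whatever model it loaded; name is a placeholder
        messages=[{
            "role": "user",
            "content": f"{PROMPT}\n\n{paper.read_text(encoding='utf-8')}",
        }],
        temperature=0.2,
    )
    print(f"## {paper.name}\n{reply.choices[0].message.content}\n")
```

One request per paper keeps each prompt well inside the context window; the per-paper summaries could then be combined in a final pass.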

HW: 2x RTX 4090, 4x 32 GB RAM (128 GB total), Ryzen Threadripper, 24 cores @ 4.2 GHz