Current best model for technical documentation text generation for RAG / fine-tuning?
Posted by OkAstronaut4911@reddit | LocalLLaMA | View on Reddit | 1 comments
I want to set up a model to support us in writing technical documentation. We already have a lot of text from older documentation and want to use it as a RAG / fine-tuning source. We'll have at least 80 GB of GPU memory for inference.
Which model would you recommend for this task currently?
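Whatever model is chosen, the retrieval half of the setup is model-agnostic: chunk the old documentation, embed the chunks, and pull the nearest ones into the prompt at generation time. A minimal sketch of that retrieval step is below; it uses a bag-of-words cosine similarity as a stand-in for a real embedding model (the `embed` function and the sample chunks are illustrative assumptions, not part of the original post).

```python
# Minimal sketch of the RAG retrieval step: chunk old docs, "embed" them,
# and retrieve the nearest chunks for a query. Bag-of-words cosine
# similarity stands in for a real embedding model here.
import math
from collections import Counter

def embed(text):
    # Hypothetical stand-in embedding: lowercase bag-of-words counts.
    # In a real pipeline this would be a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank chunks by similarity to the query; return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Illustrative chunks from hypothetical legacy documentation.
chunks = [
    "To reset the device, hold the power button for ten seconds.",
    "The API returns JSON with a status field and a data field.",
    "Firmware updates are distributed over the air every quarter.",
]
top = retrieve("how do I reset the device?", chunks, k=1)
# top[0] is the reset-instruction chunk; it would be prepended to the
# model prompt so the answer is grounded in the existing documentation.
```

Swapping in a proper embedding model and a vector store changes only `embed` and the storage of `chunks`; the retrieve-then-prompt flow stays the same.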
RedditDiedLongAgo@reddit
A human. Having an LLM consolidate and produce canonical docs is a trainwreck that you won't see until it's too late.