What is currently a good and easy way to run local LLMs against an entire code base?
Posted by Particular_Paper7789@reddit | LocalLLaMA | 2 comments
I am tasked with analyzing an existing code base in a tech stack that I am not directly familiar with. ChatGPT is very useful for this, but it requires a lot of manual input work since I can't just pass in all the files.
I was thinking of giving Mixtral 8x7B via llamafile on an M1 Max with 32 GB a try.
There must be existing and API-compatible open source tools for this by now, right?
How do I feed an entire codebase as context into a local LLM? A minimal sketch of what I have in mind is below.
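Here is a rough sketch of the flatten-the-repo idea, assuming a Python script; the repo path, file extensions, and character budget are all placeholders to adjust:

```python
# Minimal sketch: flatten a codebase into one prompt string.
# REPO, EXTS, and MAX_CHARS are assumptions -- tune them for your project and model.
from pathlib import Path

REPO = Path("path/to/repo")            # hypothetical repo root
EXTS = {".py", ".ts", ".go", ".java"}  # file types worth including
MAX_CHARS = 80_000                     # rough context budget; tune to your model

def build_prompt(repo: Path) -> str:
    parts = []
    total = 0
    for f in sorted(repo.rglob("*")):
        if not f.is_file() or f.suffix not in EXTS:
            continue
        text = f.read_text(errors="ignore")
        # Label each file so the model can tell them apart.
        chunk = f"\n--- {f.relative_to(repo)} ---\n{text}"
        if total + len(chunk) > MAX_CHARS:
            break  # crude truncation; smarter file selection would be better
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)

print(build_prompt(REPO))
```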
kryptkpr@reddit
There are some code-to-prompt building tools out there; check out https://prompt.16x.engineer/ for example.
It's intended for ChatGPT, but a prompt is a prompt.
Mixtral might not be the best model for this btw; you'd likely get better performance with a DeepSeek fine-tune.
paradite@reddit
Hi. Thanks for mentioning 16x Prompt. I recently added support for Ollama, so now you can connect to models like DeepSeek Coder and Qwen 2.5 Coder running locally via Ollama.
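For reference, a minimal sketch of calling a local Ollama server directly, assuming Ollama is running on its default port and the model has been pulled (e.g. `ollama pull deepseek-coder`); the model name and the prompt are assumptions:

```python
# Minimal sketch: send a codebase prompt to a locally running Ollama server.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "deepseek-coder") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object; the text is in "response".
        return json.loads(resp.read())["response"]

# Example usage: feed in the flattened codebase built earlier.
prompt = "Summarize the architecture of this codebase:\n<flattened files here>"
print(ask_ollama(prompt))
```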