What is currently a good and easy way to run local LLMs against an entire code base?

Posted by Particular_Paper7789@reddit | LocalLLaMA | View on Reddit | 2 comments

I am tasked with analyzing an existing code base in a tech stack that I am not directly familiar with. ChatGPT is very useful for this, but it requires a lot of manual input work since I can't just pass in all the files.

I was thinking of giving Mixtral 8x7B via llamafile on an M1 Max with 32 GB a try.

There must be existing, API-compatible open source tools for this by now, right?

How do I feed an entire codebase as context into a local LLM?
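To frame what I mean, here is a rough sketch of the naive approach I'd like to avoid hand-rolling: walk the repo, concatenate the files into one prompt, and send it to llamafile's OpenAI-compatible endpoint. This assumes llamafile is serving on its default port 8080; the `collect_code` helper, the extension list, and the model name are placeholders, not any particular tool's API.

```python
import json
import urllib.request
from pathlib import Path

def collect_code(root: str, exts=(".py", ".js", ".ts")) -> str:
    """Concatenate source files under `root` into one big string,
    with a header line marking each file's path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"### File: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def ask_local_llm(question: str, code: str,
                  url="http://localhost:8080/v1/chat/completions") -> str:
    """POST a chat request to an OpenAI-compatible local server
    (llamafile / llama.cpp expose this endpoint). URL and model
    name are assumptions for illustration."""
    body = json.dumps({
        "model": "local",
        "messages": [
            {"role": "system", "content": "You are a code analysis assistant."},
            {"role": "user", "content": f"{question}\n\nCodebase:\n{code}"},
        ],
    }).encode()
    req = urllib.request.Request(url, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The obvious problem is that a whole repo won't fit in Mixtral's context window, which is why I'm hoping there's an existing tool that handles chunking or retrieval for me.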