Another way to use a local LLM: have an MCP server that talks to a QEMU computer. What do you think?

Posted by leonardosalvatore@reddit | LocalLLaMA | 11 comments

I think it's nice to contain the MCP server inside a QEMU environment where the LLM can do whatever it wants ... here it is running GDB on an LVGL program.
https://github.com/leonardosalvatore/llama.cpp.debugger
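To give a feel for the general idea, here is a minimal sketch of an MCP server that proxies shell and GDB commands into a QEMU guest over SSH. This is an illustration only, assuming the official MCP Python SDK (`pip install mcp`) and an SSH port forward into the guest; the tool names, port, and commands are hypothetical and the linked repo may work quite differently.

```python
"""Sketch: MCP server exposing a QEMU guest as sandboxed debugging tools.

Hypothetical example -- not the actual llama.cpp.debugger interface.
Assumes the guest runs sshd and QEMU was started with user-mode
networking and a host port forward, e.g.:
  qemu-system-x86_64 ... -netdev user,id=n0,hostfwd=tcp::2222-:22 \
      -device virtio-net-pci,netdev=n0
"""
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("qemu-debugger")

# SSH into the guest via the forwarded port (assumed setup, see above).
GUEST_SSH = ["ssh", "-p", "2222", "root@localhost"]


@mcp.tool()
def run_in_guest(command: str, timeout: int = 30) -> str:
    """Run an arbitrary shell command inside the QEMU guest.

    Because the command executes in the VM, the LLM can "do whatever"
    without touching the host filesystem.
    """
    result = subprocess.run(
        GUEST_SSH + [command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout + result.stderr


@mcp.tool()
def gdb_batch(binary: str, gdb_command: str) -> str:
    """Run GDB in batch mode on a binary in the guest.

    Example: gdb_batch("/root/lvgl_demo", "run") or
    gdb_batch("/root/lvgl_demo", "bt") on a core dump session.
    Quoting is naive here; a real server would sanitize inputs.
    """
    remote = f"gdb --batch -ex '{gdb_command}' {binary}"
    result = subprocess.run(
        GUEST_SSH + [remote],
        capture_output=True, text=True, timeout=120,
    )
    return result.stdout + result.stderr


if __name__ == "__main__":
    # Serve over stdio so a local MCP client can attach to this process.
    mcp.run()
```

The design point is simply that the MCP tools are the only bridge between the model and the VM, so even destructive commands stay contained in a disposable QEMU image.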

video:
https://www.youtube.com/watch?v=i8Lcic8HxLQ