The official vLLM support for the Ryzen AI Max+ 395 is here! (the whole AI 300 series, i.e. gfx1150 and gfx1151)
Posted by waiting_for_zban@reddit | LocalLLaMA | View on Reddit | 3 comments
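For context, a minimal sketch of what using such a build might look like via vLLM's offline Python API. This assumes a ROCm-enabled vLLM install that recognizes gfx1150/gfx1151; the model name is just a placeholder, not something from the announcement:

```python
# Minimal vLLM offline-inference sketch (assumes a ROCm build of vLLM
# on an AI 300 series APU; the model below is a placeholder).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", dtype="float16")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["What is the Ryzen AI Max+ 395?"], params)
for out in outputs:
    print(out.outputs[0].text)
```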
xjE4644Eyc@reddit
Awesome. Let's see how it compares to llama.cpp speed-wise.
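One rough way to eyeball such a comparison, as an illustrative sketch only: time vLLM's `generate()` call and count the output tokens (the model is again a placeholder; `llama-bench` would give the llama.cpp side of the comparison):

```python
# Crude tokens/s measurement with vLLM's Python API; an illustrative
# sketch, not an official benchmark harness.
import time
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", dtype="float16")  # placeholder model
params = SamplingParams(temperature=0.0, max_tokens=256)

start = time.perf_counter()
outputs = llm.generate(["Explain paged attention in one paragraph."], params)
elapsed = time.perf_counter() - start

n_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} tok/s")
```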
noneabove1182@reddit
This is big :O vLLM is the go-to standard for deploying and testing, great to see that support!
No_Afternoon_4260@reddit
Speeds?