AMD MI50 32GB/Vega20 GPU Passthrough Guide for Proxmox
Posted by Panda24z@reddit | LocalLLaMA | 41 comments
What This Guide Solves
If you're trying to pass through an AMD Vega20 GPU (like the MI50 or Radeon Pro VII) to a VM in Proxmox and getting stuck with the dreaded "atombios stuck in loop" error, this guide is for you. The solution involves installing the vendor-reset kernel module on your Proxmox host.
Important note: This solution was developed after trying the standard PCIe passthrough setup first, which failed. While I'm not entirely sure if all the standard passthrough steps are required when using vendor-reset, I'm including them since they were part of my working configuration.
Warning: This involves kernel module compilation and hardware-level GPU reset procedures. Test this at your own risk.
My Setup
Here's what I was working with:
- Server Hardware: Dual Intel Xeon E5-2680 v4 @ 2.40GHz (28 cores / 56 threads total), 110GB RAM
- Motherboard: Supermicro X10DRU-i+
- Software: Proxmox VE 8.4.8 running kernel 6.8.12-13-pve (EFI boot mode)
- GPU: AMD Radeon Instinct MI50 (bought from Alibaba, came pre-flashed with a Radeon Pro VII BIOS - Device ID: 66a3)
- GPU Location: PCI address 08:00.0
- Guest VM: Ubuntu 22.04.5 LTS
- Previous attempts: Standard PCIe passthrough (failed with "atombios stuck in loop")
Part 1: Standard PCIe Passthrough Setup
Heads up: These steps might not all be necessary with vendor-reset, but I did them first and they're part of my working setup.
Helpful video reference: Proxmox PCIe Passthrough Guide
Enable IOMMU Support
If you're using a legacy boot system:
nano /etc/default/grub
Add this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# Or for AMD systems:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Then save and run:
update-grub
If you're using EFI boot:
nano /etc/kernel/cmdline
Add this:
intel_iommu=on
# Or for AMD systems:
amd_iommu=on
Then save and run:
proxmox-boot-tool refresh
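IOMMU only takes effect after a reboot (the guide reboots at the end of Part 1). Once the host is back up, a quick sanity check is to grep the kernel log; the exact wording varies by platform and kernel version, so treat the expected strings below as examples rather than guarantees:

```shell
# Confirm the IOMMU is active after rebooting.
# Intel hosts typically log a line like "DMAR: IOMMU enabled";
# AMD hosts log "AMD-Vi: Interrupt remapping enabled" or similar.
dmesg | grep -i -e DMAR -e IOMMU | head -n 10
```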
Load VFIO Modules
Edit the modules file:
nano /etc/modules
Add these lines:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Note: on kernel 6.2 and later (including the 6.8 kernel that ships with Proxmox 8), vfio_virqfd was merged into the core vfio module, so that last entry is optional and may log a harmless "module not found" message at boot.
Find Your GPU and Current Driver
First, let's see what we're working with:
# Find your AMD GPU
lspci | grep -i amd
# Get detailed info (replace 08:00 with your actual PCI address)
lspci -n -s 08:00 -v
Here's what I saw on my system:
08:00.0 0300: 1002:66a3 (prog-if 00 [VGA controller])
Subsystem: 106b:0201
Flags: bus master, fast devsel, latency 0, IRQ 44, NUMA node 0, IOMMU group 111
Memory at b0000000 (64-bit, prefetchable) [size=256M]
Memory at c0000000 (64-bit, prefetchable) [size=2M]
I/O ports at 3000 [size=256]
Memory at c7100000 (32-bit, non-prefetchable) [size=512K]
Expansion ROM at c7180000 [disabled] [size=128K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Capabilities: [64] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [150] Advanced Error Reporting
Capabilities: [200] Physical Resizable BAR
Capabilities: [270] Secondary PCI Express
Capabilities: [2a0] Access Control Services
Capabilities: [2b0] Address Translation Service (ATS)
Capabilities: [2c0] Page Request Interface (PRI)
Capabilities: [2d0] Process Address Space ID (PASID)
Capabilities: [320] Latency Tolerance Reporting
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
Notice it shows "Kernel modules: amdgpu" - that's what we need to blacklist.
Configure VFIO and Blacklist the AMD Driver
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
# Blacklist the AMD GPU driver
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
Bind Your GPU to VFIO
# Use the vendor:device ID from your lspci output (mine was 1002:66a3)
echo "options vfio-pci ids=1002:66a3 disable_vga=1" > /etc/modprobe.d/vfio.conf
Apply Changes and Reboot
update-initramfs -u -k all
reboot
Check That VFIO Binding Worked
After the reboot, verify your GPU is now using the vfio-pci driver:
# Use your actual PCI address
lspci -n -s 08:00 -v
You should see:
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
If you see Kernel driver in use: vfio-pci, the standard passthrough setup is working correctly.
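It is also worth confirming that the GPU sits in its own IOMMU group; one commenter needed "pcie_acs_override=downstream,multifunction" on a consumer board to get clean separation. This loop is a generic sketch that relies only on the standard sysfs layout, not something from my original setup:

```shell
# Print each IOMMU group and the PCI addresses of its devices.
# The GPU should be alone in its group (or share it only with its
# own sub-functions); unrelated devices in the same group will all
# be pulled into the VM together.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "  ${d##*/}"
    done
done
```

You can feed any of the printed addresses to lspci -nns to see what the device actually is.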
Part 2: The vendor-reset Solution
This is where the magic happens for AMD Vega20 GPUs.
Check Your System is Ready
Make sure your Proxmox host has the required kernel features:
# Check your kernel version
uname -r
# Verify required features (all should show 'y')
grep -E "CONFIG_FTRACE=|CONFIG_KPROBES=|CONFIG_PCI_QUIRKS=|CONFIG_KALLSYMS=|CONFIG_KALLSYMS_ALL=|CONFIG_FUNCTION_TRACER=" /boot/config-$(uname -r)
# Find your GPU info again
lspci -nn | grep -i amd
You should see something like:
6.8.12-13-pve
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KPROBES=y
CONFIG_PCI_QUIRKS=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon Pro Vega II/Radeon Pro Vega II Duo] [1002:66a3]
Make note of your GPU's PCI address (mine is 08:00.0) - you'll need this later.
Install Build Dependencies
# Update and install what we need
apt update
apt install -y git dkms build-essential
# Install Proxmox kernel headers
apt install -y pve-headers-$(uname -r)
# Double-check the headers are there
ls -la /lib/modules/$(uname -r)/build
You should see a symlink pointing to something like /usr/src/linux-headers-X.X.X-X-pve.
Build and Install vendor-reset
# Download the source
cd /tmp
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
# Clean up any previous attempts
sudo dkms remove vendor-reset/0.1.1 --all 2>/dev/null || true
sudo rm -rf /usr/src/vendor-reset-0.1.1
sudo rm -rf /var/lib/dkms/vendor-reset
# Build and install the module
sudo dkms install .
If everything goes well, you'll see output like:
Sign command: /lib/modules/6.8.12-13-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Creating symlink /var/lib/dkms/vendor-reset/0.1.1/source -> /usr/src/vendor-reset-0.1.1
Building module:
Cleaning build area...
make -j56 KERNELRELEASE=6.8.12-13-pve KDIR=/lib/modules/6.8.12-13-pve/build...
Signing module /var/lib/dkms/vendor-reset/0.1.1/build/vendor-reset.ko
Cleaning build area...
vendor-reset.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/6.8.12-13-pve/updates/dkms/
depmod...
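To double-check the DKMS side before configuring anything, you can ask DKMS for the module's status (the output format varies a little between DKMS versions):

```shell
# Verify DKMS built and installed vendor-reset for the running kernel.
dkms status vendor-reset
# Expect a line similar to:
#   vendor-reset/0.1.1, 6.8.12-13-pve, x86_64: installed
```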
Configure vendor-reset to Load at Boot
# Tell the system to load vendor-reset at boot
echo "vendor-reset" | sudo tee -a /etc/modules
# Copy the udev rules that automatically set the reset method
sudo cp udev/99-vendor-reset.rules /etc/udev/rules.d/
# Update initramfs
sudo update-initramfs -u -k all
# Make sure the module file is where it should be
ls -la /lib/modules/$(uname -r)/updates/dkms/vendor-reset.ko
Reboot and Verify Everything Works
reboot
After the reboot, check that everything is working:
# Make sure vendor-reset is loaded
lsmod | grep vendor_reset
# Check the reset method for your GPU (use your actual PCI address)
cat /sys/bus/pci/devices/0000:08:00.0/reset_method
# Confirm your GPU is still detected
lspci -nn | grep -i amd
What you want to see:
vendor_reset 16384 0
device_specific
08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon Pro Vega II/Radeon Pro Vega II Duo] [1002:66a3]
The reset method MUST be device_specific. If it shows bus, the udev rules didn't take effect properly.
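As a diagnostic, you can also set the reset method by hand for the current boot; this writes to the same sysfs attribute that vendor-reset's udev rule targets (the write is only accepted once the vendor-reset module is loaded). Treat this as a troubleshooting sketch, not a replacement for fixing the udev rule:

```shell
# Replace 0000:08:00.0 with your GPU's PCI address.
dev=/sys/bus/pci/devices/0000:08:00.0
if [ -w "$dev/reset_method" ]; then
    # Select the vendor-reset handler for this boot only.
    echo device_specific > "$dev/reset_method"
    cat "$dev/reset_method"    # should now print: device_specific
fi
```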
Part 3: VM Configuration
Add the GPU to Your VM
Through the Proxmox web interface:
- Go to your VM → Hardware → Add → PCI Device
- Select your GPU (e.g. 0000:08:00)
- Check "All Functions"
- Apply the changes
Machine Type: I used q35 for my VM; I did not try the other options.
Handle Large VRAM
Since GPUs like the MI50 have a lot of VRAM (32GB), you need to enlarge the guest's 64-bit PCI MMIO window so the firmware can map the card's large BARs.
Edit your VM config file (/etc/pve/qemu-server/VMID.conf) and add this line:
args: -cpu host,host-phys-bits=on -fw_cfg opt/ovmf/X-PciMmio64Mb,string=65536
I opted to use this larger size based on a recommendation from another Reddit post.
Here's my complete working VM configuration for reference:
args: -cpu host,host-phys-bits=on -fw_cfg opt/ovmf/X-PciMmio64Mb,string=65536
bios: seabios
boot: order=scsi0;hostpci0;net0
cores: 8
cpu: host
hostpci0: 0000:08:00
machine: q35
memory: 32768
name: AI-Node
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0,tag=40
numa: 1
ostype: l26
scsi0: local-lvm:vm-106-disk-0,cache=writeback,iothread=1,size=300G,ssd=1
scsihw: virtio-scsi-single
sockets: 2
Key points:
- hostpci0: 0000:08:00 - the GPU passthrough entry (use your actual PCI address)
- machine: q35 - required chipset for modern PCIe passthrough
- args: -fw_cfg opt/ovmf/X-PciMmio64Mb,string=65536 - increased PCI BAR size for large VRAM
- bios: seabios - SeaBIOS works fine with these settings
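For reference, the same settings can also be applied from the Proxmox host shell with the qm CLI instead of editing the file by hand. The VMID (106) and PCI address mirror my config above; substitute your own:

```shell
# Apply the passthrough-related VM settings via qm.
qm set 106 --machine q35
qm set 106 --hostpci0 0000:08:00
qm set 106 --args '-cpu host,host-phys-bits=on -fw_cfg opt/ovmf/X-PciMmio64Mb,string=65536'
```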
Test Your VM
Start up your VM and check if the GPU initialized properly:
# Inside the Ubuntu VM, check the logs
sudo dmesg | grep -Ei "bios|gpu|amd|drm"
If everything worked, you should see something like:
[ 28.319860] [drm] initializing kernel modesetting (VEGA20 0x1002:0x66A1 0x1002:0x0834 0x02).
[ 28.354277] amdgpu 0000:05:00.0: amdgpu: Fetched VBIOS from ROM BAR
[ 28.354283] amdgpu: ATOM BIOS: 113-D1631700-111
[ 28.361352] amdgpu 0000:05:00.0: amdgpu: MEM ECC is active.
[ 28.361354] amdgpu 0000:05:00.0: amdgpu: SRAM ECC is active.
[ 29.376346] [drm] Initialized amdgpu 3.57.0 20150101 for 0000:05:00.0 on minor 0
Part 4: Getting ROCm Working
After I got Ubuntu 22.04.5 running in the VM, I followed AMD's standard ROCm installation guide to get everything working for Ollama.
Reference: ROCm Quick Start Installation Guide
Install ROCm
# Download and install the amdgpu-install package
wget https://repo.radeon.com/amdgpu-install/6.4.3/ubuntu/jammy/amdgpu-install_6.4.60403-1_all.deb
sudo apt install ./amdgpu-install_6.4.60403-1_all.deb
sudo apt update
# Install some required Python packages
sudo apt install python3-setuptools python3-wheel
# Add your user to the right groups
sudo usermod -a -G render,video $LOGNAME
# Install ROCm
sudo apt install rocm
Install AMDGPU Kernel Module
# If you haven't already downloaded the installer
wget https://repo.radeon.com/amdgpu-install/6.4.3/ubuntu/jammy/amdgpu-install_6.4.60403-1_all.deb
sudo apt install ./amdgpu-install_6.4.60403-1_all.deb
sudo apt update
# Install kernel headers and the AMDGPU driver
sudo apt install "linux-headers-$(uname -r)" "linux-modules-extra-$(uname -r)"
sudo apt install amdgpu-dkms
Post-Installation Setup
Following the ROCm Post-Install Guide:
# Set up library paths
sudo tee --append /etc/ld.so.conf.d/rocm.conf <<EOF
/opt/rocm/lib
/opt/rocm/lib64
EOF
sudo ldconfig
# Check ROCm installation
sudo update-alternatives --display rocm
# Set up environment variable
export LD_LIBRARY_PATH=/opt/rocm-6.4.3/lib
You want to reboot the VM after installing ROCm and the AMDGPU drivers.
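As a final in-VM sanity check (suggested by a commenter), the standard ROCm tools can confirm the GPU is visible; gfx906 is the architecture name for the MI50/Vega 20:

```shell
# Inside the VM, after the post-ROCm reboot:
rocminfo | grep -i gfx    # should list a gfx906 agent for an MI50
rocm-smi                  # table with temperature, power and VRAM usage
```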
Need to Remove Everything?
If you want to completely remove vendor-reset:
# Remove the DKMS module
sudo dkms remove vendor-reset/0.1.1 --all
sudo rm -rf /usr/src/vendor-reset-0.1.1
sudo rm -rf /var/lib/dkms/vendor-reset
# Remove configuration files
sudo sed -i '/vendor-reset/d' /etc/modules
sudo rm -f /etc/udev/rules.d/99-vendor-reset.rules
# Update initramfs and reboot
sudo update-initramfs -u -k all
reboot
Credits and References
- Original solution by gnif: https://github.com/gnif/vendor-reset
- PCI BAR size configuration and vendor-reset insights: https://www.reddit.com/r/VFIO/comments/oxsku7/vfio_amd_vega20_gpu_passthrough_issues/
- AMD GPU passthrough discussion: https://github.com/ROCm/amdgpu/issues/157
- Proxmox-specific AMD GPU issues: https://www.reddit.com/r/Proxmox/comments/1g4d5mf/amd_gpu_passthrough_issues_with_amd_mi60/
Final Thoughts
This setup took me way longer to figure out than it should have. If this guide saves you some time and frustration, awesome! Feel free to contribute back with any improvements or issues you run into.
TieMajor@reddit
Have you had any issues with GPU pass-through after shutting down the VM ? I was able to pass-through the card just fine but after shutting down and starting the VM it just hangs.
GamarsTCG@reddit
Would this still work for Proxmox 9? Having trouble specifically with the installing Proxmox kernel headers. Although I am also really new to Proxmox
Panda24z@reddit (OP)
I am unsure about Proxmox 9. I am currently on version 8.4.14 (running kernel: 6.8.12-13-pve). I probably won’t upgrade until it reaches EOL. Sorry for not being more helpful.
Similar-Kitchen-928@reddit
Hi sorry if this is a dumb question as I just started homelabbing but would it be possible to game within a vm using this setup? I currently have a few cards tested with no issues (wx2100 & wx3200) but this one seems really interesting and versatile for multipurpose uses.
Panda24z@reddit (OP)
Yes, it’s possible to use it for gaming in a VM, but there are a few things to consider.
The card will likely need external cooling, which is expected for this type of setup. The stock BIOS doesn’t support video output, so you’ll need to flash it with a Radeon VII or Radeon Pro VII BIOS to enable the mini DisplayPort. It also ships without active cooling, so you’ll need to set up a custom or DIY cooling solution.
Keep in mind that gaming in a VM can have its own issues. Some games don’t run properly or block virtual machines entirely. There are workarounds, but compatibility depends on the specific game.
There’s a good example of someone using the Radeon MI50 16GB variant as a daily driver with more details here: https://youtu.be/8LnoJBboiT8
Similar-Kitchen-928@reddit
Thanks a lot for the reply. I have a Lenovo SR650, so I'll look into the cooling for that, and thanks for putting this card on my radar.
Panda24z@reddit (OP)
With a 2U server like that, you should be good. I’d just double check the card’s dimensions to make sure it fits properly. Are you planning to run a Windows VM or stick with Linux as your main setup? Let me know what you’re aiming for and I’ll spin up a VM to test it out and share the results. I probably won’t get to it until the weekend, though. Also, congrats on starting your homelab journey, it’s a fun rabbit hole to dive into.
Similar-Kitchen-928@reddit
Hi was wondering if you ever managed to set up a gaming vm with the 32gb mi50.
Panda24z@reddit (OP)
Apologies for the delayed reply. I’ve installed the latest stable Bazzite GNOME desktop ISO. While I haven’t run full benchmarks yet, I was able to play Little Nightmares directly from Steam with no issues. I’m not entirely sure which ISO version you’d like me to test, there are several available. I did set up GPU passthrough, but I didn’t have a mini DisplayPort cable to connect a monitor, so I used the Moonlight/Sunshine setup for remote display. I’m still fine-tuning the configuration before I can give a definitive assessment, but the driver support works out of the box.
Similar-Kitchen-928@reddit
Hey thanks for the good news and thanks for your efforts in making this all work!!
Similar-Kitchen-928@reddit
Hey yeah, thanks, it's been just as fun as it's been confusing. I would like to try it out with Bazzite for single-player games and modded Minecraft, as those are really the only games I play. I just wanted something versatile and cheap enough to experiment with. Your help is greatly appreciated on this journey.
No-Refrigerator-1672@reddit
I appreciate you figuring this out and sharing this guide with the community, but I can't help but wonder: why would one opt for a VM for inference, when an LXC container will take up fewer resources, give less overhead, allow sharing the GPU between multiple containers (so you can separate your services, i.e. llama.cpp and comfy), allow for memory ballooning, and won't take nearly as much pain to set up?
Panda24z@reddit (OP)
That's a valid question. I chose to use a VM for two main reasons. First, since this is my first time using ROCm with an MI50, I was concerned about kernel compatibility. Using a VM allows me to test different kernel versions without risking any issues on my host system. Second, I am transitioning from an existing AI VM setup, which makes VM-to-VM migration much simpler than rebuilding everything in containers. Once I confirm that everything is working properly, I may consider migrating to containers later on. So, I guess I’ve kind of put myself in a bit of a bind with my previous setup.
No-Refrigerator-1672@reddit
Well, I've been running dual Mi50 in LXC 24/7 for roughly 4 months now, and can confirm that default unmodified proxmox kernel is stable with ROCm 6.3.3.
Ok_Preparation8798@reddit
Can I ask how you installed ROCm on Proxmox? I can't make it work :/ Did you follow some instructions?
No-Refrigerator-1672@reddit
Nothing more than the official AMD guide. Select the instructions for Debian and follow them to the letter. Exactly the same version of ROCm must be installed on both the host and the container. Then add the passthrough lines to your container config (/etc/pve/lxc/container_id.conf). Note that if you have multiple /dev/dri/renderD*** files on your host, you absolutely must pass through all of them.
DerLeoKatter@reddit
Thanks for guide. I'm building a homelab on Proxmox 9 CE (running Debian 13) and need some guidance on choosing a GPU for my virtualized setup. I want to run Linux and Windows VMs, splitting the GPU between them if possible, for a mix of everyday tasks and some 3D work. Here's what I'm working with and what I need:
What I'm trying to do:
Daily tasks: Spin up a Windows VM for browsing, YouTube (4K, smooth streaming), and document editing (Office, PDFs, nothing heavy).
3D modelling: Use KiCad and FreeCAD (mostly on Linux, maybe Windows) for designing PCBs and 3D-printable enclosures. These are simple models, but I want basic ray tracing for clean, polished renders (nice lighting, reflections, etc.).
Setup: Proxmox 9 CE, aiming to share the GPU across 4 VMs (Linux + Windows running together). I need decent performance for 3D, nothing enterprise-level crazy. Might play with light AI/ML later (small ROCm-based models), but that's not the main focus.
GPU options I'm considering:
AMD Instinct MI50/MI60: These look tempting with 16 GB (MI50) or 32 GB (MI60) HBM2 and crazy bandwidth (1 TB/s). They're dirt cheap on eBay. I checked out the MI50 pass-through guide here - it's focused on AI, but seems like it could work for 3D. Can these handle FreeCAD/KiCad renders with ray tracing? How's SR-IOV or MxGPU for VM sharing?
Questions:
Can the MI50 or MI60 handle KiCad/FreeCAD 3D renders with decent ray tracing (via ROCm/HIP/OpenCL)? How do they compare to A4000's RT cores for "pretty" PCB/enclosure renders?
With AMD, is PCIe pass-through my only solid option, or can I hack GPU sharing across Linux + Windows VMs? NVIDIA's vGPU seems plug-and-play, but I'd rather avoid license fees.
Any issues running MI50/MI60 on Proxmox 9? That "atombios stuck in loop" fix in the thread sounds dicey - anyone hit that snag?
My setup: Proxmox 9 CE on Debian 13, server (Epyc 7532 + Tyan S8036, supports IOMMU/SR-IOV). I've got enough airflow for high-TDP cards (like the MI60's 300W). Budget's flexible, but I'd prefer not to drop over $1000 unless it's really worth it.
XccesSv2@reddit
Thx, you saved me. I spent hours trying to get this to work until I realized that it's not working on the newest Proxmox VE 9. But with your guide and Proxmox 8.4 it worked! PS: when you build llama.cpp with ROCm 6.4.3 you need some extra missing files for gfx906, but it works!
Western-Lake8226@reddit
wow. this is a solid piece of work nice one. Will work my way through this.
ZeroGee0@reddit
Should note that vendor-reset doesn't work for 6.14.8-2-pve kernel on Proxmox 9 without adding the DKMS MOK key to make Secure Boot happy. Done after dkms install. https://pve.proxmox.com/wiki/Secure_Boot_Setup#Enrolling_the_MOK
Panda24z@reddit (OP)
Thank you for the update. I assume everything remains the same after the DKMS installation? I haven't had time to experiment with different server configurations, but I appreciate you sharing your experiences. Thanks again!
ZeroGee0@reddit
I'm fighting with getting amdgpu-dkms 6.3.3 installed on an Ubuntu 24.04 LTS guest OS with the 6.14 kernel, but it crashes on building amdgpu-dkms. I tried going through 6.4.x and adding or building gfx906 like another commenter pointed out, but I got distracted. 6.4.x built just fine per your guide, for what it's worth, including what I know of building ik_llama.cpp with ROCm/HIP. rocBLAS just didn't have the right arch built, so it wasn't happy.
My use case is a handful of Radeon VIIs. Not quite the same, but what I have on hand.
JaredsBored@reddit
This worked well for me! I had two minor issues, nothing big.
Find your GPU
Find your AMD GPU
lspci | grep -i amd | grep -i vga
This command doesn't work for me because my MI50 returns from lspci as:
"Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon Pro VII/Radeon Instinct MI50 32GB] (rev 01)"
This was easy enough to identify, but maybe the grep 'vga' could instead be 'Vega' since I think that should always be included.
ROCm 6.4.3 has an issue where it doesn't include support for gfx906 (aka MI50/Vega 20) even though it's listed as Supported (but Deprecated). There's a currently open GitHub issue for this on the ROCm page: https://github.com/ROCm/ROCm/issues/4625
I simply installed 6.3.6 as Vega 20 GPUs are properly supported in that version. There is a way to add the missing file from a prior ROCm version to an install of 6.4.3 and fix this issue, however I found it simpler to just install 6.3.3 since it works and I saw another thread discussion mentioning that there is very little to no performance difference between ROCm 6.4 and 6.3.
Thank you again to u/Panda24z for putting this guide together, very easy to follow!
Danternas@reddit
I finally jumped on this. It got off to a rocky start, as the standard "amd_iommu=on" doesn't work and results in Proxmox not booting, probably because of the necessary "root=ZFS=rpool/ROOT/pve-1 boot=zfs" line, as I am using ZFS. I also needed "pcie_acs_override=downstream,multifunction" to get IOMMU to properly split the PCIe devices on my consumer board. So that's a tip if you don't get good separation.
That aside the guide worked flawlessly! I am surprised as I had spent hours and hours to get it working. Much credit to vendor-reset as I think that was the major difference.
My only criticism is that the grep-lines often return a lot more information than what is expected. In particular:
I feared that the guide had failed until I found the expected lines nestled among the others. Also
On an AMD system :,)
Oh, and maybe suggest a "rocm-smi" at the end of the guide to confirm everything is working?
Panda24z@reddit (OP)
Thanks for the insight! I made sure to include it in the updated guide. I also updated those lines you mentioned with something better so users can identify the information easier, and added the rocm-smi check with example output. I still find it hilarious how it registered a fan when it has none lol. Thanks again!
Danternas@reddit
Amazing mate, well done.
Stampsm@reddit
About 6 months to a year ago I got as far as the PCI BAR size and was stuck beating my head against the wall at that point. I knew it was likely the issue, but all the guides I saw, when followed, were bricking my VMs and stopping them from booting. I must have been entering something wrong but couldn't figure it out.
THANKS!
Panda24z@reddit (OP)
I'm glad I could help! It was honestly that feeling of frustration that motivated me to post this guide once I figured it out. I'm happy to see others achieving successful results!
Secure_Reflection409@reddit
What kinda tokens/sec are these doing?
Panda24z@reddit (OP)
Here's a rough idea of what I'm getting with Ollama:
JaredsBored@reddit
I've been considering these for a while, but the reset bug had dissuaded me. Turns out a competent tutorial on how to get around it was all it took for me to pull the trigger.
Panda24z@reddit (OP)
Let me know if you encounter any issues. I just updated the guide with more information, so hopefully it is easier to follow.
JaredsBored@reddit
Thank you! Ebay order is placed so in 1-2 weeks I'm sure I'll be back.
fuutott@reddit
are you working with llama.cpp or vllm with this card?
Panda24z@reddit (OP)
I started with Ollama for a quick setup. I encountered issues with Docker vLLM, so I'm considering switching to llama.cpp or giving vLLM another try after building it from source. I haven't had time to modify the server setup since the initial installation.
Marksta@reddit
Geez bro, thanks for the work on the guide. Proxmox and pass through is a strangely difficult-ish topic but also a little over hyped on how hard it is. Never had too much issue myself with Nvidia cards at least. But it makes sense, since the parts of Proxmox that are easy are a level 1/10 on difficulty scale and then anything not explicitly handled in the webui gets pretty wild comparatively speaking.
I had the dumb good idea of wanting to use Proxmox and then use LXCs for inference engines/services, but then I got into a fight with the base Proxmox video drivers when trying to install amdgpu/rocm, nuked the whole thing, and just went with Ubuntu Server for the AI stuff. Didn't seem worth the niceties for my only use case of the machine being AI, IMO.
Panda24z@reddit (OP)
I completely agree. This was my first experience with an AMD GPU. While I've successfully done PCIe passthrough on Proxmox before, working with NVIDIA cards is much more straightforward than this.
Honestly, I got caught up in the hype surrounding the MI50 and thought, “How hard could it be?” That was stupid on my part. However, two things kept me motivated: 1) I had already purchased the damn card and paid the import fees, so I couldn’t bear to lose more money. 2) I needed it to work in my Proxmox server to replace my old GPU, which I am moving to another system.
Since I went through the trouble of getting the damn thing to work, I figured I would document the process for others. It was honestly an impulse buy that turned into a troubleshooting marathon, but I was fortunate enough to find users with similar issues, which ultimately helped me configure it properly.
DistanceSolar1449@reddit
Do you have video out on your Mi50? Did you flash the vbios yourself?
Panda24z@reddit (OP)
Sorry for not being clearer earlier. I’m currently running Ubuntu Live Server, so it’s a headless setup at the moment. I don’t have a mini DisplayPort cable handy, but once I get one, I’ll check if I can get video output working on both Ubuntu Desktop and Windows. I just haven’t reached that step yet.
Regarding the VBIOS, I didn’t flash the card myself, it came pre-flashed from my Alibaba vendor. I know you can sometimes find VBIOS files online, but I haven’t experimented with that personally. From what I’ve heard, flashing can be tricky depending on the exact version of the card you have.
No_Efficiency_1144@reddit
Thanks for guide I might be jumping on the AMD Instinct 32GB train
Panda24z@reddit (OP)
Of course, hope it helps!