GLM 4.6 UD-Q6_K_XL running llama.cpp RPC across two nodes and 12 AMD MI50 32GB

Posted by MachineZer0@reddit | LocalLLaMA

Finally got another six MI50 32GB and removed my old Nvidia Titan Vs from my 2nd HP DL580 Gen9.

Here we go.
Running on the secondary host:

~/llama.cpp.20251012/build/bin/rpc-server --host 0.0.0.0
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 6 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 2: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 3: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 4: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 5: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
WARNING: Host ('0.0.0.0') is != '127.0.0.1'
         Never expose the RPC server to an open network!
         This is an experimental feature and is not secure!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Starting RPC server v3.0.0
  endpoint       : 0.0.0.0:50052
  local cache    : n/a
Devices:
  ROCm0: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
  ROCm1: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
  ROCm2: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
  ROCm3: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
  ROCm4: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
  ROCm5: AMD Radeon Graphics (32752 MiB, 32694 MiB free)
Accepted client connection
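
The warning above is worth heeding: the RPC protocol has no authentication, so the port should only be reachable on a trusted LAN. The log also shows the local cache disabled. As a hedged sketch, assuming your build's rpc-server supports the -c/--cache flag (verify with rpc-server --help), you can cache tensors on disk so repeated model loads skip the network transfer, or skip the 0.0.0.0 binding entirely and tunnel the port over SSH:

# On the secondary host: same command, plus the on-disk tensor cache so
# repeated model loads don't re-transfer weights over the network
# (-c / --cache is an assumption; verify with rpc-server --help)
~/llama.cpp.20251012/build/bin/rpc-server --host 0.0.0.0 -c

# Safer alternative: leave rpc-server on its default 127.0.0.1 binding,
# forward the port from the primary host over SSH, and start llama-server
# with --rpc 127.0.0.1:50052 ("user" is a placeholder)
ssh -N -L 50052:127.0.0.1:50052 user@192.168.1.xxx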

Then on the primary host:

~/llama.cpp/build/bin/llama-server --model ~/models/GLM-4.6-UD-Q6_K_XL-00001-of-00006.gguf --cache-type-k q8_0 --cache-type-v q8_0 --n-gpu-layers 94 --temp 0.6 --ctx-size 131072 --host 0.0.0.0 --rpc 192.168.1.xxx:50052 --alias GLM-4.6_RPC
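
With --rpc pointing at the secondary host, llama-server treats the six remote MI50s as additional devices and spreads the offloaded layers across all twelve GPUs (by default, in proportion to free memory). Once it's up, a quick smoke test against the OpenAI-compatible endpoint looks like this; a minimal sketch, assuming the default port 8080 (the command above doesn't set --port) and the alias from the command:

# Smoke test from any machine that can reach the primary host;
# PRIMARY_HOST is a placeholder for the machine running llama-server
curl http://PRIMARY_HOST:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "GLM-4.6_RPC",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 32
      }'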

Observations (vs. single-node 6x MI50 32GB with GLM 4.6 Q3_K_S):

Prior experiment:

https://www.reddit.com/r/LocalLLaMA/comments/1nxv7x6/performance_of_glm_46_q3_k_s_on_6x_mi50/