2 x DGX Spark! Give me your non-inference workloads

Posted by entsnack@reddit | LocalLLaMA | 130 comments

2 x DGX Spark with a 200Gbps interconnect.

I posted here when my first Spark came in and everyone responded with inference workloads. I still tested them, but inference monkeys please BTFO this time.

Give me your big-model non-inference workloads to test, something to push the 256GB unified memory. I have a few LoRA training ones from the last post to try. I already have nanochat pretraining running. GRPO without PEFT is planned.
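For scale, here's a rough back-of-envelope for why full-parameter training (like GRPO without PEFT) actually needs the unified memory: bf16 weights (2 bytes/param) + bf16 grads (2) + fp32 Adam m and v (8) + fp32 master weights (4) comes to roughly 16 bytes per parameter, ignoring activations, KV cache, and framework overhead. These are my assumed mixed-precision AdamW numbers, not a measurement:

```python
def train_memory_gb(n_params_b, bytes_per_param=16):
    """Rough full-fine-tune footprint per the breakdown above:
    bf16 weights (2) + bf16 grads (2) + fp32 Adam m, v (8)
    + fp32 master weights (4) = 16 bytes/param.
    n_params_b is the model size in billions of parameters;
    billions of params * bytes/param conveniently equals GB."""
    return n_params_b * bytes_per_param

# 256 GB per Spark, ~512 GB across the pair
for size in (7, 14, 30, 70):
    print(f"{size}B full fine-tune ≈ {train_memory_gb(size):.0f} GB")
```

By this estimate a 30B full fine-tune (~480 GB) just barely fits across both Sparks, while 70B (~1120 GB) is out of reach without sharding tricks, quantized optimizer states, or offloading.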