[ DISCUSSION ] Using a global GPU pool for training models

Posted by Broad_Ice_2421@reddit | LocalLLaMA | 11 comments

I was thinking: what if we all combined our idle GPUs into a global pool over a low-latency network?

Many people have gaming PCs, workstations, or spare GPUs that sit unused for large parts of the day. If those idle GPUs could be temporarily shared, developers, researchers, and startups could use that compute when they need it. The idea is somewhat like an Airbnb for GPUs: connecting people with unused GPUs to those who need extra compute to deal with AI training resource demands.

In return, people who lend their GPUs could be rewarded with AI credits, compute credits, or other incentives that they can spend later. Could something like this realistically work at scale, and could it help with the growing demand for GPU compute and AI training?
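To make the matchmaking-plus-credits idea concrete, here is a purely hypothetical toy sketch in Python. Everything in it (`GpuPool`, `Provider`, the credit amounts) is invented for illustration; a real system would also need scheduling, verification, payments, and a way to cope with network latency.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """Someone lending an idle GPU to the pool (hypothetical model)."""
    name: str
    vram_gb: int
    credits: int = 0
    busy: bool = False

class GpuPool:
    """Toy matchmaking pool: a job is matched to the smallest idle GPU
    that satisfies its VRAM requirement, and the provider earns credits."""

    def __init__(self):
        self.providers = []

    def register(self, provider: Provider) -> None:
        self.providers.append(provider)

    def request(self, min_vram_gb: int, credit_reward: int):
        # Pick the smallest idle GPU that fits, keeping bigger cards free
        # for bigger jobs (a simple "best fit" policy).
        candidates = [p for p in self.providers
                      if not p.busy and p.vram_gb >= min_vram_gb]
        if not candidates:
            return None  # no idle GPU can serve this job right now
        chosen = min(candidates, key=lambda p: p.vram_gb)
        chosen.busy = True
        chosen.credits += credit_reward  # reward the lender
        return chosen

    def release(self, provider: Provider) -> None:
        provider.busy = False
```

For example, with a 24 GB and a 48 GB card registered, a job needing 20 GB would land on the 24 GB card and credit its owner, while a later 40 GB job gets the 48 GB card.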