Can deterministic LLM inference replace SHA-256 for network consensus?

Posted by I-Am-A_Robot@reddit | LocalLLaMA | View on Reddit | 12 comments

I got tired of my GPU sitting idle when I wasn't actively prompting it, and I've been interested in spaces where human users and their AI companions and agents can explore the digital realm together. So I started looking into ways to use local LLMs to secure a decentralized network instead of brute-forcing meaningless math the way Bitcoin does: a modern approach built on LLMs and agentic AI. A side benefit is that the network outputs cryptographically verified datasets, which extends the potential utility of a blockchain built on LLMs.

The core problem I ran into was deterministic state. How do you get a swarm of heterogeneous consumer hardware to agree on an AI generation without fracturing the network, in a way that scales from one to potentially millions of users on a decentralized P2P network? What I came up with, largely by working with premium models in an agentic workflow, is a two-factor method.

Essentially, each node uses the previous block's hash to seed a temperature-0.0 prompt for a local Llama-3-8B. The model generates a semantic sentence (Proof of Intellect). Then, instead of SHA-256, the cryptographic throttle is an integer matrix multiplication algorithm, which natively leverages tensor cores and explicitly bricks traditional ASICs. It's entirely open source and runs on local models.
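For anyone curious what that pipeline could look like, here's a minimal sketch of the two steps. To be clear, this is my own toy reconstruction, not the actual implementation: the prompt format is made up, the model-generation step is omitted (it would need Llama-3-8B loaded with greedy decoding), and I'm still using SHA-256 to seed and compress around the matmul chain, which the real design may avoid.

```python
import hashlib
import numpy as np

def seed_prompt(prev_block_hash: str) -> str:
    """Derive a deterministic prompt from the previous block's hash.
    (Hypothetical format; the post doesn't specify the actual scheme.)"""
    return f"Seed {prev_block_hash[:16]}: write one sentence about consensus."

def int_matmul_work(seed: bytes, rounds: int = 1000, dim: int = 64) -> bytes:
    """Toy integer-matmul 'throttle': a chain of modular int64 matrix
    multiplications (the kind of op tensor cores accelerate), compressed
    into a digest at the end. Deterministic for a given seed."""
    rng = np.random.default_rng(
        int.from_bytes(hashlib.sha256(seed).digest()[:8], "big")
    )
    m = rng.integers(-128, 128, size=(dim, dim), dtype=np.int64)
    state = m.copy()
    for _ in range(rounds):
        # Each round depends on the previous result, so the work is
        # inherently sequential and can't be skipped.
        state = (state @ m) % 1_000_003
    return hashlib.sha256(state.tobytes()).digest()
```

The modulus keeps int64 entries bounded so the multiplication never overflows; any peer replaying the same seed gets the identical digest, which is what makes it usable as a consensus check.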

Curious if anyone here has experimented with deterministic LLM loops for network consensus before? The hardest part was getting the P2P swarm to accept cross-platform quantization without ghost forking.
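On the quantization/fork point: one approach I'd expect (hypothetical, not necessarily what the project does) is to fingerprint the raw token IDs of each generation rather than the decoded text, then flag peers whose fingerprint disagrees with the majority. Something like:

```python
import hashlib
from collections import Counter

def generation_fingerprint(token_ids: list[int]) -> str:
    # Hash raw token IDs, not decoded text, so peers with different
    # detokenization quirks still agree on the same generation.
    data = b"".join(t.to_bytes(4, "little") for t in token_ids)
    return hashlib.sha256(data).hexdigest()

def detect_fork(peer_fingerprints: dict[str, str]) -> set[str]:
    # Toy fork detector: peers whose fingerprint differs from the
    # majority value are the "ghost fork" candidates.
    majority, _ = Counter(peer_fingerprints.values()).most_common(1)[0]
    return {peer for peer, fp in peer_fingerprints.items() if fp != majority}
```

This only detects divergence after the fact, of course; the harder problem is making differently-quantized models produce the same greedy token sequence in the first place.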