Pavel Durov introduces Cocoon, a decentralized AI inference platform on TON

Posted by No_Palpitation7740@reddit | LocalLLaMA

Durov's tweet

🐣 It happened. Our decentralized confidential compute network, Cocoon, is live. The first AI requests from users are now being processed by Cocoon with 100% confidentiality. GPU owners are already earning TON. cocoon.org is up.

🏦 Centralized compute providers such as Amazon and Microsoft act as expensive intermediaries that drive up prices and reduce privacy. Cocoon solves both the economic and confidentiality issues associated with legacy AI compute providers.

📈 Now we scale. Over the next few weeks, we’ll be onboarding more GPU supply and bringing in more developer demand to Cocoon. Telegram users can expect new AI-related features built on 100% confidentiality. Cocoon will bring control and privacy back where they belong — with users.

COCOON Architecture

COCOON is a decentralized AI inference platform on TON that securely connects GPU owners who provide compute with privacy-conscious applications that need to run AI models. For GPU providers, it defines how suitable hardware can become part of a confidential, attested compute layer; for developers, it is the backend that executes model requests and settles payments on-chain.
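To make the moving parts concrete, here is a minimal Python sketch of the lifecycle that paragraph describes: an application submits a request, the network routes it to an attested GPU provider, and the provider is paid in TON per processed request. All names, the scheduling policy, and the in-memory ledger are illustrative assumptions; the post does not document COCOON's actual interfaces, and real settlement happens on-chain.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceRequest:
    request_id: str
    model: str
    prompt: str        # end-to-end encrypted in the real network
    price_ton: float   # illustrative per-request price

@dataclass
class Provider:
    address: str       # TON wallet that receives payouts
    attested: bool     # whether the node passed TEE attestation

@dataclass
class Ledger:
    # Stand-in for on-chain settlement on TON.
    balances: dict = field(default_factory=dict)

    def pay(self, to: str, amount: float) -> None:
        self.balances[to] = self.balances.get(to, 0.0) + amount

def serve(request: InferenceRequest, providers: list, ledger: Ledger) -> str:
    # Only attested providers may handle confidential requests.
    eligible = [p for p in providers if p.attested]
    if not eligible:
        raise RuntimeError("no attested GPU capacity available")
    provider = eligible[0]                            # placeholder scheduling policy
    result = f"<output of {request.model}>"           # inference runs inside the TEE
    ledger.pay(provider.address, request.price_ton)   # settled on-chain in reality
    return result

ledger = Ledger()
providers = [Provider(address="provider-wallet-address", attested=True)]
req = InferenceRequest("r1", "some-model", "hello", price_ton=0.01)
print(serve(req, providers, ledger), ledger.balances)
```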

COCOON For GPU Owners

COCOON allows GPU owners to contribute confidential AI inference capacity to a decentralized network on TON. By running the COCOON stack on a suitable TEE-enabled GPU server, you provide private, verifiable model execution and transparently receive TON payments for each processed request.

As a GPU owner, you bring the hardware and configuration – COCOON provides the protocol, attestation, and trustless payment distribution.
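As a rough illustration of what the provider side could look like, the sketch below models a node that attests its TEE, registers with the network, and serves jobs in a loop. This is a stubbed assumption of the flow, not the real COCOON stack: the class and method names (ProviderNode, poll_job, submit_result, and so on) are invented for the example.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    job_id: str
    payload: bytes  # encrypted request, decryptable only inside the TEE

class ProviderNode:
    """Hypothetical provider node; every method here is an assumption."""

    def generate_attestation(self) -> bytes:
        # In practice the TEE-enabled GPU server would produce a
        # hardware-signed quote proving it runs the unmodified stack.
        return b"attestation-quote"

    def register(self, quote: bytes) -> None:
        print("registered with network")  # network verifies the quote first

    def poll_job(self) -> Optional[Job]:
        return None  # stubbed: no jobs arrive in this sketch

    def run_model(self, job: Job) -> bytes:
        return b"model-output"  # inference executes entirely inside the enclave

    def submit_result(self, job_id: str, output: bytes) -> None:
        print("result submitted; TON payout settled on-chain")

def run_worker(node: ProviderNode, max_idle_polls: int = 3) -> None:
    node.register(node.generate_attestation())
    idle = 0
    while idle < max_idle_polls:  # a real worker would loop indefinitely
        job = node.poll_job()
        if job is None:
            idle += 1
            time.sleep(0.1)
            continue
        idle = 0
        node.submit_result(job.job_id, node.run_model(job))

run_worker(ProviderNode())
```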

COCOON For Developers

Developers plug COCOON’s secure, verifiable AI inference into their apps and backends, so they can safely serve powerful AI features to their users. In exchange for these inference services, they reward GPU providers with TON.
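As a hypothetical picture of the developer experience, a request might reduce to a single authenticated HTTP call. The endpoint, field names, and auth scheme below are assumptions for illustration only, since the post does not document the actual client interface.

```python
import requests

# Placeholder URL: the real gateway address is not given in the post.
GATEWAY_URL = "https://gateway.cocoon.example/v1/inference"

def run_inference(prompt: str, model: str, api_key: str) -> str:
    # Hypothetical request shape; payloads would be encrypted so that
    # only the attested TEE can read them.
    response = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]
```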

Soon, developers will be able to use a streamlined Docker-based solution to deploy their own client instance. Stay tuned for this and more upcoming features – like a lightweight client library that lets apps plug directly into COCOON.