Introducing Prime Intellect’s Protocol & Testnet: A peer-to-peer compute and intelligence network
We’re excited to introduce our Protocol and its first iteration on our Testnet.
Prime Intellect’s Protocol is a peer-to-peer network that commoditizes compute and intelligence, coordinating the collective creation, ownership and governance of open source AI.
It builds on all our existing work on decentralized AI, and develops the infrastructure and economic incentives for us to aggregate and coordinate global compute resources to enable a truly sovereign open-source AI ecosystem.
The protocol is now running on an internal testnet and was fully integrated and live during our SYNTHETIC-1 run. We’ve open-sourced all our testnet code, and we’re excited to collaborate with others as we build towards an open and decentralized AI future.
The Next Step of Our Masterplan
Prime Intellect’s mission is to commoditize compute and intelligence to enable open & decentralized AI development, and accelerate progress in high-impact domains like science, agents and code.
Our protocol plays a key role in enabling this vision at scale.

Many key prerequisites for our protocol have already been developed and released:
- Compute Exchange: Our platform for aggregating and orchestrating global GPU resources. https://www.primeintellect.ai/blog/compute
- PRIME: Our decentralized training framework, which makes it possible to train large-scale models on globally distributed compute; demonstrated by our 10B-parameter INTELLECT-1 model, trained across 5 countries on 3 continents: https://www.primeintellect.ai/blog/intellect-1
- GENESYS: Our synthetic reasoning framework, easily parallelizable across thousands of data centers and millions of devices, which forms the basis of our distributed reinforcement learning approach; used to produce our SYNTHETIC-1 dataset: https://www.primeintellect.ai/blog/synthetic-1
- TOPLOC: A lightweight, efficient validation scheme for inference that allows us to verify the legitimacy of compute contributions in a distributed setting; we plan to extend this design to verification of training: https://www.primeintellect.ai/blog/toploc
Prime Intellect’s Protocol forms the coordination layer to tie all of these components together, and provides the economic incentives for them to attract the compute and capital needed to achieve scale.
INTELLECT-1 and SYNTHETIC-1 are a first glimpse into what our protocol can enable, where anonymous individuals crowdfunded over 360,000 exaFLOPs of compute — equivalent to more than 100,000 H100 hours — to create open models and datasets.

We expect our protocol to enable much more compute crowdfunding for open source over time.
Our Protocol
Prime Intellect is building a peer-to-peer compute and intelligence protocol. The goal is to commoditize compute and intelligence, making it a universally accessible resource, collectively owned and aligned with humanity’s goals.
What Protocols Enable
It is first worth discussing what a protocol can enable, and why we think it’s so important to our mission:
- Amass large scale compute: Open source AI needs access to incredibly large amounts of compute to compete with the largest closed labs. Distributed compute protocols like Bitcoin and Ethereum — that have each aggregated over 10 gigawatts of compute — provide historical precedent on the potential scale of community-pooled compute.
- Truly global scale markets: The nature of decentralized protocols allows them to scale beyond traditional state and institutional boundaries, enabling the most efficient and largest scale markets in the world for compute and intelligence - the most valuable commodities of the future.
- Remove middlemen and create a sovereign open source AI ecosystem: Permissionless networks such as Ethereum and Bitcoin have allowed for the emergence of truly sovereign applications which cannot be censored or shut down by state or corporate actors. We envision a similar future for the open source AI ecosystem, where no middleman has the ability to shut down or hinder the ecosystem’s progress.
- Allow anyone in the world to participate: Decentralized protocols build on the open and egalitarian nature of open source software to enable provenance, ownership and democratize access to economic participation — allowing anyone in the world to earn and participate in the upside of the outputs they produce.
- Create novel funding mechanisms for open source AI: Open source models, datasets, agents and code notoriously suffer from funding issues. We’re excited to explore collective ownership and programmable licenses for models and agents to reward contributors and enable novel revenue streams for open source AI.
Peer-to-Peer Compute and Intelligence
Our protocol has a few key components and design goals:
- Global-scale compute and intelligence markets, enabling anyone to provide or access compute and inference in a peer-to-peer manner; similar to Prime Intellect’s Compute Exchange and OpenRouter, but permissionless and global in scale
- Support for planetary-scale trustless decentralized training, reinforcement learning, synthetic data generation, inference, and agents, backed by robust verification mechanisms
- Open contribution, allowing anyone to create and contribute to open models, agents, datasets and more, and gain ownership in the resulting outputs for their contributions
- Financialization of open models, datasets and agents, providing novel revenue streams and making it economically viable to produce open source AI
- A self-upgradeable, scalable design, built for a future where millions of autonomous agents perform billions of interactions a second
We’re designing our protocol to be general enough to support a plethora of use cases; however, we’re most excited about enabling the following:
- Decentralized model training for frontier open foundation models: from language and agent models to scientific models
- Globally distributed reinforcement learning (RL) in the inference-time compute paradigm
- Co-ownership of models, agents, datasets and code
- Incentivized NanoGPT-style competitions to direct progress
- Decentralized, autonomous, agent-centric organizations
Effectively forming an open ecosystem where anyone can create, crowdfund and economically participate in open intelligence.

Running live on SYNTHETIC-1
Our testnet — the first step towards building our protocol — ran live and served as the settlement layer for all compute contributions on SYNTHETIC-1.
All users contributing to the synthetic data generation run deployed our protocol workers on their compute, had their compute validated, joined the compute pool for SYNTHETIC-1, and ultimately had their contributions tracked on-chain.
You can see the transactions and events that were settled by looking at the contracts on a block explorer, which have been linked in the Contracts section below.
Testnet Overview
While our protocol vision is quite broad, our testnet is focused on enabling anyone in the world to pool their compute resources towards decentralized training, synthetic data generation and agents.
Our protocol currently has a few main components:
- Worker: Software that anyone can run to contribute their compute resources to a compute pool
- Validator: Checks for and identifies fraudulent or faulty hardware
- Orchestrator: A server for compute pool creators to manage the workers in their pool and coordinate the current workload to run
- Smart Contracts: A trustless settlement layer leveraging the Ethereum stack, which lets us scale globally without introducing middlemen and ensures that no entity has the ability to shut down an open ecosystem of models, agents, and more

Each component is designed with the goal of becoming fully decentralized over time, so that we can support a truly sovereign open ecosystem.
Worker
The Prime Intellect worker forms the basis of our decentralized compute infrastructure: software that anyone can run to contribute their hardware to compute pools.
This will allow us to scale our compute exchange beyond our datacenter partners to globally distributed data centers and consumer compute, creating a truly global-scale compute fabric. The worker is also being designed to allocate compute towards the highest-earning use case, effectively maximizing the hardware’s earnings and attracting large amounts of compute to our network.
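As a rough sketch of what earnings-maximizing allocation could look like, here is a minimal greedy allocator in Python. The class names, fields, and numbers are illustrative assumptions, not the worker’s actual interface:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    est_hourly_earnings: float  # estimated payout per GPU-hour (assumed unit)
    min_vram_gb: int            # minimum VRAM required to participate

@dataclass
class Gpu:
    gpu_id: str
    vram_gb: int

def allocate(gpus: list[Gpu], workloads: list[Workload]) -> dict[str, str]:
    """Greedily assign each GPU to the highest-earning workload it can run."""
    assignment = {}
    for gpu in gpus:
        eligible = [w for w in workloads if gpu.vram_gb >= w.min_vram_gb]
        if eligible:
            best = max(eligible, key=lambda w: w.est_hourly_earnings)
            assignment[gpu.gpu_id] = best.name
    return assignment
```

A real worker would refresh its earnings estimates continuously and account for switching costs, but the core idea is the same: route each device to the best-paying pool it qualifies for.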

While you can currently only launch workers through our platform, we plan to enable permissionless contributions soon, so that anyone can run the worker on any compute across the world, contribute it to compute pools, and earn for their contributions.
Validator: Verifying Hardware & Identifying Legitimate Contributions
The Validator is how we ensure the legitimacy and reliability of the hardware contributing to our distributed compute network.
Validators can perform hardware and software checks on compute joining the network to identify fraudulent or faulty hardware and remove it from the network. They have multiple tools at their disposal:
Hardware Verification:
- Hashcat Challenges: The validator sends hash-computation tasks at random intervals, measuring completion time and verifying results to prevent precomputation.
- Dynamic, time-sensitive workload profiling: Real-time workload profiling using diverse tasks to test different GPU capabilities while measuring timing and memory patterns.
- Hardware fingerprinting: Creates device fingerprints by measuring precise hardware characteristics like GPU operation timing and memory patterns.
- Behavioral Analysis: Monitors long-term patterns including temperature changes, power consumption, and performance degradation.
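The timed hash challenge above can be sketched roughly as follows. The hash function, round count, and timeout here are illustrative assumptions, not the validator’s actual parameters; a fresh random nonce per challenge is what defeats precomputation:

```python
import hashlib
import os

def issue_challenge() -> bytes:
    """Validator side: issue a fresh random nonce so results can't be precomputed."""
    return os.urandom(32)

def solve_challenge(nonce: bytes, rounds: int = 10_000) -> bytes:
    """Worker side: iterated SHA-256 over the nonce (a stand-in for the real task)."""
    digest = nonce
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

def verify(nonce: bytes, response: bytes, elapsed_s: float,
           max_elapsed_s: float = 5.0, rounds: int = 10_000) -> bool:
    """Validator side: recompute the answer and check both correctness and timing."""
    if elapsed_s > max_elapsed_s:
        return False  # too slow: suggests weaker hardware or proxied computation
    return response == solve_challenge(nonce, rounds)
```

The timing bound is what turns a plain correctness check into a capability check: a machine that answers correctly but far too slowly is probably not the hardware it claims to be.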
Software Verification:
- TOPLOC-Style Proofs: Uses lightweight verification schemes to validate inference results and training gradients at the software level, like TOPLOC: https://www.primeintellect.ai/blog/toploc
- Trusted Execution Environments (TEEs): Leverages Trusted Execution Environments to provide hardware-signed attestations verifying device legitimacy and software correctness.
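As a deliberately simplified illustration of activation-based verification (the actual TOPLOC scheme is more involved; see the linked post), a prover could commit to its top-k activations and a verifier could recompute them locally and compare within a tolerance:

```python
def commit_topk(activations: list[float], k: int = 8) -> list[tuple[int, float]]:
    """Prover: commit to the k largest-magnitude activation values and their indices."""
    ranked = sorted(range(len(activations)), key=lambda i: -abs(activations[i]))
    return [(i, activations[i]) for i in ranked[:k]]

def check_commitment(reference: list[float],
                     commitment: list[tuple[int, float]],
                     tol: float = 1e-3) -> bool:
    """Verifier: recompute the activations and accept if committed values match within tol."""
    return all(abs(reference[i] - v) <= tol for i, v in commitment)
```

The tolerance matters because distributed hardware produces slightly different floating-point results; the verifier accepts small numerical drift while still catching a prover that ran a different model or skipped the computation.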
Ultimately, we expect the solution to verification to be multi-layered and to differ with each use case built on top of our protocol; some use cases will require stronger security and correctness guarantees than others.
Orchestrator
The Orchestrator is a central server that compute pool creators can use to manage their training run:
- Manage contributing workers
- Monitor status and performance metrics of workers
- Remove workers if anomalies are detected
- Update software that nodes are running, or trigger restarts
While the holy grail is fully trustless use of distributed compute, in reality many challenges must be overcome before runs can become fully autonomous and fault-tolerant.
Our orchestrator enables compute pool creators to dynamically manage and react to issues during their training or synthetic data generation runs, and over time we expect to remove the guardrails and intervention mechanisms as our frameworks become increasingly resilient and fault-tolerant.
Smart Contracts
Smart contracts are self-executing programs that, once deployed, cannot be changed, and whose execution is guaranteed by the network.
They allow us to write rules that trustlessly coordinate compute on our protocol in a transparent and credibly neutral way, without any intermediaries or middlemen, ensuring that anything built on our contracts cannot be tampered with or censored by anyone, not even us.
Our smart contracts form the foundation of our protocol, and act as a trustless settlement layer for our compute and intelligence markets.
The contracts are written in Solidity and deployed on Base Sepolia, the testnet of an Ethereum-based blockchain; over time, we plan to migrate them to our own chain. Here’s a quick overview:
- PrimeNetwork: The central coordination contract, which:
  - Enforces roles and permissions (Federator, Validator)
  - Handles the registration of providers and their stakes
  - Orchestrates the creation of domains and links them to the DomainRegistry
- StakeManager: Manages all staking-related logic, including unbonding periods, slashing conditions, and minimum stake requirements.
- ComputeRegistry: Maintains the record of providers, nodes, and their status (e.g., whitelisted, validated, active).
- DomainRegistry: Keeps track of various domains, each referencing potential validation logic and metadata.
- ComputePool: Governs the lifecycle of compute pools, from creation to completion. Tracks which nodes join, their active time intervals, and overall compute capacity.
- RewardsDistributor: Calculates rewards for nodes based on active time in a compute pool and distributes tokens accordingly.
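The time-based reward split described for the RewardsDistributor amounts to a pro-rata division over each node’s active time. The Python below is only an illustrative off-chain sketch of that calculation, not the Solidity implementation:

```python
def compute_rewards(active_seconds: dict[str, float],
                    total_pool_reward: float) -> dict[str, float]:
    """Split a pool's reward among nodes proportionally to their active time.

    active_seconds: node address -> seconds the node was active in the pool
    total_pool_reward: total tokens to distribute for the interval (assumed unit)
    """
    total = sum(active_seconds.values())
    if total == 0:
        return {node: 0.0 for node in active_seconds}
    return {node: total_pool_reward * secs / total
            for node, secs in active_seconds.items()}
```

So a node active for twice as long as another earns twice the tokens from the same pool, regardless of when within the interval it was active.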

The contracts are publicly viewable on Base testnet, and you can see the transactions and events that have been triggered on them using a block explorer; we’ve linked to all the contract addresses on Base’s block explorer below:
- Prime Network: https://sepolia.basescan.org/address/0x0DFd3646391c8CBde50b8B3541a2F6f12718c23F
- Compute Pool: https://sepolia.basescan.org/address/0x40d0bdd887b8f1711Ad8eD257dBFDe7d22AE9b67
- Stake Manager: https://sepolia.basescan.org/address/0x8e77B1e622f27B2F6cF8ED6605B15515F693bE3F
- Compute Registry: https://sepolia.basescan.org/address/0x3B03Ad8e9F03cfA364d80cd52b98E6523E041376
- Domain Registry: https://sepolia.basescan.org/address/0xE9f8e23199FA9A8331314272AdaF5D931c12384C
- Rewards Distributor: https://sepolia.basescan.org/address/0xc3A3EEc96dB2c4E2770A65d1C3d5Cd6ef780cF40
What’s Next?
Our testnet is just the first step of our protocol vision and masterplan, and we plan to improve on it massively over the next few months by:
- Enabling permissionless contributions on our public testnet for our next training run, where anyone in the world can run our worker and validator software, participate in the network without any middlemen, and contribute to state-of-the-art reasoning models
- Integrating TOPLOC-style verification mechanisms for inference and training to enable truly decentralized training, synthetic data generation, reinforcement learning and inference
- Expanding our infrastructure beyond just compute pools to peer-to-peer compute and intelligence markets
- Extending our design to support the financialization of models, agents, datasets and code to enable novel revenue streams for open source AI
- Launching collaborations with other leading projects in open source and decentralized AI to bootstrap an ecosystem of researchers and developers collectively advancing open intelligence
- And much more…
If this sounds exciting to you, join us. We are a small, talent-dense, and highly ambitious team, and we’re looking for engineers and researchers to help us build a beneficial co-existence between humanity and superintelligence.




