Subnet 03
Templar
Rao Foundation
Templar enables decentralized, incentivized AI training across global compute resources, ensuring scalability, privacy, and collaboration.

SN3 : Templar
| Subnet | Description | Category | Company |
|---|---|---|---|
| SN3 : Templar | Distributed training | Generative AI | Templar |
Templar is a decentralized training framework that enables large-scale AI model training across heterogeneous compute resources distributed over the internet. By leveraging a carefully designed incentive mechanism, it connects diverse computational nodes, allowing contributors (miners) to participate in collaborative training while ensuring quality and integrity through a trustless validation process. This innovative approach, known as Incentivized Wide-Internet Training, rewards miners in $TAO tokens for contributing computational power and high-quality data, fostering an open and democratized AI development ecosystem. Unlike traditional cloud-based training, which relies on centralized infrastructure and controlled datasets, Templar ensures privacy, scalability, and resilience by distributing training across a decentralized network.
This model eliminates single points of failure, making AI training more accessible, secure, and efficient. Participants are incentivized to provide high-quality contributions, as rewards are directly tied to the accuracy and usefulness of their updates. The framework supports heterogeneous hardware environments, allowing a wide range of contributors to participate, from individual developers to large-scale compute providers. By integrating blockchain incentives with decentralized AI training, Templar paves the way for a future where advanced machine learning models can be developed collaboratively without reliance on centralized tech monopolies. This ensures that AI remains open, scalable, and privacy-focused while reducing the costs traditionally associated with model training.
Through its decentralized architecture, Templar is redefining the landscape of AI and blockchain integration, unlocking new opportunities for researchers, engineers, and businesses seeking an efficient, community-driven approach to artificial intelligence.
Key Features
Decentralised Training: Utilises computational resources across the internet to enable large-scale model training.
Incentive-Driven: Implements a reward system that encourages miners to contribute high-quality updates.
Heterogeneous Compute: Supports various hardware configurations to ensure broad participation.
Scalable Architecture: Designed to efficiently train large models across a distributed network.
Fair Participation: Includes mechanisms to prevent manipulation and ensure honest contributions.
System Overview
Templar is a decentralised training framework that coordinates computational workloads across a network of participants. The system comprises two key roles:
Miners: Nodes responsible for training models on assigned data subsets and sharing their computed gradients with peers.
Validators: Nodes that assess the effectiveness of miners’ submitted gradients and update weights on the blockchain accordingly.
The collaboration between miners and validators ensures that only beneficial updates are incorporated into the model. The training process is structured into synchronised windows, where miners train and submit gradients, and validators evaluate and integrate them, all managed through blockchain coordination.
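The window structure above can be sketched in a few lines. This is an illustrative toy, not Templar's actual API: the function names (`train_fn`, `evaluate_fn`, `set_weights_fn`) are hypothetical stand-ins for the miner, validator, and blockchain-coordination steps.

```python
def run_window(window, miner_uids, train_fn, evaluate_fn, set_weights_fn):
    """One synchronised training window (illustrative sketch).

    Miners train locally and submit gradients; the validator scores each
    submission, and the resulting scores are pushed as on-chain weights.
    """
    # Each miner trains on its assigned data and submits a gradient.
    submissions = {uid: train_fn(uid, window) for uid in miner_uids}
    # The validator evaluates every submitted gradient.
    scores = {uid: evaluate_fn(uid, grad, window)
              for uid, grad in submissions.items()}
    # Scores are published as weights via blockchain coordination.
    set_weights_fn(scores)
    return scores
```

The key property is synchronisation: all miners work on the same window number, so validators can recompute each miner's data assignment and judge the submitted gradients against it.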
Miners
Model Synchronisation:
- Miners synchronise their model with the latest global state.
- They attempt to retrieve the latest model checkpoint from the validator with the highest stake.
- If no checkpoint is available, they initialise a model from scratch.
Data Acquisition:
- Each miner retrieves a specific subset of the dataset (pages) for the current training window.
- The assignment of data is deterministic, based on a seed derived from the miner’s UID and the window number.
- This ensures that each miner processes a unique yet consistent portion of the dataset.
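The deterministic assignment can be sketched as follows. The exact seeding scheme is not specified here, so this is a minimal sketch assuming a hash of the miner's UID and the window number; the function name and parameters are hypothetical.

```python
import hashlib
import random

def assigned_pages(uid, window, total_pages, pages_per_miner):
    """Deterministically select a miner's dataset pages for one window.

    Because the seed depends only on (uid, window), any validator can
    recompute exactly the same assignment without communication.
    """
    # Derive a stable 64-bit seed from the miner UID and window number.
    digest = hashlib.sha256(f"{uid}:{window}".encode()).digest()
    seed = int.from_bytes(digest[:8], "big")
    # Sample a unique set of page indices from that seed.
    rng = random.Random(seed)
    return rng.sample(range(total_pages), pages_per_miner)
```

Calling `assigned_pages(uid, window, ...)` twice with the same arguments returns the same pages, while a different UID or window yields a different subset.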
Local Training and Gradient Computation:
- Miners perform forward and backward passes on their assigned data to compute gradients.
- They accumulate gradients over multiple batches within the training window before submission.
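A toy version of per-window gradient accumulation is shown below. A real miner would use a deep-learning framework; here the "model" is a single scalar weight with a squared-error loss, purely to illustrate summing gradients over batches before submission.

```python
def grad_sq_error(w, x, y):
    """Gradient of (w*x - y)^2 with respect to w: 2*(w*x - y)*x."""
    return 2 * (w * x - y) * x

def accumulate_window_gradient(w, batches):
    """Accumulate gradients over every batch in the training window,
    returning the mean gradient to be submitted at window end."""
    total, count = 0.0, 0
    for batch in batches:
        for x, y in batch:
            total += grad_sq_error(w, x, y)
            count += 1
    return total / count
```

Accumulating across the whole window means a miner submits one aggregate update per window rather than one per batch, which keeps communication in step with the synchronised window structure.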
Validators
Model Synchronisation:
- Validators synchronise their model with the latest global state.
- They attempt to retrieve the latest model checkpoint from the validator with the highest stake or start from scratch.
Data Acquisition:
- Validators select a miner to evaluate.
- They retrieve the same data subset assigned to the miner using the same deterministic seeding mechanism.
Gradient Gathering:
- Validators collect the compressed gradients submitted by miners.
- These gradients are decompressed and applied to the local model to maintain consistency and evaluate effectiveness.
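The compression scheme is not detailed above; top-k sparsification is one common choice for compressing gradients, sketched here with hypothetical helper names. Only the k largest-magnitude entries are transmitted, and the validator zero-fills the rest on decompression.

```python
def compress_topk(grad, k):
    """Keep only the k largest-magnitude entries as (index, value) pairs."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return [(i, grad[i]) for i in idx], len(grad)

def decompress_topk(pairs, length):
    """Rebuild a dense gradient, zero-filling the dropped entries."""
    dense = [0.0] * length
    for i, v in pairs:
        dense[i] = v
    return dense
```

After decompression, the validator applies the dense gradient to its local model copy so it can measure the update's effect on the shared model.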
Incentive Mechanism
The Templar incentive mechanism is designed to:
- Encourage Honest Participation: Miners are rewarded for performing genuine training and submitting beneficial updates.
- Promote Model Improvement: Contributions that effectively reduce model loss are rewarded.
- Discourage Malicious Behaviour: Updates that fail to improve or degrade model performance are penalised.
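The scoring principle behind these three goals can be illustrated by measuring the loss change a miner's update produces. This is a simplified sketch, not Templar's actual scoring rule: `score_update` and its parameters are hypothetical.

```python
def score_update(loss_fn, params, update, lr=1.0):
    """Score a miner's update by the loss reduction it produces.

    A positive score means the update improved the model and earns reward;
    a zero or negative score means it failed to improve (or degraded)
    performance and is penalised.
    """
    before = loss_fn(params)
    # Apply the proposed gradient step to a copy of the parameters.
    proposed = [p - lr * g for p, g in zip(params, update)]
    after = loss_fn(proposed)
    return before - after
```

Because the reward is the measured improvement itself, fabricated or random updates score at or below zero, which aligns honest training with reward maximisation.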
Templar ensures meaningful contributions by linking miner rewards to actual improvements in model performance. This approach aligns individual incentives with the collective goal of optimising the shared model. The structured assignment of data, robust evaluation by validators, and decentralised weight distribution create a resilient and self-regulating ecosystem.
By fostering a collaborative and incentive-driven model, τemplar enables efficient, decentralised learning that enhances AI training through global participation while ensuring security and fairness. This framework paves the way for scalable, distributed AI model training across diverse computational environments.