Subnet 04
Targon
Manifold Labs
Manifold’s subnet enhances AI by integrating text, images, and audio for improved multimodal processing and predictions.

SN4 : Targon
Subnet | Description | Category | Company |
---|---|---|---|
SN4 : Targon | Inference verification & optimization | Generative AI | Manifold |
- Twitter: Company X
- Website: Company website
- Website: Application
- GitHub: GitHub
- LinkedIn: LinkedIn
Developed by Manifold, it focuses on multimodal AI systems that process and generate information across multiple data types and formats, including text, images, and audio.
By drawing on multiple data types at once, these systems gain a deeper understanding of context and relationships, thereby improving human-AI interactions. Multimodal AI systems in this setup also become more resilient and reliable: leveraging data from multiple sources helps them handle inconsistencies and errors more effectively, ultimately enhancing output quality and performance.
Multimodal AI is an advanced form of artificial intelligence that integrates multiple types or modes of data to achieve more accurate assessments, insightful conclusions, and precise predictions. The primary distinction between multimodal AI and traditional single-modal AI lies in the diversity of data they utilize. Single-modal AI typically operates with a single source or type of data, whereas multimodal AI processes data from various sources, such as video, images, speech, sound, and text. This enables a more comprehensive and nuanced understanding of environments or situations.
Multimodal AI systems are generally composed of three key components: data acquisition, multimodal fusion, and decision-making. These systems have a wide range of applications across different industries, including manufacturing process optimization, product quality improvement, healthcare, finance, and entertainment.
In many real-world scenarios, multimodal AI outperforms single-modal AI, representing a new frontier in cognitive AI. By combining the strengths of multiple inputs, multimodal AI excels in solving complex tasks and synthesizing data from diverse sources, resulting in more intelligent and dynamic predictions.
The subnet indexes the web into a searchable database rather than performing live internet searches, which yields faster response times. Building a fast, efficient indexing system is crucial given the vast amount of unindexed information on platforms like YouTube and Facebook, and it significantly improves query speed.
Obstacles encountered while building the index with Subnet 4 included outdated practices, a lack of relevant information on existing search engines, and the slowness of meta-search engines. Storing the vast amounts of information needed for indexing, especially when aiming to index millions of pages daily, proved challenging in Python due to its speed limitations. To improve indexing efficiency, the development team moved performance-critical parts of the pipeline to faster compiled languages such as Go, enabling quicker data storage and retrieval than Python and helping ensure fast response times for searches across the network.
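To make the idea of a searchable database concrete, here is a minimal sketch of an inverted index, the data structure that lets queries hit a precomputed token-to-page mapping instead of the live web. The page contents and identifiers are purely illustrative, not Subnet 4's actual index format.

```python
# Minimal inverted-index sketch: queries resolve against a precomputed
# token -> page-id mapping rather than a live crawl.
from collections import defaultdict


def build_index(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase token to the set of page ids containing it."""
    index: defaultdict[str, set[str]] = defaultdict(set)
    for page_id, text in pages.items():
        for token in text.lower().split():
            index[token].add(page_id)
    return dict(index)


def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return pages containing every token of the query (AND semantics)."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results


# Illustrative pages only.
pages = {
    "p1": "fast deterministic inference verification",
    "p2": "deterministic indexing of web pages",
}
index = build_index(pages)
print(search(index, "deterministic verification"))  # {'p1'}
```

Because the index is built ahead of time, each query costs a handful of set intersections rather than a scan of the underlying documents, which is why indexing speed, not search speed, becomes the bottleneck at millions of pages per day.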
What is a Redundant Deterministic Verification Network?
Using public datasets for model training and evaluation presents challenges in fairly rewarding models for their work. Competitors might overfit their models to these datasets or use a lookup table of known outputs to gain an unfair advantage. To address this, the solution involves prompt generation using a query generation model and a private input.
The private input will be sourced from an API managed by Manifold, rotated every twelve seconds, and authenticated with a signature using the validator’s keys. This private input is fed into the query generation model, which can be operated by the validator or as a light client by Manifold. The data source can be from a crawl or from RedPajama.
Using the query, private input, and a deterministic seed, a ground truth output is generated with the specified model, which can be executed by the validator or as a light client. The validator then sends requests to miners with the query, private input, and deterministic seed. The miners’ outputs are compared to the ground truth output, and if the tokens match, the miner has successfully completed the challenge.
Role of a Prover
A prover is a node responsible for generating an output from a query, private input, and deterministic sampling parameters.
Role of a Verifier
A verifier is a node responsible for verifying a prover’s output. The verifier sends a request to a prover with a query, private input, and deterministic sampling parameters. The prover then returns a response with the output. The verifier compares the output to the ground truth output. If they match, the prover has successfully completed the challenge.
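The challenge round above can be sketched in a few lines: because generation is deterministic in the query, private input, and seed, the verifier can recompute the ground truth locally and demand an exact token match. The toy seeded generator below stands in for an actual model, and the vocabulary and function names are illustrative assumptions.

```python
# Sketch of redundant deterministic verification: identical inputs and seed
# yield identical token sequences, so honest provers match the verifier's
# locally recomputed ground truth token for token.
import random

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]


def generate(query: str, private_input: str, seed: int, n_tokens: int = 8) -> list[str]:
    """Deterministic stand-in for model inference: same inputs -> same tokens."""
    # Seeding random.Random with a string is deterministic across processes.
    rng = random.Random(f"{query}|{private_input}|{seed}")
    return [rng.choice(VOCAB) for _ in range(n_tokens)]


def verify(query: str, private_input: str, seed: int, prover_tokens: list[str]) -> bool:
    """Verifier recomputes the ground truth and requires an exact match."""
    return prover_tokens == generate(query, private_input, seed)


# An honest prover passes; a tampered output fails.
tokens = generate("query", "secret-input", 42)
assert verify("query", "secret-input", 42, tokens)
assert not verify("query", "secret-input", 42, tokens[:-1] + ["omega"])
```

The key property is that verification costs one regeneration rather than any trust in the prover: any deviation in even a single token is detected.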
Features of TARGON
Challenge Request: A challenge request is sent by a verifier to a prover, containing a query, private input, and deterministic sampling parameters. The prover generates an output based on these inputs and sends it back to the verifier.
Inference Request: An inference request is sent by a verifier to a prover, containing a query, private input, and inference sampling parameters. The prover generates an output and streams it back to the verifier.
CAVEAT: Every 360 blocks, the verifier samples a random number of inference requests and compares their outputs to the ground truth outputs. The cosine similarity of the outputs determines the prover’s reward. Failing to respond to an inference request results in a 5x penalty.
Robert Myers – Founder and CEO
James Woodham – Co-Founder
Joshua Brown – Lead Software Engineer
Ahmed Darwich – Software Engineer
Jonathan Guyton – Robotics Engineer