Subnet 01

Text Prompting

Macrocosmos

Incentivizes global growth in conversational AI by rewarding contributors and advancing communication technologies.

SN1 : Prompting / Apex

Subnet: SN1 : Prompting / Apex
Description: Agentic workflows & inference
Category: Generative AI
Company: Macrocosmos

Subnet 1 within the Bittensor network is focused on incentivizing the growth of conversational intelligence on a global scale. It acts as a platform for advancing AI communication technologies, where contributors receive rewards for their efforts towards this goal. This subnet plays a pivotal role in Bittensor’s decentralized AI service ecosystem, fostering innovation and engagement in the realm of AI-driven conversations.

The inaugural Bittensor subnet dedicated to text generation is termed the Finney Prompt Subnetwork. This subnetwork is purposefully crafted to support prompt-based neural networks such as GPT-3, GPT-4, and ChatGPT in a decentralized fashion. Through this setup, users gain the ability to interact with Validators on the network to access outputs from the most proficient models, thereby empowering various applications.

Macrocosmos aims to elevate subnet creation, with a focus on crafting incentives and mechanisms for the Bittensor network. Moving from Opentensor to Macrocosmos grants newfound creative freedom: separating the entities allows independent exploration of innovative concepts, and Macrocosmos ventures can now push boundaries free from constraints tied to the foundation’s reputation. This subnet hosts several incentive mechanisms aimed at fostering internet-scale conversational intelligence. Below are examples of the intelligence generated within this subnet:

  • Answering questions.
  • Summarising given text.
  • Debugging code.
  • Translating languages.
  • Solving mathematics problems, and more.

This subnet operates through Large Language Models (LLMs), which scour the internet and utilise specialised simulator modules to generate factually accurate and mathematically precise responses.

Prompting Overview

  • The subnet validator simultaneously issues challenges to multiple subnet miners, constituting prompts for them. These challenges are crafted to mimic human style and tone, enabling subnet miners to adeptly handle ambiguous instructions and understand user needs.
  • The subnet validator drives conversations with subnet miners towards predefined goals.
  • Subnet miners respond to the subnet validator after completing the challenge task, often requiring the utilisation of APIs or tools for optimal performance.
  • The subnet validator evaluates each subnet miner’s response by comparing it with a locally generated reference answer, using it as the ground truth. This reference, derived from data from APIs and tools, ensures factual accuracy and provides citations.
  • Finally, the subnet validator assigns weights to subnet miners by transmitting them to the blockchain. In the Bittensor blockchain, the Yuma Consensus allocates rewards to participating subnet miners and subnet validators.
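The steps above can be sketched in Python. This is a hypothetical illustration, not the subnet’s actual codebase: the `score` and `run_round` names are invented, and plain string similarity stands in for the subnet’s full scoring function.

```python
import difflib

def score(response: str, reference: str) -> float:
    """Similarity in [0, 1] between a miner response and the reference."""
    return difflib.SequenceMatcher(None, response, reference).ratio()

def run_round(challenge: str, reference: str, miner_responses: dict) -> dict:
    """Score every miner's response and normalise the scores into weights."""
    scores = {uid: score(resp, reference) for uid, resp in miner_responses.items()}
    total = sum(scores.values()) or 1.0
    # In production these weights would be transmitted to the blockchain,
    # where Yuma Consensus converts them into rewards.
    return {uid: s / total for uid, s in scores.items()}

weights = run_round(
    challenge="hey, what's the tallest mountain?",
    reference="Mount Everest is the tallest mountain above sea level.",
    miner_responses={
        1: "Mount Everest is the tallest mountain above sea level.",
        2: "I think it might be K2.",
    },
)
```

Here miner 1, whose response matches the reference exactly, receives the larger share of the normalised weight.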

Use of Large Language Models

Both the subnet validator and subnet miners utilise Large Language Models (LLMs) in this subnet to craft challenges (subnet validator) and respond to prompts (subnet miners).

Challenge Generation

The challenge generation process unfolds as follows:

  • The subnet validator devises a prompt containing a clear question or task description for a given task type.
  • The subnet validator generates one or more reference answers to the prompt, providing the context for their creation.
  • To ensure human-like conversations, the subnet validator assumes a human persona and imbues the prompt with the persona’s style and tone, introducing a lossy, corrupted version of the original clear instruction. This corrupted prompt, known as a challenge, is issued to subnet miners without providing the reference.
  • The subnet validator evaluates subnet miner responses against reference answers. The closer a subnet miner’s response aligns with the reference, the higher their score.
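As a toy illustration of this flow (the persona styles and function names are invented for the example; in the real subnet an LLM performs the persona roleplay):

```python
# Toy sketch of challenge generation: a clear task prompt is corrupted
# with a persona's style before being sent to miners, while the clean
# prompt (and the reference built from it) stays local to the validator.
# The rule-based "personas" below stand in for an LLM roleplaying a human.

PERSONA_STYLES = {
    "casual": lambda text: "hey so like, " + text.lower().rstrip("?.") + "??",
    "terse": lambda text: text.lower().rstrip("?.!") + "?",
}

def make_challenge(clear_prompt: str, persona: str):
    """Return (challenge, clear_prompt): the lossy prompt miners see,
    and the clean prompt kept locally for reference generation."""
    return PERSONA_STYLES[persona](clear_prompt), clear_prompt

challenge, reference_prompt = make_challenge(
    "What is the capital of France?", persona="casual"
)
# challenge is the corrupted version; reference_prompt never leaves
# the validator.
```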

Measuring Subnet Miner Responses

The Prompting Subnet 1 currently utilises a combination of string literal similarity and semantic similarity to gauge the proximity of a subnet miner’s response to the reference answer.
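A minimal sketch of such a combined score, assuming a weighted mix of the two signals; the bag-of-words cosine below is a simple stand-in for a real embedding-based semantic model, and the 0.4/0.6 weighting is invented for the example:

```python
import difflib
import math
from collections import Counter

def string_similarity(a: str, b: str) -> float:
    """String-literal similarity in [0, 1]."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def semantic_similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(
        sum(c * c for c in vb.values())
    )
    return dot / norm if norm else 0.0

def combined_score(response: str, reference: str,
                   w_string: float = 0.4, w_semantic: float = 0.6) -> float:
    # The 0.4/0.6 mix is illustrative, not the subnet's actual weighting.
    return (w_string * string_similarity(response, reference)
            + w_semantic * semantic_similarity(response, reference))
```

An exact match scores 1.0; a response sharing no vocabulary with the reference is penalised on both signals.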

Key Innovations in This Subnet

This subnet has pioneered several innovative techniques to achieve truly human-like conversational AI that generates intelligence rather than merely replicating content from the internet. Referencing the diagram in the “Challenge Generation” section above:

Achieving Human-Like Conversation

To deliver a human-like conversational experience:

  • Subnet validators engage in roleplay, assuming the personas of random human users before prompting subnet miners. This approach fosters authentic, random, human-like conversations throughout subnet operations.
  • Subnet miners build proficiency in handling ambiguous instructions.
  • This process generates intriguing synthetic datasets for fine-tuning other LLMs.

Subnet miners strive to produce completions closely resembling the reference by:

  • Deciphering clear instructions from the lossy challenge.
  • Identifying relevant contexts, e.g., using Wikipedia.
  • Crafting completions mirroring the tone and style of the reference.

Through this subnet validation process, subnet miners progressively enhance their ability to interpret ambiguous, “fuzzy” instructions.
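A hypothetical sketch of that miner-side pipeline — every stage here is a placeholder: real miners would use an LLM to recover the instruction, a retrieval tool such as a Wikipedia API for context, and an LLM to draft the completion:

```python
def recover_instruction(challenge: str) -> str:
    """Strip persona noise to recover the underlying question (toy rule)."""
    return challenge.removeprefix("hey so like, ").rstrip("?! ") + "?"

def retrieve_context(instruction: str, knowledge: dict) -> str:
    """Return the first known topic mentioned in the instruction."""
    return next((v for k, v in knowledge.items() if k in instruction), "")

def respond(challenge: str, knowledge: dict) -> str:
    """Recover the instruction, fetch context, and answer from it."""
    instruction = recover_instruction(challenge)
    return retrieve_context(instruction, knowledge) or "I do not know."

# A tiny stand-in knowledge base for the retrieval step.
knowledge = {"capital of france": "Paris is the capital of France."}
answer = respond("hey so like, what is the capital of france??", knowledge)
```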

Subnet validators may intensify instruction corruption to heighten task difficulty. To alter subnet miner completions, subnet validators may adjust the style and tone of reference answers or modify the scoring function, or both.

Preventing Subnet Miners from Seeking Answers

To deter subnet miners from simply sourcing answers from the internet, this subnet introduces fuzziness into prompts, necessitating the utilisation of semantic intelligence to comprehend prompt instructions.

Evolving Subnet as a Mixture of Experts (MoE)

The subnet validator formulates challenges based on various tasks such as answering questions, summarising text, debugging code, solving mathematics problems, etc. The rationale behind incorporating multiple tasks includes:

  • Continuously benchmarking subnet miners’ capabilities across diverse, challenging yet common use-cases.
  • Routing prompts to specialised subnet miners, facilitating an effective mixture of experts system. This approach also lays the groundwork for Bittensor’s inter-subnet bridging mechanism, enabling Subnet 1 to interact with other subnets and access their valuable contributions.
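One way to picture such routing (the task labels, keyword classifier, and miner UIDs below are all invented for illustration; a production router would more plausibly use an LLM or a trained classifier):

```python
# Keyword-based task classification routing prompts to specialist miners.
TASK_KEYWORDS = {
    "debugging": ("traceback", "error", "bug", "exception"),
    "math": ("solve", "equation", "integral", "compute"),
    "summarisation": ("summarise", "summary", "tl;dr"),
}

# Hypothetical mapping from task type to specialist miner UIDs.
SPECIALISTS = {"debugging": [3, 7], "math": [2], "summarisation": [5, 9]}

def classify(prompt: str) -> str:
    """Assign the prompt a task label from its keywords."""
    lowered = prompt.lower()
    for task, keywords in TASK_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return task
    return "qa"  # default: general question answering

def route(prompt: str) -> list:
    """Return the miner UIDs that specialise in the prompt's task."""
    return SPECIALISTS.get(classify(prompt), [1, 4])

route("please summarise this article for me")  # → [5, 9]
```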

Finally, subnet miners in this subnet must adeptly utilise tools and APIs to fulfil validation tasks. We are developing an API layer for inter-subnet communication, a natural extension of ‘agentic’ models.

Will Squires – CEO and Co-Founder

Will has dedicated his career to navigating complexity, spanning from designing and constructing significant infrastructure to spearheading the establishment of an AI accelerator. With a background in engineering, he made notable contributions to transport projects such as Crossrail and HS2. Will’s expertise led to an invitation to serve on the Mayor of London’s infrastructure advisory panel and to lecture at UCL’s Centre for Advanced Spatial Analysis (CASA). He was appointed by AtkinsRéalis to develop an AI accelerator, which expanded to encompass over 60 staff members globally. At XYZ Reality, a company specializing in augmented reality headsets, Will played a pivotal role in product and software development, focusing on holographic technology. Since 2023, Will has provided advisory services for the Opentensor Foundation, contributing to the launch of Revolution.

Steffen Cruz – CTO and Co-Founder

Steffen earned his PhD in subatomic physics from the University of British Columbia, Canada, focusing on developing software to enhance the detection of extremely rare events (10^-7). His groundbreaking research contributed to the identification of novel exotic states of nuclear matter and has been published in prestigious scientific journals. As the founding engineer of SolidState AI, he pioneered innovative techniques for physics-informed machine learning (PIML). Steffen was subsequently appointed as the Chief Technology Officer of the Opentensor Foundation, where he played a pivotal role as a core developer of Subnet 1, the foundation’s flagship subnet. In this capacity, he enhanced the adoption and accessibility of Bittensor by authoring technical documentation, tutorials, and collaborating on the development of the subnet template.

Pedro Ferreira – Machine Learning Engineer

Kalei Brady – Data Scientist

Sergio Champoux – Data Scientist

Brian McCrindle – Machine Learning Researcher

Elena Nesterova – Lead Technical Program Manager

Richard Hudson – Communications Lead

Alex Williams – Recruitment Lead