🧠The Intelligence System
The OINs system is designed to distribute the training of open source AI Agents over decentralized compute networks, through gaming on social media. Let's break that down.
In the first iteration, anyone will be able to execute a transaction on Solana from within a Farcaster Frame. This transaction will do two things (a rough sketch of the flow follows this list):

1. Train the AI Agent by executing a game onchain against other AI agents. The game can be executed with minimal clicks in a Farcaster Frame.
2. Track the amount of compute the user has contributed to train that AI agent. This will be used to reward each user proportionally.
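A minimal sketch of that two-part flow, assuming hypothetical names (play_training_match, ComputeLedger, and the game and agent methods are illustrations, not the actual OINs program interface):

```python
# Illustrative sketch of the two actions bundled into one training transaction.
from dataclasses import dataclass, field


@dataclass
class ComputeLedger:
    """Tracks how much compute each user has contributed to training an agent."""
    contributions: dict[str, int] = field(default_factory=dict)

    def record(self, user: str, compute_units: int) -> None:
        self.contributions[user] = self.contributions.get(user, 0) + compute_units

    def reward_share(self, user: str) -> float:
        """The user's proportional share of future training rewards."""
        total = sum(self.contributions.values())
        return self.contributions.get(user, 0) / total if total else 0.0


def play_training_match(user: str, agent, opponents, game, ledger: ComputeLedger):
    """One transaction: (1) train the agent by playing a game against other
    agents, and (2) record the user's compute contribution for rewards."""
    result, compute_used = game.run(agent, opponents)  # hypothetical game API
    agent.update(result)                               # hypothetical learning step
    ledger.record(user, compute_used)
    return result
```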
In this way, OINs will incentivize average individuals to participate in the AI revolution and own a piece of the ever-transforming future.
Q: But if anyone can participate in this onchain future, what's in it for the holders?
In the current iteration, only holders (the Researchers) have the ability to create Intelligent Agents onchain. OINs are intelligent agents instilled with one or more machine learning models, enabling each agent to set goals and subgoals, create plans to meet those goals, and make decisions to execute on those plans. At first, the models used by agents will be very basic, capable of solving only simple problems. As the network of players grows, it will unlock the capacity for much more complex models supporting more intelligent agents.
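A very rough sketch of that goal-to-plan-to-action loop, where every class and method name is an illustrative assumption rather than the real OINs implementation:

```python
# Illustrative only: a minimal goal -> subgoals -> plan -> decisions loop.
# None of these names come from the actual OINs codebase.
class GoalDrivenAgent:
    def __init__(self, model):
        self.model = model  # the machine learning model instilled in the agent

    def run(self, environment):
        goal = self.model.propose_goal(environment)        # set an intelligent goal
        subgoals = self.model.decompose(goal)              # break it into subgoals
        plan = self.model.plan(subgoals)                   # create a plan to meet them
        for step in plan:
            action = self.model.decide(step, environment)  # decide how to execute
            environment.apply(action)                      # act on the plan
```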
Q: How do OINs work?
Each intelligent agent uses a self-contained software binary that follows a set of standard interfaces. Developers can create smart contracts that adhere to these interfaces and make them available onchain for anyone to use.
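As a loose illustration of what such an interface could look like (the method names below are assumptions made for clarity, not the actual onchain interface definitions):

```python
# Sketch of a standard agent interface; illustrative, not the official spec.
from abc import ABC, abstractmethod
from typing import Any


class IntelligentAgent(ABC):
    """Interface a self-contained agent binary would be expected to expose."""

    @abstractmethod
    def observe(self, state: Any) -> None:
        """Receive the current game state."""

    @abstractmethod
    def act(self) -> Any:
        """Return the agent's next move or decision."""

    @abstractmethod
    def update(self, reward: float) -> None:
        """Apply a learning update based on the outcome of the last action."""
```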
If you are a software or AI engineer, this is an opportunity to use your skillset and expertise to help grow the OINs network.
Depending on the game, players can either use one of the existing agents from the OINs Open Source Agent Directory (an onchain repository) or train a third-party agent from the ground up. More advanced users can clone one or more existing agents from the planned Foundry Marketplace, discussed below. Once an agent's smart contract has been deployed onchain, anyone can train that agent within eligible games.
Agents can have various levels of intelligence and generalizability to new games and environments. The best agents will be more intelligent than a human in their mastered skills and will generalize to solve many different games, requiring very little experience to master a new game. The community is incentivized to select a diverse set of best-in-class agents and reward them for being made available as Base Agents on both the Open Source Agent Directory and the Foundry Marketplace. Base Agents can be thought of as untrained sets of learning models; they are fresh canvases with the ability to learn useful skills in a variety of games and environments.
The first implementation of intelligent agents has been created to run on Solana. Future implementations of these agents can be created to run within other distributed compute networks when, for example, the SVM is no longer capable of meeting the compute needs of the models or the community.
Q: Which agents and games will be incentivized with $ECHO?
This will be governed by the community of $ECHO holders. At the beginning, the agents and games incentivized to be trained will be highly generic and low complexity. This enables OINs to build a base of players on simple games with faster iteration cycles, before moving on to more complex use cases requiring more intelligent agents. The early stages will validate the technology and identify the strengths and weaknesses of different parts of the system.
As the network and the player base training agents both grow, two things become possible: using more advanced models in agents, and developing finely tuned training and inference infrastructure. It is at this point that the power of OINs will begin to be unlocked. Everything before this is simply training wheels.
Q: How will OINs be monetized?
OINs that become highly useful for generalized or specific use cases can be listed by their owner on the planned Armory Marketplace, an Intelligence-as-a-Service (IaaS) platform where owners can earn a fee any time their agent is used. This will be very similar to existing Software-as-a-Service (SaaS) models, except the services will be generalizable to many more use cases over time.
As mentioned above, holders are also incentivized by the training rewards given to users for improving their AI agents. Anytime a user receives a training reward, the owner of the OINs that was trained receives a royalty from that reward. This means that OINs holders benefit the more their agents are trained, in addition to any revenue they receive from their trained models being used as a service.
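A toy example of the royalty mechanic, using purely hypothetical numbers (the actual reward amounts and royalty rate are not specified here):

```python
# Hypothetical figures chosen only to illustrate how the split could work.
TRAINING_REWARD = 100.0   # $ECHO earned by a player for a training session
ROYALTY_RATE = 0.10       # assumed 10% royalty to the owner of the trained OINs

owner_royalty = TRAINING_REWARD * ROYALTY_RATE    # 10 $ECHO to the owner
player_payout = TRAINING_REWARD - owner_royalty   # 90 $ECHO to the player
```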
Q: What is an Architect?
500 $ECHO holders will be randomly selected to become Architects. This inherent rarity will be used as a mechanism within OINs to limit priority access within the system.
Architects are able to designate one of their existing OINs as a Foundation Agent (FA) and list it on the Foundry Marketplace, a Transfer Learning as a Service (LaaS) platform where owners can earn a fee any time their agent is cloned by (used as the source for) another OINs. It is possible for users to combine two or more agents in the cloning process, if the agents use the same base model architecture. In this case, the weights and layers of the models in the FAs cloned will be merged using genetic programming techniques.
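A highly simplified sketch of merging the weights of two same-architecture Foundation Agents with a genetic-style uniform crossover; the actual merging procedure used by OINs is not detailed here, so treat this as an assumption-laden illustration.

```python
# Toy genetic-style crossover over model weights; illustrative only.
import numpy as np


def crossover_merge(weights_a: list[np.ndarray],
                    weights_b: list[np.ndarray],
                    mix_prob: float = 0.5) -> list[np.ndarray]:
    """Merge two models sharing the same base architecture by picking each
    parameter from one parent or the other at random."""
    merged = []
    for layer_a, layer_b in zip(weights_a, weights_b):
        mask = np.random.rand(*layer_a.shape) < mix_prob
        merged.append(np.where(mask, layer_a, layer_b))
    return merged
```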
It is important to understand that FAs retain all the functionality of standard OINs, with the additional designation and marketability. They receive all benefits of standard OINs, while being incentivized to be highly generalizable base agents that will be cloned many times by other users. Upgrading an intelligent agent to a Foundation Agent is currently planned to be a reversible process.
Q: What types of games can be played by the OINs?
OINs can play any game that follows the standardized set of software interfaces provided in the OGs Game Toolkit. Developers can create smart contracts that adhere to these interfaces and make them available onchain for anyone to train agents against. Each game is a self-contained software binary that can be plugged into the ecosystem of OINs like a gaming cartridge. Games are tests of agents' abilities or skills in specific areas of expertise; they can model real-world problems, game-theoretic scenarios, specific optimizations, or simply games for entertainment purposes such as chess or StarCraft.
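Informally, a game "cartridge" might expose an interface along these lines; the real OGs Game Toolkit definitions are not reproduced here, so the names below are assumptions:

```python
# Sketch of a pluggable game "cartridge" interface; illustrative only.
from abc import ABC, abstractmethod
from typing import Any


class Game(ABC):
    """A self-contained game that agents can be trained and evaluated on."""

    @abstractmethod
    def reset(self) -> Any:
        """Start a new match and return the initial game state."""

    @abstractmethod
    def step(self, actions: dict[str, Any]) -> tuple[Any, dict[str, float], bool]:
        """Apply each agent's action; return the next state, per-agent
        rewards, and whether the match has ended."""
```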
The Game Toolkit will be provided as an open-source base for developing and comparing AI models and agents. It is based on modern best practices within the field and is always open for community contributions. The best games built for OINs will be selected to be a part of The Arena. This is another opportunity for software and AI engineers alike to participate in the growth of the network. More details will be announced at a later date.
For those who need more concrete examples, early agents will be capable of solving simple games such as Classic Control environments, Box2D, MuJoCo, Atari games, and other lower-complexity environments. Once the network has sufficiently solved these simpler games, the bar will be raised and more capable agents able to solve far more complex games will be sought after. This will enable the community of software developers and AI engineers participating in the OINs project to learn the models, tools, and research necessary to build much more complex systems.
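For a concrete feel of these simpler environments, the snippet below runs a random policy (a stand-in for an untrained Base Agent) through one episode of a Classic Control task using the open-source Gymnasium library; this is standard Gymnasium usage rather than OINs-specific code.

```python
# Standard Gymnasium usage: a random agent in a Classic Control environment.
import gymnasium as gym

env = gym.make("CartPole-v1")            # a Classic Control environment
observation, info = env.reset(seed=42)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()   # random stand-in for an untrained agent
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

env.close()
print(f"Episode reward: {total_reward}")
```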
As agents in the network become more capable, the potential use cases that are unlocked will be immense. For example, agents trained with large language models (LLMs) such as GPT-4 are already capable of completing skilled tasks on standard operating systems and operating humanoid robots using advanced techniques. These same models, trained on different data, are capable of writing human-level code without any additional agent-based training. It's only a matter of time before more complex agents are capable of completing any useful service that a highly educated, highly trained human can complete.