Changing the Narrative



Public discourse on AI agents revolves primarily around generative AI, such as large language models (LLMs) and image and video generation models. This conversation, however, often overlooks a crucial limitation of generative AI: these models are not designed to plan or to make effective decisions. To develop masterful AI agents and progress toward world-improving AGI, we need more advanced systems that can train AI to set goals, create robust plans to achieve those goals, and make proficient decisions while executing complex plans.
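To make the distinction concrete, the sketch below shows the plan-act loop that separates an agent from a one-shot generative model. It is a minimal, purely illustrative example: the Agent class, its one-dimensional goal, and the make_plan/act methods are hypothetical and are not part of Echo or any OIN interface.

```python
# Minimal sketch (illustrative only): a goal-conditioned agent that plans,
# then makes decisions while executing the plan. All names here are
# hypothetical, not an actual Echo/OIN API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: int                                  # target position on a 1-D line
    position: int = 0
    plan: list = field(default_factory=list)   # queue of pending actions

    def make_plan(self) -> None:
        # Planning: decompose the distance to the goal into unit steps.
        step = 1 if self.goal > self.position else -1
        self.plan = [step] * abs(self.goal - self.position)

    def act(self) -> None:
        # Decision-making during execution: follow the plan, and replan
        # if it has run out before the goal is reached.
        if not self.plan:
            self.make_plan()
        self.position += self.plan.pop(0)

agent = Agent(goal=5)
agent.make_plan()
while agent.position != agent.goal:
    agent.act()
print(f"Reached goal at position {agent.position}")  # Reached goal at position 5
```

A generative model maps one prompt to one output; an agent like this one keeps state, decomposes a goal into a plan, and keeps deciding until the goal is met. That loop, not text generation, is the capability this page is about.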

Within specialized AI circles, conversation and research are heavily focused on exactly these challenges, and the past two decades have produced a substantial body of work on them. The pace of research and development has now accelerated to the point where the applications have become genuinely useful for tackling highly complex problems. For example, AI agents have been used to enhance YouTube video compression, to compete at the highest level in e-sports (AlphaStar), and to control operating systems with human-level proficiency (OSWorld).

We are building upon decades of research (see the reading list at the end of this page).

AI-based agents have the potential to be a significant step forward in achieving massively accessible and highly useful artificial intelligence. Even OpenAI is focusing on AI agents. The recent surge of interest around OpenAI's upcoming Q-Star algorithm centers on this exact topic: AI agents that can plan, set goals, and make great decisions.

Right now, however, training world-class AI agents is out of reach for the average individual. As in the cloud-computing revolution, individuals and smaller organizations are expected to wait for trillion-dollar companies to deliver usable AI products, and just as with cloud computing, the prices charged for those products will likely be many times the cost of providing the underlying technology.

Not this time.

If that were the world's only option, everyone would be forced to pay far more for AI products than in a world where they had access to the same tools and systems that large organizations use to train and monetize these models. This monopolistic situation mirrors the cloud-computing revolution, in which software giants accumulated hardware and then sold access back to the world at greatly inflated prices.

This does not need to be our fate.

OINs are an opportunity for everyone to participate in this world-changing revolution from the ground up. This is a grassroots, decentralized movement to put the strongest AI agents into the hands of individuals everywhere, not behind the paywalls and gatekeeping of software giants and bureaucratic black holes. It is a push for a better world, with market-based incentives for providing access to this technology to anyone, at the lowest possible cost. The opportunity is ours to grasp.

Reading List
YouTube Video Compression with MuZero (2022)
AlphaStar: Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning (2019)
Self-Improving Reactive Agents Based On Reinforcement Learning, Planning and Teaching (1992)
Evolving Neural Networks through Augmenting Topologies (2002)
Playing Atari with Deep Reinforcement Learning (2013)
Sequence to Sequence Learning with Neural Networks (2014)
Universal Value Function Approximators (2015)
AlphaGo: Mastering the Game of Go with Deep Neural Networks and Tree Search (2016)
Policy Distillation (2016)
Attention is All You Need (2017)
Mutual Alignment Transfer Learning (2017)
Evolution Strategies as a Scalable Alternative to Reinforcement Learning (2017)
Diversity is All You Need: Learning Skills without a Reward Function (2018)
Variational Option Discovery Algorithms (2018)
OpenAI Five: Dota 2 with Large Scale Deep Reinforcement Learning (2018)
Sim-to-Real Transfer of Robotic Control with Dynamics Randomization (2018)
GPT-1: Improving language understanding with unsupervised learning (2018)
Emergent Complexity via Multi-Agent Competition (2018)
AlphaZero: Shedding new light on chess, shogi, and Go (2018)
Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning (2019)
Pluribus: Superhuman AI for multiplayer poker (2019)
Emergent Tool Use From Multi-Agent Autocurricula (2020)
SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference (2020)
GPT-3: Language Models are Few-Shot Learners (2020)
Generative Pretraining From Pixels (2020)
MuZero: Mastering Go, chess, shogi and Atari without rules (2020)
AI for Full-Self Driving at Tesla (2020)
Rapid Locomotion through Learned Reuse (2021)
Open-Ended Learning Leads to Generally Capable Agents (2021)
VPT: Learning to play Minecraft with Video PreTraining (2022)
GPT-4: Technical Report (2023)
Let's Verify Step by Step (2023)
Toolformer: Language Models Can Teach Themselves to Use Tools (2023)
Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023)
Mastering Diverse Domains through World Models (2023)
Voyager: An Open-Ended Embodied Agent with Large Language Models (2023)
More Agents is All You Need (2024)
OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments (2024)