AI and Web3 Convergence: New Opportunities from Computing Power Sharing to Data Incentives

AI+Web3: Towers and Squares

TL;DR

  1. Web3 projects built on AI concepts have become magnets for capital in both the primary and secondary markets.

  2. Web3's opportunities in the AI industry lie in using distributed incentives to coordinate potential long-tail supply across data, storage, and computation, and, at the same time, in building open-source models and a decentralized marketplace for AI Agents.

  3. In the Web3 industry, AI is mainly applied to on-chain finance (crypto payments, trading, data analysis) and to assisting development.

  4. The utility of AI+Web3 is reflected in the complementarity of the two: Web3 is expected to counteract AI centralization, while AI is expected to help Web3 break out of its boundaries.


Introduction

In the past two years, AI's development has accelerated as if someone pressed fast-forward. The butterfly effect set off by ChatGPT has not only opened up a new world of generative artificial intelligence but has also stirred a tidal wave in the Web3 domain.

With the backing of AI concepts, financing in a slowing crypto market has clearly picked up. Media statistics show that in the first half of 2024 alone, 64 Web3+AI projects completed financing, with the AI-based operating system Zyber365 raising the largest round: $100 million in its Series A.

The secondary market is even more buoyant. Data from the crypto aggregator Coingecko shows that in just over a year the AI sector's total market value reached $48.5 billion, with 24-hour trading volume close to $8.6 billion. The tailwind from mainstream AI breakthroughs is obvious: after OpenAI released its Sora text-to-video model, the average price of the AI sector rose 151%. The AI effect has also spread to one of crypto's money-magnet corners, the Memecoin: GOAT, the first AI-Agent-concept MemeCoin, quickly gained popularity and reached a $1.4 billion valuation, sparking the AI Meme craze.

Research and discussion around AI+Web3 are just as hot. From AI+DePIN to AI Memecoins, and now to AI Agents and AI DAOs, FOMO sentiment can barely keep pace with the speed of narrative rotation.

AI+Web3: a pairing brimming with hot money, opportunity, and future fantasy, and inevitably viewed as a marriage arranged by capital. It seems hard to tell whether beneath this splendid robe lies a speculators' playground or the eve of a genuine breakout.

To answer that question, the key consideration for both sides is whether each is better off with the other: can each benefit from the other's paradigm? In this article, we too attempt to examine this pattern from the shoulders of those who came before: how Web3 can play a role at each layer of the AI technology stack, and what new vitality AI can bring to Web3.

Part 1: What opportunities does Web3 have in the AI stack?

Before diving into this topic, we need to understand the technology stack of AI large models:


To put the entire process in more straightforward terms: "Large models" are like the human brain. In the early stages, this brain belongs to a newborn baby who has just come into the world and needs to observe and absorb massive amounts of information from the surroundings to understand this world. This is the "data collection" stage. Since computers do not possess human senses like vision and hearing, before training, the vast amounts of unlabelled external information need to be transformed into a format that computers can understand and use through "preprocessing."

After inputting the data, the AI constructs a model with understanding and prediction capabilities through "training", which can be seen as the process of an infant gradually understanding and learning about the outside world. The parameters of the model are like the language abilities that the infant continuously adjusts during the learning process. When the learning content begins to become specialized, or when feedback is received from communication with others and corrections are made, it enters the "fine-tuning" stage of the large model.

As children grow up and learn to speak, they can understand meanings and express their feelings and thoughts in new conversations. This stage resembles the "inference" of large AI models, which can predict and analyze new language and text inputs. Infants express feelings, describe objects, and solve problems through language, much as large AI models, once trained and deployed, apply inference to specific tasks such as image classification and speech recognition.

The AI Agent is closer to the next form of the large model: capable of independently executing tasks and pursuing complex goals, possessing not only the ability to think but also to remember, plan, and interact with the world using tools (a minimal loop sketch follows).
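To make that loop concrete, below is a minimal, hypothetical agent sketch in Python. The `llm` stand-in and the single `search` tool are placeholders invented for illustration, not any specific framework's API:

```python
# A minimal, hypothetical agent loop: reason, act via a tool, store the
# observation in memory, and stop when the model decides the goal is met.

def llm(prompt: str) -> str:
    """Stand-in for a large-model call; returns an action as 'tool:argument'."""
    if "->" in prompt:                     # memory already holds an observation
        return "finish:done"
    return "search:latest ETH gas fees"

TOOLS = {"search": lambda q: f"results for '{q}'"}  # hypothetical tool set

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []                 # the agent's running memory
    for _ in range(max_steps):
        plan = llm(f"Goal: {goal}\nMemory: {memory}\nNext action?")
        tool, _, arg = plan.partition(":")
        if tool == "finish":               # the model decides it is done
            break
        observation = TOOLS[tool](arg)     # interact with the world via a tool
        memory.append(f"{plan} -> {observation}")
    return memory

print(run_agent("check current gas fees"))
```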

Currently, in response to AI's pain points at each layer of the stack, Web3 has preliminarily formed a multi-layered, interconnected ecosystem covering every stage of the AI model pipeline.

1. Basic Layer: The Airbnb of Computing Power and Data

Computing Power

Currently, one of AI's largest costs is the computing power and energy required for model training and inference.

One example: Meta's Llama 3 requires 16,000 NVIDIA H100 GPUs (a top-tier graphics processor designed specifically for AI and high-performance computing workloads) and takes 30 days to train. The 80GB version is priced at $30,000 to $40,000 per unit, implying a computing-hardware investment (GPUs plus network chips) of $400 million to $700 million; meanwhile, training consumes 1.6 billion kilowatt-hours per month, with energy expenditures of nearly $20 million per month.
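As a sanity check, the arithmetic behind those figures can be reproduced in a few lines. The unit prices and energy consumption come from the text; the electricity rate is an assumption chosen only to reconcile the stated monthly energy bill:

```python
# Back-of-the-envelope check of the Llama 3 cost figures cited above.
NUM_GPUS = 16_000                      # H100 GPUs cited for training
UNIT_PRICE = (30_000, 40_000)          # USD per 80GB H100 (cited range)
MONTHLY_KWH = 1.6e9                    # kWh consumed per month (cited)
ASSUMED_USD_PER_KWH = 0.0125           # assumption, not from the text

low, high = (NUM_GPUS * p for p in UNIT_PRICE)
energy = MONTHLY_KWH * ASSUMED_USD_PER_KWH

print(f"GPU hardware alone: ${low/1e6:.0f}M to ${high/1e6:.0f}M")  # $480M to $640M
print(f"Monthly energy bill: ${energy/1e6:.0f}M")                  # ~$20M
```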

Relieving this computing burden is precisely one of the earliest intersections of Web3 and AI: DePIN (decentralized physical infrastructure networks). The DePIN Ninja data site already lists over 1,400 projects, with representative GPU-computing-sharing projects including io.net, Aethir, Akash, Render Network, and more.

The main logic: the platform lets individuals or entities with idle GPU resources contribute computing power permissionlessly in a decentralized way, creating an online marketplace of buyers and sellers in the vein of Uber or Airbnb. This raises the utilization of underused GPUs, and end users obtain more cost-effective computing as a result; meanwhile, a staking mechanism ensures that resource providers face penalties if they violate quality-control rules or suffer network interruptions (see the sketch after this paragraph).
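As an illustration, here is a minimal, hypothetical sketch of that staking-and-penalty logic in Python. The uptime threshold and slash fraction are invented for the example and do not describe the actual contracts of io.net, Aethir, Akash, or Render Network:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    address: str
    stake: float      # tokens locked as collateral
    uptime: float     # fraction of the billing period the node was reachable

MIN_UPTIME = 0.95     # assumed quality-of-service threshold
SLASH_FRACTION = 0.10 # assumed share of stake forfeited on a violation

def settle_period(provider: Provider, earned_fees: float) -> float:
    """Pay out fees if QoS was met; otherwise slash part of the stake."""
    if provider.uptime >= MIN_UPTIME:
        return earned_fees                 # full payout, stake untouched
    penalty = provider.stake * SLASH_FRACTION
    provider.stake -= penalty              # slashed stake can fund user refunds
    return max(0.0, earned_fees - penalty)

node = Provider("0xabc...", stake=1_000.0, uptime=0.90)
print(settle_period(node, earned_fees=50.0))  # 0.0: the penalty exceeds the fees
print(node.stake)                             # 900.0 after slashing
```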

Its characteristics are:

  • Gathering idle GPU resources: the supply comes mainly from surplus capacity at third-party independent small and medium-sized data centers and cryptocurrency mining farms, as well as mining hardware from networks such as Filecoin and from Ethereum GPU miners left idle by its switch to PoS consensus. There are also projects aiming at lower-barrier devices, such as exolab, which uses local devices like MacBooks, iPhones, and iPads to build a computing network for running large-model inference.

  • Serving the long-tail market of AI computing power:

a. "From a technical perspective," decentralized computing power markets are more suitable for inference steps. Training relies heavily on the data processing capabilities brought by ultra-large cluster scale GPUs, while inference requires relatively lower GPU computing performance, such as Aethir focusing on low-latency rendering work and AI inference applications.

b. "From the demand side perspective," small to medium computing power demanders will not train their own large models individually, but will instead choose to optimize and fine-tune around a few leading large models, and these scenarios are naturally suitable for distributed idle computing resources.

  • Decentralized ownership: the technological significance of blockchain is that resource owners always retain control of their resources, can adjust flexibly as demand changes, and earn income at the same time.

Data

Data is the foundation of AI. Without data, computation is as useless as duckweed without roots, and the relationship between data and models echoes the saying "garbage in, garbage out": the quantity and quality of the input determine the quality of the model's final output. For today's AI model training, data determines the model's language ability, comprehension, and even its values and human-like behavior. Currently, AI's data-demand dilemma centers on four aspects:

  • Data hunger: AI model training depends on massive data inputs. Public information shows that OpenAI trained GPT-4 with a parameter count on the trillion scale.

  • Data Quality: With the integration of AI in various industries, the timeliness of data, the diversity of data, the professionalism of vertical data, and the incorporation of emerging data sources such as social media sentiment have all raised new demands for its quality.

  • Privacy and compliance issues: Currently, various countries and enterprises are gradually recognizing the importance of high-quality datasets and are imposing restrictions on dataset crawling.

  • High data-processing costs: large data volumes and complex processing pipelines. Public information shows that over 30% of AI companies' R&D costs go to basic data collection and processing.

Currently, Web3's solutions are reflected in the following four areas:

  1. Data Collection: The supply of free, scrapeable real-world data is rapidly drying up, and AI companies' spending on data rises year by year, yet that spending does not trickle down to the actual contributors of the data; platforms alone capture the value the data creates. For instance, one platform earned a total of $203 million through data-licensing agreements with AI companies.

Web3's vision is to let users who actually contribute also share in the value their data creates, and to obtain more private and more valuable data from users at low cost through distributed networks and incentive mechanisms.

  • Grass is a decentralized data layer and network, where users can run Grass nodes to contribute idle bandwidth and relay traffic to capture real-time data from the entire internet and receive token rewards;

  • Vana introduces the concept of a data liquidity pool (DLP): users upload their private data (shopping records, browsing habits, social media activity, and so on) to a specific DLP and flexibly choose whether to authorize specific third parties to use it;

  • On PublicAI, users can tag posts with #Web3 on social platforms and mention @PublicAI to contribute to data collection.

  2. Data Preprocessing: Collected data is often noisy and error-ridden, and must be cleaned and converted into a usable format before model training. This involves repetitive tasks such as standardization, filtering, and handling missing values (a minimal sketch follows this list). The stage is one of the few manual links in the AI pipeline and has given rise to the data-labeling profession; as quality requirements rise, so does the bar for labelers, and the task is naturally suited to Web3's decentralized incentive mechanisms.
  • Currently, Grass and OpenLayer are both considering adding data annotation as a key step.

  • Synesis proposed the concept of "Train2earn", emphasizing data quality, where users can earn rewards by providing labeled data, annotations, or other forms of input.

  • The data labeling project Sapien gamifies the labeling tasks and allows users to stake points to earn more points.
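As a minimal sketch of the cleaning steps named in item 2 above (deduplication, filtering, missing values, standardization), the following uses pandas with hypothetical column names:

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                         # remove repeated records
    df = df[df["text"].str.len() > 0].copy()          # filter out empty samples
    df["label"] = df["label"].fillna("unlabeled")     # handle missing labels
    # standardize a numeric feature to zero mean and unit variance
    df["score"] = (df["score"] - df["score"].mean()) / df["score"].std()
    return df

raw = pd.DataFrame({
    "text":  ["good data", "good data", "", "more data"],
    "label": ["pos", "pos", None, None],
    "score": [1.0, 1.0, 3.0, 5.0],
})
print(preprocess(raw))
```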

  3. Data Privacy and Security: It is important to distinguish these two concepts. Data privacy concerns the handling of sensitive data, while data security protects data from unauthorized access, destruction, and theft. The advantages of Web3 privacy technologies, and their potential applications, show up in two areas: (1) training on sensitive data; (2) data collaboration: multiple data owners can jointly participate in AI training without sharing their raw data (a minimal sketch follows the list below).

The currently common privacy technologies in Web3 include:

  • Trusted Execution Environment (TEE), such as Super Protocol;

  • Fully Homomorphic Encryption (FHE), for example BasedAI, Fhenix.io, or Inco Network;

  • Zero-knowledge technology (zk): for example, Reclaim Protocol uses zkTLS to generate zero-knowledge proofs of HTTPS traffic, allowing users to securely import activity, reputation, and identity data from external websites without exposing sensitive information.
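To illustrate the data-collaboration idea from item 3, here is a minimal federated-learning-style sketch in which each owner computes a local model update on private data and shares only the update, never the raw data. It is a generic illustration of the technique, not any specific Web3 protocol:

```python
import numpy as np

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on one owner's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w, private_datasets):
    """Average all owners' locally computed updates (FedAvg-style)."""
    return np.mean([local_update(w, X, y) for X, y in private_datasets], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
owners = []                      # two owners whose data never leaves their machines
for _ in range(2):
    X = rng.normal(size=(50, 2))
    owners.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, owners)
print(w)   # converges near [2.0, -1.0] without pooling any raw data
```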

However, the field is still early and most projects are still exploring; one current dilemma is that computing costs are far too high. For example:

  • The zkML framework EZKL takes about 80 minutes to generate a proof for a 1M-parameter nanoGPT model.

  • According to data from Modulus Labs, the overhead of zkML is more than 1000 times higher than pure computation.

  4. Data Storage: Once data is available, it needs somewhere to be stored on-chain, along with the LLMs generated from it. With data availability (DA) as the core issue, Ethereum's throughput before its Danksharding upgrade was roughly 0.08 MB per second, while training AI models and serving real-time inference typically require 50 to 100 GB of data throughput per second. A gap of this magnitude leaves existing chains unable to cope (quantified in the sketch below).
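Taking the cited figures at face value, the size of that gap is easy to quantify:

```python
# Throughput gap using the figures cited above:
# ~0.08 MB/s of on-chain data availability vs. 50-100 GB/s needed by AI.

ETH_DA_MB_PER_S = 0.08           # pre-Danksharding throughput (cited)
AI_NEED_GB_PER_S = (50, 100)     # training / real-time inference demand (cited)

for need in AI_NEED_GB_PER_S:
    ratio = need * 1024 / ETH_DA_MB_PER_S   # GB/s -> MB/s, then take the ratio
    print(f"{need} GB/s is about {ratio:,.0f}x the chain's DA throughput")
# roughly a 640,000x to 1,280,000x shortfall
```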