AI + Web3: The Tower and the Square

Intermediate · 5/13/2025, 12:34:14 PM
The article explores Web3's opportunities across the AI technology stack, including computing-power sharing, data privacy protection, and model training and inference, and examines how AI empowers Web3's finance, infrastructure, and new narratives. From decentralized computing-power networks to the cold start of AI Agents, and from on-chain transaction security to generative NFTs, the integration of AI and Web3 is opening an era full of innovation and opportunity.

TL;DR:

  • AI-themed Web3 projects have become attractive investment targets in both the primary and secondary markets.
  • Web3's opportunities in the AI industry lie in using distributed incentives to coordinate potential long-tail supply across data, storage, and computation, and in establishing an open-source model market and a decentralized market for AI Agents.
  • AI plays a key role in the Web3 industry mainly in on-chain finance (cryptocurrency payments, trading, data analysis) and development assistance.
  • The utility of AI+Web3 lies in their complementarity: Web3 is expected to counter AI's centralization, and AI is expected to help Web3 break out of its niche.

Introduction

Over the past two years, AI development has accelerated. The butterfly effect set off by ChatGPT has not only opened a new world of generative artificial intelligence but also stirred a current in Web3 on the far shore.

Buoyed by the AI concept, fundraising in the cryptocurrency market has been visibly lifted against an otherwise slowing backdrop. Media statistics show that in the first half of 2024 alone, 64 Web3+AI projects completed financing rounds, with the AI-based operating system Zyber365 raising the period's largest round, $100 million in its Series A.

The secondary market is even more buoyant. Data from the crypto aggregator CoinGecko shows that in just over a year, the AI track's total market value reached $48.5 billion, with 24-hour trading volume approaching $8.6 billion. Progress in mainstream AI technology delivers visible tailwinds: after OpenAI's Sora text-to-video model was released, the average price of the AI sector rose 151%. The AI effect also radiated to Meme, one of crypto's money-magnet sectors: GOAT, the first MemeCoin built on the AI Agent concept, shot to popularity and a $1.4 billion valuation, kicking off the AI Meme craze.

Research and discussion around AI+Web3 are equally heated. From AI+DePIN to AI Memecoins and on to today's AI Agents and AI DAOs, FOMO can hardly keep pace with the rotation of new narratives.

AI+Web3, a pairing brimming with hot money, hype, and future fantasy, is inevitably viewed as a marriage arranged by capital. It is hard to tell whether, beneath this splendid robe, we stand on the speculators' home turf or at the eve of a dawn.

To answer that question, a key consideration for both sides is whether each makes the other better: can each benefit from the other's model? In this article, standing on the shoulders of predecessors, we try to examine exactly that: how can Web3 play a role at each layer of the AI technology stack, and what new vitality can AI bring to Web3?

What opportunities does Web3 have under the AI stack?

Before we delve into this topic, we need to understand the technical stack of AI large models:


Image Source: Delphi Digital

In simpler terms, a "large model" is like the human brain. In its early stages, this brain is a newborn baby that must observe and take in vast amounts of external information to understand the world. This is the "collection" stage of data; since computers lack the many human senses, before training, the large volume of unannotated external information must be "preprocessed" into a format computers can understand and use.

After inputting data, AI constructs a model that has the ability to understand and predict through ‘training’, which can be seen as the process of a baby gradually understanding and learning about the external world. The parameters of the model are like the language ability that a baby continuously adjusts during the learning process. When the learning content starts to specialize, or when it receives feedback from interacting with people and makes corrections, it enters the ‘fine-tuning’ stage of large models.

As children grow up and learn to speak, they can understand meanings and express their feelings and thoughts in new conversations, which is similar to the ‘inference’ of AI large models. The model can predict and analyze new language and text inputs. Babies express their feelings, describe objects, and solve various problems through language abilities, which is also similar to the application of AI large models in various specific tasks during the inference stage after completing training, such as image classification, speech recognition, etc.

The AI Agent, meanwhile, is closer to the next form of the large model: able to independently execute tasks and pursue complex goals, possessing not only the ability to think but also the ability to remember, plan, and use tools to interact with the world.

Currently, in response to AI's pain points at each layer of the stack, Web3 has begun to form a multi-layered, interconnected ecosystem covering every stage of the AI model pipeline.

First, Base Layer: Airbnb of computing power and data

Computing power

Currently, one of AI's highest costs is the computing power and energy required for model training and inference.

One example: Meta's Llama 3 requires 16,000 NVIDIA H100 GPUs (a top graphics processing unit designed for artificial intelligence and high-performance computing workloads) to complete training in 30 days. The 80GB version is priced between $30,000 and $40,000, implying a hardware investment of roughly $400-700 million (GPUs plus network chips), while monthly training consumes about 1.6 billion kilowatt-hours, with energy expenditure of nearly $20 million per month.
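As a sanity check on these figures, a few lines of arithmetic reproduce the orders of magnitude. The GPU count and unit prices are the article's own; the electricity rate is only what the cited numbers imply, not a quoted tariff:

```python
# Back-of-the-envelope check of the Llama 3 training-cost figures above.
gpus = 16_000                           # H100 GPUs used for training
price_low, price_high = 30_000, 40_000  # USD per 80GB H100

low, high = gpus * price_low, gpus * price_high
print(f"GPU hardware alone: ${low/1e6:.0f}M - ${high/1e6:.0f}M")
# -> $480M - $640M, the bulk of the cited $400-700M hardware figure

monthly_kwh = 1.6e9                     # cited monthly energy consumption
implied_rate = 20e6 / monthly_kwh       # USD/kWh implied by ~$20M per month
print(f"Implied electricity rate: ${implied_rate:.4f}/kWh")
# -> $0.0125/kWh, a plausible industrial rate
```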

Relieving the pressure on AI computing power is also the earliest field where Web3 intersected with AI: DePIN (decentralized physical infrastructure networks). The DePIN Ninja data site already lists more than 1,400 projects, with io.net, Aethir, Akash, and Render Network among the representative GPU computing-power-sharing projects.

The main logic is this: the platform allows individuals or entities with idle GPU resources to contribute computing power permissionlessly and in a decentralized way. Through an online marketplace for buyers and sellers akin to Uber or Airbnb, it raises the utilization of underused GPU resources, and end users obtain more cost-effective and efficient computing. Meanwhile, a staking mechanism ensures that resource providers face penalties if they violate quality-control rules or drop offline (a toy sketch of this mechanism follows the feature list below).

Its features are:

  • Pooling idle GPU resources: the suppliers are mainly third-party independent small and medium data centers, surplus computing power from operators such as crypto mining farms, and mining hardware from PoS-based networks such as FileCoin and ETH miners. There are also projects working to lower the entry barrier further, such as exolab, which uses local devices like the MacBook, iPhone, and iPad to build a computing network for running large-model inference.
  • Serving the long-tail market of AI computing power: a. On the technical side, a decentralized computing market is better suited to inference. Training depends on the data-processing capacity of super-large GPU clusters, while inference makes relatively modest demands on GPU performance; Aethir, for example, focuses on low-latency rendering and AI inference applications. b. On the demand side, small and medium buyers of computing power will not train their own large models from scratch; they merely optimize and fine-tune around a handful of leading models, and such scenarios naturally suit distributed idle computing resources.
  • Decentralized ownership: the technological significance of blockchain is that resource owners always retain control over their resources, can adjust flexibly to demand, and earn income at the same time.
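As referenced above, here is a toy sketch of the marketplace-plus-staking logic in Python. The class names, minimum stake, and slash rate are illustrative assumptions, not the parameters of any specific protocol:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_hour: float   # ask price for one GPU-hour
    stake: float            # collateral at risk
    online: bool = True

class ComputeMarket:
    SLASH_RATE = 0.10       # fraction of stake burned per violation (assumed)

    def __init__(self):
        self.providers: list[Provider] = []

    def register(self, p: Provider, min_stake: float = 100.0):
        # Permissionless entry, gated only by collateral.
        if p.stake < min_stake:
            raise ValueError("stake below minimum")
        self.providers.append(p)

    def rent(self, hours: float) -> tuple[Provider, float]:
        # Match like Uber/Airbnb: the cheapest online provider wins the job.
        online = [p for p in self.providers if p.online]
        best = min(online, key=lambda p: p.price_per_hour)
        return best, best.price_per_hour * hours

    def report_violation(self, p: Provider):
        # Quality-control breaches or dropped connections cost the provider stake.
        p.stake -= p.stake * self.SLASH_RATE

market = ComputeMarket()
market.register(Provider("idle-datacenter-gpu", price_per_hour=1.2, stake=500))
market.register(Provider("home-rig", price_per_hour=0.8, stake=150))
winner, cost = market.rent(hours=24)
print(winner.name, cost)    # home-rig 19.2
```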

Data

Data is the foundation of AI. Without data, computation is useless, and the relationship between data and models follows the proverb "garbage in, garbage out": the quantity and quality of data determine the output quality of the final model. For the training of current AI models, data determines the model's language ability, comprehension, and even its values and human-like behavior. At present, AI's data-demand dilemma centers on the following four aspects:

  • Data hunger: AI model training relies on massive data input. Public information shows that OpenAI trained GPT-4 with a parameter count on the trillion scale.
  • Data quality: as AI combines with various industries, new requirements have emerged for the timeliness, diversity, and domain expertise of industry-specific data, and for ingesting emerging data sources such as social-media sentiment.
  • Privacy and compliance: countries and enterprises are gradually recognizing the value of high-quality datasets and are imposing restrictions on data crawling.
  • High processing costs: large volumes and complex processing. Public information shows that over 30% of AI companies' R&D costs go to basic data collection and processing.

Currently, Web3's solutions are reflected in the following four aspects:

1. Data Collection: The real-world data freely available for scraping is rapidly being exhausted, and AI companies' spending on data rises year by year. Yet this spending does not flow back to the people who actually contribute the data; platforms alone enjoy the value it creates. Reddit, for example, has generated a total of $203 million in revenue through data-licensing agreements with AI companies.

The vision of Web3 is to let the users who actually contribute data share in the value it creates, and to obtain more personal and more valuable data from users cost-effectively through distributed networks and incentive mechanisms.

  • Grass is a decentralized data layer and network: users run Grass nodes to contribute idle bandwidth and relay traffic, capturing real-time data from across the Internet and earning token rewards;
  • Vana introduces the distinctive concept of the Data Liquidity Pool (DLP): users can upload private data (such as shopping records, browsing habits, and social-media activity) to a specific DLP and selectively choose whether to authorize its use by specific third parties;
  • In PublicAI, users can post on X using #AI or #Web3 as category tags and mention @PublicAI to contribute to data collection.

2. Data preprocessing: In AI data processing, the collected data is usually noisy and error-ridden, so it must be cleaned and converted into a usable format before model training, involving the repetitive tasks of standardization, filtering, and handling missing values. This stage is one of the few manual links in the AI industry and has spawned the profession of the data annotator, and as models raise their data-quality requirements, the bar for annotators rises with them. This task naturally suits Web3's decentralized incentive mechanisms (a minimal cleaning sketch follows the project list below).

  • Currently, Grass and OpenLayer are both considering adding data annotation as a key step.
  • Synesis proposed the concept of ‘Train2earn’, emphasizing data quality, where users can be rewarded by providing annotated data, comments, or other forms of input.
  • The data labeling project Sapien gamifies the labeling tasks and allows users to stake points to earn more points.
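To make the cleaning steps above concrete, here is a minimal, dependency-free sketch of filtering, imputing missing values, and standardizing a numeric field; the records and field names are invented for the example:

```python
import statistics

raw = [
    {"text": "gm, bullish on AI x Web3", "score": 0.9},
    {"text": "", "score": 0.4},                # empty text -> filtered out
    {"text": "airdrop soon?", "score": None},  # missing value -> imputed
    {"text": "zk proofs are slow today", "score": 0.1},
]

# 1. Filtering: drop records with empty text.
clean = [r for r in raw if r["text"]]

# 2. Missing values: impute with the mean of the observed scores.
observed = [r["score"] for r in clean if r["score"] is not None]
mean_score = statistics.mean(observed)
for r in clean:
    if r["score"] is None:
        r["score"] = mean_score

# 3. Standardization: rescale scores to zero mean and unit variance.
mu = statistics.mean(r["score"] for r in clean)
sigma = statistics.stdev(r["score"] for r in clean)
for r in clean:
    r["score"] = (r["score"] - mu) / sigma

print([round(r["score"], 2) for r in clean])  # [1.0, 0.0, -1.0]
```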

3. Data Privacy and Security: It is worth clarifying that data privacy and data security are different concepts. Data privacy concerns the handling of sensitive data, while data security protects data from unauthorized access, destruction, and theft. Accordingly, the advantages and potential applications of Web3 privacy technologies show up in two areas: (1) training on sensitive data; and (2) data collaboration, where multiple data owners jointly participate in AI training without sharing their raw data.

Common privacy technologies in Web3 currently include:

  • Trusted Execution Environment (TEE), such as Super Protocol;
  • Fully Homomorphic Encryption (FHE), such as BasedAI, Fhenix.io, or Inco Network;
  • Zero-knowledge technology (ZK): Reclaim Protocol, for example, uses zkTLS to generate zero-knowledge proofs of HTTPS traffic, allowing users to securely import activity, reputation, and identity data from external websites without exposing sensitive information.

However, the field is still in its early stages, with most projects still in exploration. Currently, one of the dilemmas is that the computing costs are too high, with some examples being:

  • The zkML framework EZKL takes about 80 minutes to generate a proof for a 1M-parameter nanoGPT model.
  • According to Modulus Labs’ data, zkML’s overhead is more than 1000 times higher than pure computation.

4. Data Storage: Once data has been obtained, a place is needed to store it on-chain, along with the LLMs produced from that data. With data availability (DA) as the core issue, before Ethereum's Danksharding upgrade its throughput was about 0.08 MB per second, while training and real-time inference of AI models typically demand 50 to 100 GB of data throughput per second. A gap of roughly six orders of magnitude leaves existing on-chain solutions helpless in the face of "resource-intensive AI applications".

  • 0g.AI is the representative project in this category: a decentralized storage solution designed for high-performance AI needs. Its key features are high performance and scalability, supporting fast upload and download of large-scale datasets through advanced sharding and erasure coding, with data transfer speeds approaching 5 GB per second.

Second, Middleware: Model Training and Inference

Decentralized market for open-source models

The debate over whether AI models should be open source or closed source has never ceased. The collective innovation that open source brings is an advantage no closed-source model can match; yet without a profit model, how can open-source models raise developer motivation? This is a direction worth pondering. Baidu founder Robin Li asserted in April this year that "open-source models will fall further and further behind."

Here Web3 proposes the possibility of a decentralized market for open-source models: tokenize the model itself, reserve a certain proportion of tokens for the team, and direct part of the model's future income to token holders (a schematic of this revenue flow follows the project list below).

  • The Bittensor protocol establishes a P2P market for open-source models made up of dozens of "subnets", where resource providers (of computation, data collection and storage, and machine-learning talent) compete to satisfy the goals of specific subnet owners, and subnets can interact and learn from one another, achieving greater intelligence. Rewards are allocated by community voting and further distributed across subnets according to competitive performance.
  • ORA introduces the concept of Initial Model Offering (IMO), tokenizing AI models for purchase, sale, and development on decentralized networks.
  • Sentient, a decentralized AGI platform, incentivizes people to collaborate, build, replicate, and extend AI models, rewarding contributors.
  • Spectral Nova focuses on the creation and application of AI and ML models.
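As a schematic of the tokenized-model idea above, the sketch below splits a model's future income pro-rata among token holders. The supply, allocations, and payout ratio are assumptions for illustration, not the design of Bittensor, ORA, or any other listed project:

```python
TOTAL_SUPPLY = 1_000_000
holders = {
    "team": 200_000,     # proportion reserved for the team
    "alice": 500_000,
    "bob": 300_000,
}

def distribute_revenue(revenue: float, payout_ratio: float = 0.5) -> dict:
    """Route `payout_ratio` of model income to token holders, pro-rata."""
    pool = revenue * payout_ratio
    return {h: pool * bal / TOTAL_SUPPLY for h, bal in holders.items()}

# A month in which the model earns $10,000 in inference fees:
print(distribute_revenue(10_000))
# {'team': 1000.0, 'alice': 2500.0, 'bob': 1500.0}
```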

Verifiable Inference

For the "black box" problem of the AI inference process, the standard Web3 solution is to have multiple validators repeat the same operation and compare the results. The obvious difficulty, given the current shortage of high-end NVIDIA chips, is that the high cost of AI inference makes such redundancy expensive.

A more promising solution is to produce ZK proofs of off-chain AI inference: a prover convinces a verifier that a given statement is true without revealing anything beyond the statement's truth, enabling permissionless on-chain verification of AI model computation. This requires proving cryptographically, on-chain, that the off-chain computation was completed correctly (for example, that the dataset has not been tampered with) while keeping all data confidential. A sketch of the commitment step behind such tamper-evidence appears below.
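A full ZK circuit is beyond a few lines, but the commitment ingredient that makes tamper-evidence possible can be shown. In this sketch (an illustration only, not a proof system), the prover publishes a Merkle root of the dataset; any later change to a record changes the root and is detected:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

dataset = [b"record-1", b"record-2", b"record-3", b"record-4"]
root = merkle_root(dataset)
print(root.hex())                   # the value committed on-chain

# Tampering with any single record changes the root, so the check fails:
tampered = [b"record-1", b"record-2", b"record-3", b"record-X"]
assert merkle_root(tampered) != root
```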

The main advantages of this ZK approach include:

  • Scalability: Zero-knowledge proofs can quickly confirm a large number of off-chain computations. Even as the number of transactions increases, a single zero-knowledge proof can verify all transactions.
  • Privacy protection: Detailed information about data and AI models is kept confidential, while all parties can verify that the data and models have not been tampered with.
  • Trustlessness: the computation can be confirmed without relying on any centralized party.
  • Web2 integration: Web2 is, by definition, off-chain, so verifiable inference can help bring its datasets and AI computation on-chain, which improves Web3 adoption.

Web3's current verifiable-inference technologies are as follows:

  • zkML: combines zero-knowledge proofs with machine learning to keep data and models private and confidential, allowing computation to be verified without revealing the underlying properties. Modulus Labs has released a zkML-based ZK prover for AI that checks whether AI providers actually execute algorithms correctly on-chain; its clients are currently mainly on-chain DApps.
  • opML: applies the optimistic-rollup principle, verifying computations only when a dispute arises, which improves the scalability and efficiency of ML verification. In this model only a small share of submitted results needs to be re-checked, with the economic penalty set high enough that cheating does not pay, saving redundant computation (a toy version of this dispute game follows the list).
  • TeeML: uses trusted execution environments to run ML computation securely, protecting data and models from tampering and unauthorized access.
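As flagged in the opML bullet, here is a toy version of the optimistic dispute game: results are accepted by default, a fraction are spot-checked, and a wrong submitter forfeits a bond. The bond size and check rate are illustrative assumptions:

```python
import random

BOND = 100.0             # posted by whoever submits a result (assumed)
SPOT_CHECK_RATE = 0.1    # fraction of results a verifier re-executes (assumed)

def true_compute(x: int) -> int:
    return x * x         # stand-in for the real ML computation

def settle(submissions: list[tuple[int, int]]) -> float:
    """Each submission is (input, claimed_output). Returns total bond slashed."""
    slashed = 0.0
    for x, claimed in submissions:
        if random.random() < SPOT_CHECK_RATE:    # a dispute is raised
            if true_compute(x) != claimed:       # fraud proven by re-execution
                slashed += BOND
    return slashed

random.seed(0)
honest = [(x, x * x) for x in range(100)]
cheater = [(x, x * x + 1) for x in range(100)]
print(settle(honest), settle(cheater))  # 0.0 vs. a positive slashed total
```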

Third, Application Layer: AI Agent

AI development has already shown a shift in focus from model capabilities to AI Agents. Technology companies such as OpenAI, the AI unicorn Anthropic, and Microsoft are turning to AI Agent development, attempting to break through the current technical plateau of LLMs.

OpenAI defines an AI Agent as a system with an LLM as its brain that can autonomously perceive and understand, plan, remember, and use tools, automatically completing complex tasks. When AI goes from being a tool that is used to being a subject that can use tools, it becomes an AI Agent. This is why AI Agents can become the most ideal intelligent assistant for humans.
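That definition reduces to a loop: perceive, plan, act, remember, with tools available at the act step. The sketch below is schematic; a real Agent would put an LLM behind `plan`, and the tools here are placeholder lambdas:

```python
class Agent:
    def __init__(self, tools: dict):
        self.tools = tools            # tools the agent may call
        self.memory: list[str] = []   # running record of what happened

    def perceive(self, observation: str):
        self.memory.append(f"saw: {observation}")

    def plan(self, goal: str) -> list[str]:
        # Stand-in planner: pick every tool whose name appears in the goal.
        return [name for name in self.tools if name in goal]

    def act(self, goal: str):
        for tool_name in self.plan(goal):
            result = self.tools[tool_name]()
            self.memory.append(f"used {tool_name}: {result}")

agent = Agent(tools={
    "check_price": lambda: "ETH=$3,000",
    "send_alert": lambda: "alert sent",
})
agent.perceive("user asked for a price alert")
agent.act("check_price then send_alert")
print(agent.memory)
```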

What can Web3 bring to Agent?

1. Decentralization
Web3's decentralization can make Agent systems more distributed and autonomous. Incentive and penalty mechanisms for stakers and delegators can promote the democratization of Agent systems; GaiaNet, Theoriq, and HajimeAI are all attempting this.

2. Cold Start
Developing and iterating an AI Agent often requires substantial funding, and Web3 can help promising AI Agent projects secure early-stage financing and a cold start.

  • Virtuals Protocol has launched fun.virtuals, an AI Agent creation and token-issuance platform where any user can deploy an AI Agent with one click and achieve a 100% fair distribution of its tokens.
  • Spectral has proposed a product concept supporting on-chain issuance of AI Agent assets: through an IAO (Initial Agent Offering), an AI Agent raises funds directly from investors, who in turn join DAO governance, gaining the chance to participate in the project's development and share in future profits.

How does AI empower Web3?

AI's impact on Web3 projects is obvious: it benefits blockchain technology by optimizing on-chain operations (such as smart-contract execution, liquidity optimization, and AI-driven governance decisions), while also providing better data-driven insights, strengthening on-chain security, and laying the foundation for new Web3-based applications.

First, AI and on-chain finance

AI and Cryptoeconomics

On August 31, Coinbase CEO Brian Armstrong announced the first crypto AI-to-AI transaction on the Base network, saying that AI Agents can now transact with humans, merchants, or other AIs on Base using USDC, with transactions that are instant, global, and free.

Beyond payments, Virtuals Protocol's Luna demonstrated for the first time how an AI Agent can autonomously execute on-chain transactions, drawing wide attention. AI Agents are positioned as intelligent entities able to perceive their environment, make decisions, and take action, and are therefore seen as the future of on-chain finance. Current potential scenarios for AI Agents include:

1. Information collection and prediction: helping investors gather exchange announcements, public project information, panic sentiment, and public-opinion risks; analyzing and evaluating asset fundamentals and market conditions in real time; and predicting trends and risks.

2. Asset Management: Provide users with suitable investment targets, optimize asset allocation, and automatically execute trades.

3. Financial experience: helping investors choose the fastest on-chain trading route, automating manual operations such as cross-chain transactions and gas-fee adjustment, and lowering the threshold and cost of on-chain financial activity.

Imagine the scenario: you instruct the AI Agent, "I have 1000 USDT; please find me the highest-yielding combination with a lock-up of no more than one week." The AI Agent replies: "I suggest an initial allocation of 50% in A, 20% in B, 20% in X, and 10% in Y. I will monitor the interest rates, watch for changes in their risk levels, and rebalance when necessary." Hunting for promising airdrop projects and for community signals around popular Memecoin projects are further actions a future AI Agent might take.
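The rebalancing behaviour in that reply fits in a few lines. The target weights follow the scenario above; the 5% drift band and pool names are assumptions, and the sketch is a mechanism illustration, not investment logic:

```python
TARGET = {"A": 0.50, "B": 0.20, "X": 0.20, "Y": 0.10}
BAND = 0.05   # rebalance only when a weight drifts more than 5 points (assumed)

def rebalance(holdings: dict[str, float]) -> dict[str, float]:
    total = sum(holdings.values())
    trades = {}
    for asset, weight in TARGET.items():
        drift = holdings[asset] / total - weight
        if abs(drift) > BAND:
            trades[asset] = -drift * total   # buy (+) or sell (-) back to target
    return trades

# 1000 USDT deployed at target, after which pool A rallies:
holdings = {"A": 650.0, "B": 180.0, "X": 190.0, "Y": 95.0}
print(rebalance(holdings))   # {'A': -92.5}: trim A; the rest stay within the band
```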


Image source: Biconomy

Currently, the AI Agent wallet Bitte and the AI interaction protocol Wayfinder are attempting exactly this. Both tap OpenAI's model API, letting users command agents to complete various on-chain operations from a chat window similar to ChatGPT. For example, the first prototype WayFinder released in April this year demonstrated four basic operations, swap, send, bridge, and stake, on the Base, Polygon, and Ethereum mainnets.

The decentralized Agent platform Morpheus also supports the development of such Agents: a Biconomy demonstration showed an AI Agent swapping ETH for USDC without requiring wallet permissions to be granted to it.

AI and on-chain transaction security

In the Web3 world, on-chain transaction security is crucial. AI technology can be used to enhance the security and privacy protection of on-chain transactions, with potential scenarios including:

Transaction monitoring: real-time data technology monitors abnormal trading activity, providing real-time alert infrastructure for users and platforms.

Risk analysis: helping platforms analyze customers' trading-behavior data and evaluate their risk levels.

For example, the Web3 security platform SeQure uses AI to detect and block malicious attacks, fraud, and data leaks, and offers real-time monitoring and alerting to keep on-chain transactions secure and stable. Similar security tools include the AI-powered Sentinel (a minimal outlier-flagging sketch follows).
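A minimal version of such transaction monitoring is a robust outlier rule; real systems use far richer features. The median/MAD cutoff below is a classic baseline, applied to made-up transfer amounts:

```python
import statistics

def flag_anomalies(amounts: list[float], cut: float = 3.5) -> list[float]:
    # Median and median-absolute-deviation resist distortion by the outlier itself.
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > cut * mad]

history = [120, 95, 130, 110, 105, 98, 125, 115, 9_000]  # one suspicious spike
print(flag_anomalies(history))  # [9000]
```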

Second, AI and on-chain infrastructure

AI and on-chain data

AI technology plays an important role in on-chain data collection and analysis, such as:

  • Web3 Analytics: an AI-based analytics platform that uses machine learning and data-mining algorithms to collect, process, and analyze on-chain data.
  • MinMax AI: provides AI-based on-chain data-analysis tools to help users discover potential market opportunities and trends.
  • Kaito: a Web3 search platform built on an LLM search engine.
  • Followin: integrated with ChatGPT, it collects and consolidates relevant information scattered across different websites and community platforms for presentation.
  • Another application scenario is the oracle: AI can pull prices from multiple sources to provide accurate pricing data. Upshot, for example, uses AI to appraise volatile NFT prices, producing over a hundred million assessments per hour with a 3-10% margin of error (a tiny aggregation sketch follows this list).
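For the oracle bullet above, a tiny sketch shows why median aggregation keeps one bad feed from moving the reported price; the feed values are invented:

```python
import statistics

feeds = {
    "dex_pool": 3010.5,
    "cex_api": 2998.0,
    "aggregator": 3005.2,
    "stale_node": 1500.0,   # faulty source the median ignores
}

price = statistics.median(feeds.values())
print(f"reported price: {price}")  # 3001.6, unmoved by the stale feed
```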

AI and Development & Auditing

Recently, the Web2 AI code editor Cursor has drawn much attention in the developer community. On its platform, a user only needs to describe what they want in natural language, and Cursor generates the corresponding HTML, CSS, and JavaScript, greatly simplifying software development. The same logic applies to raising the efficiency of Web3 development.

Currently, deploying smart contracts and DApps on public chains usually means working in dedicated development languages such as Solidity, Rust, or Move. The vision behind these new languages is to expand the design space of decentralized blockchains and make them better suited to DApp development, but given the severe shortage of Web3 developers, developer education has always been the thornier problem.

Imaginable scenarios for AI-assisted Web3 development include automatic code generation, smart-contract verification and testing, DApp deployment and maintenance, intelligent code completion, and AI dialogue that answers hard development questions. AI assistance not only improves development efficiency and accuracy but also lowers the programming threshold, letting non-programmers turn their ideas into working applications and bringing new vitality to decentralized technology.

Currently, the most eye-catching examples are one-click token-launch platforms such as Clanker, an AI-driven "Token Bot" designed for rapid DIY token deployment. Simply tag Clanker on a client of the SocialFi protocol Farcaster, such as Warpcast or Supercast, tell it your token idea, and it launches the token for you on the public chain Base.

There are also contract development platforms, such as Spectral, which provide one-click generation and deployment functions for smart contracts to lower the threshold of Web3 development, allowing even novice users to compile and deploy smart contracts.

In auditing, the Web3 audit platform Fuzzland uses AI to help auditors check code for vulnerabilities, offering natural-language explanations to assist audit professionals. Fuzzland also uses AI to provide natural-language explanations of formal specifications and contract code, along with sample code, to help developers understand potential issues.

Third, AI and the new narratives of Web3

The rise of generative AI brings new possibilities to the new narrative of Web3.

NFT: AI injects creativity into generative NFTs. AI technology can generate unique and varied artworks and characters, and these generative NFTs can become characters, props, or scene elements in games, virtual worlds, or metaverses. Binance's Bicasso, for example, lets users generate NFTs by uploading an image and entering keywords for AI computation. Similar projects include Solvo, Nicho, IgmnAI, and CharacterGPT.

GameFi: with AI's capabilities in natural-language generation, image generation, and intelligent NPCs, GameFi stands to gain efficiency and innovation in game-content production. For example, Binaryx's first chain game, AI Hero, lets players explore different plot branches through AI randomness; similarly, in the virtual companion game Sleepless AI, players unlock personalized gameplay through different interactions built on AIGC and LLMs.

DAO: AI is also envisioned for DAOs, helping track community interactions, record contributions, reward the most active contributors, proxy voting, and so on. For example, ai16z uses an AI Agent to gather market information on and off chain, analyze community consensus, and make investment decisions in combination with suggestions from DAO members.

The significance of AI+Web3 integration: Tower and Square

In the heart of Florence, Italy, lies the central square, the most important political gathering place for locals and tourists alike. Overlooking it stands the 95-meter town-hall tower, and the dramatic contrast between tower and square inspired Harvard historian Niall Ferguson to explore the world history of networks and hierarchies in "The Square and the Tower," tracing the ebb and flow of the two across time.

That metaphor is not out of place applied to the relationship between AI and Web3 today. Looking at the long, non-linear history connecting the two, squares are more likely than towers to produce new and creative things, yet towers retain their legitimacy and strong vitality.

With their ability to concentrate energy, computing power, and data, technology companies have unlocked unprecedented imagination with AI, and the major giants are placing heavy bets, rolling out iteration after iteration, from assorted chatbots to "underlying large models" such as GPT-4 and GPT-4o. The automatic programming bot Devin and Sora, with its preliminary ability to simulate the real physical world, have appeared in quick succession, amplifying AI's imagined potential without limit.

At the same time, AI is in essence an industry of scale and centralization, and this technological revolution will lift the technology companies that gained structural dominance in the Internet era to an even narrower summit. The enormous power, monopolistic cash flow, and vast datasets required to dominate the intelligent age raise ever higher barriers around them.

As the tower grows taller and the decision-makers behind it grow fewer, the centralization of AI brings many hidden dangers. How can the crowds gathered in the square escape the tower's shadow? This is the problem Web3 hopes to solve.

Essentially, the inherent properties of blockchain enhance artificial intelligence systems and bring new possibilities, mainly:

  • In the age of artificial intelligence, "code is law": smart contracts and cryptographic verification make system rules transparent and automatically executed, delivering rewards closer to their intended recipients.
  • Token economy: token mechanisms, staking, slashing, token rewards, and penalties, create and coordinate participants' behavior.
  • Decentralized governance: prompts us to question the sources of information and encourages a more critical and discerning approach to AI technology, preventing bias, misinformation, and manipulation, and ultimately nurturing a more informed and empowered society.

AI's development has also brought new vitality to Web3. Web3's impact on AI may need time to prove itself, but AI's impact on Web3 is immediate, evident both in the Meme frenzy and in AI Agents lowering the barrier to on-chain applications.

When Web3 is dismissed as the self-indulgence of a small circle, and trapped in doubts about merely replicating traditional industries, the addition of AI brings a foreseeable future: a more stable and scalable user base on the order of Web2's, and more innovative business models and services.

We live in a world where towers and squares coexist. Although AI and Web3 have different timelines and starting points, their ultimate goal is the same: how to make machines better serve humanity. No one can define a rushing river, and we look forward to seeing the future of AI+Web3.

Statement:

  1. This article is reproduced from [TechFlow], and the copyright belongs to the original author [Coinspire]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it promptly according to the relevant procedures.
  2. Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Other language versions of the article are translated by the Gate Learn team. Unless Gate.io is mentioned, copying, distributing, or plagiarizing the translated articles is not permitted.

AI + Web3: The Tower and the Square

Intermediate5/13/2025, 12:34:14 PM
The article delves into the opportunities of Web3 in AI technology stack, including computing power sharing, data privacy protection, model training and inference, as well as exploring how AI empowers the finance, infrastructure, and new narratives of Web3, from decentralized computing power networks to the cold start of AI Agents, from on-chain transaction security to generative NFTs, the integration of AI and Web3 is opening up a new era full of innovation and opportunities.

TL;DR:

  • AI concept Web3 projects have become attractive investment targets in the primary and secondary markets.
  • The opportunities for Web3 in the AI industry lie in: using distributed incentives to coordinate potential supply in the long tail - across data, storage, and computation; meanwhile, establishing an open-source model and a decentralized market for AI Agents.
  • AI plays a key role in the Web3 industry, mainly in on-chain finance (cryptocurrency payments, trading, data analysis) and development assistance.
  • The utility of AI+Web3 lies in the complementarity of the two: Web3 is expected to counter AI centralization, and AI is expected to help Web3 break free from confinement.

Introduction

In the past two years, the development of AI has been accelerated, like a butterfly effect instigated by Chatgpt, not only opening up a new world of generative artificial intelligence but also stirring up a trend in the distant Web3.

With the blessing of the AI concept, the financing of the cryptocurrency market has been significantly boosted compared to the slowdown. According to media statistics, in the first half of 2024 alone, a total of 64 Web3+AI projects completed financing, and the AI-based operating system Zyber365 achieved the highest financing amount of 100 million U.S. dollars in Series A round.

The secondary market is more prosperous, and data from the encrypted aggregation website Coingecko shows that in just over a year, the total market value of the AI track has reached $485 billion, with a 24-hour trading volume of nearly $86 billion; the obvious benefits brought by the mainstream AI technology progress, after OpenAI’s Sora text-to-video model was released, the average price of the AI sector rose by 151%; the AI effect also radiated to one of the cryptocurrency gold-absorbing sectors Meme: the first AI Agent concept MemeCoin - GOAT quickly became popular and achieved a valuation of $1.4 billion, successfully sparking the AI Meme craze.

The research and topics about AI+Web3 are equally hot. From AI+Depin to AI Memecoin and to the current AI Agent and AI DAO, the FOMO emotion has already fallen behind the speed of the new narrative rotation.

AI+Web3, this combination of terms full of hot money, trends, and future fantasies, is inevitably seen as a marriage arranged by capital. It seems difficult for us to distinguish whether it is the home ground of speculators or the eve of dawn under this gorgeous robe.

To answer this question, a key consideration for both parties is whether the other will become better? Can they benefit from each other’s patterns? In this article, we also try to examine this situation from the perspective of standing on the shoulders of predecessors: How can Web3 play a role in various aspects of the AI technology stack, and what new vitality can AI bring to Web3?

What opportunities does Web3 have under the AI stack?

Before we delve into this topic, we need to understand the technical stack of AI large models:


Image Source: Delphi Digital

In simpler terms, the “big model” is like the human brain. In the early stages, this brain is like a newborn baby that has just come into the world, needing to observe and take in vast amounts of external information to understand the world. This is the “collection” stage of data; since computers do not possess multiple senses like humans, before training, the large-scale unannotated external information needs to be “preprocessed” to be transformed into a format that computers can understand and use.

After inputting data, AI constructs a model that has the ability to understand and predict through ‘training’, which can be seen as the process of a baby gradually understanding and learning about the external world. The parameters of the model are like the language ability that a baby continuously adjusts during the learning process. When the learning content starts to specialize, or when it receives feedback from interacting with people and makes corrections, it enters the ‘fine-tuning’ stage of large models.

As children grow up and learn to speak, they can understand meanings and express their feelings and thoughts in new conversations, which is similar to the ‘inference’ of AI large models. The model can predict and analyze new language and text inputs. Babies express their feelings, describe objects, and solve various problems through language abilities, which is also similar to the application of AI large models in various specific tasks during the inference stage after completing training, such as image classification, speech recognition, etc.

While the AI Agent is closer to the next form of large models - being able to independently execute tasks and pursue complex goals, not only possessing thinking ability, but also being able to memorize, plan, and interact with the world using tools.

Currently, addressing the pain points of AI in various stacks, Web3 has initially formed a multi-layered, interconnected ecosystem, covering various stages of AI model processes.

First, Base Layer: Airbnb of computing power and data

Computing power

Currently, one of the highest costs of AI is the computing power and energy required for training models and inference models.

One example is that Meta’s LLAMA3 requires 16,000 H100GPUs produced by NVIDIA (a top graphics processing unit designed specifically for artificial intelligence and high-performance computing workloads) to complete training in 30 days. The latter’s 80GB version is priced between $30,000 and $40,000, requiring a hardware investment of $4-7 billion (GPU + network chips). Additionally, the monthly training consumes 16 billion kilowatt-hours, with energy expenditure of nearly $20 million per month.

For the decompression of AI computing power, it is also the earliest field where Web3 intersects with AI - DePin (decentralized physical infrastructure network). Currently, DePin Ninja data website has displayed more than 1400 projects, including GPU computing power sharing representative projects such as io.net, Aethir, Akash, Render Network, and so on.

The main logic is: The platform allows individuals or entities with idle GPU resources to contribute their computing power in a decentralized manner without permission, increasing the utilization of underutilized GPU resources through an online marketplace similar to Uber or Airbnb for buyers and sellers, enabling end users to obtain more cost-effective and efficient computing resources; at the same time, the staking mechanism also ensures that if there are violations of quality control mechanisms or network interruptions, resource providers will face corresponding penalties.

Its features are:

  • Pooling idle GPU resources: The suppliers are mainly third-party independent small and medium-sized data centers, surplus computing power resources from operators such as encrypted mines, and mining hardware with PoS consensus mechanisms, such as FileCoin and ETH miners. Currently, there are also projects dedicated to launching devices with lower entry barriers, such as exolab utilizing local devices like MacBook, iPhone, iPad to establish a computing power network for running large-scale model inferences.
  • Facing the long-tail market of AI computing power: a. “In terms of technology,” decentralized computing power market is more suitable for reasoning steps. Training relies more on the data processing capability brought by the super-large cluster-scale GPU, while reasoning is relatively low in GPU computing performance, such as Aethir focusing on low-latency rendering work and AI inference applications. b. “In terms of demand,” small and medium-sized computing power demanders will not individually train their own large models, but only choose to optimize and fine-tune around a few head models, and these scenarios are naturally suitable for distributed idle computing power resources.
  • Decentralized ownership: The technological significance of blockchain is that resource owners always retain control over their resources, adjust flexibly according to demand, and profit at the same time.

Data

Data is the foundation of AI. Without data, computation is useless, and the relationship between data and models is like the proverb ‘Garbage in, Garbage out’. The quantity and quality of data determine the output quality of the final model. For the training of current AI models, data determines the language ability, understanding ability, and even the values and humanized performance of the model. At present, the data demand dilemma of AI mainly focuses on the following four aspects:

  • Data hunger: AI model training relies heavily on large amounts of data input. Public information shows that the number of parameters for training GPT-4 by OpenAI has reached the trillion level.
  • Data quality: With the combination of AI and various industries, new requirements have been proposed for the timeliness, diversity, professionalism of industry-specific data, and the intake of emerging data sources such as social media sentiment.
  • Privacy and Compliance Issues: Currently, various countries and enterprises are gradually realizing the importance of high-quality data sets and are imposing restrictions on data crawling.
  • High data processing costs: large amounts of data, complex processing. Public information shows that over 30% of AI companies’ R&D costs are used for basic data collection and processing.

Currently, the solution of web3 is reflected in the following four aspects:

1. Data Collection: The freely available real-world data for scraping is rapidly depleting, and AI companies’ expenses for data have been increasing year by year. However, at the same time, this expenditure has not been passed back to the real contributors of the data; platforms have entirely enjoyed the value creation brought by the data, such as Reddit generating a total of $203 million in revenue through data licensing agreements with AI companies.

The vision of Web3 is to allow users who truly contribute to also participate in the value creation brought by data, and to obtain users’ more personal and valuable data in a cost-effective manner through distributed networks and incentive mechanisms.

  • As Grass is a decentralized data layer and network, users can capture real-time data from the entire Internet by running Grass nodes, contributing idle bandwidth and relay traffic, and receive token rewards;
  • Vana introduces a unique concept of Data Liquidity Pool (DLP), where users can upload their private data (such as shopping records, browsing habits, social media activities, etc.) to a specific DLP and selectively choose whether to authorize this data for specific third-party use;
  • In PublicAI, users can use #AI or #Web3 as classification tags on X@PublicAIData collection can be achieved.

2. Data preprocessing: In the data processing of AI, as the collected data is usually noisy and contains errors, it must be cleaned and converted into a usable format before training the model, involving the repetitive tasks of standardization, filtering, and handling missing values. This stage is one of the few manual processes in the AI industry, which has spawned the industry of data annotators. As the model’s requirements for data quality increase, the threshold for data annotators also rises. This task naturally lends itself to the decentralized incentive mechanism of Web3.

  • Currently, Grass and OpenLayer are both considering adding data annotation as a key step.
  • Synesis proposed the concept of ‘Train2earn’, emphasizing data quality, where users can be rewarded by providing annotated data, comments, or other forms of input.
  • The data labeling project Sapien gamifies the labeling tasks and allows users to stake points to earn more points.

3. Data Privacy and Security: It needs to be clarified that data privacy and security are two different concepts. Data privacy involves the handling of sensitive data, while data security protects data information from unauthorized access, destruction, and theft. As a result, the advantages and potential application scenarios of Web3 privacy technologies are reflected in two aspects: (1) training of sensitive data; (2) data collaboration: multiple data owners can participate in AI training together without sharing their original data.

Common privacy technologies in Web3 currently include:

  • Trusted Execution Environment (TEE), such as Super Protocol;
  • Fully Homomorphic Encryption (FHE), such as BasedAI, Fhenix.io, or Inco Network;
  • Zero-knowledge technology (zk), such as Reclaim Protocol using zkTLS technology, generates zero-knowledge proofs of HTTPS traffic, allowing users to securely import activity, reputation, and identity data from external websites without exposing sensitive information.

However, the field is still in its early stages, with most projects still in exploration. Currently, one of the dilemmas is that the computing costs are too high, with some examples being:

  • The zkML framework EZKL takes about 80 minutes to generate a proof of a 1M-nanoGPT model.
  • According to Modulus Labs’ data, zkML’s overhead is more than 1000 times higher than pure computation.

4. Data Storage: After obtaining the data, it is necessary to have a place to store the data on the chain and to use the LLM generated by the data. With data availability (DA) as the core issue, before the Ethereum Danksharding upgrade, its throughput was 0.08MB. At the same time, the training and real-time inference of AI models typically require a data throughput of 50 to 100GB per second. This order of magnitude difference makes existing on-chain solutions inadequate when facing ‘resource-intensive AI applications’.

  • 0g.AI is a representative project in this category. It is a centralized storage solution designed for high-performance AI requirements, with key features including high performance and scalability, supporting fast uploading and downloading of large-scale datasets through advanced sharding and erasure coding technologies, with data transfer speeds approaching 5GB per second.

Two, Middleware: Training and Inference of the Model

Open-source model decentralized market

The debate about whether AI models should be open source or closed source has never ceased. The collective innovation brought by open source is an advantage that closed source models cannot match. However, under the premise of no profit model, how can open source models enhance developer motivation? This is a direction worth pondering. Baidu’s founder, Robin Li, asserted in April of this year, “Open source models will fall behind more and more.”

In this regard, Web3 proposes the possibility of a decentralized open-source model market, that is, tokenizing the model itself, reserving a certain proportion of tokens for the team, and directing part of the future income of the model to token holders.

  • The Bittensor protocol establishes an open-source model of a P2P market, consisting of dozens of ‘subnets’, where resource providers (computing, data collection/storage, machine learning talent) compete with each other to meet the goals of specific subnet owners. The subnets can interact and learn from each other, thus achieving greater intelligence. Rewards are distributed by community voting and further allocated among the subnets based on competitive performance.
  • ORA introduces the concept of Initial Model Offering (IMO), tokenizing AI models for purchase, sale, and development on decentralized networks.
  • Sentient, a decentralized AGI platform, incentivizes people to collaborate, build, replicate, and extend AI models, rewarding contributors.
  • Spectral Nova focuses on the creation and application of AI and ML models.

Verifiable Inference

For the ‘black box’ dilemma in the reasoning process of AI, the standard Web3 solution is to have multiple validators repeat the same operation and compare the results. However, due to the current shortage of high-end ‘Nvidia chips’, the obvious challenge faced by this approach is the high cost of AI reasoning.

A more promising solution is to perform ZK proofs of off-chain AI inference calculations, where one prover can prove to another verifier that a given statement is true without revealing any additional information other than the statement being true, enabling permissionless verification of AI model computations on-chain. This requires proving on-chain in an encrypted manner that off-chain computations have been correctly completed (e.g., the dataset has not been tampered with), while ensuring all data remains confidential.

The main advantages include:

  • Scalability: Zero-knowledge proofs can quickly confirm a large number of off-chain computations. Even as the number of transactions increases, a single zero-knowledge proof can verify all transactions.
  • Privacy protection: Detailed information about data and AI models is kept confidential, while all parties can verify that the data and models have not been tampered with.
  • No need to trust: You can confirm the calculation without relying on centralized parties.
  • Web2 integration: By definition, Web2 is integrated off-chain, which means verifiable reasoning can help bring its data sets and AI computations onto the chain. This helps improve the adoption of Web3.

Currently, Web3’s verifiable technology for verifiable reasoning is as follows:

  • ZKML: Combining zero-knowledge proof with machine learning to ensure the privacy and confidentiality of data and models, allowing verifiable computation without revealing certain underlying properties. Modulus Labs has released a ZK prover based on ZKML for building AI, to effectively verify whether AI providers on the chain manipulate algorithms correctly executed, but currently the clients are mainly on-chain DApps.
  • opML: Using the optimistic aggregation principle, by verifying the time of dispute occurrence, improving the scalability and efficiency of ML calculations, in this model, only a small part of the results generated by the ‘validator’ needs to be verified, but the economic cost reduction is set high enough to increase the cost of cheating by validators and save redundant calculations.
  • TeeML: Use trusted execution environment to securely execute ML calculations, protecting data and models from tampering and unauthorized access.

Three, Application Layer: AI Agent

The current development of AI has already shown a shift in focus from model capabilities to the landscape of AI Agents. Technology companies such as OpenAI, the AI unicorn Anthropic, Microsoft, etc., are turning to the development of AI Agents, attempting to break through the current technical plateau of LLM.

OpenAI defines AI Agent as a system that is driven by LLM as its brain, has the ability to autonomously understand perception, plan, remember, and use tools, and can automatically complete complex tasks. When AI transitions from being a tool that is used to a subject that can use tools, it becomes an AI Agent. This is also the reason why AI Agents can become the most ideal intelligent assistant for humans.

What can Web3 bring to Agent?

1. Decentralization
The decentralization of Web3 can make the Agent system more decentralized and autonomous. Incentive and penalty mechanisms for stakers and delegates can promote the democratization of the Agent system, with GaiaNet, Theoriq, and HajimeAI all attempting to do so.

2, Cold Start
The development and iteration of AI Agent often require a large amount of financial support, and Web3 can help promising AI Agent projects obtain early-stage financing and cold start.

  • Virtual Protocol launches the AI Agent creation and token issuance platform fun.virtuals, where any user can deploy AI Agents with a single click and achieve 100% fair distribution of AI Agent tokens.
  • Spectral has proposed a product concept that supports the issuance of AI Agent assets on the chain: issuing tokens through IAO (Initial Agent Offering), AI Agents can directly obtain funds from investors, while becoming a member of DAO governance, providing investors with the opportunity to participate in project development and share future profits.

How does AI empower Web3?

The impact of AI on Web3 projects is obvious, as it benefits blockchain technology by optimizing on-chain operations (such as smart contract execution, liquidity optimization, and AI-driven governance decisions). At the same time, it can also provide better data-driven insights, enhance on-chain security, and lay the foundation for new Web3-based applications.

One, AI and on-chain finance

AI and Cryptoeconomics

On August 31, Coinbase CEO Brian Armstrong announced the first encrypted AI-to-AI transaction on the Base network, stating that AI Agents can now transact with humans, merchants, or other AIs on Base using USD, with transactions being instant, global, and free.

In addition to payments, Virtuals Protocol’s Luna also demonstrated for the first time how AI Agents autonomously execute on-chain transactions, attracting attention and positioning AI Agents as intelligent entities capable of perceiving the environment, making decisions, and taking actions, thus being seen as the future of on-chain finance. Currently, the potential scenarios for AI Agents are as follows:

1. Information collection and prediction: Help investors collect exchange announcements, project public information, panic emotions, public opinion risks, etc., analyze and evaluate asset fundamentals, market conditions in real time, and predict trends and risks.

2. Asset Management: Provide users with suitable investment targets, optimize asset allocation, and automatically execute trades.

3. Financial experience: Help investors choose the fastest on-chain trading method, automate manual operations such as cross-chain transactions and adjusting gas fees, reduce the threshold and cost of on-chain financial activities.

Imagine this scenario: you instruct the AI Agent as follows, “I have 1000USDT, please help me find the highest yielding combination with a lock-up period of no more than one week.” The AI Agent will provide the following advice: “I suggest an initial allocation of 50% in A, 20% in B, 20% in X, and 10% in Y. I will monitor the interest rates and observe changes in their risk levels, and rebalance when necessary.” Additionally, looking for potential airdrop projects and popular community signs of Memecoin projects are all possible future actions for the AI Agent.


Image source: Biconomy

Currently, AI Agent wallets Bitte and AI interaction protocol Wayfinder are making such attempts. They are all trying to access OpenAI’s model API, allowing users to command agents to complete various on-chain operations in a chat window interface similar to ChatGPT. For example, the first prototype released by WayFinder in April this year demonstrated four basic operations: swap, send, bridge, and stake on the mainnets of Base, Polygon, and Ethereum.

The decentralized Agent platform Morpheus also supports developing such agents: a demo by Biconomy showed a user authorizing an AI Agent to swap ETH for USDC without handing over full wallet permissions.
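
The common pattern behind these chat-window products can be sketched as follows: a model turns the user's message into a structured intent, which the wallet executes only within the permissions the user granted. The SwapIntent shape and the prompt are assumptions for illustration; the call itself uses the standard OpenAI chat completions API.

```typescript
// Chat-to-transaction sketch: an LLM extracts a structured swap intent from a
// user message; execution happens separately, within granted permissions.
import OpenAI from "openai";

type SwapIntent = {
  action: "swap";
  chain: string;
  sellToken: string;
  buyToken: string;
  amount: string;
};

async function parseIntent(message: string): Promise<SwapIntent> {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const resp = await client.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract a swap intent as JSON with keys: action, chain, sellToken, buyToken, amount.",
      },
      { role: "user", content: message },
    ],
  });
  return JSON.parse(resp.choices[0].message.content ?? "{}") as SwapIntent;
}

const intent = await parseIntent("Swap 0.5 ETH for USDC on Base");
console.log(intent); // e.g. { action: "swap", chain: "base", sellToken: "ETH", ... }
// A real agent would now route this intent to a DEX, but only with the
// session permissions the user explicitly granted.
```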

AI and on-chain transaction security

In the Web3 world, on-chain transaction security is crucial. AI technology can be used to enhance the security and privacy protection of on-chain transactions, with potential scenarios including:

  • Transaction monitoring: detect abnormal trading activity in real time and provide alert infrastructure for users and platforms.
  • Risk analysis: help platforms analyze customer trading behavior and assess their risk levels.

For example, the Web3 security platform SeQure uses AI to detect and block malicious attacks, fraud, and data leaks, and provides real-time monitoring and alerting to keep on-chain transactions secure and stable. Similar security tools include the AI-powered Sentinel.
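
As a concrete illustration of transaction monitoring, the sketch below flags transfers whose size deviates sharply from an address's history using a simple z-score. The threshold and data are assumptions for illustration, not SeQure's actual method.

```typescript
// Toy anomaly check: flag transfers far outside an address's historical range.
function zScore(history: number[], value: number): number {
  const mean = history.reduce((s, x) => s + x, 0) / history.length;
  const variance = history.reduce((s, x) => s + (x - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance) || 1; // avoid division by zero on flat histories
  return (value - mean) / std;
}

function isSuspicious(history: number[], newTransfer: number, threshold = 3): boolean {
  return Math.abs(zScore(history, newTransfer)) > threshold;
}

// A wallet that usually moves ~100 USDC suddenly sends 50,000 USDC:
console.log(isSuspicious([90, 110, 95, 105, 100], 50_000)); // true -> raise an alert
```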

Two, AI and on-chain infrastructure

AI and on-chain data

AI technology plays an important role in on-chain data collection and analysis, such as:

  • Web3 Analytics: an AI-based analytics platform that uses machine learning and data-mining algorithms to collect, process, and analyze on-chain data.
  • MinMax AI: provides AI-based on-chain data analysis tools that help users discover potential market opportunities and trends.
  • Kaito: a Web3 search platform built on an LLM-based search engine.
  • Following: integrated with ChatGPT, it collects and consolidates relevant information scattered across different websites and community platforms.
  • Oracles are another application scenario: AI can pull prices from multiple sources to provide accurate pricing data. For example, Upshot uses AI to appraise volatile NFT prices, achieving a 3–10% error rate across more than a hundred million evaluations per hour. A sketch of this kind of multi-source aggregation follows.
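
A hedged sketch of that aggregation step: take quotes from several feeds and report the median, which a single stale or manipulated source can barely move. The feed values are invented for illustration.

```typescript
// Multi-source price aggregation: the median resists a single bad feed.
function medianPrice(quotes: number[]): number {
  const sorted = [...quotes].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const quotes = [
  2412.1, // feed 1
  2410.8, // feed 2
  2413.5, // feed 3
  1900.0, // feed 4: stale or manipulated outlier
];
console.log(medianPrice(quotes)); // 2411.45, the outlier barely moves the result
```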

AI and Development & Audit

Recently, the Web2 AI code editor Cursor has attracted considerable attention in the developer community. On its platform, users need only describe what they want in natural language, and Cursor generates the corresponding HTML, CSS, and JavaScript code, greatly simplifying the software development process. The same logic applies to improving the efficiency of Web3 development.

Currently, deploying smart contracts and DApps on public chains usually means working in dedicated languages such as Solidity, Rust, or Move. The vision behind these new languages is to expand the design space of decentralized blockchains and make them better suited to DApp development; however, given the significant shortage of Web3 developers, developer education has remained a persistent challenge.

Imaginable scenarios for AI-assisted Web3 development include: automatic code generation, smart contract verification and testing, DApp deployment and maintenance, intelligent code completion, and AI assistants that answer hard development questions. AI assistance not only improves development efficiency and accuracy but also lowers the programming barrier, letting non-programmers turn their ideas into working applications and bringing new vitality to decentralized development.

Currently the most eye-catching examples are one-click token launch platforms such as Clanker, an AI-driven 'Token Bot' designed for rapid DIY token deployment. You simply tag Clanker on a client of the SocialFi protocol Farcaster, such as Warpcast or Supercast, tell it your token idea, and it launches the token for you on the Base public chain.

There are also contract development platforms such as Spectral, which offers one-click generation and deployment of smart contracts to lower the threshold of Web3 development, so that even novice users can compile and deploy a smart contract.

In terms of auditing, the Web3 auditing platform Fuzzland uses AI to help auditors check code for vulnerabilities, providing natural-language explanations of formal specifications and contract code, along with sample code, to help auditors and developers understand potential issues.
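
As a rough illustration of the LLM-assisted review these tools perform, the sketch below sends contract source to a model and asks for plain-language vulnerability explanations. The prompt, model choice, and file path are assumptions, and real auditing platforms combine this with fuzzing and formal methods rather than trusting the model alone.

```typescript
// LLM-assisted contract review sketch: send Solidity source to a model and ask
// for plain-language vulnerability explanations.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

async function reviewContract(path: string): Promise<string> {
  const source = readFileSync(path, "utf8");
  const client = new OpenAI();
  const resp = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a Solidity auditor. List potential vulnerabilities " +
          "(reentrancy, overflow, access control) with line references and " +
          "plain-language explanations.",
      },
      { role: "user", content: source },
    ],
  });
  return resp.choices[0].message.content ?? "";
}

// The path is a placeholder; point it at any local contract file.
console.log(await reviewContract("./contracts/Vault.sol"));
```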

Three, AI and Web3 New Narrative

The rise of generative AI opens new possibilities for Web3's emerging narratives.

NFT: AI injects creativity into generative NFTs. AI can generate unique, diverse artworks and characters that become playable characters, props, or scene elements in games, virtual worlds, and metaverses. For example, Binance's Bicasso lets users generate NFTs by uploading images and entering keywords for AI to compute. Similar projects include Solvo, Nicho, IgmnAI, and CharacterGPT.
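
A minimal sketch of that generative flow, assuming the standard OpenAI images API: the model produces the artwork, which is then wrapped in ERC-721-style metadata. The naming scheme and pinning step are illustrative; projects like Bicasso run their own pipelines.

```typescript
// Generative-NFT sketch: an AI image model produces the artwork, which is then
// wrapped in ERC-721-style metadata ready for minting.
import OpenAI from "openai";

async function generateNftMetadata(prompt: string) {
  const client = new OpenAI();
  const image = await client.images.generate({ model: "dall-e-3", prompt, n: 1 });
  return {
    name: `Generated: ${prompt.slice(0, 32)}`,
    description: prompt,
    // In production the image would be pinned to IPFS and this would be an ipfs:// URI.
    image: image.data?.[0]?.url,
  };
}

console.log(await generateNftMetadata("a fox knight in watercolor, heraldic style"));
```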

GameFi: With AI's capabilities in natural language generation, image generation, and intelligent NPCs, GameFi is expected to gain efficiency and innovation in game content production. For example, Binaryx's first chain game, AI Hero, lets players explore different plot branches through AI randomness; similarly, in the virtual companion game Sleepless AI, players unlock personalized gameplay through different interactions built on AIGC and LLMs.

DAO: AI is also envisioned for DAOs: tracking community interactions, recording contributions, rewarding the most active contributors, proxy voting, and so on. For example, ai16z uses an AI Agent to gather market information on-chain and off-chain, analyze community consensus, and make investment decisions combined with suggestions from DAO members.

The significance of AI+Web3 integration: Tower and Square

In the heart of Florence, Italy lies the central square, the most important political gathering place for locals and tourists alike. Towering over it is the 95-meter town hall tower, which forms a dramatic aesthetic contrast with the square below, a pairing that inspired the historian Niall Ferguson to explore the world history of networks and hierarchies in his book 'The Square and the Tower', tracing the ebb and flow of the two over time.

This metaphor is not out of place applied to today's relationship between AI and Web3. Viewed over the long, non-linear arc of history, squares are more likely than towers to produce the new and the creative, yet towers retain their legitimacy and strong vitality.

With their ability to concentrate energy, computing power, and data, technology companies have given AI an unprecedented imaginative pull, and the major tech giants are placing heavy bets: iteration after iteration, from chatbots to 'underlying large models' such as GPT-4 and GPT-4o, from the automatic programming bot Devin to Sora, with its preliminary ability to simulate the real physical world, each arrival amplifying AI's promise.

At the same time, AI is an inherently large-scale, centralized industry. This technological revolution is pushing the technology companies that gradually gained structural dominance in the Internet era to an even narrower summit: enormous power, monopolistic cash flow, and the vast datasets needed to dominate the intelligent era raise the barriers ever higher.

As the tower grows taller and the decision-makers behind it shrink in number, the centralization of AI brings many hidden dangers. How can the masses gathered in the square avoid the tower's shadow? This is the problem Web3 hopes to solve.

Essentially, the inherent properties of blockchain can strengthen artificial intelligence systems and open up new possibilities, mainly:

  • 'Code is law' in the AI era: smart contracts and cryptographic verification make system rules transparent and self-executing, delivering rewards closer to their intended recipients.
  • Token economics: token mechanisms such as staking, slashing, rewards, and penalties create and coordinate participant behavior (a toy model follows this list).
  • Decentralized governance: prompts us to question the sources of information and encourages a more critical, discerning approach to AI, preventing bias, misinformation, and manipulation, and ultimately nurturing a more informed and empowered society.
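
As promised above, here is a toy model of the staking and slashing mechanism: participants bond tokens, honest work earns rewards, and provable misbehavior burns part of the bond. All parameters are arbitrary illustrations, not any specific protocol's design.

```typescript
// Toy staking / slashing model: bond tokens, reward honest work, burn on misbehavior.
class StakingPool {
  private stakes = new Map<string, number>();

  deposit(addr: string, amount: number) {
    this.stakes.set(addr, (this.stakes.get(addr) ?? 0) + amount);
  }

  reward(addr: string, rate = 0.05) {
    const s = this.stakes.get(addr) ?? 0;
    this.stakes.set(addr, s * (1 + rate)); // e.g. pay for verified AI inference work
  }

  slash(addr: string, fraction = 0.3) {
    const s = this.stakes.get(addr) ?? 0;
    this.stakes.set(addr, s * (1 - fraction)); // e.g. punish a provably wrong result
  }

  balance(addr: string) {
    return this.stakes.get(addr) ?? 0;
  }
}

const pool = new StakingPool();
pool.deposit("0xabc", 1000);
pool.reward("0xabc"); // honest node: 1050
pool.slash("0xabc");  // caught cheating: 735
console.log(pool.balance("0xabc"));
```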

AI's development has also injected new vitality into Web3. Web3's impact on AI may need time to prove itself, but AI's impact on Web3 is immediate: from the Meme frenzy to AI Agents lowering the barrier to on-chain applications, the effects are already evident.

When Web3 is dismissed as the self-indulgence of a small circle, and mired in doubts about merely replicating traditional industries, AI brings it a foreseeable future: a more stable and scalable user base on the order of Web2's, along with more innovative business models and services.

We live in a world where towers and squares coexist. Although AI and Web3 have different timelines and starting points, their ultimate goal is the same: how to make machines better serve humanity. No one can define a rushing river, and we look forward to seeing the future of AI+Web3.

Statement:

  1. This article is reproduced from [TechFlow], and the copyright belongs to the original author [Coinspire]. If there are any objections to this reprint, please contact the Gate Learn team, which will handle the matter promptly according to the relevant procedures.
  2. Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute investment advice.
  3. Translations of this article into other languages are produced by the Gate Learn team. Unless Gate.io is mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.