AI-Empowered DePIN: The Rise of Decentralized GPU Networks and a Comparative Analysis of Mainstream Projects

The Intersection of AI and DePIN: The Rise of Decentralized GPU Networks

Since 2023, AI and DePIN have become popular trends in the Web3 space, with market values reaching $30 billion and $23 billion respectively. This article focuses on the intersection of the two, exploring the development of related protocols.

In the AI technology stack, DePIN networks provide utility to AI by supplying computing resources. Large tech companies have absorbed much of the GPU supply, leaving other developers without enough GPUs for their own workloads. These developers typically turn to centralized cloud providers, but the inflexible, long-term contracts required for high-performance hardware make this an inefficient option.

DePIN essentially provides a more flexible and cost-effective alternative, incentivizing resource contributions that align with network goals through token rewards. In the AI field, DePIN crowdsources GPU resources from individual owners to data centers, forming a unified supply for users who need access to hardware. These DePIN networks not only offer customization and on-demand access to developers requiring computing power, but also provide additional income to GPU owners.

Among the many AI DePIN networks on the market, it is not easy to tell them apart and find the one you need. The following sections explore each protocol's role, goals, and notable achievements.

Overview of AI DePIN Network

Each project serves a similar purpose: a marketplace network for GPU compute. This section examines each project's highlights, market focus, and achievements, and explores their differences through their core infrastructure and products.

Render is a pioneer among P2P networks providing GPU computing power. It initially focused on content creation and graphics rendering, and later expanded to generative AI compute tasks, including neural radiance fields (NeRF), through integrations with tools such as Stable Diffusion.

Highlights:

  1. Founded by the cloud graphics company OTOY, which owns Oscar-winning technology.

  2. Its GPU network has been used by major entertainment companies and productions such as Paramount Pictures, PUBG, and Star Trek.

  3. Collaborates with Stability AI and Endeavor to integrate AI models into 3D content rendering workflows using Render's GPUs.

  4. Approves multiple compute clients and integrates GPUs from other DePIN networks.

Akash bills itself as the "Airbnb of hosting," positioning itself as a "supercloud" alternative to traditional platforms such as AWS, supporting storage as well as GPU and CPU computing. With developer-friendly tools such as the Akash container platform and Kubernetes-managed compute nodes, software can be deployed seamlessly across environments and run as any cloud-native application.

Highlights:

  1. A wide range of computing tasks from general computing to web hosting.

  2. AkashML allows its GPU network to run over 15,000 models from Hugging Face, with which it integrates.

  3. Akash hosts Mistral AI's LLM chatbot, Stability AI's SDXL text-to-image model, and Thumper AI's new foundation model AT-1, among other applications.

  4. Platforms for building the metaverse, deploying AI, and running federated learning are leveraging its Supercloud.

io.net provides access to distributed GPU cloud clusters specifically for AI and ML use cases. It aggregates GPUs from data centers, crypto miners, and other decentralized networks. The company was previously a quantitative trading firm and shifted to its current business after the surge in high-performance GPU prices.

Highlights:

  1. The IO-SDK is compatible with frameworks such as PyTorch and TensorFlow, and its multi-layer architecture scales automatically and dynamically with computational demand.

  2. Supports the creation of 3 different types of clusters, which can be launched within 2 minutes.

  3. Strong collaborations to integrate GPUs from other DePIN networks, including Render, Filecoin, Aethir, and Exabits.

Gensyn provides GPU computing power focused on machine learning and deep learning computation. It claims to achieve a more efficient verification mechanism than existing methods by combining concepts such as proof-of-learning, a graph-based pinpointing protocol, and Truebit-style incentive games involving staking and slashing of compute providers.
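
To make the Truebit-style incentive idea concrete, here is a minimal sketch, assuming a simplified model in which a solver posts a stake, a verifier re-executes a randomly sampled slice of the work, and a mismatch forfeits the stake. The function names, parameters, and payout logic are illustrative assumptions, not Gensyn's actual protocol.

```python
import hashlib
import random

def work_hash(task_slice: bytes) -> str:
    """Deterministic digest standing in for the result of one unit of work."""
    return hashlib.sha256(task_slice).hexdigest()

def spot_check(reported: dict, task: list, sample_size: int = 4) -> bool:
    """Verifier re-executes a random sample of slices and compares digests.
    The check is probabilistic: a larger sample raises the odds of catching fraud."""
    indices = random.sample(range(len(task)), k=min(sample_size, len(task)))
    return all(reported[i] == work_hash(task[i]) for i in indices)

def settle(stake: float, reward: float, passed: bool) -> float:
    """Toy payout: honest work earns the reward, a failed check slashes the stake."""
    return stake + reward if passed else 0.0

# Example: a solver reports digests for 8 task slices, one of them dishonest.
task = [f"slice-{i}".encode() for i in range(8)]
reported = {i: work_hash(s) for i, s in enumerate(task)}
reported[3] = "bogus"  # an incorrect result the verifier may catch
print(settle(stake=100.0, reward=10.0, passed=spot_check(reported, task)))
```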

Highlights:

  1. The estimated cost of V100-equivalent compute is about $0.40 per hour, a significant cost saving.

  2. Through proof stacking, pre-trained base models can be fine-tuned to complete more specific tasks.

  3. These foundational models will be decentralized, globally owned, and will provide additional functionalities beyond the hardware computing network.

Aethir is designed specifically for enterprise GPUs, focusing on compute-intensive areas, mainly AI, machine learning (ML), cloud gaming, and more. Containers in its network act as virtual endpoints for executing cloud-based applications, moving workloads off local devices and into the containers for a low-latency experience. To ensure high-quality service, it moves GPUs closer to data sources based on demand and location, adjusting resources accordingly.

Highlights:

  1. In addition to AI and cloud gaming, Aethir has also expanded into cloud phone services, launching a decentralized cloud smartphone in collaboration with APhone.

  2. Has established extensive partnerships with large Web2 companies such as NVIDIA, Super Micro, HPE, Foxconn, and Well Link.

  3. Multiple Web3 partners, including CARV, Magic Eden, Sequence, and Impossible Finance.

Phala Network serves as the execution layer for Web3 AI solutions. Its blockchain is a trustless cloud computing solution that addresses privacy concerns through a Trusted Execution Environment (TEE) design. The execution layer is not used as the computation layer for AI models; rather, it enables AI agents to be controlled by on-chain smart contracts.

Highlights:

  1. Acts as a verifiable-computation coprocessor protocol, enabling AI agents to access on-chain resources.

  2. AI agent contracts can access top large language models such as OpenAI, Llama, Claude, and Hugging Face models through Redpill.

  3. Future plans include zk-proofs, multi-party computation (MPC), fully homomorphic encryption (FHE), and other multi-proof systems.

  4. Future support for the H100 and other TEE-capable GPUs will increase computing power.

![The Intersection of AI and DePIN](https://img-cdn.gateio.im/webp-social/moments-68a395d50be4ab07fbc575dd54441164.webp)

Project Comparison

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics Rendering and AI | Cloud Computing, Rendering, and AI | AI | AI | AI, Cloud Gaming, and Telecommunications | On-chain AI Execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-Based Pricing | Reverse Auction | Market Pricing | Market Pricing | Bidding System | Equity Calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & Hashing | mTLS Authentication | Data Encryption | Secure Mapping | Encryption | TEE |
| Work Fee | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% Reserve Fee | Low Fees | 20% per session | Proportional to Staked Amount |
| Security | Rendering Proof | Proof of Stake | Proof of Computation | Proof of Stake | Rendering Capability Proof | Inherited from Relay Chain |
| Completion Proof | - | - | Time-Lock Proof | Proof of Learning | Proof of Rendering Work | TEE Proof |
| Quality Assurance | Dispute Resolution | - | - | Verifiers and Whistleblowers | Checker Nodes | Remote Attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |

Importance

Availability of Clusters and Parallel Computing

Distributed computing frameworks implement GPU clusters, enabling more efficient training without compromising model accuracy while improving scalability. Training more complex AI models demands powerful computing capabilities, which usually have to come from distributed computing. To put this in perspective, OpenAI's GPT-4 model reportedly has over 1.8 trillion parameters and was trained over 3-4 months on roughly 25,000 Nvidia A100 GPUs across 128 clusters.

Previously, Render and Akash offered only single-purpose GPUs, which may have limited market demand for their hardware. However, most key projects have now integrated clusters to enable parallel computing. io.net has partnered with Render, Filecoin, and Aethir to bring more GPUs into its network, and successfully deployed over 3,800 clusters in the first quarter of 2024. Although Render does not support clusters, it works in a similar way, splitting a single render job across multiple nodes that process different frame ranges simultaneously. Phala currently supports only CPUs but allows CPU workers to be clustered.
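
For context on what "clusters for parallel computing" means in practice, below is a minimal data-parallel training sketch using PyTorch's DistributedDataParallel, the kind of framework-level workload these GPU networks aim to host. It assumes a generic environment where torchrun (or an equivalent launcher) sets the rank and world-size variables; it is not tied to any particular project's SDK.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun exports RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")
    torch.cuda.set_device(device)

    model = DDP(torch.nn.Linear(1024, 10).to(device), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(100):  # toy training loop on synthetic data
        x = torch.randn(32, 1024, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across every GPU in the cluster
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<gpus_per_node> train.py
```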

Incorporating a cluster framework into the AI workflow network is very important, but the number and type of cluster GPUs required to meet the needs of AI developers is a separate issue that will be discussed later.

Data Privacy

Developing AI models requires large datasets, which may come from many sources and take many forms. Sensitive datasets, such as personal medical records and user financial data, risk exposure to model providers. Samsung banned ChatGPT internally over concerns that uploading sensitive code to the platform would violate privacy, and Microsoft's 38TB private data leak further highlighted the importance of adequate security measures when using AI. Having a range of data-privacy methods is therefore crucial to giving data providers back control of their data.

Most of the projects covered use some form of data encryption to protect data privacy. Data encryption ensures that data transmitted from data providers to model providers (the data receivers) in the network is protected. Render uses encryption and hashing when publishing rendering results back to the network, io.net and Gensyn adopt some form of data encryption, and Akash uses mTLS authentication so that only the provider chosen by the tenant receives the data.
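
As a concrete, simplified illustration of encrypting a dataset (and hashing it for integrity) before handing it to a compute provider, the snippet below uses symmetric encryption from Python's `cryptography` package. It is a generic sketch of encryption in transit, not any specific project's implementation, and it assumes the key is shared with the receiver out of band.

```python
import hashlib
from cryptography.fernet import Fernet

# Data provider side: encrypt the dataset and record a hash of the plaintext.
key = Fernet.generate_key()            # assumed to be shared out of band
cipher = Fernet(key)
dataset = b"patient_id,diagnosis\n123,hypertension\n"
ciphertext = cipher.encrypt(dataset)
plaintext_digest = hashlib.sha256(dataset).hexdigest()

# Receiver side: decrypt, then verify integrity against the published digest.
recovered = Fernet(key).decrypt(ciphertext)
assert hashlib.sha256(recovered).hexdigest() == plaintext_digest
print("dataset decrypted and integrity check passed")
```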

However, io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), which allows encrypted data to be processed without first decrypting it. By letting data be transmitted securely for training without revealing identities or content, this innovation protects data privacy better than existing encryption techniques.
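
The basic idea of computing on data that stays encrypted can be illustrated with the additively homomorphic Paillier scheme via the `phe` (python-paillier) library. This is only a toy stand-in for the FHE scheme mentioned above: true FHE supports arbitrary computation on ciphertexts, whereas Paillier supports only addition and scalar multiplication.

```python
from phe import paillier

# Data owner generates keys and encrypts values before sending them out.
public_key, private_key = paillier.generate_paillier_keypair()
encrypted_salaries = [public_key.encrypt(x) for x in [52_000, 61_500, 48_750]]

# Compute node: sums and scales ciphertexts without ever seeing plaintexts.
encrypted_total = sum(encrypted_salaries[1:], encrypted_salaries[0])
encrypted_mean = encrypted_total * (1 / len(encrypted_salaries))

# Only the data owner, holding the private key, can read the result.
print(private_key.decrypt(encrypted_mean))   # ~54083.33
```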

Phala Network introduces TEEs, the secure area within a device's main processor. Through this isolation mechanism, external processes cannot access or modify the data regardless of their privilege level, and even someone with physical access to the machine cannot read it. Beyond TEEs, it also incorporates zk-proofs in its zkDCAP validator and jtee command-line interface for integration with programs that use the RiscZero zkVM.
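
Conceptually, remote attestation of a TEE reduces to verifying a signature, produced by hardware-rooted keys, over a measurement of the code running inside the enclave. The sketch below, using Ed25519 from the `cryptography` package, shows that verification step in miniature; it is a generic illustration under simplified assumptions, not Phala's zkDCAP implementation.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Inside the enclave (normally backed by hardware keys) ---
enclave_key = Ed25519PrivateKey.generate()
enclave_code = b"agent binary v1.2"
measurement = hashlib.sha256(enclave_code).digest()   # code measurement
quote = enclave_key.sign(measurement)                 # signed attestation "quote"

# --- Remote verifier: check the quote against the expected measurement ---
expected_measurement = hashlib.sha256(b"agent binary v1.2").digest()
try:
    enclave_key.public_key().verify(quote, expected_measurement)
    print("attestation verified: enclave runs the expected code")
except InvalidSignature:
    print("attestation failed: unexpected code or forged quote")
```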

![The Intersection of AI and DePIN](https://img-cdn.gateio.im/webp-social/moments-8f83f1affbdfd92f33bc47afe8928c5c.webp)

Proof of Completion and Quality Inspection

The GPUs provided by these projects supply computing power for a range of services. Because those services span everything from rendering graphics to AI computation, the final quality of a task may not always meet the user's standards. A proof of completion indicates that the specific GPU the user rented was actually used to run the requested service, and quality checks benefit the users who commission the work.

Once computation finishes, both Gensyn and Aethir generate proofs that the work was completed, while io.net's proof shows that the rented GPU's performance was fully utilized without issues. Gensyn and Aethir also run quality checks on the completed computation. For Gensyn, validators re-run parts of the computation to check it against the generated proofs, while whistleblowers act as an additional layer of checks on the validators. Aethir uses checker nodes to determine service quality and penalizes services that fall below standard. Render recommends a dispute-resolution process: if the review committee finds issues with a node, that node is penalized. Once Phala's execution completes, a TEE proof is generated to ensure the AI agent performed the required operations on-chain.

Hardware Statistics

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Count | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (estimated) | $0.33 (estimated) | - |

