OpenLedger Builds Payable AI: An OP Stack + EigenDA Foundation Drives the Data and Model Economy

OpenLedger In-Depth Research Report: Building a Data-Driven, Model-Composable Agent Economy on OP Stack + EigenDA

1. Introduction | The Model Layer Leap of Crypto AI

Data, models, and computing power are the three core elements of AI infrastructure, analogous to fuel (data), engine (model), and energy (computing power); none of them can be omitted. Mirroring the infrastructure evolution of the traditional AI industry, the Crypto AI field has passed through similar stages. In early 2024 the market was dominated by decentralized GPU projects (certain computing, rendering, and network platforms), which generally emphasized the crude growth logic of "stacking computing power." Entering 2025, however, the industry's focus gradually shifted to the model and data layers, marking Crypto AI's transition from competition over underlying resources to the construction of more sustainable, application-value-oriented middle-layer frameworks.


General-Purpose Large Language Models (LLM) vs. Specialized Language Models (SLM)

Training a traditional large language model (LLM) relies heavily on large-scale datasets and complex distributed architectures, with parameter counts ranging from 70B to 500B and training costs often running into millions of dollars. An SLM (Specialized Language Model), by contrast, is a lightweight fine-tuning paradigm that reuses a foundational model: starting from an open-source model such as LLaMA, Mistral, or DeepSeek, it combines a small amount of high-quality domain data with techniques such as LoRA to quickly build an expert model with specific domain knowledge, significantly reducing training costs and technical barriers.
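
To make the SLM paradigm concrete, here is a minimal sketch of LoRA fine-tuning using the open-source Hugging Face peft library. The model name, target modules, and hyperparameters are illustrative placeholders, not OpenLedger specifics.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
# The model name and hyperparameters are illustrative, not OpenLedger specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA freezes the base weights and trains only small low-rank adapter matrices.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```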

It is worth noting that the SLM is not merged into the LLM's weights; instead, it collaborates with the LLM through Agent-architecture calls, dynamic routing via a plugin system, hot-swappable LoRA modules, and RAG (Retrieval-Augmented Generation). This architecture retains the LLM's broad coverage while enhancing domain performance through fine-tuned modules, forming a highly flexible, composable intelligent system, as the routing sketch below illustrates.
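
A hedged sketch of the dynamic-routing idea: a dispatcher inspects each query and hands it either to a registered domain adapter (SLM) or to the general base model. The registry, keyword heuristic, and callable interface are hypothetical simplifications of a learned router.

```python
# Illustrative dispatcher: route a query to a specialized adapter (SLM) or
# fall back to the general-purpose base LLM. The registry and keyword
# heuristic are hypothetical simplifications of a learned router.
from typing import Callable, Dict

ADAPTERS: Dict[str, Callable[[str], str]] = {}  # domain -> specialized model fn

def register(domain: str, model_fn: Callable[[str], str]) -> None:
    ADAPTERS[domain] = model_fn

def route(query: str, base_llm: Callable[[str], str]) -> str:
    for domain, model_fn in ADAPTERS.items():
        if domain in query.lower():
            return model_fn(query)  # a hot-swapped LoRA module serves the query
    return base_llm(query)          # broad-coverage base model as the fallback

register("legal", lambda q: f"[legal-SLM] {q}")
print(route("Explain this legal clause", base_llm=lambda q: f"[base-LLM] {q}"))
```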

The Value and Boundaries of Crypto AI at the Model Layer

Crypto AI projects find it inherently difficult to directly enhance the core capabilities of large language models (LLMs), for two core reasons:

  • Technical barriers are too high: the data scale, computing resources, and engineering capability required to train a foundation model are enormous; currently, only technology giants in the United States (certain companies) and China (certain companies) possess the corresponding capabilities.
  • Limitations of the open-source ecosystem: although mainstream foundational models such as LLaMA and Mixtral are open source, the breakthroughs that truly drive models forward remain concentrated in research institutions and closed-source engineering systems, leaving on-chain projects limited room to participate at the core-model level.

However, on top of open-source foundational models, Crypto AI projects can still extend value by fine-tuning specialized language models (SLMs) and combining them with Web3's verifiability and incentive mechanisms. As the "peripheral interface layer" of the AI industry chain, this value is reflected in two core directions:

  • Trustworthy verification layer: recording the model generation path, data contributions, and usage on-chain enhances the traceability and tamper resistance of AI outputs.
  • Incentive mechanism: the native token is used to incentivize behaviors such as data upload, model invocation, and agent execution, creating a positive cycle between model training and model services.

Classification of AI Model Types and Analysis of Blockchain Applicability

The feasible landing points for model-focused Crypto AI projects therefore concentrate on lightweight fine-tuning of small SLMs, on-chain data access and verification under RAG architectures, and local deployment and incentives for edge models. By combining blockchain verifiability with token mechanisms, Crypto can provide unique value in these medium- and low-resource model scenarios, forming differentiated value at the AI "interface layer."

An AI chain built around data and models can record the provenance of every data and model contribution on-chain in a clear, tamper-proof way, significantly enhancing data credibility and the traceability of model training. Through the smart contract mechanism, reward distribution is triggered automatically whenever data or models are called, turning AI behavior into measurable, tradable tokenized value and creating a sustainable incentive system; a toy version of the provenance idea is sketched below. In addition, community users can evaluate model performance and participate in rule-making and iteration through token voting, improving the decentralized governance structure.
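
The following is a minimal, illustrative simulation of tamper-evident contribution records: each record is hash-chained to the previous one, so later tampering invalidates every subsequent hash. The field names and the in-memory list standing in for on-chain storage are assumptions for illustration, not OpenLedger's actual schema.

```python
# Toy hash-chained provenance log: each data or model contribution is chained
# to the previous record, so any tampering breaks all subsequent hashes.
# Field names and the in-memory list are illustrative, not OpenLedger's schema.
import hashlib
import json

ledger = []  # append-only list standing in for on-chain storage

def record(entry: dict) -> str:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    digest = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"entry": entry, "hash": digest})
    return digest

dataset_hash = record({"type": "dataset", "contributor": "alice"})
record({"type": "finetune", "base_model": "llama", "dataset_refs": [dataset_hash]})
# The fine-tuned model's lineage is traceable back to the dataset record:
assert ledger[1]["entry"]["dataset_refs"][0] == ledger[0]["hash"]
```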


2. Project Overview | OpenLedger's AI Chain Vision

OpenLedger is one of the few blockchain AI projects in the current market that focuses on data and model incentive mechanisms. It pioneers the concept of "Payable AI" with the aim of building a fair, transparent, and composable AI operating environment that incentivizes data contributors, model developers, and AI application builders to collaborate on the same platform and earn on-chain rewards based on their actual contributions.

OpenLedger provides a complete closed loop running from "data provision" to "model deployment" to "profit sharing," with core modules including:

  • Model Factory: no-code fine-tuning with LoRA to train and deploy custom models on top of open-source LLMs;
  • OpenLoRA: supports the coexistence of thousands of models, dynamically loaded on demand, significantly reducing deployment costs;
  • PoA (Proof of Attribution): measures contributions and distributes rewards through on-chain call records (see the sketch after this list);
  • Datanets: structured data networks for vertical scenarios, built and validated through community collaboration;
  • Model Proposal Platform: a composable, callable, and payable on-chain model marketplace.
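
A minimal sketch of the PoA idea: tally on-chain call records into contribution weights, then split each inference fee pro rata. The equal-weight-per-record rule is an assumption for illustration, not OpenLedger's published attribution formula.

```python
# Minimal Proof-of-Attribution sketch: aggregate call records into weights,
# then split each inference fee pro rata. The equal-weight-per-record rule
# is an illustrative assumption, not OpenLedger's actual formula.
from collections import Counter

def attribution_weights(call_records: list) -> Counter:
    weights = Counter()
    for rec in call_records:
        for contributor in rec["attributed_to"]:
            weights[contributor] += 1  # one unit of credit per attributed call
    return weights

def split_fee(fee: float, weights: Counter) -> dict:
    total = sum(weights.values())
    return {who: fee * w / total for who, w in weights.items()}

records = [
    {"model": "legal-slm", "attributed_to": ["data_alice", "dev_bob"]},
    {"model": "legal-slm", "attributed_to": ["data_alice"]},
]
print(split_fee(1.0, attribution_weights(records)))
# {'data_alice': 0.666..., 'dev_bob': 0.333...}
```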

Through the above modules, OpenLedger has built a data-driven, model-composable "agent economy infrastructure," bringing the AI value chain on-chain.


For its blockchain foundation, OpenLedger uses OP Stack + EigenDA to build a high-performance, low-cost, verifiable environment for data and contract execution for AI models.

  • Built on OP Stack: based on the Optimism technology stack, supporting high-throughput, low-cost execution;
  • Settled on the Ethereum mainnet: ensuring transaction security and asset integrity;
  • EVM compatible: enabling developers to quickly deploy and scale with Solidity;
  • EigenDA for data availability: significantly reducing storage costs while keeping data verifiable.
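
Because the chain is EVM-compatible, standard Ethereum tooling works unchanged. The sketch below uses web3.py; the RPC endpoint, contract address, and payAndInvoke ABI entry are hypothetical placeholders, not a published OpenLedger interface.

```python
# EVM compatibility means ordinary Ethereum tooling applies. The RPC URL,
# contract address, and ABI below are hypothetical placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.openledger.example"))  # placeholder URL
print(w3.is_connected(), w3.eth.chain_id)

abi = [{
    "name": "payAndInvoke", "type": "function", "stateMutability": "payable",
    "inputs": [{"name": "modelId", "type": "uint256"}], "outputs": [],
}]
model_market = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=abi,
)
# A paid model call would then be an ordinary payable transaction, e.g.:
# tx = model_market.functions.payAndInvoke(42).build_transaction({"value": fee})
```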

Compared with certain more infrastructure-focused public chains that emphasize data sovereignty and an "AI Agents on BOS" architecture, OpenLedger is dedicated to building an AI-specific chain for data and model incentives, striving to make the development and invocation of models on-chain traceable, composable, and sustainable as a value loop. It serves as model-incentive infrastructure for the Web3 world, combining model hosting reminiscent of a certain model-hosting platform, usage-based billing akin to a certain payment platform, and composable on-chain interfaces resembling certain infrastructure services, advancing the path toward "models as assets."

3. Core Components and Technical Architecture of OpenLedger

3.1 Model Factory, a No-Code Model Factory

ModelFactory is a large language model (LLM) fine-tuning platform in the OpenLedger ecosystem. Unlike traditional fine-tuning frameworks, ModelFactory offers a purely graphical interface, with no command-line tools or API integration required. Users can fine-tune models on datasets that have been authorized and reviewed on OpenLedger. It implements an integrated workflow for data authorization, model training, and deployment, with core steps including:

  • Data access control: users submit data requests, providers review and approve them, and the data connects automatically to the model training interface.
  • Model selection and configuration: supports mainstream LLMs (such as LLaMA and Mistral), with hyperparameters configured through the GUI.
  • Lightweight fine-tuning: built-in LoRA / QLoRA engine, with training progress displayed in real time.
  • Model evaluation and deployment: built-in evaluation tools that support exporting models for deployment or sharing them for ecosystem calls.
  • Interactive verification interface: provides a chat-style interface for directly testing the model's Q&A capabilities.
  • RAG generation traceability: responses include source citations, enhancing trust and auditability (see the sketch after this list).
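
The traceability bullet can be sketched as a retrieve-then-cite flow. The naive lexical retriever and the citation format below are simplified assumptions; a real deployment would use vector search and pass the retrieved context to the fine-tuned model.

```python
# Illustrative RAG-with-citations flow for the traceability bullet above.
# The corpus, lexical scoring, and answer format are simplified assumptions.
def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    # Naive word-overlap scoring stands in for real vector-similarity search.
    words = query.lower().split()
    scored = sorted(corpus, key=lambda doc_id: -sum(
        w in corpus[doc_id].lower() for w in words))
    return scored[:k]

def answer_with_citations(query: str, corpus: dict) -> str:
    doc_ids = retrieve(query, corpus)
    context = " ".join(corpus[d] for d in doc_ids)
    # A real system would feed `context` to the fine-tuned model here.
    return f"(answer grounded in: {context[:60]}...) [sources: {', '.join(doc_ids)}]"

corpus = {
    "doc:law-001": "Contract clauses must be in writing.",
    "doc:law-002": "Oral agreements are binding in some cases.",
}
print(answer_with_citations("Are oral contract agreements binding?", corpus))
```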

The ModelFactory system architecture comprises six major modules, spanning identity authentication, data access permissions, model fine-tuning, evaluation and deployment, and RAG traceability, forming a secure, controllable, real-time interactive, and sustainably monetizable integrated model service platform.


The following is a brief overview of the large language model capabilities currently supported by ModelFactory:

  • LLaMA series: the broadest ecosystem, an active community, and strong general performance; one of the most mainstream open-source foundational models today.
  • Mistral: an efficient architecture with excellent inference performance, suited to flexible deployment in resource-constrained scenarios.
  • Qwen: produced by a certain company, excels at Chinese tasks with strong overall capability; a preferred choice for Chinese developers.
  • ChatGLM: outstanding Chinese dialogue performance, suitable for vertical customer service and localization scenarios.
  • DeepSeek: excels at code generation and mathematical reasoning, suitable for intelligent development-assistance tools.
  • Gemma: a lightweight model released by a certain company, with a clear structure that is easy to pick up and experiment with.
  • Falcon: once a performance benchmark, now suited to basic research or comparative testing, though community activity has declined.
  • BLOOM: strong multilingual support but weaker inference performance, suitable for language-coverage research.
  • GPT-2: a classic early model, suitable only for teaching and validation purposes; not recommended for production deployment.

Although OpenLedger's model suite does not include the latest high-performance MoE or multimodal models, its strategy is not outdated; rather, it is a "practical-first" configuration driven by the real constraints of on-chain deployment (inference cost, RAG adaptation, LoRA compatibility, EVM environment).

As a no-code toolchain, Model Factory builds proof-of-contribution mechanisms into all models, securing the rights of data contributors and model developers. Compared with traditional model development tools, it offers low barriers to entry, monetizability, and composability:

  • For developers: a complete path for model incubation, distribution, and revenue;
  • For the platform: a circulating, composable model-asset ecosystem;
  • For users: models or Agents can be combined and invoked just like calling an API.


3.2 OpenLoRA, On-chain Assetization of Fine-tuned Models

LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method that learns new tasks by inserting "low-rank matrices" into a pre-trained large model without modifying the original model parameters, significantly reducing training cost and storage requirements. Traditional large language models (such as LLaMA and GPT-3) typically have tens to hundreds of billions of parameters; to use them for specific tasks (such as legal Q&A or medical consultation), fine-tuning is required. LoRA's core strategy is: "freeze the parameters of the original large model and train only the newly inserted parameter matrices." It is parameter-efficient, fast to train, and flexible to deploy, making it the mainstream fine-tuning method best suited to Web3 model deployment and composable calls; the numeric sketch below shows why.
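
A numeric sketch of the LoRA update under toy assumptions: the frozen weight W is adjusted by a scaled low-rank product B @ A, so the trainable parameter count drops from d x k to r x (d + k).

```python
# Numeric illustration of the LoRA update: the frozen weight W is adapted by
# a low-rank product B @ A, scaled by alpha / r. Shapes are toy assumptions.
import numpy as np

d, k, r, alpha = 4096, 4096, 8, 16        # typical attention-projection shape
W = np.random.randn(d, k)                 # frozen pre-trained weight
A = np.random.randn(r, k) * 0.01          # trainable, r x k
B = np.zeros((d, r))                      # trainable, d x r (zero init: no-op at start)

W_adapted = W + (alpha / r) * (B @ A)     # effective weight used at inference

full = d * k                              # parameters if W were fine-tuned directly
lora = r * (d + k)                        # parameters LoRA actually trains
print(f"trainable fraction: {lora / full:.4%}")  # ~0.39% at these shapes
```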

OpenLoRA is a lightweight inference framework built by OpenLedger, designed specifically for multi-model deployment and resource sharing. Its core goal is to address common problems in current AI model deployment, such as high costs, low reusability, and wasted GPU resources, and to promote the practical implementation of "Payable AI."

OpenLoRA's system architecture is built from modular core components; its multi-model serving pattern is sketched below.
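
The "thousands of models coexisting, dynamically loaded on demand" claim rests on keeping one base model resident while paging small LoRA adapters in and out of memory. Below is a hedged sketch of that serving pattern; the LRU cache, its capacity, and the load/unload behavior are illustrative assumptions rather than OpenLoRA's actual implementation.

```python
# Sketch of the multi-adapter serving pattern: one resident base model, many
# small LoRA adapters paged in and out on demand. The LRU policy and cache
# size are illustrative assumptions, not OpenLoRA's actual implementation.
from collections import OrderedDict

class AdapterCache:
    def __init__(self, capacity: int = 8):
        self.capacity = capacity                 # adapters kept in memory at once
        self.cache: OrderedDict = OrderedDict()

    def get(self, adapter_id: str):
        if adapter_id in self.cache:
            self.cache.move_to_end(adapter_id)   # mark as most recently used
            return self.cache[adapter_id]
        if len(self.cache) >= self.capacity:
            evicted, _ = self.cache.popitem(last=False)  # drop the LRU adapter
            print(f"unloaded {evicted}")
        self.cache[adapter_id] = f"weights:{adapter_id}"  # stand-in for a real load
        return self.cache[adapter_id]

serving = AdapterCache(capacity=2)
for request in ["legal-slm", "medical-slm", "legal-slm", "defi-slm"]:
    serving.get(request)  # 'medical-slm' is evicted when 'defi-slm' arrives
```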
