Comparison of the Layered Development of AI and Crypto Assets: Technology-Driven vs. Token-Constrained
Recently, Ethereum's rollup-centric strategy seems to have hit setbacks, and many are dissatisfied with the nested L1-L2-L3 model. Interestingly, the AI field has gone through a similarly rapid L1-L2-L3 evolution over the past year. Examining the layered logic of the two fields side by side helps get at the root of the problem.
In AI, each layer solves a core problem the previous layer could not. At L1, large language models lay the groundwork for language understanding and generation but fall short in logical reasoning and mathematical computation. At L2, reasoning models target exactly these weaknesses: some can work through complex math problems and debug code, filling the cognitive blind spots of the base models. At L3, AI agents integrate the capabilities of the first two layers, turning AI from passive response into active execution: they can plan tasks autonomously, invoke tools, and manage complex workflows.
This AI layering is a capability progression: L1 lays the foundation, L2 fills the gaps, and L3 ties everything together. Each layer is a qualitative leap over the one below it, and users can clearly feel the AI becoming smarter and more practical.
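The stack described above can be pictured with a toy sketch. Nothing here refers to a real model or framework; BaseLLM, ReasoningModel, and Agent are hypothetical names, and the logic is deliberately stubbed out just to show how each layer builds on the one below.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


class BaseLLM:
    """L1: language understanding and generation (stubbed for illustration)."""

    def generate(self, prompt: str) -> str:
        # A real L1 model would return free-form text here.
        return f"draft answer for: {prompt}"


class ReasoningModel(BaseLLM):
    """L2: adds multi-step reasoning on top of the L1 base."""

    def solve(self, problem: str) -> str:
        # Chain a couple of intermediate steps before answering.
        steps = [self.generate(f"step {i} of {problem}") for i in (1, 2)]
        return " -> ".join(steps)


@dataclass
class Agent:
    """L3: plans tasks, calls the L2 reasoner, and invokes external tools."""

    brain: ReasoningModel
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run(self, goal: str) -> List[str]:
        plan = [f"analyze {goal}", f"act on {goal}"]   # autonomous task planning
        results = []
        for task in plan:
            thought = self.brain.solve(task)           # reuse the L2 capability
            search = self.tools.get("search")          # tool invocation, if available
            results.append(search(thought) if search else thought)
        return results


agent = Agent(brain=ReasoningModel(), tools={"search": lambda q: f"searched: {q}"})
print(agent.run("compare rollup designs"))
```

The point of the sketch is only structural: each class reuses the layer beneath it rather than replacing it, which is the "capability progression" the article describes.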
By contrast, layering in the crypto world keeps patching the previous layer's problems while inadvertently creating new, larger ones. To address the performance limits of L1 public chains, L2 scaling solutions were introduced. After a wave of L2 infrastructure hype, gas fees did fall and TPS did improve, but liquidity became fragmented and ecosystem applications remained scarce; the oversupply of L2 infrastructure has itself become a problem. To tackle that, L3 vertical application chains emerged, yet these chains often run in isolation, unable to share in the ecosystem synergy of a general-purpose chain, further fragmenting the user experience.
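Why fragmented liquidity hurts can be shown with simple constant-product AMM arithmetic (x * y = k). The numbers below are entirely made up for illustration; the only claim is the general one that the same trade against a smaller pool suffers more slippage.

```python
def amount_out(amount_in: float, reserve_in: float, reserve_out: float) -> float:
    """Output of a swap against a constant-product pool, ignoring fees."""
    return reserve_out * amount_in / (reserve_in + amount_in)


TOTAL_USDC = 1_000_000.0   # hypothetical total liquidity for a trading pair
TOTAL_TOKEN = 1_000_000.0
TRADE = 10_000.0           # trader swaps 10,000 USDC for TOKEN

# Case 1: all liquidity pooled on a single chain.
pooled = amount_out(TRADE, TOTAL_USDC, TOTAL_TOKEN)

# Case 2: the same liquidity split evenly across 10 L2s; the trader can only
# reach the pool on the chain they happen to be on.
n_chains = 10
fragmented = amount_out(TRADE, TOTAL_USDC / n_chains, TOTAL_TOKEN / n_chains)

print(f"pooled:     {pooled:,.0f} TOKEN")      # ~9,901
print(f"fragmented: {fragmented:,.0f} TOKEN")  # ~9,091
print(f"extra slippage: {1 - fragmented / pooled:.1%}")  # ~8.2%
```

In this toy setup the trader receives about 8% less simply because the liquidity is spread across chains, which is the cost hiding behind "lower gas fees and higher TPS."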
This layered evolution has turned into problem transfer: L1 hits bottlenecks, L2 applies patches, and L3 scatters things further. Each layer seems merely to shift problems from one place to another, leaving the impression that every solution ultimately revolves around issuing a token.
The fundamental reason for the difference may be that AI layering is driven by technological competition, with major AI companies racing to improve model capability, whereas crypto layering is constrained by token economics, with each L2 project's core metrics being Total Value Locked (TVL) and token price.
The comparison reveals an interesting phenomenon: one field is focused on solving technical challenges, while the other looks more like it is packaging financial products. This rough analogy is not absolute, of course, but it offers a useful lens for thinking about the development trajectories of the two fields.