The financial industry restrains itself from "raising lobsters"

“Are you farming shrimp?” Recently, the entire internet has gone crazy over the OpenClaw “lobster.” From personal productivity gains to enterprise process automation, this open-source AI agent has swept through almost every tech application and even social scenario. The financial sector, however, has been far less enthusiastic.

On March 10, Beijing Business Daily reporters asked several banks, consumer finance companies, and payment institutions about the nationwide “shrimp farming” craze and whether they plan to deploy OpenClaw. Most responded, “It’s too hot right now; we need to observe and wait,” while some stated openly that OpenClaw is not suitable for finance, pointing in particular to data security risks. Industry insiders believe that in this “shrimp farming” trend, internet banks and consumer finance companies have not followed suit, and payment tech teams remain cautious, mainly out of concern for funds, data, and information security.

Collective Calmness

The “shrimp farming” craze has collectively “fizzled out” in the financial industry. “Because the financial sector requires strict confidentiality, this AI application could pose risks to data and information security,” a consumer finance industry insider expressed his concerns.

“It has some value, but in core consumer finance areas it still faces multiple risks. For example, compliance-wise, open-source intelligent agents struggle to meet regulatory requirements for risk control in core operations; security-wise, open-source agents could lead to information leaks,” said another consumer finance practitioner.

In summary, the main reason remains the strict regulation and high-risk nature of the financial industry.

For consumer finance companies, if AI agents autonomously handle customer credit approval, risk management, and loan disbursement, efficiency could double. But if over-lending, credit errors, or information leaks occur, who bears the responsibility? Who takes the risk? This is the biggest concern—an inherent conflict between technological autonomy and the compliance and security requirements of the financial sector.

“This is a minefield,” many consumer finance practitioners said. No one wants to put data and security at risk by rushing to try a new technology. “But OpenClaw is so popular that it feels a bit overhyped. We need to observe it and understand its value more thoroughly.” Some also said that while the financial industry will remain cautious in the short term, gradual, layered adoption is possible.

Payment institutions are even more directly concerned: every transaction involves funds security, leaving no room for “black box” algorithms. YeePay co-founder Yu Chen told Beijing Business Daily that the surge in open-source AI agents driven by OpenClaw marks a shift from conversational AI to autonomous execution. The direction is promising, but the company remains cautious and is observing the open-source framework carefully: there is an inherent tension between autonomous execution, permission granting, and compliance and security boundaries, and the financial industry must put security and controllability first.

Regarding banks, frontline staff said, “Currently, not many people in the bank are using OpenClaw. We see it as a high-permission AI software that can authorize operations on computers and execute commands directly. We frontline staff don’t use such functions; only the tech department is testing it in small-scale trials.”

A bank business department head said openly that such open-source products require remote control of PCs via mobile devices during use. Even if they claim information isolation, banks remain highly cautious and generally do not use them directly.

Low Compatibility

From a financial industry perspective, Yu Chen believes that the greatest value of open-source intelligent agents lies in automating processes and improving efficiency—freeing humans from repetitive tasks and reducing costs. However, there are risks: unexplained, uncontrollable autonomous decisions, data security issues, and overreach, which could directly violate compliance boundaries.

“Personally, I think it’s okay for casual use or office work, but applying it to core business processes is risky—such as data security and fund safety,” said another payment industry insider. He pointed out that risk control in payments is already quite mature, and blindly trying such AI agents could introduce hidden dangers. “If something goes wrong with compatibility, it could cause transaction interruptions or fund settlement errors, with serious consequences.”

A tech staff member from a local rural commercial bank said, “The top priority for banks in tech development is always security and compliance.” Currently, their main concerns about deploying open-source projects are twofold: first, data security risks—open-source code can have many vulnerabilities and backdoors that are hard to detect, risking data leaks; second, operational control risks—despite claims of information isolation, cross-device and cross-network control could be hijacked, screens recorded, or permissions overstepped, all of which threaten financial security. Banks will not risk using such tools.

Industry experts believe that as a highly regulated, high-risk sector, the financial industry must exercise restraint. Shen Xiayi, deputy director of the Federal Reserve Securities Research Institute, explained that the unique nature of finance involves funds, customer privacy, and systemic risks. Any technological innovation must be based on controllable risks, unlike the “rapid iteration and trial-and-error” approach common in internet tech.

Shen Xiayi sees the current compatibility of OpenClaw with finance as low. Its core end-to-end automation conflicts with regulatory requirements, such as blurred responsibility boundaries and a lack of algorithmic explainability, making it difficult for banks, consumer finance, and payment institutions to stay within regulatory red lines. In addition, finance places high demands on data security and operational stability, yet some OpenClaw instances have shown security vulnerabilities, and its third-party skill marketplace carries risks. Given the complexity of financial operations, it can only be tested in non-core scenarios, not in credit, risk control, or fund settlement; overall adaptation will require long-term optimization.

Not Rejection, Just Caution

It’s worth noting that the industry’s “calm” does not mean rejection of AI, but rather a cautious approach. A banking insider said that the wave of open-source AI agents is fundamentally a democratization of AI application paradigms. The capabilities of large models have surpassed critical thresholds, and the market needs this wave to make users realize that AI is no longer just an auxiliary tool or a “consultant” providing suggestions, but a real executor.

This insider believes that AI application paradigms like OpenClaw are an inevitable future trend. For the financial sector, the issue is not “fear of use” or “unsuitability at this stage,” but how to adopt it carefully and gradually. Restraint stems more from respect for compliance and risk management than from rejection of technology.

In the short term, the greatest value of open-source intelligent agents is to significantly improve financial service efficiency and reduce operational costs, making financial services more inclusive. In the long run, these autonomous agents capable of executing tasks could bring new business models, create incremental value, and open new markets.

However, risks cannot be ignored. The same banking insider added that in terms of compliance, security, and investment, financial institutions do have concerns. The biggest risks are at the application level—automation lowers execution barriers, which is good for value creation but also opens the door to malicious behaviors. Therefore, risk prevention must be strengthened in advance.

In fact, many institutions are quietly exploring customized AI applications. A bank insider said that his bank is focusing on risk management, customer service, and telemarketing scenarios, with some deployment in credit approval, daily operations, and compliance. “For open-source AI agents to truly enter core financial scenarios, the first step is to resolve technical security and compliance issues.” He emphasized that for the near future, responsibility attribution should remain human-led, with strict controls over key processes.

Zhaolian Consumer Finance has established eight core intelligent agents covering consumer protection, compliance, asset management, operations, risk, decision-making, R&D, and traditional Chinese medicine, along with several office AI tools, to enhance efficiency across business units.

Payment platform Lianlian Digital also mentioned that in recent years, they have promoted AI integration across risk control, operations, and customer service, including access to mainstream AI large models. Their proprietary platform offers comprehensive services such as payments, fund transfers, global fund distribution, intelligent remittance processing, and risk management.

Gradual Integration

After the hype, industry experts believe that the financial sector will not see a “wave of OpenClaw deployment,” but rather a phase of cautious exploration and gradual integration. “Financial institutions are actually among the earliest to apply AI because of the large amount of transaction data inherent in fintech,” Yu Chen explained. AI applications in finance mainly fall into two categories: one is basic applications—using AI as a safeguard for core operations, such as anti-money laundering; the other is advanced applications—bringing more business opportunities.

Yu Chen sees broad future prospects for AI in finance: optimizing customer service, enhancing user experience, cross-selling with large models, discovering new sales leads, and automating risk control and compliance. The goal is to truly serve business and user value.

“Currently, the digital transformation of banks, consumer finance, and payment institutions is mainly supportive, not pursuing full automation. Their approach is pragmatic, aligning with the strong regulation of finance and current technological realities,” said Wang Pengbo, chief analyst at Broadcom Consulting. He believes that for open-source AI agents to enter core financial scenarios, key issues like explainability, traceability, and transparency must be addressed first—no black boxes. They must meet strict regulatory and security standards. Responsibilities should be clearly defined, and data compliance must be maintained to protect user privacy. Balancing open-source benefits with institutional core interests, while retaining human intervention rights, is essential to avoid irreversible risks.

Small-Scale Implementation

Considering industry trends and regulatory requirements, many bank insiders believe that over the next five to ten years, the application of open-source tools in banking will be limited to scenarios where personal information is protected, the technology is fully controllable, and risks are manageable, such as non-sensitive marketing and auxiliary functions that do not touch core customer data or fund transactions, so as to avoid security risks in core operations.

“This cautious approach is not conservative but a rational response to the particular risks in finance. Banks can accumulate experience through pilot projects, verify value in controlled scenarios, and gradually expand,” said Du Tongtong, researcher at the Federal Reserve Securities Research Institute. She emphasized that financial institutions should adhere to cautious innovation, starting with non-core scenarios, and gradually explore core scenario adaptation.

Wang Pengbo also stated that the industry will maintain a cautious stance, focusing on compliance, decision support, and small-scale deployment. Institutions will prioritize low-risk, non-core functions such as customer service and advertising copywriting, avoiding security and compliance issues in core operations.

A bank insider added that in the short term, financial institutions will not pursue full end-to-end automation but will emphasize a “Human in the Loop” approach, ensuring human experts retain final decision-making authority.

Furthermore, multi-agent collaboration combined with human oversight will be the future trend. Instead of fully autonomous single agents, a hybrid architecture of “multi-agent + human supervision” will be built to handle complex financial scenarios.
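The “Human in the Loop” pattern described above can be sketched in a few lines. This is purely an illustrative toy, not anything the article attributes to a specific institution: the `Action` type, the risk list, and the approval callback are all hypothetical names invented for this sketch.

```python
# Illustrative sketch of "Human in the Loop": an agent may propose any
# action, but actions touching funds or customer data are held until a
# human reviewer explicitly approves them. All names are hypothetical.

from dataclasses import dataclass

HIGH_RISK = {"transfer_funds", "approve_credit", "export_customer_data"}

@dataclass
class Action:
    name: str
    params: dict

def requires_human(action: Action) -> bool:
    """High-risk actions always need a human decision-maker."""
    return action.name in HIGH_RISK

def run_with_oversight(action: Action, human_approves) -> str:
    """Execute low-risk actions directly; gate high-risk ones on approval."""
    if requires_human(action) and not human_approves(action):
        return f"BLOCKED: {action.name} held for human review"
    return f"EXECUTED: {action.name}"

# Usage: the agent proposes a fund transfer; the human reviewer declines.
result = run_with_oversight(
    Action("transfer_funds", {"amount": 1000}),
    human_approves=lambda a: False,
)
print(result)  # BLOCKED: transfer_funds held for human review
```

A multi-agent variant of the same idea would route each agent’s proposed actions through one shared gate like this, so human experts retain the final decision on anything consequential.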

Additionally, establishing comprehensive AI governance systems is crucial. Financial institutions will develop systematic mechanisms, including AI asset inventories, risk importance assessments, and full lifecycle management, to ensure AI applications remain safe and compliant.
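The governance mechanisms named here, an AI asset inventory, risk-importance assessment, and lifecycle management, could take a shape like the following minimal sketch. Every field name and risk tier below is an assumption for illustration; the article does not specify any implementation.

```python
# Hypothetical sketch of an AI governance inventory: each registered
# model or agent carries a lifecycle status and gets a simple
# risk-importance tier, so reviews can prioritize high-risk assets.

from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    scenario: str             # e.g. "customer service", "credit approval"
    touches_funds: bool
    touches_customer_data: bool
    lifecycle: str = "pilot"  # pilot -> production -> retired

    @property
    def risk_tier(self) -> str:
        """Toy risk-importance assessment: funds > customer data > low."""
        if self.touches_funds:
            return "high"
        if self.touches_customer_data:
            return "medium"
        return "low"

# A minimal inventory: governance reviews high-risk assets first.
inventory = [
    AIAsset("marketing-copy-agent", "advertising writing", False, False),
    AIAsset("credit-assistant", "credit approval", True, True),
]
for asset in sorted(inventory, key=lambda a: a.risk_tier != "high"):
    print(asset.name, asset.risk_tier, asset.lifecycle)
```

In practice such an inventory would feed the full-lifecycle management the article mentions, with each asset re-assessed as it moves from pilot to production.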

In summary, for the next five to ten years, the application of open-source tools in banking will be cautious, focusing on scenarios where personal data is protected, technology is controllable, and risks are manageable. Only under these conditions will banks explore limited deployment, with clear standards and safeguards in place.
