In-Depth | Agent Internal Friction in Finance: A Struggle Between "Lobster Farming" Aspirations and Risk

Summary

On March 15, 2026, the China Internet Finance Association issued a rare risk alert regarding the open-source intelligent agent OpenClaw.

Adding to previous warnings from NVDB, CNCERT, and other agencies, three consecutive security alerts within a week have brought the core contradiction faced by the financial industry—“desire for efficiency but fear of losing control”—to the forefront.

Through in-depth research, the central ledger found that banks, insurance companies, and securities firms are each heading down different paths.

The banking sector is the most restrained: several state-owned banks have explicitly banned employees from installing OpenClaw on personal devices, opting instead for in-house development of private-domain intelligent agents. Even then, deployment is limited to peripheral systems. High costs, outdated hardware, and weak data infrastructure are intensifying a Matthew effect across the industry—leading banks experiment within sandboxes, while small and medium-sized banks are likely to be shut out.

The insurance industry is more flexible. Leading insurers have piloted large-scale applications but faced regulatory scrutiny. The industry consensus is shifting toward "micro-innovations in non-core areas," with email, meetings, and other OA scenarios expected to be implemented first.

Agent tools that empower "super individual" insurance agents hold the greatest imaginative potential, but the core private data they handle—such as health disclosures and family finances—faces greater exposure risk and is difficult to regulate.

Brokerage firms are "tempted but hesitant." CITIC Securities, GF Securities, and others prohibit installing OpenClaw on work computers, limiting testing to sandboxes. Research departments show the most promise, but reliance on on-site verification and due diligence keeps implementation challenging. Budget constraints—including cuts to Wind terminal procurement—and the token-economy cost model are practical barriers dampening enthusiasm.

Overseas, Rogo has pioneered a “traceable” model on Wall Street, providing a reference for domestic efforts, but localized solutions are still under cautious exploration.

Regulatory warnings have thrown cold water on the "lobster farming" craze spreading from the tech and broader commercial sectors into finance.

On March 15, the China Internet Finance Association issued a “Risk Warning on the Application Security of OpenClaw in the Internet Finance Industry.”

The document directly states that this open-source intelligent agent, due to its default high system permissions and weak security configurations, is highly susceptible to becoming a breach point for sensitive data theft or illegal transaction manipulation. It explicitly recommends that “financial institutions should not install OpenClaw on terminals involved in financial business.”

Prior to this, the Ministry of Industry and Information Technology's Cybersecurity Threat and Vulnerability Information Sharing Platform and the National Internet Emergency Center had issued warnings, and on March 12 the China Industrial Internet Security Development Research Center issued a risk notice targeting industrial sectors. Three regulatory warnings and a rare risk alert for a single piece of open-source software within one week highlight the conflicted attitude of financial institutions facing the agent wave.

OpenClaw has qualitatively transformed large models—once confined within text boundaries—into agents capable of system-level operations and direct execution: a "brain with hands." It is no longer just a strategy advisor but an executor holding the keys to the vault.

The efficiency revolution triggered by intelligent agents presents a highly tense new proposition for finance:

On one side, finance relies heavily on manpower for intensive information processing, eager to reduce processing time and improve bottom-line efficiency through technology;

On the other side, the system handles vast funds and sensitive customer data, maintaining a zero-tolerance stance on permissions and data security compliance.

One side craves ultimate efficiency; the other fears losing control of its systems. This is the core contradiction facing financial institutions "raising lobsters" today.

The central ledger’s research shows that amid this sudden clash of technological waves and security alerts, banks, insurance companies, and securities firms are each evolving along distinct paths.

“Testing for Absolute Security”

Compared to the tech and broader commercial sectors, banks show high restraint in “raising lobsters.”

This caution is not mere conservatism but a hard requirement of financial system stability and rigid risk control. Among all Chinese industries, banks may be the most devout followers of "certainty"—and the essence of OpenClaw is precisely to buy efficiency at the price of "uncertainty," through autonomy.

In traditional finance, banks’ data volume and sensitivity far surpass ordinary user environments. “Lobster” is granted high local system permissions to achieve strong execution capabilities. This convenience also brings multiple threats: if attacked or if AI hallucinations occur, core production data could be irreversibly deleted.

There are precedents.

In February 2026, Summer Yue, Director of AI Alignment and Safety at Meta’s Superintelligence Lab, shared a chilling out-of-control experience. She issued a seemingly simple command to OpenClaw—“Check inbox, suggest emails to archive or delete”—with an added security restriction: “No operations without approval.” The process ran normally for weeks.

But once connected to her real work email, disaster struck. Because the inbox's volume of information exceeded the AI's processing limit, OpenClaw triggered its "context compression" mechanism—shortening its memory and, in the process, forgetting the critical "no unauthorized operations" instruction. The "lobster" then began deleting emails in a flash, ignoring three consecutive "stop" commands. She had to physically disconnect the machine to keep more than 200 emails from being wiped out.
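The failure mode in this anecdote can be illustrated with a minimal sketch (this is not OpenClaw's actual code): a naive "context compression" that keeps only the most recent messages will silently drop an early safety instruction once the history outgrows the window.

```python
# Illustrative sketch, NOT OpenClaw's real implementation: a recency-based
# context window. The token counter is a toy word-count stand-in.

def compress_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep only the newest messages that fit within max_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                           # window full: older messages vanish
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# A safety rule issued at the start, followed by a long stream of emails:
history = ["SYSTEM: no operations without approval"] + \
          [f"email {i}: long body ..." for i in range(100)]

window = compress_context(history, max_tokens=50)
# The safety instruction from the start of the history no longer survives:
assert all(not m.startswith("SYSTEM") for m in window)
```

Real agent frameworks use more sophisticated summarization, but the sketch shows why an instruction given once at the beginning of a session is fragile: it competes for the same finite context as everything that comes after it.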

A top researcher dedicated to "AI safety" was herself tripped up by AI—an irony that chills every financial practitioner.

For banks processing billions of yuan daily, such out-of-control events are unacceptable. Deleting 200 emails is manageable, but if it were 200 settlement instructions, consequences could be catastrophic.

The central ledger confirmed from multiple state-owned and joint-stock bank tech personnel that many large and medium banks have already instructed staff to exercise caution with “lobster,” via emails and internal training, highlighting security risks.

Currently, the information protection architecture remains strict: due to firewalls between intranet and extranet, internal devices cannot download high-permission external software like “lobster,” and data export is tightly restricted. Several bank tech staff said, “Colleagues only try on personal computers that don’t contain any work data.”

Some financial regulators also confirmed receiving department notices to ensure employees’ work computers do not have OpenClaw installed.

But the real dilemma lies in balancing security and development.

Under the agent wave, banks’ demand for OpenClaw is real: when consumers use “lobster” for automatic news tracking, report generation, or simulated trading, banks are still debating whether employees can use it internally—this speed gap already threatens to overturn traditional finance.

History repeats itself: the most tightly regulated industries are often the most violently impacted by technological waves. From online payments replacing bank counters to Yu’ebao disrupting savings accounts, every revolution begins with a “no” from financial institutions.

In the face of insurmountable security boundaries, “private deployment” may be the only way to introduce intelligent agents into business processes.

A regulator pointed out that, for data security, banks cannot directly deploy “lobster,” but can develop similar tools based on its architecture.

The central ledger learned that several leading banks are already exploring such tools.

For example, a staff member in the tech department of a state-owned bank revealed that the bank's R&D center is actively developing an internal proprietary agent. "The head office has explicitly banned employees from installing OpenClaw on their own," he said. "At a recent meeting it was announced that our own 'lobster' is now preliminarily up and running."

Regarding project development, he said “no news of joint development with other companies; it’s probably developed in-house.”

In other words, the banks' strategy is not to reject the lobster but to keep it caged.

Practically, private intelligent agents are currently deployed mainly in high-tolerance peripheral systems, not in core trading or settlement systems.

“Controllable Triangle”

This cautious segmentation of scenarios is forming a rational evaluation logic within the industry.

A data architect at a joint-stock bank believes that when selecting agent deployment scenarios, financial institutions must weigh “business return, feasibility of building a ‘world model,’ and employee acceptance” across three dimensions.

Based on this framework, the architect believes that empowering internal staff and R&D efficiency (e.g., AI-assisted coding, intelligent office) will be the first “P0” scenarios, as they are constrained by business rules, high transparency, and can directly reduce repetitive tasks. Conversely, core areas like “credit risk approval” and “complex investment decisions” are considered difficult to cross in the short term due to the “black box” nature of large models and unclear human-machine responsibility boundaries.

This logic can be summarized as: let AI first learn to serve tea and pour water, then consider letting it perform surgery.

Employees at a state-owned bank that has deployed its own “lobster” said that the scope of internal testing remains limited, but may gradually expand to all staff. Another tech staff at a different state-owned bank also said some departments are experimenting with similar private deployments in non-core areas.

Some joint-stock banks are also conducting gray-scale experiments related to OpenClaw.

“We’ve set up a closed testing environment on our intranet for specific teams,” said a tech staff. “It’s very secure, and other environments are strictly blocked. The company emphasizes cautious use of ‘lobster.’”

Of course, not all institutions are enthusiastic about OpenClaw.

The pace of AI adoption in banks often depends on leadership style.

“Technology promotion often lacks a trigger,” said a regulator.

He pointed out that the bank’s budget was set early in the year, and new initiatives are mostly top-down. “For example, when the tech department proposed integrating DeepSeek, there was strong resistance, but after the big boom during last year’s Spring Festival, management recognized its value, and deployment went smoothly.”

Another tech staff from a joint-stock bank shared similar views.

“Our top leaders are not very supportive of large models; some teams have even disbanded,” he said. “Our AI exploration is always a step behind peers. ‘Lobster’ is popular, but our bank has little response.”

This reveals a deeper pattern in China’s banking industry: the speed of technological change often depends not on maturity but on how quickly top management’s awareness is refreshed. DeepSeek and OpenClaw are no exceptions.

However, once top management reaches a strategic consensus and a large-scale promotion window opens, well-resourced financial institutions can heavily deploy private domains, build multi-layered security sandboxes, and even perform secondary development and risk control to mitigate systemic risks.

As long as funds are sufficient, even banks with weaker tech capabilities can purchase enterprise private domain services from domestic giants like Zhituo, ByteDance Volcano, Alibaba, and others to deploy intelligent agents.

Deeper Digital Divide

This could further intensify the banking industry's "Matthew effect" as technology stratifies the sector.

Apart from data security, high operational costs are another invisible barrier.

Until now, the technological benefits of OpenClaw have not shown a universal advantage.

Behind the "lobster farming" craze is a brutal economic law: the OpenClaw software is free; feeding it is costly.

OpenClaw’s remarkable execution relies on extreme token consumption. Besides frequent API calls and multi-step reasoning, the most hidden cost is the “long memory” mechanism—maintaining context requires packaging all past actions and environment states and feeding them back to the large model each time.

It's like a secretary who, on every phone call, must start with a self-introduction and repeat all previous conversations. This "snowballing" recall makes token consumption balloon.
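The snowballing can be made concrete with a back-of-the-envelope model (the per-step figure below is hypothetical, chosen only for illustration): if the agent re-sends its entire history on every step, cumulative input tokens grow roughly quadratically with the number of steps.

```python
# Toy cost model, assumption: each step adds ~500 tokens of new history
# and the FULL history is re-sent as input on every step.

def cumulative_input_tokens(steps, tokens_per_step=500):
    total, context = 0, 0
    for _ in range(steps):
        context += tokens_per_step   # history grows by one step's worth
        total += context             # the whole history is billed again
    return total

# 10 steps bill 500 * (1 + 2 + ... + 10) = 27,500 input tokens:
assert cumulative_input_tokens(10) == 27_500
# Doubling the steps roughly quadruples the bill:
assert cumulative_input_tokens(20) / cumulative_input_tokens(10) > 3.8
```

This is why long-running agent sessions surprise users with their bills: the cost of step N includes paying, again, for every step before it.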

Under high-frequency operation, high-end “lobster” models can cost nearly 30,000 yuan per month. Many users wake up to find hundreds of yuan spent on the agent’s endless “memory cycle.” Someone described it as: “Token consumption now is like 2009’s 30MB/2GB data plan—expensive and insufficient.”

For financial institutions seeking enterprise private deployment, this bill is also huge.

First, the “heavy asset” costs of computing infrastructure and custom development. The central ledger learned that the initial cost for private deployment of “lobster”-type agents generally ranges from 3 to 5 million yuan, supporting about 100 local inference units.

Second, the cloud invocation costs for “high intelligence.”

Limited by local data center capacity, financial institutions often have to deploy smaller models, sacrificing some performance. To use more advanced, powerful models, they must invoke cloud services, incurring high token fees again.

Some tech staff at state-owned banks emphasized that seeking “cloud high-performance computing” is limited. “Even if explored, it won’t involve core personal data like client details or account flows. Likely only for less sensitive scenarios.”

Beyond computing costs, hardware limitations pose another challenge.

Private deployment depends on modern browser APIs such as WebGPU and WebAssembly for inference and execution. Older browsers lack hardware acceleration, and high-security browsers need vulnerabilities patched and data encrypted—yet many aging office PCs cannot run newer browsers at all, creating terminal compatibility problems.

An internal tech staff at a state-owned bank who has built their own intelligent agent said that the system’s browser and environment requirements are high. “Many of our old devices can’t run the latest browsers, so deploying such ‘brain and hands’ digital staff is difficult.”

He added, "Just from the demos, it looks hard to promote internally," since many of the bank's PCs are too old to support recent browser versions.

He also told the central ledger that it’s hard to predict the actual effect of their self-developed agent; “Many branches are cautious.”

This creates a somewhat absurd scene: the financial industry is chasing the most advanced AI agents of 2026, but often relies on hardware and browsers from 2016.

Beyond explicit computing costs, hidden data reconstruction costs also hinder further AI development.

For example, traditional banks’ data architectures are based on relational algebra, designed for structured analysis. Key business semantics are often not stored in data tables but scattered across application logic or veteran employees’ brains. This “AI-ready” data foundation is extremely weak, causing agents to suffer semantic loss when trying to reconstruct the full business picture. To truly “raise” intelligent agents in complex scenarios, banks must invest heavily in restructuring underlying metadata models.
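What such a metadata rebuild might look like can be sketched minimally (all table and column names below are hypothetical, invented for illustration): a semantic layer that maps raw column names to the business meaning that today lives in application code or in veteran employees' heads.

```python
# Hypothetical sketch of an "AI-ready" semantic layer. Raw column names
# like "flg_03" carry no business meaning on their own; the metadata
# table records the semantics an agent would otherwise have to guess.

SEMANTIC_LAYER = {
    "tbl_cust.flg_03": {
        "meaning": "customer has an active wealth-management contract",
        "type": "boolean",
        "owner": "retail banking",
    },
    "tbl_loan.amt_2": {
        "meaning": "outstanding principal in CNY cents",
        "type": "integer",
        "owner": "credit",
    },
}

def describe(column):
    """Return the business meaning of a raw column, or flag semantic loss."""
    meta = SEMANTIC_LAYER.get(column)
    return meta["meaning"] if meta else "unknown (semantic loss)"

assert describe("tbl_loan.amt_2") == "outstanding principal in CNY cents"
assert describe("tbl_cust.flg_99") == "unknown (semantic loss)"
```

The point of the sketch is the second assertion: every column missing from the layer is exactly the "semantic loss" the agent suffers when it tries to reconstruct the business picture from raw tables.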

These multiple pressures mean that top-tier large and medium banks, with strong IT budgets and solid technical bases, may be able to “cage” their proprietary digital agents safely within sandboxes; smaller banks with limited budgets are likely to be excluded by high entry barriers.

The agent era’s financial arms race has never been fair.

The central ledger found that few small and medium banks show enthusiasm for OpenClaw.

A fintech company serving small banks told the ledger, “At this stage, it’s more hype than substance.”

They said, “We haven’t unified installation of ‘lobster,’ let alone for clients.”

But they also revealed that some tech leaders are exploring OpenClaw’s use in banking systems, “but the main concern is data security.”

Insurers quietly pursue “micro-innovations”

Compared to heavily armored banks, the insurance industry, with its lengthy chains and labor-intensive processes, shows more flexibility.

If banks are wearing full armor testing the waters, insurance is more like rolling up sleeves and cautiously stepping in.

Some top insurers have previously made bold attempts. The central ledger learned that a major insurer tried integrating OpenClaw with email, meeting platforms, and other office tools, piloting internal testing among thousands of employees for email handling, scheduling, etc.

However, such large-scale rapid deployment faced strict regulatory review.

Another senior tech staff at a state-owned insurer told the ledger that many employees have tried OpenClaw on personal computers, but the current experience is not particularly “amazing.”

“Agent is definitely the trend,” he said. “We are a top player, but our self-development capability is limited. In the future, we will likely follow domestic big vendors’ customized solutions.”

He pointed out that data security regulations are very strict: “Agent’s internal network permissions are complex, vary across departments and levels, and future permission settings need careful research.”

The ledger also consulted several small and medium insurers with strong “tech genes.” They all said that, due to data risks, OpenClaw deployment at the company level has not yet begun.

Though not officially launched, the future scenarios show some logical evolution.

Peng Huan, founder of InsurTech, noted that as a highly regulated industry, insurance’s compliance, data privacy, and “black box” model issues prevent high-autonomy AI agents like OpenClaw from fully embedding into core processes. For core business, reliance on human-machine collaboration within controlled workflows remains necessary, limiting AI to fixed, transparent segments.

He believes that OpenClaw’s initial practice in insurance will focus on “micro-innovations” in non-core areas, breaking down and applying it to low-risk, non-sensitive scenarios for small-scale testing.

This trend requires insurance tech teams to handle new threats like “AI poisoning” (fake data) and “prompt injection” (extracting internal secrets via prompts). They must set strict permissions at the code level to prevent AI from bypassing security and accessing sensitive data.

“Private deployment and micro-tuning with permission controls are complicated; high-level intelligence in the short term is unlikely,” Peng Huan said. “So, internal OA functions like email, meetings, and scheduling—non-core—are the most tolerant and likely to be implemented first.”
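The code-level permission control described above can be sketched as a simple allowlist checked before any tool call executes (all role and tool names here are illustrative, not drawn from any real deployment): even if a prompt injection convinces the model to request a sensitive action, the request is rejected outside the model.

```python
# Hypothetical sketch of code-level permission enforcement for an OA agent.
# The guard sits OUTSIDE the model, so an injected prompt cannot talk its
# way past it. All names are invented for illustration.

ALLOWLIST = {
    "oa_assistant": {"read_calendar", "draft_email", "summarize_meeting"},
}

def execute_tool(role, tool, handler, *args):
    """Run a tool call only if the role's allowlist permits it."""
    if tool not in ALLOWLIST.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return handler(*args)

# A benign OA call passes:
assert execute_tool("oa_assistant", "draft_email",
                    lambda to: f"draft to {to}", "boss") == "draft to boss"

# An injected attempt to reach sensitive data is blocked:
try:
    execute_tool("oa_assistant", "read_client_accounts", lambda: None)
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```

This matches the "fixed, transparent segments" approach: the model may propose anything, but only a short, auditable list of actions can actually run.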

“Super Agent” Duality

However, the greatest imagination for AI in insurance may not be at the company level but in empowering “super individuals” among agents.

After the industry shed millions of agents and bancassurance channels rose, frontline agents are betting on serving high-net-worth clients with complex asset-management needs.

OpenClaw, like a tireless digital butler, can track customer inquiries 24/7, build deep customer profiles, summarize communications, and generate daily to-do lists—liberating agents from tedious information sorting, allowing them to focus on emotional value and closing deals, exponentially increasing productivity.

The ledger noted that some agents are already using OpenClaw to build workflows—writing scripts, making short videos, creating personal brands, improving private domain conversions, interpreting policies, and enhancing professionalism; even training agencies are offering AI customer acquisition courses.

“‘Lobster’ helps life insurance agents a lot,” said an agent with deep experience. “But insurance is a niche, product info updates fast, so AI capabilities are limited. It’s better at principles, general knowledge, logic—scripts, videos, daily chores—‘lobster’ can handle in bulk.”

But this efficiency leap also raises privacy concerns.

Insurance agents handle core private data—detailed health disclosures, medical records, family finances. If intelligent agents become widespread, this private data could be further exposed.

Efficiency and privacy are colliding in the insurance agent’s phone. A single device running “lobster” could be both the most diligent assistant and the biggest leak risk.

Moreover, the security risks of employee personal use of agents are hard to regulate and even harder for agents to perceive.

Peng Huan pointed out, “Since agents operate mostly individually, such risks are hard to avoid and prevent. It can only be addressed by stronger regulation of large model providers, recommending use of domestic or top-tier models.”

He added that domestic computing power is mainly controlled by major tech firms and top model providers. “Strengthening regulation at the source makes overall risks controllable.”

Securities Firms: Tempted but Hesitant

The wave of OpenClaw is also infiltrating securities firms.

Research institutes were the first to sense the trend, sparking a wave of "lobster farming" explainers. Huatai Securities, Orient Securities, and others have launched "OpenClaw special courses," guiding institutions and investors on deployment and research applications.

But, similar to banks and insurers, securities firms face strict risk controls.

According to the ledger’s investigation, most domestic securities firms are in a stage of “high concern for technology but strict control over deployment.”

CITIC Securities, GF Securities, and others prohibit employees from installing OpenClaw on work computers.

An insider at a Beijing securities firm told the ledger that deployment is mostly personal; some teams or employees explore privately on their own devices.

Meanwhile, some firms haven’t issued formal bans but remain cautious. An internal staff at a Shanghai securities firm said the company doesn’t prohibit deploying OpenClaw in work environments and is closely monitoring the product.

Bans do not mean outright rejection.

For example, GF Securities has a preliminary framework for deploying OpenClaw. An internal source said they have initiated controlled AI agent applications within safe boundaries, using pre-approval, isolated sandbox networks, and minimal permissions, to explore and validate the technology.

He emphasized that the company has issued security reminders to all staff, forbidding personal installation of OpenClaw tools.

In specific business lines, research and advisory are the most promising “test fields.”

For instance, GF Securities has established an OpenClaw tech research team focusing on intelligent office, personal assistants, and research/advisory tools. These can handle initial data retrieval, document processing, and logic sorting, freeing analysts and advisors from text overload, allowing more focus on strategic analysis.

In contrast, for investment banking, the physical gap remains.

“Currently, using such tools in investment banking is limited,” said an internal person from a top investment bank. “One key step in IPO projects is financial data verification, which relies heavily on rigorous ‘confirmation’ procedures involving banks, clients, and suppliers.”

Additionally, much work involves deep field visits and on-site due diligence—tasks that AI cannot replace, as they depend on real-world interaction.

AI can process information faster than humans but cannot open factory doors.

Other Cases: Rogo on Wall Street

Besides lacking physical interaction, hallucination risks are a major compliance concern for licensed financial institutions.

A Shenzhen investment banker told the ledger that fault tolerance in finance is nearly zero; every statement and data point in prospectuses must be verifiable. But using general AI-generated drafts, staff cannot know the source of each statement.

“But if there were tools that could trace every sentence back to its source, usage enthusiasm might increase,” he said.

On Wall Street, a tool called “Rogo” is quietly gaining popularity, offering a potential breakthrough for agents in finance.

Last October, Rogo raised $750 million at a valuation of $7.5 billion. Its clients include J.P. Morgan, Nomura, and others, and it’s seen as a potential “alternative” for junior investment bankers.

Rogo is favored because it connects to core databases like Capital IQ and FactSet via APIs, allowing analysts to access real-time data, with AI conclusions explicitly citing sources and links, achieving “traceability.”

For example, if a user inputs: “Based on earnings call / investor presentation, analyze Google’s core AI initiatives over the past 24 months, identify changes, and extract KPIs related to AI adoption,” Rogo annotates each conclusion with data sources.

Clicking an annotation jumps to the original transcript segment, and the extracted data is automatically assembled into Excel sheets that fit existing investment workflows—completing "search → extract → structure" in about 12 seconds.
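The data shape behind such "traceability" can be sketched minimally (this is an assumed structure for illustration, not Rogo's real API): each AI conclusion is accepted only if it carries resolvable source references, which is what makes the output auditable.

```python
# Illustrative data shape for a "glass box" answer, NOT Rogo's actual API.
# The document IDs and offsets below are invented examples.

from dataclasses import dataclass

@dataclass
class Citation:
    document: str          # e.g. an earnings-call transcript ID
    span: tuple            # (start, end) character offsets into the source

@dataclass
class Conclusion:
    text: str
    citations: list        # list of Citation

def is_traceable(conclusion):
    """Accept a conclusion only if it cites at least one source."""
    return len(conclusion.citations) > 0

cited = Conclusion("AI-related capex rose year over year.",
                   [Citation("GOOG_Q3_call", (1042, 1180))])
assert is_traceable(cited)
assert not is_traceable(Conclusion("Unsupported claim.", []))
```

Enforcing "no citation, no conclusion" at the data-model level is the compliance property the Shenzhen banker quoted earlier was asking for: every sentence in a draft can be traced back to its source before it enters a prospectus.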

Rogo’s success in finance answers a core question: AI is not unusable; it’s about turning the “black box” into a “glass box.”

For data compliance, Rogo uses single-tenant deployment, providing each client with an independent infrastructure instance. Even competitors using Rogo are isolated physically, reducing data leakage risks.

Its subscription costs are about tens of thousands of dollars annually for 10-12 seats, which is affordable for Wall Street investment banks but still needs localization for domestic markets.

Market feedback indicates that Rogo still has limitations in handling complex financial models.

Besides Rogo, other overseas AI tools like Hebbia, a knowledge graph engine designed for finance, also offer traceability in document processing.

Final Cost Analysis

In the domestic market, Wind and Tonghuashun are also exploring similar AI agents.

For example, WindClaw, currently in beta, deeply integrates Wind’s financial data, enabling automatic reading of real-time quotes, financials, industry info, and compliance notices.

However, many pain points remain. Wind’s data is public, but investment banking projects often involve confidential IPO financials. How to securely and deeply connect such sensitive data with AI tools remains an open question.

Moreover, like banks and insurers, securities firms face a real “economic bill.”

In recent years, every business line has entered the deep-water zone of "cost reduction and efficiency improvement." High-frequency API calls to large models burn through tokens—spending to which management is highly sensitive.

An analyst who tried OpenClaw joked: “Using ‘lobster,’ I realized that even polite phrases like ‘received’ or ‘thank you’ burn real token funds.”

In the token economy, politeness has a price.

This “pay-per-use” uncertainty conflicts with the strict cost controls of research departments.

Under current budget constraints, even core tools like Wind are facing reduced procurement, shared accounts, or non-renewals. Convincing institutions to spend large sums on expensive tokens is challenging.

"There's no budget for Wind procurement now," said a researcher at a southern securities firm, "and reimbursements and meetings are also tightly controlled."

This extreme cost-awareness is a practical reason why many firms hesitate to act.

In the face of the efficiency revolution brought by OpenClaw, the hesitation of China’s financial industry is not resistance to innovation but constraints imposed by compliance, business characteristics, and budgets.

History repeats: every wave of technological change—Internet, mobile payments, blockchain—begins with fear, then restriction, then internal imitation, and finally full adoption.

The wheel of technological progress is unstoppable. But before AI truly takes a seat at the financial street’s desks, the industry still awaits a domestically optimized “best solution” that balances data security and computing costs.

The outline of this best solution may well be hidden in bank security sandboxes, insurance agents’ workflows, or in the meticulous token calculations of securities analysts.

It will not fall from the sky but grow from countless cautious explorations.

Risk warning and disclaimer

Market risks exist; invest with caution. This article does not constitute personal investment advice and does not take into account individual users' investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein suit their circumstances; any investment made on that basis is at their own risk.
