Is the AI Race Just a Copycat Race? What This Means for Markets and Adopters

Executive Summary

Anthropic accuses Chinese labs of copying its frontier model through 16 million fraudulent exchanges. Software stocks recently sold off sharply as AI enables firms to build cheap copies of established enterprise applications. As a consequence, firms may be building copies of software with copies of AI models – fast, affordable, but largely untested for regulated environments. For wealth managers, this raises a pressing question: how do you stay on top of what is safe, what is viable, and what is just the latest shortcut? This article helps you navigate that question.

Two Copycat Dynamics, One Structural Shift

Two distinct but related developments have dominated AI headlines in recent weeks.

At the model level, Anthropic published evidence on February 23 alleging that three Chinese AI laboratories – DeepSeek, Moonshot AI, and MiniMax – conducted large-scale “distillation” campaigns against its Claude models. According to Anthropic, the three labs generated over 16 million exchanges through approximately 24,000 fraudulent accounts, systematically targeting Claude’s most advanced capabilities in agentic reasoning, tool use, and coding. OpenAI raised similar allegations against DeepSeek earlier in February in a memo to the U.S. House Select Committee on China. The Chinese labs have not publicly responded to these specific claims at the time of writing.

At the application level, AI tools have triggered what markets now call the “SaaSpocalypse.” Since late January 2026, hundreds of billions in market value have been erased from software stocks. The iShares Expanded Tech-Software Sector ETF (IGV) entered bear market territory, falling roughly 20% from its recent highs. Several large-cap software names, including ServiceNow and Thomson Reuters, are down more than 25% year-to-date. What triggered the sell-off was a realization: if you know how to use AI – or have someone who does – you can now build in weeks what previously required expensive enterprise software licenses. The per-seat licensing model that powered enterprise software for two decades faces structural pressure.

The common thread: in the AI era, both frontier model capabilities and enterprise software functionality can be replicated faster and cheaper than at any point in history. The question for wealth managers is whether the models entering their workflows are actually safe.

Cheaper Does Not Mean Safer

This is the security dimension that gets lost in the headlines about market corrections and geopolitical tensions.

A NIST/CAISI evaluation found that DeepSeek models were 12 times more susceptible to agent hijacking attacks than evaluated U.S. frontier models. In simulated environments, hijacked agents based on DeepSeek sent phishing emails, downloaded malware, and exfiltrated user credentials. A separate assessment by Cisco found a 100% jailbreak success rate against DeepSeek R1, meaning the model failed to block a single harmful prompt tested. Several countries, including Italy, Taiwan, Australia, and South Korea, have restricted DeepSeek on government devices.

Now consider Anthropic’s latest allegations. If these distilled models are indeed built by extracting capabilities from frontier models while stripping away safety guardrails, the result is a system that may perform well on benchmarks but lacks the safeguards that regulated industries require.

For wealth managers operating under FINMA, GDPR, or the EU AI Act, this has direct consequences. In our previous newsletter on AI browsers, we examined how giving an AI tool autonomous control over your browser creates exposure points for client data, strategic positions, and confidential information. The same principle applies here: can this model be trusted with your clients’ data?

Models built through unauthorized distillation, with undisclosed training data, minimal safety testing, and data infrastructure linked to jurisdictions with mandatory government data-sharing requirements, do not meet that threshold for a regulated wealth management environment.

What This Means in Practice

None of this means wealth managers should wait. The tools are mature enough to deliver real operational value today, provided they are deployed with the right priorities. The question is how to build an architecture that remains secure, flexible, and independent of any single provider.

The Case for a Multi-Provider Strategy

The current landscape reinforces a practical conclusion that applies regardless of where you stand on the geopolitical debate: do not lock yourself into a single AI provider.

The competitive dynamics are shifting rapidly. Anthropic has released some of the most capable models for agentic reasoning and coding. Google’s Gemini ecosystem continues to expand. Open-source models are improving in quality. And OpenAI, while dominant in market presence, is facing increasing competitive pressure from multiple directions.

For a wealth management firm with CHF 500 million to CHF 10 billion in assets under management, the strategic implication is clear: build your AI architecture to be provider-agnostic where possible. Use workflow automation tools that allow you to switch models depending on the task, the cost, and the security requirements. Set temperature low for factual extraction tasks (as we covered in our January newsletter). Use frontier models for complex reasoning. Use smaller, validated models for routine processing. And ensure that none of your critical workflows depend on a single provider that could change pricing, terms, or capabilities overnight.
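The routing logic described above can be sketched in a few lines. This is a minimal, illustrative example, not any specific vendor's API: the names `ModelChoice`, `TASK_ROUTES`, and `route_model` are assumptions introduced for this sketch. The point is architectural: when model choices live in one configuration table, switching providers means editing that table, not rewriting every workflow.

```python
# Minimal sketch of provider-agnostic model routing.
# All names here are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelChoice:
    provider: str       # e.g. "anthropic", "google", "local"
    model: str          # model identifier at that provider
    temperature: float  # low for extraction, higher for drafting

# The most conservative choice doubles as the default for
# unknown task types: a small, internally validated model.
FALLBACK = ModelChoice("local", "validated-small-model", 0.0)

# Route each task type to a model based on capability need,
# cost, and security requirements.
TASK_ROUTES = {
    "factual_extraction": FALLBACK,
    "complex_reasoning":  ModelChoice("anthropic", "frontier-model", 0.3),
    "client_drafting":    ModelChoice("google", "frontier-model", 0.7),
}

def route_model(task_type: str) -> ModelChoice:
    """Return the configured model for a task type, falling back
    to the most conservative choice for anything unrecognized."""
    return TASK_ROUTES.get(task_type, FALLBACK)
```

In practice a workflow tool would call `route_model("factual_extraction")` and receive a low-temperature, locally validated model, while a complex research task is routed to a frontier model; governance reviews then only need to audit one table.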

As we discussed in our last newsletter on AI-sourcing, smaller firms have a structural advantage here. They can adapt faster than large institutions locked into multi-year enterprise agreements. A boutique wealth manager who builds a flexible AI stack today, with clear governance, controlled data flows, and the ability to swap providers, is better positioned than a large bank tied to a single platform.

The Bottom Line

The AI race has entered a phase where copying is the norm at every level. AI capabilities are becoming more accessible and more affordable, faster than anyone predicted. But not every model that performs well is safe, compliant, or appropriate for a regulated environment. In wealth management, the security question must come before the cost question.

For European wealth managers, the path forward requires three things: an AI strategy that puts data security and regulatory compliance first, a flexible architecture that avoids vendor lock-in, and the knowledge to distinguish between a genuinely safe tool and a hastily assembled copy.

If you are evaluating how to build or refine your firm’s AI strategy in this environment, connect with me on LinkedIn or book a conversation at gerevest.ai.

Sources:

  1. Detecting and preventing distillation attacks | Anthropic
  2. Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports | TechCrunch
  3. What’s Behind the ‘SaaSpocalypse’ Plunge in Software Stocks | Bloomberg
  4. AI fears pummel software stocks | CNBC
  5. Why SaaS Stocks Have Dropped | Bain & Company
  6. Evaluation of DeepSeek AI Models Finds Shortcomings and Risks | NIST/CAISI
  7. Evaluating Security Risk in DeepSeek and Other Frontier Reasoning Models | Cisco

About the Author: Dr. Andreas K. Janoschek specializes in AI applications for Asset & Wealth Management. Based in Geneva, he helps industry professionals stay ahead of competition by securely advancing with AI.

This newsletter aims to inform and does not constitute investment or legal advice. Always consult with qualified professionals for specific circumstances.

📧 Originally published in our AI x Wealth Management Newsletter

Subscribe on LinkedIn
