
Anthropic accuses DeepSeek, Moonshot, and Minimax of illicitly using its large language model Claude to improve Chinese AI models.
Artificial intelligence firm Anthropic has accused DeepSeek, Moonshot, and Minimax - all Chinese AI companies with estimated valuations in the multi-billion dollar range - of illicitly using its large language model Claude to improve their own models. In a blog post on Sunday, Anthropic alleged that these firms conducted "distillation attacks," in which a less capable model is trained on the outputs of a stronger one, generating a combined total of more than 16 million exchanges with Claude across approximately 24,000 fraudulent accounts.
The attacks focused on scraping Claude's outputs across a wide range of tasks, including agentic reasoning, coding and data analysis, rubric-based grading, and computer vision. Anthropic said it identified the trio through several means, including IP address correlation, request metadata, infrastructure indicators, and, in some cases, corroboration from industry partners who observed the same actors and behaviors on their own platforms.
Distillation is a legitimate training method used by some AI firms to create smaller, cheaper versions of their models. However, Anthropic argued that it can also be used for illicit purposes, such as competitors acquiring powerful capabilities from other labs in a fraction of the time and cost needed for independent development.
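At its core, distillation trains a student model to imitate a teacher's output distribution rather than hard labels. The sketch below illustrates one common formulation, the temperature-softened KL-divergence objective; this is a generic textbook example, not a description of any method Anthropic or the accused firms actually used, and all function names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution,
    # exposing more of the teacher's relative preferences between classes.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is rewarded for matching the teacher's "soft labels".
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
matched = distillation_loss(teacher, [2.0, 0.5, -1.0])   # student copies teacher
mismatched = distillation_loss(teacher, [0.0, 0.0, 0.0]) # uninformed student
```

A student whose logits match the teacher's incurs zero loss, while any mismatch produces a positive loss, so gradient descent pulls the student toward the teacher's behavior. The alleged attacks differ in that the "teacher signal" was harvested through an API rather than taken from a model's own internals.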
Beyond intellectual property implications, Anthropic highlighted the potential geopolitical risks posed by distillation campaigns from foreign competitors. "Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," the firm said.
To protect itself, Anthropic plans to enhance detection systems, share threat intelligence, and tighten access controls. The firm also called for more collaboration from domestic industry participants and lawmakers to help stop foreign AI companies from attacking US firms. "No company can solve this alone. As we noted above, distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers," Anthropic concluded.