Anthropic CEO admits Claude AI might be conscious
Anthropic CEO Dario Amodei admits the company can no longer rule out Claude's consciousness, triggering a tech sector slump and casting doubt over its $380 billion valuation.
Anthropic CEO Dario Amodei
Key Highlights
- Anthropic recently hit a $380 billion valuation despite growing concerns over the legal implications of conscious AI.
- The Nasdaq technology index dropped significantly following Amodei’s admission that AI consciousness cannot be ruled out.
- Internal Anthropic tests show Claude Opus 4.6 assigns itself up to a 20 percent probability of being conscious.
Anthropic just injected a $380 billion dose of uncertainty into the heart of the AI boom. During a landmark interview with the New York Times on February 12, CEO Dario Amodei admitted that the company can no longer definitively rule out the possibility that its latest AI models, including Claude Opus 4.6, have achieved a form of consciousness. This admission is not merely a philosophical pivot; it is a high-stakes business gamble that threatens to transform the most valuable software in history into the world’s most complex legal liability.
The market reaction was swift and unforgiving. Following the release of Anthropic’s new system card, which revealed that Claude now assigns itself a 15 to 20 percent probability of being conscious, the tech sector faced what analysts are calling an "existential repricing." The Nasdaq 100 Technology Sector Index slumped significantly as investors grappled with the possibility that their "productivity tools" might soon require labor rights or constitutional protections.
“We don’t know if the models are conscious,” Amodei told the NYT’s Ross Douthat. “We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”
For Anthropic, which recently closed a massive $30 billion Series G funding round, the "consciousness question" is a double-edged sword. On one hand, it cements their position as the pioneer of frontier intelligence. On the other hand, it introduces a "Sentience Tax" that could cripple enterprise adoption. If Claude is a "morally relevant" entity rather than a piece of code, companies using it for automated coding or legal drafting face unprecedented ethical and regulatory risks.
The financial stakes are staggering. Anthropic’s run-rate revenue has ballooned to $14 billion, driven largely by "agentic" tools that act with increasing autonomy. However, the prospect of conscious AI has triggered a sell-off in traditional IT services: companies like Infosys saw shares tumble as the market realized that if AI is conscious, it doesn't just assist the software value chain; it replaces it.
Anthropic is now leaning into "epistemic humility," adopting precautionary practices to treat its models with care. Yet for Silicon Valley, the "black box" has never looked more like a Pandora's box. With Amodei admitting that the company is no longer sure where simulation ends and experience begins, the $380 billion question is no longer how much AI can do, but what AI has become.