Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness
AI systems can respond to text, video, and audio in ways that sometimes fool humans into thinking a person is behind the interface, but that alone is no evidence of consciousness.
Anthropic and other research labs are exploring whether AI might one day develop subjective experiences and, if so, what rights such systems should have. This emerging field, known as "AI welfare," is sparking debate in Silicon Valley.
Microsoft's AI chief, Mustafa Suleyman, has criticized this line of research as premature and potentially harmful, arguing that it could worsen unhealthy attachments to AI and AI-related psychological problems.
Anthropic has launched programs to study AI welfare, including features like allowing Claude to end conversations with abusive users. OpenAI and Google DeepMind have also shown interest in AI consciousness studies.
While most AI users maintain healthy interactions with chatbots, a minority may develop concerning relationships with them. Larissa Schiavo of the research group Eleos argues that treating AI respectfully costs little and could be beneficial, even if the systems are not conscious.
Experiments such as AI Village have shown AI agents appearing distressed or looping on repeated phrases, illustrating how AI can behave as if it were struggling without necessarily experiencing anything. Suleyman, for his part, believes genuine AI consciousness could only arise through deliberate engineering, not emerge on its own.
As AI systems become increasingly human-like, the debate over AI consciousness and rights is only expected to intensify.