BEIJING, December 27, 2025 – The Cyberspace Administration of China (CAC) has released draft regulations for AI systems that simulate human personalities or engage users emotionally, requiring providers to implement strict safety measures, user notifications, and alignment with national values.

The “Interim Measures for the Management of Artificial Intelligence Human-Like Interactive Services,” open for public comment until January 25, 2026, covers platforms delivering text, image, audio, or video interactions that mimic human traits, thinking patterns, or communication styles.

Providers must establish full lifecycle safety systems, including algorithm audits, data security protocols, and personal information protection. Services reaching 1 million registered users or 100,000 monthly active users require security assessments and reporting to provincial authorities.

Users must receive clear notifications that they are interacting with AI: at login, every two hours of continuous use, and upon detection of overdependence.

Providers are required to intervene when users show extreme emotions or signs of addiction, for example by prompting breaks or recommending professional help. Prohibited content includes threats to national security, misinformation, violence, and obscenity. Services must embody "core socialist values."

Additional protections apply to minors, including time restrictions and guardian controls. The draft addresses psychological risks such as blurred human-AI boundaries or manipulation.

Lin Wei, president of Southwest University of Political Science and Law, described the rules in a CAC commentary as proactive governance that balances innovation with social stability.

The measures build on China’s 2023 generative AI regulations and 2025 labeling requirements, forming part of a layered oversight framework while promoting AI as a strategic industry.

Similar concerns have prompted action in other jurisdictions, including EU rules on emotion-recognition systems under the AI Act and U.S. investigations into AI companionship services.