China Sets the Stage for AI Guidelines as Tech Giants Unveil ChatGPT Rivals

In this article, we’ll look at the reasons behind China’s new draft rules for generative AI services and their impact on AI development as domestic tech giants roll out ChatGPT-style products.

Key Takeaways:

  • China’s Cyberspace Administration drafts first-ever rules for generative AI services
  • Alibaba and Baidu unveil their ChatGPT-style products
  • Draft rules emphasize content reflecting core values of socialism and avoiding false information
  • Data collection must respect privacy and intellectual property rights
  • China not alone in regulating AI, with Italy banning ChatGPT and the US considering certification process

Draft Rules Aim to Guide Generative AI Development

Generative AI refers to algorithms trained on vast amounts of data that can create content such as images and text.

OpenAI, a US-based company, developed ChatGPT, which has become hugely popular for generating responses to user queries. 

Recognizing the potential implications and rapid growth of generative AI, the Cyberspace Administration of China (CAC) drafted rules to ensure its proper management.

These rules, designed to lay the foundation for how companies develop generative AI products, are the first of their kind in China. 

The draft measures include guidelines on the type of content that AI services can generate, data collection, and algorithm development. 

The regulations are anticipated to take effect later this year, complementing China’s existing laws on data protection and algorithm governance.

Chinese Tech Giants Enter the AI Race

In recent weeks, China’s leading technology companies have been unveiling their own generative AI products in a bid to rival OpenAI’s ChatGPT. 

Alibaba introduced Tongyi Qianwen, its generative AI product, which it plans to integrate across various services. Similarly, Baidu launched its equivalent, Ernie Bot, for testing last month.

These tech giants’ entry into the AI market demonstrates China’s eagerness to keep up with the rapid pace of AI development. 

However, these products are still in the testing phase and are not yet available to the public. 

As analysts have noted, the CAC rules will likely affect how AI models are trained in the future, as China seeks to guide the development of this powerful technology.

Content Guidelines and Data Privacy

The CAC’s draft rules emphasize that generative AI services must adhere to certain content guidelines. 

The material produced by AI tools should align with the fundamental principles of socialism and must not undermine the authority of the state.

Additionally, AI services must avoid promoting terrorism, discrimination, violence, and other harmful content.

Companies are also required to ensure that the data used to train AI models does not discriminate against people based on ethnicity, race, gender, or other factors. They must prevent their AI services from generating false information.

When it comes to data collection for AI models, the CAC rules require that data must not contain information that infringes on intellectual property rights. 

If personal information is included in the data, companies must obtain consent from the individuals involved or meet other legal requirements.

These content guidelines and data privacy measures serve to protect users and maintain a responsible approach to AI development in China.

The Global Trend of AI Regulation

China is not the only country concerned about the development and potential implications of generative AI. In March, Italy banned ChatGPT, citing privacy concerns.

The US Department of Commerce has also requested public comment on whether AI models should undergo a certification process.

Leading tech companies like Google and Microsoft have acknowledged that their AI bots are not perfect and have expressed openness to regulation. 

OpenAI, for example, states on its website that it believes powerful AI systems should be subject to rigorous safety evaluations, and that it actively engages with governments to determine the most effective form of regulation for AI technology.


As generative AI technology advances, China has moved quickly to establish guidelines and regulations to ensure its responsible and secure development. 

These draft rules, set to come into effect later this year, outline the content and data handling requirements for AI services in the country. 

With other nations also expressing concerns and implementing measures, it is clear that global AI regulation is becoming a growing priority for governments and tech companies alike.


Reviewed By



Judith Harvey is a seasoned finance editor with over two decades of experience in the financial journalism industry. Her analytical skills and keen insight into market trends quickly made her a sought-after expert in financial reporting.