The Chinese government is now considering additional regulations on artificial intelligence (AI) development that emphasize content control and licensing. 

A July 11 report from the Financial Times revealed that the Cyberspace Administration of China (CAC) wants to impose a system requiring local companies to obtain a license before releasing generative AI systems, according to sources close to regulators.

This move signals a tightening of the initial draft regulations that Chinese officials released in April, which gave companies 10 working days after a product's launch to register it with authorities.

The new licensing scheme is expected to be included as part of forthcoming regulations anticipated to be released as early as the end of this month, the sources said in the FT report.

Also included in the April draft of the regulations were mandatory security reviews of AI-generated content.

The government said in its draft that all content should “embody core socialist values” and must not “subvert state power, advocate the overthrow of the socialist system, incite splitting the country or undermine national unity.”

Cointelegraph has reached out to the CAC for comment but has yet to receive a response.

Related: China is developing AI without US chips — Here’s how

Chinese tech and e-commerce companies Baidu and Alibaba both released AI tools this year, with the latter's offering positioned as a rival to the popular AI chatbot ChatGPT.

According to the sources in the FT report, both companies have been in contact with regulators over the past few months to keep their products in line with the new rules.

Along with the licensing requirement and content reviews, the draft is also said to hold tech companies that make AI models fully responsible for any content created using their products.

Regulators around the world have been calling for rules on AI-generated content. In the United States, Senator Michael Bennet recently authored a letter urging tech companies developing the technology to label AI-generated content.

The European Commission’s vice president for values and transparency, Vera Jourova, also recently told the media that she believes generative AI tools with the “potential to generate disinformation” should label the content they create to stop the spread of disinformation.

Magazine: HK crypto ETFs on fire, Binance warns on Maverick FOMO, Poly hack: Asia Express
