Much has been written over the past six months about how Beijing is grappling with the regulation of generative artificial intelligence (AI), in the process corralling large language models (LLMs) and the platforms that use them via chatbots. In this endeavor, Beijing has arguably been out in front of other governments, particularly the U.S., by building a set of tools rather than trying to develop sweeping regulation such as the European Union’s AI Act.
Chinese firms are clearly among the global leaders in the generative AI space, with major tech platform companies like Alibaba and Baidu, telecoms giant Huawei, and social media companies like Tencent and Bytedance all investing heavily in the technology. Along with leading generative AI companies in the U.S., these Chinese firms are continuing to develop and iterate generative AI models. They are also participating in a robust regulatory dialogue with the likes of the Cyberspace Administration of China (CAC), which has become the country’s default generative AI regulator because of its content, data privacy and security mandate.
However, the world of AI governance has changed dramatically since the release of ChatGPT a year ago. Led by the Biden administration, alongside multilateral efforts such as the G7 Hiroshima process, Western governments have sharpened their focus on so-called “frontier AI models” — think GPT-5 — and the potential national security risks they pose. The Biden administration has moved quickly to show leadership on how governments should think about controlling next-generation AI systems, launching a rapid series of initiatives, from a Blueprint for an AI Bill of Rights, to the White House Voluntary Commitments, culminating in last week’s executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Unlike the Chinese approach to regulation, which has focused on the nuts and bolts of generative AI — from the validity and political correctness of input data to the accuracy of the output — the U.S. has zeroed in on the national security implications of malicious actors getting their hands on advanced models. The fear is that access to such models could enable more effective cyber operations, facilitate access to biological, chemical, or nuclear weapons formulas, and generally cause havoc if open sourcing of algorithms becomes the norm and governments are unable to track who is training and deploying them.
The European Union, which has devoted significant effort over the past five years to an inclusive process leading to its AI Act, has had to scramble to add generative AI to the legislation’s language. The EU’s approach has been to classify AI systems into four risk categories, from ‘unacceptable’ to ‘minimal’. The Act does not use the term “frontier AI”, but is now more focused on the applications of advanced AI models that could involve major national security risks.
The entire global debate about AI regulation has ramped up over the past six months, mostly between AI developers and proponents of so-called ‘AI doomsday’ scenarios, who argue that governments need to start regulating AI now to prevent humans from losing control of the technology. This debate has been largely confined to the U.S. and select developed countries, and has been much less prominent in China, where the government and companies have concentrated on the benefits of deploying advanced AI models.
THE U.K. AI SAFETY SUMMIT: A WATERSHED MOMENT
The many issues around AI regulation came together last week at Bletchley Park outside London, the site where pioneering high-performance computers were used to break German codes during World War II. The U.K. AI Summit, driven by Prime Minister Rishi Sunak, was an attempt to enlist a significant number of countries and leading AI companies, along with civil society organizations, in a new effort to establish a multilateral process for developing a regulatory framework for frontier AI systems.
A huge debate in the run-up to the Summit was the status of China, and whether to invite a Chinese delegation at all. Many ‘like-minded’ countries believed that they should first decide on their approach to frontier AI, and only then determine how to include an authoritarian country with huge stakes in the game. In the end, those asserting that leaving out China would doom any effort to reach a global agreement won out. But it was not pretty. Originally, it appeared that the Chinese delegation, eventually headed by Ministry of Science and Technology (MoST) Vice Minister Wu Zhaohui, would not be invited to Day Two of the Summit, which would feature the ‘like-mindeds’ alone.
A strange compromise was reached that saw Wu included in that day’s ministerial meeting, but excluded from the ‘like-minded’ meeting later on, which produced a now iconic group photo of senior officials and leading industry figures. Adding to the confusion, China did sign the Bletchley Declaration on Day One; but critically, the Chinese companies present at the Summit — Alibaba and Tencent — did not back a key joint agreement signed on Day Two pledging to allow governments, including those of the U.S., U.K., and Singapore, to test their models. By contrast, major Western firms such as OpenAI, Google DeepMind, Amazon, Anthropic, Mistral, Meta, and Microsoft did sign this agreement.
WHERE CHINA FITS INTO THE AI GOVERNANCE GAME
While the U.K. and Sunak are clearly seeking a role as the global convener for a more inclusive discussion of frontier AI regulation that includes China, the exact nature of this inclusion remains heavily clouded coming out of Bletchley. There are several major reasons for this.
First, the ‘like-minded’ countries are grappling with how to include China in different parts of the construction of the evolving regulatory framework. This means having to differentiate between Chinese government officials like Wu; Chinese regulators such as CAC, which wears both a government and Communist Party of China (CPC) hat; Chinese academics; and Chinese AI companies. Importantly, some of the leading Chinese AI companies, such as Huawei and Bytedance, are current or potential targets of U.S. export controls or other regulatory action.
Within China, there is also a major debate about which organizations should be involved in international discussions around AI governance. MoST, which had a heavy hand in drafting China’s New Generation AI Development Plan in 2017, has played more of an advocacy role for technology development, while CAC, as a regulator, focuses on content and data privacy/security. Other key ministries and commissions, such as the Ministry of Industry and Information Technology (MIIT) and the National Development and Reform Commission (NDRC), also have major roles overseeing key pieces of the technology industry and the AI stack, from semiconductors to cloud services.
Just before the U.K. AI Summit and the Biden administration’s AI executive order, China released the Global AI Governance Initiative at the Belt and Road Forum. This document, drafted by CAC and MoST, lays out Beijing’s high-level vision for AI governance at the international level. It seeks to position China as a supporter of the preferences of the Global South on AI governance, and also takes a not-too-veiled shot at U.S. efforts, through export controls, to constrain the supply of technology key to AI development. The statement came on the heels of an August announcement that the BRICS countries had agreed to launch an AI study group.
Following the Bletchley Park meetings, officials in Beijing will seek to determine how China will engage in future international fora on AI governance. As part of this effort, they will need to develop guidance for the country’s academics and companies on participating in the growing number of global dialogues, codes of conduct, and other initiatives around the development of frontier AI models. Some leading Chinese AI companies, such as Baidu, which was invited, did not attend the U.K. AI Summit, likely because they did not receive clear guidance from government officials. Others, such as Huawei and Bytedance, or Kai-Fu Lee’s new AI startup 01.AI, were almost certainly not invited, possibly reflecting political sensitivities, or a lack of detailed knowledge on the part of the Summit organizers about who the leading second-tier Chinese AI firms are. Lee’s 01.AI, which is developing an open source model, is already valued at $1 billion, on par with leading second-tier U.S. and Canadian AI firms such as Anthropic, Inflection, and Cohere. Given Lee’s stature within the industry, his participation in future summitry will be key to watch.
The Chinese companies that did attend the Summit, Alibaba and Tencent, sent lower-level regional leaders, not their CEOs. By contrast, all the main U.S., Canadian, and U.K. companies that participated in the Summit sent CEOs or other very senior officials: Sam Altman from OpenAI, Mustafa Suleyman from Inflection, Elon Musk from Tesla, Brad Smith from Microsoft, and others. It is likely that invitations were sent to the CEOs of Alibaba, Tencent, and Baidu, but none decided to show up at Bletchley Park. This is significant, as leading AI company bosses have been at the center of U.S. and U.K. processes over the past six months. The CEOs of the major AI players — Google, Anthropic, and OpenAI — were all present at the signing of the White House Voluntary Commitments in July.
Chinese scientists are likely to be part of an Expert Advisory Panel, headed by Turing Award winner and machine learning pioneer Yoshua Bengio, that was announced at Bletchley Park. This group will publish a “State of the Science” report on the capabilities and risks of frontier AI before the next summit, to be held in six months in South Korea. U.K. officials touted other key documents showing that leading AI companies were on board, such as a pre-Summit press release in which six leading companies, all from the U.S., published safety policies. Having companies sign on to such voluntary commitments has now become part of key international initiatives, from the White House Voluntary Commitments, to the G7 Hiroshima Process Code of Conduct, to the final statement from the U.K. AI Safety Summit.
DEVELOPING FRONTIER AI MODELS
A huge unanswered question coming out of what is now the Bletchley Park process is how leading AI companies will participate in all of these voluntary initiatives, which involve signing up to guiding principles for model development and testing. The Biden AI executive order, for example, will eventually require cloud services providers to turn over a significant amount of information on customers who are training advanced models in the cloud and on large data center clusters. Already, most leading U.S., Canadian, and U.K. AI companies have signed on to some of these initiatives.
While the goal of the U.K. AI Summit was to include China, it remains unclear whether Beijing will allow the many companies in China developing AI models to sign on to any such initiatives. Beijing will likely view them as U.S. or ‘like-minded’ branded. Allowing leading Chinese AI company CEOs such as Pony Ma of Tencent and Robin Li of Baidu to participate would be a major sign that Beijing is taking the process seriously. If Chinese companies are not included in codes of conduct and agreements on government testing, this will undermine the Bletchley Park process’s effort to ensure greater global interoperability across the entire frontier AI governance framework.
There is also a major bilateral complication hanging over the effort to engage the Chinese government on AI governance. The U.S.’s October 17 export control package significantly tightened restrictions on the export of advanced GPUs to Chinese companies, including all those developing the large language models that governments are now seeking to regulate via the Biden order and the Bletchley Park process. The controls introduced “performance density”, a new metric that expanded their scope to cover the kind of GPUs that U.S. industry leader Nvidia developed last year to fall just under the previous performance thresholds. Some Chinese companies may have stockpiled advanced GPUs over the past year, but all Chinese generative AI companies and cloud services providers are concerned about their long-term access to the advanced hardware needed to train models.
IS AI GOVERNANCE NOW A FRONT BURNER ISSUE IN BEIJING?
Where do we go from here? First, Beijing will likely retaliate for the October 17 controls after the November 15 meeting between Presidents Biden and Xi in San Francisco. Chinese officials were particularly upset at the inclusion in the latest rules package of leading domestic GPU firms Biren Technology and Moore Threads. Beijing could respond with further restrictions on critical minerals, or by targeting U.S. firms’ operations in China.
Second, Beijing will likely reconsider how it approaches bilateral, plurilateral, and multilateral engagement on AI governance. One of the deliverables from this week’s meeting between the presidents will likely be an AI governance working group, but it may only tackle narrow issues, such as autonomous weapons systems. Between now and the next Bletchley Park process summit in South Korea, Beijing will have to decide how its government officials and regulators, its academic and scientific leaders on AI, and its leading AI companies engage in the process.

Third, it remains unclear where Beijing will come down on the open sourcing of advanced AI models. The Biden executive order appears to cast a wide net over open source models, and has already drawn criticism from some leading figures in the industry. A number of leading Chinese generative AI developers have already open sourced some of their models, and Alibaba and Baidu are hosting open source U.S. models such as Llama 2 and offering access through their cloud services.
Fourth, it appears that Beijing has given the green light to Chinese AI researchers’ participation in groups that are taking a hard look at AI governance and safety, including with key participants in the U.K. AI Summit. A number of Chinese AI scientists participated in a short paper published last week on AI safety that includes several policy recommendations; the lead authors include Bengio and another machine learning pioneer, Geoffrey Hinton, who recently left Google. Both Bengio and Hinton are part of the ‘AI doomsday’ camp of AI scientists and intellectuals who believe more regulation is needed now to slow the technology’s development.
But the big issue will remain how Beijing views the new trend of governments gently arm-twisting leading AI companies into agreeing to voluntary commitments and codes of conduct, and into turning over things like red-teaming test results and customer data to help governments track the development of advanced frontier AI models.
In this regard, U.S. government export controls on GPUs are working strongly against any attempt to get Beijing to consider allowing Chinese companies to fully participate in initiatives involving the U.S. or other allied governments. This threatens to undermine U.K. government efforts to include the world’s second largest economy and its leading AI firms in what British officials are touting as a “truly global consensus.” The next six months in the run-up to the South Korea summit will be critical in determining the future direction of China’s participation in global AI governance and in building a regulatory framework around frontier AI models. Stay tuned.
Paul Triolo is a Senior Associate with the Trustee Chair in Chinese Business and Economics at the Center for Strategic and International Studies.