Last week at the annual World Economic Forum in Davos, G42, the Emirati technology giant, seemed on top of the world.
Hosting events with tech luminaries at the so-called ‘AI House Davos’ — which G42 launched with other global artificial intelligence heavyweights — the company was at the center of the hottest topic at the elite conference in the Swiss Alps. During the event, it even announced a new, cutting-edge AI firm called Analog, run by a former Microsoft AI executive.
The company’s outward confidence belies its rocky start to the year. The House’s vocal Select Committee on the Chinese Communist Party has called on the Biden administration to consider placing export controls on the network of firms affiliated with G42, after reports of U.S. intelligence concerns about the firm’s extensive ties to China emerged in November. The firm has denied any connection to the Chinese government or military and stated that it is phasing out hardware from Chinese firms like Huawei in order to maintain its partnerships with U.S. firms like OpenAI and Microsoft.
The controversy around G42 and its chief executive, Peng Xiao, looks set to continue. In the meantime, it has shone the spotlight on a broader debate: How should Washington regulate emerging AI technology, and how far should the government go to ensure that firms from adversaries like China can’t catch up with leading American companies like OpenAI?
So far, the Biden administration has focused on limiting China’s access to the kinds of advanced semiconductors and chip manufacturing equipment firms need to develop AI. But some in Washington are now advocating for more expansive controls to regulate core AI technology with potential dual uses, such as future versions of large language models like ChatGPT. Experts fret that such models could do more than help high schoolers cheat on their homework: they could give the U.S.’s rivals significant help in developing their own advanced weapons systems.
“There is very little consensus on what to do around large language models,” says a congressional aide who asked not to be identified by name. “Our concern is around the technology itself and that we maintain that advantage as long as we can. There is an appreciation that there is a huge amount of risk right now.”
That risk was underscored earlier this month when the South China Morning Post reported that Chinese tech firm Baidu’s answer to ChatGPT was linked to the country’s military research. Baidu vigorously denied any link to the PLA, but its stock nevertheless plunged 15 percent in the days following the report.
One of the big challenges in crafting regulation to diminish the risk of these models is the unique way AI has developed, with many models available to anyone online. Meta’s Llama 2 is one such ‘open source’ large language model.
There are real advantages to open source collaboration: researchers and academics can access these models easily and cheaply, and use them to discover technological breakthroughs. Such research can even occur across borders: in 2020, the U.S. was the top collaborator for Chinese researchers publishing AI-related papers, according to research from the Center for Security and Emerging Technology.
But critics say this setup leaves the door wide open for China, and other adversaries, to utilize America’s latest advances.
“If you want to worry about something, I would worry more about open source models, like Llama 2,” says Paul Triolo, who analyzes Chinese technology issues at Albright Stonebridge Group. “There is unlikely to be any significant transfer of AI model development related technology via G42 to China. Chinese developers, for example, already have access to advanced open source models such as Llama 2 that they are already leveraging to develop optimized models and applications.”
Examples of this scenario are already surfacing. When 01.AI, the Chinese AI firm founded by Taiwanese tech pioneer Kai-Fu Lee, released its own large language model, developers pointed out that some of its technological architecture seemed to be based on Llama 2 and that the company had not credited its source. 01.AI later agreed to cite Llama 2.
“It’s a very thorny problem, both technically and politically,” says Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “The debate is largely people concerned about AI safety arguing that these models will develop dangerous capabilities soon, and then the pro-open source side arguing that’s just speculation, and presenting lots of pretty compelling reasons why open source is valuable.”
In October, President Biden issued a sweeping executive order on AI which charged the Commerce Secretary with soliciting input from the private sector and academia on this dilemma. When the parameters “for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation,” the executive order stated, “but also substantial security risks, such as the removal of safeguards within the model.”
AI is still in its relative infancy and its future is hard to predict, adding to the difficulty for governments in setting controls around its spread. Chinese firms have already proved able to continue developing technologies while circumventing U.S. export controls: Witness Huawei’s development of a new smartphone, the Mate 60, despite measures designed to restrict its access to advanced semiconductors.
“The tricky part is that large language models, the AI du jour, are new and still poorly understood…which makes calibrating policy here hard,” says Gary Marcus, a New York University professor and AI entrepreneur. “At the same time, I don’t think the current technology is the last word on AI — the AI race may ultimately be won by those who develop new and better technologies that are safer, more reliable, and more sophisticated.”
Paul Scharre, executive vice president at the Center for a New American Security (CNAS), a think tank which closely monitors national security issues, argues that the AI field’s fast moving nature is not a reason to avoid putting in place regulations that at least delay China’s advances: instead, he says, it provides a “reason to learn faster with regulation.”
“All export controls are about delaying a country’s ability to get access to the technology, it doesn’t really stop them forever, due to intellectual property theft, espionage, or simply China developing it themselves,” he says.
The specter of China should, he adds, encourage the U.S. to put regulation in place over AI. “I’ve heard the argument made, we don’t want to restrain the AI community or regulate AI because we need to stay ahead of China,” he says. “But this is a case where actually, by having no regulations, we’re giving China a massive competitive advantage.”
The latest illustration of the concerns about G42, and how to regulate the company’s sprawling network, comes in the form of Beyond Limits, a California-based AI firm spun out of technology from NASA’s Jet Propulsion Laboratory. In 2020, G42 led a $133 million investment round in Beyond Limits, which accelerated the firm’s global expansion, including into Asia. Around the same time as the G42 investment, Beyond Limits opened subsidiaries in Hong Kong, Beijing and Shenzhen, according to corporate records. The company currently has a dozen employees in China.
Last March, Beyond Limits signed a $10 million letter of intent with the government of Dongying, a city in Shandong, to explore the possibility of creating an advanced AI R&D center in a state-run development zone. In a Chinese article about the partnership, pictures show a Beyond Limits executive shaking hands with an official in charge of the development zone. “Next,” the article states, “both sides will strengthen communication exchanges on the basis of mutual benefit and win-win results.”
In a statement to The Wire, Beyond Limits said it did not move forward with the Dongying center plan, and that the company launched its Asia business before G42’s investment. “Our China operations have no connection with anyone from G42,” the statement said, adding that G42 has no influence on the company’s strategy. “We do not have any connection to any U.S. Intelligence concerns related to G42.”
As G42 and its partners are discovering, when it comes to the U.S.-China AI competition, many in Washington are skeptical that any kind of win-win result is possible.
Katrina Northrop is a former staff writer at The Wire China, and joined The Washington Post in August 2024. Her work has been published in The New York Times, The Atlantic, The Providence Journal, and SupChina. In 2023, Katrina won the SOPA Award for Young Journalists for a “standout and impactful body of investigative work on China’s economic influence.” @NorthropKatrina