Emerging technologies like artificial intelligence (AI) should be used as “tools of opportunity, not as weapons of oppression,” President Biden remarked recently. He’s right.

But this exhortation makes his subsequent vow to work directly with “our competitors” to harness the power of AI “for good” all the more curious. Working with our competitors, like China, would only empower the Chinese Communist Party (CCP) to write the rules of the road for AI. And we don’t want China in the driver’s seat.

China is at the bleeding edge of using emerging technologies for oppression – both at home and abroad. Journalists and human-rights activists have long pointed to a state-run data system that uses AI to flag whole categories of people for detention in the western Xinjiang region.

Elsewhere, the Chinese government is partnering with technology companies like iFlytek to develop an AI-powered voice-recognition system that can automatically identify specific voices in phone conversations. Its mass facial-recognition systems operate under standards that segment the population by eyebrow size and skin color. “Sharp Eyes,” a sweeping public-private surveillance project, employs AI to analyze people’s movements, associations, medical records, online behaviors and more to create an omnipresent panopticon aimed at reinforcing social control.

And the CCP isn’t shy about exporting these AI technologies to like-minded governments. Since at least 2018, China has provided AI-driven digital monitoring capabilities to countries across Africa, Latin America, the Middle East and beyond. Today, China maintains 296 surveillance relationships in 96 countries, many via its designated “AI Champion” companies.

In Ecuador alone, Chinese telecommunications company Huawei helped build a network of over 4,000 cameras for its police and intelligence services. In Zimbabwe, Chinese firm CloudWalk maintains multiple AI and biometric data partnerships with Harare. In Malaysia, police wear body-mounted cameras developed by one of China’s AI Champions, tech startup Yitu. In Pakistan, Chinese policing technologies with predictive analytics and “smart city” surveillance capabilities empower Islamabad to monitor and control its citizens.

If international cooperation requires a common vision for how the technology is used, then America and China couldn’t be further apart.

Cooperation on AI development could also bolster the capabilities of the People’s Liberation Army (PLA). In 2019, Chairman of the Joint Chiefs of Staff Joseph Dunford argued that private tech companies like Google working in China provided “a direct benefit” to the Chinese military. (This is due to the CCP’s strategy of Military-Civil Fusion, which seeks, in part, to orient technological progress in service of its military goals.)

In fact, China has already developed multiple offensive applications of AI that will not redound to the benefit of the United States and our allies like Taiwan. According to the Department of Defense’s 2020 China Military Power Report, China maintains AI-enabled unmanned surface vessels reportedly intended for patrols in the disputed South China Sea. The same report revealed the CCP has already developed armed swarming drones that claim to use AI for autonomous guidance, target acquisition and “attack execution.”

And as early as 2021, the PLA intended to leverage AI to identify weak points in U.S. operating systems and launch precision strikes against these vulnerabilities. The PLA also ultimately wants to use AI to advance its offensive “cognitive domain operations,” possibly resulting in the development of deepfakes and targeted propaganda in our shared information environment. By collaborating with China on AI, we could be abetting our own ruin.

Make no mistake, China wants to win an AI race. Its 2017 plan to lead global AI development by 2030 explicitly seeks first-mover advantage at the expense of other nations. After all, as Russian leader Vladimir Putin declared in 2017, the country that leads in AI “will be the ruler of the world.” The CCP got the message.

So what can we do instead? Since tech development will almost always outpace attempts to govern it, the world needs appropriate guardrails for how these technologies are built and used. These safeguards should be imbued with American values like openness and transparency. Such an idea stands in contrast to the model the CCP proposed this summer, which requires generative AI services to “uphold the core socialist values.”

To contest this, the United States could first assert itself more in bodies that set international standards, which determine how tech is built and used throughout the world. China has long been expanding its influence in these fora—it’s time to counter that influence.

Next, U.S. companies can commit to open-sourcing elements of their AI technologies, so potential malicious uses can be identified and resolved. For example, OpenAI released a smaller version of its large language model GPT-2 in 2019. This is good practice and could be repeated with certain foundational AI models. Doing so, and having these changes originate within the United States and other democratic countries, could help us cement our version of these technologies as superior before China ships and standardizes its own offerings.

Finally, the U.S. government and especially Congress should do everything in their power to promote explainable AI, a set of processes that helps explain how a machine-learning algorithm arrived at its output. Users who stand to be affected by this transformational technology should be able to audit the behavior of these systems.

The tech should be designed in a way that allows humans to understand the outputs and outcomes it generates. Without explainable AI, we could see the proliferation of black-box systems like algorithms in China that surreptitiously segment the population based on ethnicity.

If America and freedom-loving nations don’t write the rules of the road for emerging technologies like AI, authoritarians will do it for us. America must instead contest China’s vision for AI and offer our own affirmative agenda in its place. Designing and deploying products imbued with our own values of openness and transparency to the rest of the world will help.

And layering on top of that the explicit promotion of our own vision for AI governance – one rooted in advancing human flourishing and individual liberty – would be even better. Collaborating with despots is not the answer; leadership is.

Jake Denton is a research associate in the Tech Policy Center at The Heritage Foundation.