
The race for AI dominance is fueling an international arms race

DeepMake · 5 min read

Read the news on any given day and you’ll see media firestorms about the dangers of AI and fearmongering over robots being developed for warfare. Simultaneously, countries are diving headfirst into AI development and new software is emerging daily. The scramble for AI dominance has heart rates rising, reminiscent of the nuclear arms race of the twentieth century.

Part of this comes from changing attitudes toward AI. Many major AI developments are currently open source or academic papers that are available to everyone. That could change — there’s lots of talk about privatization and regulation in the industry, right when countries all over the world are trying to outdo each other in AI development. We’re this worried about AI when we know what it can do; what will happen when we no longer know what it’s capable of?

You might think that AI can’t be a major threat, but consider these possibilities:

  • LLMs and applications such as ChatGPT could be exploited to generate mountains of content to undermine political and business rivals.
  • AI tools could create fake news, propaganda, or other disinformation campaigns. Sounds like a dystopian novel, but it’s possible today.
  • Governments or businesses could employ AI to create massive amounts of comments, articles, or images to manipulate public opinion and sow chaos in other countries.
  • Militaries could field robotic soldiers that use AI to target their weapons.
  • Unknown and hidden AI techniques could be used to attack us in ways we can’t even comprehend.

This is already happening on a small scale. News outlets have reported AI being weaponized against political rivals, deepfake technology has created fake news anchors who spread pro-Chinese and anti-U.S. propaganda, and platforms like BuzzFeed are reportedly planning to use AI tools to enhance and personalize content. As AI and deepfakes improve, the threat of misuse grows. Given the power of these tools and the speed at which they operate, it’s more of a worry every day.

This is serious stuff. Yet one thing is keeping AI accountable today: most of the tools and code are open source. That is incredibly important for managing the ethical use of AI, because anyone can inspect the source code and see what’s going on.

Closing AI — what could go wrong?

Don’t count on things staying this way forever, however; this safeguard may soon be a thing of the past. There is talk in several countries of locking AI development away from the public eye. If that happens, imagine a country like China developing its own products under a cloak of secrecy. We’d all be left in the dark; no one would know who has the most advanced AI technology. That kind of uncertainty could push every developed country to work on AI in secret, and nations that support AI development would gain a technological advantage over countries that regulate it, creating a power imbalance not unlike nuclear weapons development.

As if an AI race weren’t enough, another danger of privatizing AI and shrouding it in secrecy is the risk of enabling mass censorship. After all, the capability is already there, waiting to be deployed. One way this happens is when technology designed to detect child sexual abuse material (CSAM) oversteps its bounds, scanning private data and mislabeling files, with potentially ruinous results for innocent people. Meanwhile, in China, the government could hypothetically expand its “Great Firewall” censorship system with a technology like ChatGPT to identify content based on context, not just keywords, for more thorough internet censorship.

Detecting CSAM is itself a positive use of the technology, but from there it’s not a big leap for a company or government agency to seize control of an AI system and use it to censor specific types of content or viewpoints. That could stifle free speech and suppress ideas. When you consider AI’s ability to analyze vast amounts of data and make decisions in real time, this is an extremely disturbing prospect.

These issues are merely the tip of the iceberg. There are so many unknown variables and possibilities waiting to unfold.

Now what? Keep AI open and ethical.

So, how do we keep this Jenga tower from tumbling down on us? The best way to keep these risks in check and take the edge off the threat is through an open dialogue about AI. Transparency, along with sharing knowledge and resources, is essential. Unlocking the power of collaboration by keeping AI open source and open core is key to responsible AI development.

As we’ve said before, establishing a code of ethics in AI development is crucial. This code of ethics would help keep AI systems safe, reliable, and trustworthy while discouraging harmful use and development.

Furthermore, we need to educate the public on AI awareness and risks. As people learn more about what AI can do, and its potential dangers, they’re more likely to support responsible use and development. Replacing fear with knowledge will benefit us all.

The prospect of an AI race plays out a lot like the movie “Dr. Strangelove.” Just like a nuclear arms race, an AI race is an exercise in mutually assured destruction: if one country or business deploys AI against another, we all lose.

[Image: Major Kong riding the bomb in “Dr. Strangelove”]

Let’s not be like Major Kong riding a bomb to our own destruction.

DeepMake supports keeping AI open and public, establishing a clear code of ethics for AI development, and public awareness and education. A collaborative approach to AI development ensures that AI is used safely and ethically, for the betterment of all people. Click here to read our Ethical Manifesto.