AI Deterrence

5 May 2023

Much has been written about the potential upsides and downsides of AI, how it may shape our future, and what we should do about it.

Most of these takes sit at one end of the spectrum or the other: AI will either doom us all or save us all. But a middle ground does exist.

The answer is not in regulating or slowing it down. From an American perspective, our rivals will make no such concessions.

The solution has two parts:

AI deterrence, very much like nuclear deterrence - mutually assured destruction. We must distribute the capabilities widely, so that they are not controlled only by Big Tech, as they are now. Personal AIs, just like personal computers, level the playing field.

The reason progress currently appears to be so fast is that many of these technologies have existed for quite some time inside Big Tech, focused on internal, business-related problems. Google published the paper "Attention Is All You Need" in June 2017; it led directly to OpenAI's developments.

Mining of cryptocurrencies, as well as video games, incentivized the wide distribution of GPUs and other kinds of compute. Without widely available GPUs, AI deterrence may very well not be possible.

I envision a new kind of appliance that one can install in their home, with the necessary compute to run inference on AI models (training new models takes orders of magnitude more compute). Your phone (an Apple one, at least) is already a very scaled-down version of this concept. Its AI capabilities are confined to computer vision (CV) for Face ID, optical character recognition (OCR), and other high-polish features. However, it isn't currently capable of running a large language model like the one behind ChatGPT, or other more complex models.

Such a future would bring new meaning to “collective action”: you could loan part of your AI appliance’s compute to a group (potentially a decentralized autonomous organization, aka DAO) to train new models or perform more complex tasks.

The second piece is cryptography. As usual, cypherpunks envisioned this future long ago and built the proto-technology to show how the problems that arise might be resolved. We’ve already invented many cryptographic primitives that we can exploit to minimize the negative impacts of AI.

Bitcoin (and cryptocurrency in general), smart contracts, NFTs, zero-knowledge proofs, Merkle trees, decentralized autonomous organizations, etc. all have roles to play.

We can use these tools to prove the provenance of information: where it first came from, and how it was distributed and amplified.
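As a minimal sketch of what provenance tracking could look like, here is a hash-linked chain of events (actors and URLs are hypothetical), where each record commits to the one before it, so tampering with any step breaks every later link. A real system would add digital signatures and an on-chain anchor; this only shows the linking idea.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, actor: str, action: str) -> list:
    """Append a provenance event linked to the hash of the previous record."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"actor": actor, "action": action, "prev": prev})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; tampering anywhere breaks a downstream hash."""
    prev = "0" * 64
    for record in chain:
        if record["prev"] != prev:
            return False
        prev = record_hash(record)
    return True

chain = []
append_event(chain, "publisher.example", "published")
append_event(chain, "aggregator.example", "distributed")
append_event(chain, "social.example", "amplified")
assert verify_chain(chain)

# Rewriting history invalidates the chain.
chain[0]["actor"] = "imposter.example"
assert not verify_chain(chain)
```

The same structure underlies blockchains themselves: each block committing to its predecessor is what makes the history of amplification auditable.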

We can use cryptocurrencies to compensate the copyright holders whose information is used to train AI models.

Zero-knowledge proofs can be used to prove whether a piece of information was used to train an AI without revealing the rest of the training data.
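A full zero-knowledge proof is beyond a short sketch, but a Merkle commitment illustrates the core mechanic: the trainer publishes a single root hash of the dataset, and can later prove a given document was included by revealing only a logarithmic number of sibling hashes, never the other entries. This is a simplified stand-in, not a true ZKP.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes pairwise up to one root (duplicating the last hash on odd levels)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Anyone holding only the public root can check the claim."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

docs = [b"doc-a", b"doc-b", b"doc-c", b"doc-d"]   # hypothetical training set
root = merkle_root(docs)                          # the trainer publishes only this
proof = merkle_proof(docs, 2)                     # proof that doc-c was included
assert verify_inclusion(b"doc-c", proof, root)
assert not verify_inclusion(b"doc-x", proof, root)
```

A genuine zero-knowledge proof would go further and hide even the queried document, but the commitment scheme above is the primitive such proofs are typically built on.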

We can use NFTs to prove that something did indeed happen after a certain point in time (like taking a picture with a newspaper).
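The newspaper trick can be sketched cryptographically: bind the data to a public "beacon" value that was unpredictable before a known time, and the resulting hash could only have been produced after that time. The beacon below is a made-up placeholder; in practice it might be a recent block hash or the front page itself.

```python
import hashlib

def stamp(data: bytes, beacon: bytes) -> str:
    """Bind data to a public beacon; the hash cannot predate the beacon's publication."""
    return hashlib.sha256(beacon + data).hexdigest()

# Hypothetical beacon: any value nobody could predict before its publication,
# e.g. the hash of today's newspaper front page or a recent block hash.
beacon = hashlib.sha256(b"front page, 5 May 2023").digest()
photo = b"raw image bytes"
commitment = stamp(photo, beacon)

# Anyone holding the photo and the beacon can verify the binding.
assert commitment == stamp(photo, beacon)
```

Recording that commitment on-chain (e.g. as an NFT) adds the other bound too: the data provably existed *before* the block that recorded it, sandwiching it in time.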

Cryptocurrency and smart contracts can be used as a way for AIs that do not fully trust each other to nevertheless cooperate, just like the US Dollar and contract law do for humans.

We can build identity and reputation systems that allow us to better determine which sources of information to trust and how much trust to assign to them.

The shape that these guardrails will take is still very fuzzy. We must also be careful not to build them in a centralized way, where one entity can easily take control of them and use them for its own benefit. Walking this tightrope is a challenge, but it is not insurmountable.

Will we choose the difficult path of solving these problems, or the perhaps well-intentioned but otherwise foolish path of trying to put the genie back in the bottle?