We published our video on zero-knowledge proofs!
Surprisingly, making this video took a lot of work. Zero-knowledge proofs for coloring are one of those algorithms that, in hindsight, seem beautifully simple and clean. But that’s just an illusion—there’s actually a lot going on behind the scenes. We struggled with deciding how in-depth to go and which applications to discuss. In the end, the video covers a bit of everything, and I hope different viewers will find something that sparks their curiosity. If that’s the case, you should definitely check out some slower, more in-depth sources; zero-knowledge proofs are realistically too difficult to understand from a 20-minute video. For example, the following book on cryptography is a classic: https://www.wisdom.weizmann.ac.il/~oded/foc.html
Here’s a list of topics that did not make it into the video.
1. How to reduce satisfiability to coloring?
We pointed out that almost everything can be reduced to 3-coloring. This is closely connected to our video on NP-completeness—many problems can be formulated as “here is a (polynomial-time) algorithm checking solutions: can you find an input that is accepted by it?” Such an algorithm can be represented as a circuit, and each circuit can be thought of as a collection of constraints on Boolean variables—that is, the satisfiability problem.
Here’s how you can convert a concrete satisfiability problem into graph coloring with three colors. The image below shows a satisfiability problem asking us to set binary variables to some values so that the formula is satisfied. We assume that each constraint in the problem is a disjunction of a few variables or their negations; any satisfiability problem can be converted into this form.¹ Below the formula is an equivalent graph 3-coloring problem.

And here’s the explanation for the picture:

Explanation of the parts of the graph:
Each part—usually called a gadget—plays a specific role in translating the satisfiability problem into a coloring problem.
Triangle Gadget:
There’s a triangle where each vertex must be colored with one of three colors. In the second picture, I already colored the nodes of the triangle with red, blue, and green. This makes it simpler to talk about the coloring; instead of saying “This node has to have the same color as the top node in the triangle,” we can say “This node has to be green.”
Variable Gadget:
We represent each variable with an edge between two nodes. All these nodes are connected to the green node, which means that in any valid solution, the edge is either colored red-blue or blue-red. This represents whether the variable is true or false.
Clause Gadget:
For a clause like x OR y OR z, we think of it as “the only forbidden combination is x, y, and z all being false.” Our task is to construct a gadget that eliminates this one forbidden combination while allowing all others. Here’s how: if both x and y are false (represented as their edges being colored red-blue), the node labeled x OR y must be blue. Similarly, if z is also false, the node labeled x OR y OR z must be blue; but that final node is connected to the blue and green nodes of the triangle, so it cannot be blue, which rules out the forbidden combination. You can verify that any other combination is acceptable.
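If you want to convince yourself that the OR gadget behaves as claimed, here is a small brute-force check in Python. It is a sketch under my own assumptions: the wiring below is the standard two-input OR gadget (an inner triangle whose two inner nodes are attached to the input literals), and the node names i1, i2, out are mine, not labels from the picture.

```python
from itertools import product

RED, BLUE, GREEN = "red", "blue", "green"   # red = true, blue = false

def or_gadget_colorings(color_a, color_b):
    """Yield every valid coloring (i1, i2, out) of a two-input OR gadget
    whose input literal nodes already have colors color_a and color_b.
    Gadget edges: a-i1, b-i2, plus the triangle i1-i2, i1-out, i2-out."""
    for i1, i2, out in product([RED, BLUE, GREEN], repeat=3):
        if i1 == color_a or i2 == color_b:        # edges to the input literals
            continue
        if i1 == i2 or out == i1 or out == i2:    # the inner triangle
            continue
        yield i1, i2, out

# Literal nodes are attached to the green node, so they are red or blue.
# Claim: if both inputs are blue (false), the output node is forced to be blue;
# in every other case the output can be red (true).
for color_a, color_b in product([RED, BLUE], repeat=2):
    outputs = {out for _, _, out in or_gadget_colorings(color_a, color_b)}
    print(f"{color_a:4} OR {color_b:4} -> possible output colors: {outputs}")
```

Chaining two of these gadgets produces the node labeled x OR y OR z from the explanation above.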
2. My Favorite Theorem
At the end of the video, we mentioned that zero-knowledge proofs address the question “what can be done without a trusted authority?” By “trusted authority,” we mean someone that everybody trusts to be honest and keep their secrets safe—like a bank, a state, or a software company. Zero-knowledge proofs show that we can achieve certain tasks without any trusted authority.
But zero-knowledge proofs are only about tasks of the type “convince others I can do something”. We can be much more ambitious! For example:
- How can we vote without a trusted authority?
In state elections, we trust the state to count votes honestly while maintaining privacy. But here’s a long-shot hope: could there be a distributed system that does the same without relying on any trusted authority?
- How can we keep and exchange money without a bank?
In financial transactions, we trust banks to handle our money honestly while preserving privacy. Again, could a distributed system achieve the same without relying on a bank?
As it turns out, for both questions—and all similar ones—the answer is yes, at least in theory. In the second case, the implementation is, (in)famously, cryptocurrencies. There are also blockchain protocols that attempt to implement secure private voting.
The fact that all this is possible is supported by one of my favorite theorems from theoretical computer science, proven a few years after the introduction of zero-knowledge proofs. The paper is called How to play ANY mental game; it’s by Goldreich (who wrote the book I linked at the beginning), Micali (I had an office pretty close to his when I was at MIT, but sadly never met him there), and Wigderson (who recently received the Turing Award; his work on cryptography, including this paper, was cited as one of the reasons).
Their theorem uses the setup of the Byzantine generals problem (which we covered in an earlier video). In this problem, a group of generals wants to coordinate—in our basic case, each general started with 1 bit of information and they wanted to agree on a common bit. However, the theorem’s setup is more general—each general i starts with some information x_i, and together they want to simulate an algorithm A that takes the inputs x_1, …, x_n and outputs some result A(x_1, …, x_n). For example, if A is a vote-counting algorithm, each x_i represents who the general numbered i votes for, and A(x_1, …, x_n) is the elected candidate. We want a distributed simulation of the algorithm A such that:
- every general learns the output A(x_1, …, x_n), and
- no general learns anything about the other generals’ inputs beyond what the output itself reveals.
For example, if there were a trusted authority—say, a universally trusted queen—among the generals, we could have each general send their input x_i to her, have her simulate A, and then announce the result A(x_1, …, x_n). But the theorem by Goldreich, Micali, and Wigderson says that the same can be done without any trusted authority, and for any algorithm A.
It is perhaps not surprising that zero-knowledge proofs are a crucial component of the proof. The rough idea is to split the simulation of A among the generals so that each one performs a computation on data that is meaningless without the context of what the others are doing. For example, a general might be asked to “receive two numbers from generals 8 and 42, sum them, and send the result to general 4.” Using commitment schemes and zero-knowledge proofs, the general can convince others that he performed this computation correctly without revealing the actual numbers. This way, we can keep all the generals coordinated in their simulation of A, while each one is only doing a small piece of the overall computation and can’t really say what’s going on globally.
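The full construction in the paper goes well beyond this blog post, but to get a feel for how a joint result can be computed without a trusted authority, here is a toy Python sketch of a much simpler trick, additive secret sharing. This is not the GMW protocol, and the votes and the prime below are made up for illustration.

```python
import random

P = 2**61 - 1          # a large prime; all arithmetic is done modulo P
votes = [1, 0, 1, 1]   # each general's secret input x_i (1 = yes, 0 = no)
n = len(votes)

# Each general splits their vote into n random shares that sum to the vote
# modulo P, and sends one share to each of the other generals.
shares = []
for x in votes:
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    shares.append(parts)

# General j now holds one share from everybody and publishes only their sum.
# A single share is a uniformly random number, so it reveals nothing about a vote.
published = [sum(shares[i][j] for i in range(n)) % P for j in range(n)]

# Adding the published values recovers the total number of yes-votes,
# even though nobody ever saw another general's individual vote.
print("total yes-votes:", sum(published) % P)   # -> 3
```

This toy trick only handles sums; the content of the theorem is that a similar distributed style of computation works for any algorithm A.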
In practice, implementing cryptocurrencies by simply following this theorem would be incredibly inefficient. Nonetheless, the theorem is a powerful theoretical result demonstrating that many tasks can be accomplished without a trusted authority. Moreover, zero-knowledge proofs are used in blockchain practice. For instance, you could use them to implement private transactions: on typical blockchains, all transactions are public, but with zero-knowledge proofs, you can prove you have enough funds for a transaction without revealing your balance. There are also methods that use zero-knowledge proofs to speed up blockchains by verifying certain requirements off-chain (as far as I can tell, people are actually more excited about this second application).
3. Noninteractive Proofs
Our zero-knowledge proof was interactive—a back-and-forth conversation between the prover and the verifier. This approach is not ideal in practice; for example, what if you want to prove the same fact to many people? It would be better if the prover could simply publish a document that everyone can verify. This is called a noninteractive proof, and it is how we typically think of proofs.
One fact that I believe isn’t as well-known as it should be is that the interactivity of our protocol isn’t inherent and can be removed rather easily. Here’s how:
First, imagine there is what’s sometimes called a beacon—a public source of randomness that everyone can observe. For example, you might imagine that a distant star emits light that we measure here on Earth, and we all agree on how to convert the tiny fluctuations in the strength of the signal into a stream of random bits.

In that case, we can “implement” a verifier as follows: we choose the random bits emitted by the beacon at some time t and use these bits to simulate the random challenges issued by the verifier. In the end, our proof is a transcript of the conversation between the prover and the simulated verifier, along with the time t. Anyone can check that the transcript is valid and that the challenges in it were taken from the random bits emitted by the beacon at time t.

And that’s it—a noninteractive proof. There are some subtle issues, such as the possibility of us trying to cheat by picking some specific time t for which the stream of random bits has some helpful property. However, the probability of successfully cheating in our protocol for a specific time t is extremely low, and we can make it so low that even when considering all possible choices of t, the overall chance of us cheating remains negligible.
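To spell out that last step (the numbers here are mine, just for illustration): suppose cheating at one fixed time t succeeds with probability at most ε, and there are at most N times we could plausibly pick. Then, by the union bound,

$$\Pr[\text{cheating succeeds for some } t] \;\le\; \sum_{t} \Pr[\text{cheating succeeds at time } t] \;\le\; N \cdot \varepsilon.$$

Running enough rounds of the protocol makes ε astronomically small (say, around $2^{-128}$), so N · ε stays negligible for any realistic N.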
In practice, you don’t even need a beacon. Instead, if the protocol has k rounds, you first commit to your graph k times and record all the commits to form a string s. Then, you pass s through a cryptographic hash function to produce a “random” string r, which you use to simulate the verifier. Although the bits in r aren’t truly random, they are unpredictable because cryptographic hash functions are one-way, so this is pretty much equivalent to using the truly random beacon. The right terminology for the bits in the string r is that they are pseudorandom—yet another topic in one of our recent videos. While working on this video, I really felt that all the topics we covered recently are coming together. 🙂
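Here is a minimal Python sketch of that hashing step, usually called the Fiat-Shamir heuristic. The commitment format, the hash-to-challenge encoding, and the parameters are assumptions for illustration, not the exact scheme from the video.

```python
import hashlib

def challenges_from_commitments(commitments, num_rounds, num_edges):
    """Hash the concatenated commitments into a pseudorandom stream and read
    off one challenge per round: the index of the edge the verifier asks the
    prover to open (ignoring the tiny modulo bias, which a real scheme avoids)."""
    s = b"".join(commitments)                   # the string s from the text
    stream = b""
    counter = 0
    while len(stream) < 4 * num_rounds:         # 4 bytes of hash output per round
        stream += hashlib.sha256(s + counter.to_bytes(4, "big")).digest()
        counter += 1
    return [int.from_bytes(stream[4 * i : 4 * i + 4], "big") % num_edges
            for i in range(num_rounds)]

# Example: pretend these are the round-by-round commitments to the recolored graph.
fake_commitments = [hashlib.sha256(f"commitment {i}".encode()).digest()
                    for i in range(20)]
print(challenges_from_commitments(fake_commitments, num_rounds=20, num_edges=300))
```

Anyone who sees the commitments can recompute the same challenges and check that the prover answered exactly those, which is what makes the published transcript verifiable without any interaction.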