CSE 525: Randomized Algorithms, Spring 2026
Lecture 6: Algorithmic Lovász Local Lemma
Lecturer: Shayan Oveis Gharan, 04/23/2026
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
Given underlying independent random variables $X_1, \dots, X_n$ with product measure $\mu = \mu_1 \times \cdots \times \mu_n$, the "bad events" $A_1, \dots, A_m$ are each determined by a certain subset of the random variables, which we denote $\mathrm{vbl}(A_i) \subseteq \{X_1, \dots, X_n\}$. The dependency graph $G$ has vertices $\{1, \dots, m\}$ and edges $\{i, j\}$ whenever $\mathrm{vbl}(A_i) \cap \mathrm{vbl}(A_j) \neq \emptyset$. Note that this is a valid choice of a dependency graph, since each event $A_i$ is independent of any conditioning on the variables outside of $\mathrm{vbl}(A_i)$.
Given that the conditions of the Lovász Local Lemma hold, we want to find a realization of the random variables $X_1, \dots, X_n$ such that none of the events $A_1, \dots, A_m$ occurs.
The Moser-Tardos Algorithm
1. Sample $X_1, \dots, X_n$ independently from the distribution $\mu$.
2. As long as some event $A_i$ is satisfied by the current values of $X_1, \dots, X_n$, choose the smallest such $i$ and resample $\mathrm{vbl}(A_i)$: replace the variables in $\mathrm{vbl}(A_i)$ by new independent samples.
It is clear that if the algorithm terminates, then we have found an assignment avoiding all events. The key is to analyze the expected number of resampling steps. By one resampling step, we mean the operation of resampling all the variables of an event.
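As a concrete illustration, the resampling loop above can be sketched in Python as follows. The event representation (a pair of a variable-index set and a predicate) and the function names are assumptions of this sketch, not part of the notes:

```python
import random

def moser_tardos(n, events, sample_var):
    """Moser-Tardos resampling (a sketch, not the authors' code).

    n          -- number of underlying variables X_0, ..., X_{n-1}
    events     -- list of pairs (vbl, holds): vbl is the set of variable
                  indices the bad event depends on; holds(x) is True iff
                  the bad event occurs under the assignment x
    sample_var -- sample_var(i) draws a fresh independent sample of X_i
    """
    x = [sample_var(i) for i in range(n)]          # step 1: sample everything
    steps = 0
    while True:
        # smallest index of a bad event satisfied by the current values
        bad = next((k for k, (_, holds) in enumerate(events) if holds(x)), None)
        if bad is None:
            return x, steps                        # no bad event occurs: done
        for i in events[bad][0]:                   # step 2: resample vbl(A_bad)
            x[i] = sample_var(i)
        steps += 1
```

For example, with six fair bits and three bad events "bits $2k$ and $2k+1$ are both 1", the events depend on disjoint variable sets, so the dependency graph has no edges and the loop terminates quickly.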
Theorem 6.1 (Moser-Tardos [MT10]).
The expected number of resampling steps before termination of the algorithm is at most $\sum_{i=1}^{m} \frac{x_i}{1 - x_i}$, provided that there exist $x_1, \dots, x_m \in (0, 1)$ such that $\Pr[A_i] \le x_i \prod_{j \in N(i)} (1 - x_j)$ for every $i$, where as usual $N(i)$ is the set of neighbors of $i$ in the dependency graph.
In these notes we also use $N^+(i) = N(i) \cup \{i\}$ to denote the set of neighbors of $i$ together with $i$ itself.
Note that in applications, the $x_i$'s are usually small (say $x_i \le 1/2$), in which case $\frac{x_i}{1 - x_i} \le 2x_i \le 1$, so the expected number of resampling operations is $O(m)$.
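For instance (a standard symmetric instantiation, not stated explicitly in these notes): if every event has probability at most $p$ and at most $d$ neighbors, one may try $x_i = \frac{1}{d+1}$, which gives

$$x_i \prod_{j \in N(i)} (1 - x_j) \ge \frac{1}{d+1} \Big( 1 - \frac{1}{d+1} \Big)^{d} \ge \frac{1}{e(d+1)},$$

so the condition of Theorem 6.1 holds whenever $e p (d+1) \le 1$, and the expected number of resamplings is at most $\sum_i \frac{x_i}{1 - x_i} = \frac{m}{d}$.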
6.1 Execution Log and Stable Set Sequences
We define the execution log of the algorithm as the sequence of events that get resampled: $A_{i_1}, A_{i_2}, A_{i_3}, \dots$, where $A_{i_t}$ denotes the fact that the event $A_{i_t}$ was resampled at time $t$. We want to prove that for all $i$, the expected number of occurrences of $A_i$ in the log is at most $\frac{x_i}{1 - x_i}$.
Stable set sequences
An important notion in the analysis will be that of stable set sequences. First, given the log, we define a directed graph on the vertex set $\{1, 2, 3, \dots\}$ (one vertex per log entry) as follows. For each pair of entries in the log, $(s, A_{i_s})$ and $(t, A_{i_t})$, we add a directed edge from $(s, A_{i_s})$ to $(t, A_{i_t})$ if $s < t$ and $i_t \in N^+(i_s)$.
For a fixed entry $(t, A_{i_t})$ in the log, let us consider the subgraph $H_t$ induced by the vertices that have a directed path to $(t, A_{i_t})$. We call $(t, A_{i_t})$ the root of $H_t$. For each $j \ge 0$, we define a set of events:
$$I_j = \{ A_{i_s} : \text{the longest directed path from } (s, A_{i_s}) \text{ to the root has length exactly } j \}.$$
Note that $I_0 = \{A_{i_t}\}$. We have the following properties:
i) For every $j$, $I_j$ is an independent set in $G$.
Proof: If $A_{i_s}, A_{i_{s'}} \in I_j$ and $i_{s'} \in N^+(i_s)$, then there must be a directed edge between the two entries (we assume wlog that $s < s'$, so the edge goes from $(s, A_{i_s})$ to $(s', A_{i_{s'}})$). This means that $(s, A_{i_s})$ has a path to the root through $(s', A_{i_{s'}})$ that has length more than the longest path from $(s', A_{i_{s'}})$ to the root. This contradicts the fact that the longest paths from both vertices to the root have length exactly $j$.
ii) For every $j \ge 0$, $I_{j+1} \subseteq N^+(I_j)$, where $N^+(I) = \bigcup_{A \in I} N^+(A)$.
Proof: For every $A_{i_s} \in I_{j+1}$, there is a longest directed path from $(s, A_{i_s})$ to the root of length exactly $j+1$. So the next vertex on the path must have a longest path to the root of length exactly $j$ (it has a path of length $j$, and a longer one would give $(s, A_{i_s})$ a path longer than $j+1$). This vertex corresponds to an event $B \in I_j$, and by construction of the directed graph we have $A_{i_s} \in N^+(B)$.
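To make the construction concrete, the following Python sketch recovers the sets $I_j$ from a log. The representation (a list of event indices plus a table `nplus` of inclusive neighborhoods) is an assumption of this sketch:

```python
def level_sets(log, nplus, t):
    """Build I_0, I_1, ... for the log entry at position t (the root).

    log   -- list of event indices [i_1, ..., i_T] (0-indexed positions)
    nplus -- nplus[i] is the inclusive neighborhood N^+(i) as a set
    t     -- position of the root entry in the log

    There is a directed edge from position s to position u whenever
    s < u and log[u] is in N^+(log[s]).  Returns a list of lists:
    levels[j] holds the events of I_j.
    """
    longest = {t: 0}   # longest[s] = longest path length from s to the root
    for s in range(t - 1, -1, -1):
        dists = [longest[u] for u in range(s + 1, t + 1)
                 if u in longest and log[u] in nplus[log[s]]]
        if dists:
            longest[s] = 1 + max(dists)
    levels = [[] for _ in range(max(longest.values()) + 1)]
    for s, j in longest.items():
        levels[j].append(log[s])
    return levels
```

In a toy run with three events where event 1 neighbors both 0 and 2 (but 0 and 2 are not neighbors), a log in which 0 and 2 are resampled before the root resampling of 1 yields $I_0 = \{A_1\}$ and $I_1 = \{A_0, A_2\}$, an independent set, as property i) requires.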
This motivates the following definition.
Definition 6.2.
A stable set sequence for $A$ is a finite sequence of sets $\mathcal{I} = (I_0, I_1, \dots, I_k)$ with $I_0 = \{A\}$, such that for every $j$, $I_j$ is an independent set in $G$, and for every $j < k$, $I_{j+1} \subseteq N^+(I_j)$.
By the discussion above, every sequence $(I_0, I_1, \dots, I_k)$ produced from a log of an execution of the algorithm is a stable set sequence (note that it must be finite, since for a fixed root the induced subgraph $H_t$ is finite).
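A candidate sequence can be checked directly against the two conditions of Definition 6.2; a minimal sketch, using the same assumed `nplus` representation as before (for a sequence "for $A$" one would additionally require $I_0 = \{A\}$):

```python
def is_stable_sequence(seq, nplus):
    """Check the two conditions of Definition 6.2.

    seq   -- list of sets of event indices [I_0, I_1, ..., I_k]
    nplus -- nplus[i] is the inclusive neighborhood N^+(i) as a set
    """
    for I in seq:                                  # each I_j independent in G
        if any(b in nplus[a] for a in I for b in I if a != b):
            return False
    for I, J in zip(seq, seq[1:]):                 # I_{j+1} inside N^+(I_j)
        if not set(J) <= set().union(*(nplus[a] for a in I)):
            return False
    return True
```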
Definition 6.3.
A stable set sequence $\mathcal{I}$ is said to be a witness of a resampling $(t, A_{i_t})$ if it is produced from the log by the above process, starting from root $(t, A_{i_t})$. We say that $\mathcal{I}$ occurs in the execution log if there is $t$ such that $\mathcal{I}$ is a witness of the resampling $(t, A_{i_t})$.
Lemma 6.4.
For every stable set sequence $\mathcal{I} = (I_0, \dots, I_k)$,
$$\Pr[\mathcal{I} \text{ occurs}] \le p(\mathcal{I}),$$
where $p(\mathcal{I}) = \prod_{j=0}^{k} \prod_{A \in I_j} \Pr[A]$.
Proof.
We first modify the algorithm as follows (which does not change its behavior). We prepare an infinite table of samples to be used: for each $i$, the $i$-th row of the table contains an infinite sequence $X_i^{(0)}, X_i^{(1)}, X_i^{(2)}, \dots$, each sampled independently according to the distribution of $X_i$. The algorithm maintains a pointer $t_i$ for each $i$. We start with $t_i = 0$ for each $i$. The "current values" of $X_1, \dots, X_n$ are given by $X_1^{(t_1)}, \dots, X_n^{(t_n)}$. Whenever the algorithm "resamples" $X_i$, we increment $t_i$ by 1, which means moving on to the next sample. Clearly, this is equivalent to the original description of the algorithm.
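The table reformulation can be sketched like so (same assumed event representation as the earlier sketch; rows are grown lazily rather than pre-filled infinitely, which does not change the distribution of the samples consumed):

```python
import random

def moser_tardos_with_table(n, events, sample_var):
    """Table-based variant: row i holds i.i.d. samples of X_i, a pointer
    t[i] selects the current one, and 'resampling' just advances pointers."""
    table = [[sample_var(i)] for i in range(n)]    # lazily-grown rows
    t = [0] * n                                    # pointers, all start at 0

    def current():
        return [table[i][t[i]] for i in range(n)]

    steps = 0
    while True:
        bad = next((k for k, (_, holds) in enumerate(events)
                    if holds(current())), None)
        if bad is None:
            return current(), steps
        for i in events[bad][0]:                   # advance pointers of vbl(A)
            t[i] += 1
            if t[i] == len(table[i]):              # draw the next table cell
                table[i].append(sample_var(i))
        steps += 1
```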
We claim that if a certain stable set sequence $\mathcal{I} = (I_0, \dots, I_k)$ occurs in the execution log, then for each of its events we can determine a particular set of samples in the table that must satisfy the event. Given $\mathcal{I}$, we obtain the locations of these samples as follows: for every $j$ and every $A \in I_j$ with $X_i \in \mathrm{vbl}(A)$, let $s_i(A, j)$ denote the number of indices $j' > j$ such that some event of $I_{j'}$ depends on $X_i$. (Note that for each $j'$, at most one event in $I_{j'}$ can depend on $X_i$, since $I_{j'}$ is an independent set.)
Then, we claim that the samples $X_i^{(s_i(A, j))}$, for $X_i \in \mathrm{vbl}(A)$, are exactly the samples that were checked by the algorithm to determine that $A$ occurs, just before the resampling that makes $A$ a member of $I_j$. This is because the only times when the pointer $t_i$ is incremented are when we resample an event depending on $X_i$. If $A \in I_j$ due to a resampling at time $t'$, then any event resampled before time $t'$ that also depends on $X_i$ shares a variable with $A$, and hence is part of the directed graph $H_t$ and also part of the stable set sequence, in some level $I_{j'}$ with $j' > j$. These are the only times when the pointer $t_i$ is incremented prior to this resampling, and hence the value of $t_i$ just before this resampling is exactly $s_i(A, j)$.
Now we know that in order for $\mathcal{I}$ to occur, it must be the case that for each $j$ and for each event $A \in I_j$, the samples $X_i^{(s_i(A, j))}$, $X_i \in \mathrm{vbl}(A)$, satisfy the event $A$. (Otherwise the algorithm would not choose to resample it.) This happens with probability $\Pr[A]$. Most importantly, notice that the samples used for different pairs $(j, A)$ are distinct cells of the table; this follows directly from the definition of $s_i(A, j)$. By the independence of the samples in the table, the probability that for each $j$ and each $A \in I_j$ the corresponding samples satisfy $A$ is $\prod_{j=0}^{k} \prod_{A \in I_j} \Pr[A] = p(\mathcal{I})$. ∎
Remark 6.5.
We remark that this is an upper bound on $\Pr[\mathcal{I} \text{ occurs}]$ rather than its exact value, because the presence of appropriate samples in the table does not guarantee that $\mathcal{I}$ will occur: in particular, whether $\mathcal{I}$ occurs also depends on the sequence of resamplings and the order in which the algorithm executes them. The presence of appropriate samples in the table is only a necessary condition for $\mathcal{I}$ to occur.
6.2 Summing Up
Now for each event $A$, define the random variable $N_A$ to be the number of times event $A$ is resampled during the execution. Our goal is to bound the expectation $\mathbb{E}[N_A]$; the sum of these expectations will be the expected running time of the algorithm. Note that $N_A$ is the number of distinct stable set sequences with root $A$ that occur in an execution of the algorithm. (We remark that although these stable set sequences are distinct, each one is properly included in the later ones.)
Therefore,
$$\mathbb{E}[N_A] \le \sum_{\mathcal{I} \in \mathcal{S}_A} \Pr[\mathcal{I} \text{ occurs}] \le \sum_{\mathcal{I} \in \mathcal{S}_A} p(\mathcal{I}),$$
where for simplicity we write $\mathcal{S}_A$ for the set of all stable set sequences with root $A$. We need to show that $\sum_{\mathcal{I} \in \mathcal{S}_A} p(\mathcal{I}) \le \frac{x_A}{1 - x_A}$.
We prove a more general fact:
Lemma 6.6.
For any $k \ge 0$ and any non-empty independent set $I$ in $G$, we have
$$\sum_{\mathcal{I} \in \mathcal{S}_I^{(k)}} p(\mathcal{I}) \le \prod_{A \in I} \frac{x_A}{1 - x_A},$$
where $\mathcal{S}_I^{(k)}$ denotes the set of all stable set sequences $(I_0, I_1, \dots, I_l)$ with $I_0 = I$ and $l \le k$.
Proof.
We prove it by induction on $k$. We leave the base case $k = 0$ as an exercise.
Going over all possibilities for $I_1$ based on the definition of a stable set sequence, we can write
$$\sum_{\mathcal{I} \in \mathcal{S}_I^{(k+1)}} p(\mathcal{I}) = \prod_{A \in I} \Pr[A] \Big( 1 + \sum_{\substack{\emptyset \neq J \subseteq N^+(I) \\ J \text{ independent}}} \; \sum_{\mathcal{I}' \in \mathcal{S}_J^{(k)}} p(\mathcal{I}') \Big) \le \prod_{A \in I} \Pr[A] \sum_{J \subseteq N^+(I)} \prod_{B \in J} \frac{x_B}{1 - x_B} = \prod_{A \in I} \Pr[A] \prod_{B \in N^+(I)} \frac{1}{1 - x_B} \le \prod_{A \in I} \frac{x_A}{1 - x_A}.$$
Here the first inequality uses the induction hypothesis on each $\mathcal{S}_J^{(k)}$ and then drops the independence restriction on $J$, and the middle identity is $\sum_{J \subseteq S} \prod_{B \in J} \frac{x_B}{1 - x_B} = \prod_{B \in S} \big( 1 + \frac{x_B}{1 - x_B} \big) = \prod_{B \in S} \frac{1}{1 - x_B}$.
To see the last inequality, observe that by the Local Lemma condition,
$$\prod_{A \in I} \Pr[A] \le \prod_{A \in I} x_A \prod_{B \in N(A)} (1 - x_B) \le \prod_{A \in I} x_A \prod_{B \in N^+(I) \setminus I} (1 - x_B),$$
where the second inequality holds because every $B \in N^+(I) \setminus I$ belongs to $N(A)$ for at least one $A \in I$ and every factor $1 - x_B$ is at most $1$. Multiplying by $\prod_{B \in N^+(I)} \frac{1}{1 - x_B}$ cancels the factors with $B \notin I$ and leaves exactly $\prod_{A \in I} \frac{x_A}{1 - x_A}$.
This completes the proof of the lemma. ∎
Note that, since every stable set sequence with root $A$ has some finite length, we can take the limit $k \to \infty$ to show that indeed the sum of $p(\mathcal{I})$ over all stable set sequences $\mathcal{I}$ with root $A$ is at most $\frac{x_A}{1 - x_A}$.
Finally, to prove Theorem 6.1, it is enough to use linearity of expectation together with the above lemma:
$$\mathbb{E}\Big[ \sum_{A} N_A \Big] = \sum_{A} \mathbb{E}[N_A] \le \sum_{A} \frac{x_A}{1 - x_A},$$
as desired.