CSE 525: Randomized Algorithms Spring 2026 Lecture 5: Lovász Local Lemma Lecturer: Shayan Oveis Gharan 04/21/2026 Scribe:

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

This lecture and the next are based on Alistair Sinclair's course on randomized algorithms.

Recall from earlier lectures that the probabilistic method provides a useful non-constructive strategy for proving the existence (or non-existence) of an object that satisfies some prescribed property. Generally, the argument involves selecting an object at random from a specific set and demonstrating that it has the desired property with strictly positive probability. This in turn proves the existence of at least one such object. In most of the examples we have seen, the desired property holds not just with positive probability but actually with quite large probability, even with probability approaching 1 as $n \to \infty$. This often leads to an efficient randomized algorithm for constructing such an object: we just select an object at random and with high probability it has the desired property.

For some problems, it is natural to describe the selected object in terms of a set of "bad" events $\{\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n\}$ whose occurrence renders the object undesirable; the desired property is then simply the avoidance of all these bad events. In such scenarios, a non-trivial lower bound on

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big]$$

is of particular interest. One immediate approach is to use the union bound, but the union bound fails if the probability of each bad event is much larger than $1/n$. Of particular interest to us is the case where each $\mathcal{A}_i$ occurs with a small constant probability.

Clearly, if all "bad" events are independent, and if the probability of each of them satisfies $\Pr[\mathcal{A}_i] \le p$, then the probability that none of the events $\{\mathcal{A}_i\}$ occurs is simply the product

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big] = \prod_{i=1}^{n} \Pr[\neg \mathcal{A}_i] \ge (1-p)^n, \qquad (5.1)$$

which is strictly positive (provided only that the trivial condition p<1 holds).

Informally, the Lovász Local Lemma can be viewed as extending the above result to a more general setting, in which we allow limited dependencies among the events in question. In light of (5.1), the resulting probability that no bad event occurs will typically be exponentially small. Thus the Local Lemma tends to apply in situations where we are looking for a "needle in a haystack," so it does not immediately lead to an efficient randomized algorithm. (However, see below and the next lecture for more recent developments on constructive versions of the Lemma.)

Definition 5.1.

An event $\mathcal{A}$ is said to be mutually independent of a set of events $\{\mathcal{A}_i\}$ if, for any subset $S$ of events drawn from $\{\mathcal{A}_i\}$ or their complements, we have $\Pr[\mathcal{A} \mid \bigcap_{\mathcal{B} \in S} \mathcal{B}] = \Pr[\mathcal{A}]$.

Lemma 5.2 (Lovász Local Lemma).

Let $\mathcal{A}_1,\ldots,\mathcal{A}_n$ be a set of "bad" events with $\Pr[\mathcal{A}_i] \le p < 1$ such that each event $\mathcal{A}_i$ is mutually independent of all but at most $d$ of the other $\mathcal{A}_j$'s. If $ep(d+1) \le 1$ then

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big] > 0.$$

Often the Lovász Local Lemma is stated with the condition $ep(d+1) \le 1$ replaced by $4pd \le 1$, which is slightly weaker for $d \le 2$ but asymptotically stronger. In fact, the constant $e$ above is asymptotically optimal.
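The comparison between the two conditions can be checked numerically; the following sketch (the function name is ours, not from the notes) verifies which threshold on $p$ is larger on each side of $d = 2$:

```python
import math

def symmetric_lll_applies(p, d):
    """Check the symmetric Local Lemma condition e*p*(d+1) <= 1."""
    return math.e * p * (d + 1) <= 1

# For d >= 3 the threshold 1/(e*(d+1)) exceeds 1/(4d), so the condition
# e*p*(d+1) <= 1 tolerates a larger p (a stronger lemma); for d <= 2
# the comparison goes the other way.
for d in range(3, 100):
    assert 1 / (4 * d) < 1 / (math.e * (d + 1))
for d in (1, 2):
    assert 1 / (4 * d) > 1 / (math.e * (d + 1))
```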

5.1 Application 1: Satisfying Solutions to k-SAT

Theorem 5.3.

Any instance $\phi$ of $k$-SAT in which no variable appears in more than $2^{k-2}/k$ clauses is satisfiable.

As a quick example, the above claim implies that for k=10, any formula in which no variable appears in more than 25 clauses is satisfiable. Note that there is no restriction at all on the total number of clauses!

Proof.

Suppose we have an arbitrary instance $\phi$ of $k$-SAT consisting of $n$ clauses. Let's pick a truth assignment to the variables of $\phi$ uniformly at random and let $\mathcal{A}_i$ denote the event "clause $i$ is not satisfied". Noting that exactly one of the $2^k$ possible assignments to its variables fails to satisfy any particular clause, we have

$$\forall i \in \{1,2,\ldots,n\}: \quad \Pr[\mathcal{A}_i] = 2^{-k} =: p.$$

Furthermore, we observe that each event $\mathcal{A}_i$ is mutually independent of all other events $\mathcal{A}_j$ except those corresponding to clauses $j$ that share at least one variable with clause $i$. Let $d$ denote the largest possible number of such clauses. Since each clause has $k$ variables and each variable is assumed to appear in at most $2^{k-2}/k$ clauses, we have

$$d \le k \cdot \frac{2^{k-2}}{k} = 2^{k-2}.$$

The condition $p \le 1/(4d)$ in the Local Lemma now becomes

$$p = \frac{1}{2^k} = \frac{1}{4 \cdot 2^{k-2}} \le \frac{1}{4d}.$$

Hence, the lemma implies that

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big] > 0.$$

Since the probability of picking an assignment that satisfies every clause in ϕ is non-zero, we can invoke the standard argument of the probabilistic method and infer the existence of a satisfying truth assignment. ∎
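The parameter bookkeeping in the proof can be checked mechanically. The following sketch (variable names are ours) verifies it exactly for $k = 10$, using exact rational arithmetic:

```python
from fractions import Fraction

k = 10
p = Fraction(1, 2**k)        # Pr[a fixed clause is unsatisfied] = 2^{-k}
T = Fraction(2**(k - 2), k)  # allowed occurrences per variable: 2^{k-2}/k
d = k * T                    # a clause shares variables with <= k*T others

assert d == 2**(k - 2)
assert 4 * p * d <= 1        # the condition p <= 1/(4d) from the proof
```

For $k = 10$ the condition holds with equality, which matches the claim that no variable may appear in more than $\lfloor 2^8/10 \rfloor = 25$ clauses.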

In the above proof, we claimed that each 𝒜i is independent of all 𝒜j for which clauses i and j do not share any variables. This is an instance of the following general principle that is frequently useful in applications of the Local Lemma:

Proposition 5.4 (Mutual Independence Principle).

Suppose that $Z_1,\ldots,Z_m$ is an underlying sequence of independent random variables, and suppose that each event $\mathcal{A}_i$ is completely determined by some subset $S_i \subseteq \{Z_1,\ldots,Z_m\}$. If $S_i \cap S_j = \emptyset$ for $j = j_1,\ldots,j_k$, then $\mathcal{A}_i$ is mutually independent of $\{\mathcal{A}_{j_1},\ldots,\mathcal{A}_{j_k}\}$.

In our application above, the underlying independent variables $Z_\ell$ are the truth values assigned to the variables of $\phi$.

5.2 Application 2: Min Congestion Routing with Dilation

Recall that in the last lecture we discussed the min congestion problem: We have $k$ terminal pairs $(s_1,t_1),\ldots,(s_k,t_k)$; let $\mathcal{P}_i$ be the set of paths from $s_i$ to $t_i$ for all $i$ and $\mathcal{P} = \bigcup_i \mathcal{P}_i$. Furthermore, let $y_P$ be a feasible LP solution with fractional congestion 1 (everything we say generalizes to larger congestion, but here we choose 1 for simplicity of the arguments). So, $\sum_{P \in \mathcal{P}_i} y_P = 1$ for all $i$.

We say the LP solution has dilation D if

$$D = \max\{|P| : P \in \mathcal{P},\ y_P > 0\},$$

i.e., $D$ is the maximum length of all paths in the support of the LP. We prove the following theorem:

Theorem 5.5 (Leighton–Rao–Srinivasan [LRS98]).

Given a fractional solution with dilation $D$, it is possible to round it to an integral flow with congestion at most $O(\log D / \log\log D)$.

Note that $D \le n$, so the above theorem only improves over what we proved in the last lecture when $D \ll n$. The proof uses the Lovász Local Lemma, so the rounding procedure was not algorithmic at the time the paper was published, but it can be made algorithmic using more recent developments.

We assume that $\mathrm{OPT} = 1$ to simplify the argument, although it naturally extends to the case $\mathrm{OPT} > 1$.

Step 1: Discretization. It turns out that the analysis is simpler if $y_P$ is the same for all paths with $y_P > 0$. To achieve this, we "discretize" the LP solution, simply by choosing $\epsilon \le \frac{1}{kn}$ small enough and replacing every path $P$ by $y_P/\epsilon$ copies, each of value $\epsilon$. This can incur an extra loss of $k\epsilon \le 1/n$ on the congestion of every edge, which we ignore for simplicity. So, assume that $y_P = \epsilon$ for every path $P$ with $y_P > 0$; in particular, each pair $i$ now has exactly $1/\epsilon$ paths, and the rounding picks one of them uniformly at random, independently for each pair. Furthermore, since the fractional congestion of every edge is 1, there are at most $m := 1/\epsilon$ many paths going over any edge $e$.
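A minimal sketch of the discretization step (assuming, for simplicity, that every $y_P$ is an exact integer multiple of $\epsilon$; the helper name is ours):

```python
from fractions import Fraction

def discretize(paths, eps):
    """Replace each path P with y_P/eps unit copies, each of value eps."""
    copies = []
    for path, y in paths.items():
        copies.extend([path] * int(Fraction(y) / Fraction(eps)))
    return copies

# One terminal pair whose fractional path values sum to 1.
paths = {"P1": "1/2", "P2": "1/4", "P3": "1/4"}
copies = discretize(paths, "1/4")
# The pair now has exactly 1/eps = 4 unit paths, so choosing one of them
# uniformly at random selects each copy with probability eps.
```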

Step 2: Bad Events. The crux of the proof is to define the right set of bad events. Let $C$ be the target congestion. The idea is as follows: for every edge $e$ and every set $S$ of $C$ paths going through $e$, we add an event $\mathcal{A}_{e,S}$ which occurs if all of the paths in $S$ are chosen. So, if none of the bad events happens, the congestion of every edge is at most $C-1$ and we are done. Next, we upper bound the probability of a bad event. Fix a bad event $\mathcal{A}_{e,S}$, where $S = \{(i_1,j_1),\ldots,(i_C,j_C)\}$; we use the notation $(i_1,j_1)$ to denote the $j_1$'th of the $m$ paths connecting $s_{i_1}$ to $t_{i_1}$, and write $P_{i_1,j_1} \in \mathcal{P}_{i_1}$ to denote this path.

Now, notice that if $S$ contains two paths of the same source/destination pair, then $\Pr[\mathcal{A}_{e,S}] = 0$, since each pair chooses only one path. So, we ignore these events from any further consideration. Otherwise, if all of the $C$ paths of $S$ come from distinct source/destination pairs, we have

$$\Pr[\mathcal{A}_{e,S}] = \frac{1}{m^{|S|}} = \frac{1}{m^C}.$$

Step 3: Dependency Set. Fix two bad events $\mathcal{A}_{e,S}, \mathcal{A}_{e',S'}$ where the paths in $S, S'$ are from distinct source/destination pairs. Say $S = \{(i_1,j_1),\ldots,(i_C,j_C)\}$ and $S' = \{(i'_1,j'_1),\ldots,(i'_C,j'_C)\}$. Observe that if $\{i_1,\ldots,i_C\} \cap \{i'_1,\ldots,i'_C\} = \emptyset$ then $\mathcal{A}_{e,S}, \mathcal{A}_{e',S'}$ are independent. So assume $i'_1 \in \{i_1,\ldots,i_C\}$, say $i'_1 = i_1$. For how many choices of $e'$, $i'_2,\ldots,i'_C$ and $j'_1,\ldots,j'_C$ do we get a bad event? First, there are $m$ choices for $j'_1$. Fix such a choice. The path $P_{i_1,j'_1}$ uses at most $D$ edges, each of which has at most $m$ paths going through it, from which the remaining $C-1$ paths of $S'$ must be chosen. Thus the number of bad events containing $i_1$ is at most

$$m \cdot D \cdot \binom{m}{C-1}.$$

Since there are $C$ options for the common index between $\mathcal{A}_{e,S}$ and $\mathcal{A}_{e',S'}$, we have

$$|\Gamma(\mathcal{A}_{e,S})| \le C \cdot m D \binom{m}{C-1} \le \frac{C D m^C}{(C-1)!}.$$

So, to apply the LLL we need

$$e \cdot \frac{1}{m^C} \cdot \frac{C D m^C}{(C-1)!} = \frac{e C D}{(C-1)!} < 1.$$

The latter holds as long as $C \ge c \cdot \frac{\log D}{\log\log D}$ for a sufficiently large constant $c$, since then $(C-1)! > eCD$.
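To see the claimed growth rate concretely, one can compute the smallest $C$ satisfying the condition for a few values of $D$ (a sketch of ours; the function names are made up):

```python
import math

def lll_quantity(C, D):
    """The quantity e*C*D/(C-1)! from the dependency calculation."""
    return math.e * C * D / math.factorial(C - 1)

def min_target_congestion(D):
    """Smallest C for which the condition e*C*D/(C-1)! < 1 holds."""
    C = 2
    while lll_quantity(C, D) >= 1:
        C += 1
    return C

# The factorial in the denominator makes C grow very slowly with D,
# roughly like log D / log log D: squaring D barely moves C.
for D in (10, 10**3, 10**6, 10**12):
    assert lll_quantity(min_target_congestion(D), D) < 1
```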

5.3 Proof of the Lovász Local Lemma

First, we prove the following lemma which is the main ingredient of the proof:

Lemma 5.6.

For any subset $S \subseteq \{1,\ldots,n\}$ and any $i \in \{1,\ldots,n\}$ such that $i \notin S$, we have

$$\Pr\Big[\mathcal{A}_i \,\Big|\, \bigcap_{j \in S} \neg \mathcal{A}_j\Big] \le \frac{1}{d+1}.$$
Proof.

We prove this by induction on $m = |S|$. The base case, $m = 0$, holds since $\Pr[\mathcal{A}_i] \le p \le \frac{1}{e(d+1)} \le \frac{1}{d+1}$. For the inductive step ($m > 0$) we first partition $S$ into the two sets $S_1 = S \cap \Gamma_i$ and $S_2 = S \setminus S_1$, where $\Gamma_i$ is the "dependency set" of $\mathcal{A}_i$, i.e., the set of at most $d$ indices $j$ such that $\mathcal{A}_i$ is mutually independent of all $\mathcal{A}_j$ except for those in this set. Note that by definition $|S_1| \le d$. Then, by Bayes' rule we may write

$$\Pr\Big[\mathcal{A}_i \,\Big|\, \bigcap_{j \in S} \neg \mathcal{A}_j\Big] = \frac{\Pr\big[\mathcal{A}_i \cap \bigcap_{j \in S_1} \neg \mathcal{A}_j \,\big|\, \bigcap_{k \in S_2} \neg \mathcal{A}_k\big]}{\Pr\big[\bigcap_{j \in S_1} \neg \mathcal{A}_j \,\big|\, \bigcap_{k \in S_2} \neg \mathcal{A}_k\big]}.$$

To bound the numerator we write,

$$\Pr\Big[\mathcal{A}_i \cap \bigcap_{j \in S_1} \neg \mathcal{A}_j \,\Big|\, \bigcap_{k \in S_2} \neg \mathcal{A}_k\Big] \le \Pr\Big[\mathcal{A}_i \,\Big|\, \bigcap_{k \in S_2} \neg \mathcal{A}_k\Big] = \Pr[\mathcal{A}_i] \le p.$$

So, it remains to lower bound the denominator. WLOG, perhaps after renaming, assume $S_1 = \{1,2,\ldots,r\}$. We use Bayes' rule together with the induction hypothesis (each conditioning below involves fewer than $m$ events):

$$\begin{aligned}
\Pr\Big[\bigcap_{j \in S_1} \neg \mathcal{A}_j \,\Big|\, \bigcap_{k \in S_2} \neg \mathcal{A}_k\Big]
&= \prod_{j=1}^{r} \Pr\Big[\neg \mathcal{A}_j \,\Big|\, \Big(\bigcap_{\ell=j+1}^{r} \neg \mathcal{A}_\ell\Big) \cap \Big(\bigcap_{k \in S_2} \neg \mathcal{A}_k\Big)\Big] \\
&= \prod_{j=1}^{r} \Big(1 - \Pr\Big[\mathcal{A}_j \,\Big|\, \Big(\bigcap_{\ell=j+1}^{r} \neg \mathcal{A}_\ell\Big) \cap \Big(\bigcap_{k \in S_2} \neg \mathcal{A}_k\Big)\Big]\Big) \\
&\ge \prod_{j=1}^{r} \Big(1 - \frac{1}{d+1}\Big) \qquad \text{(by the IH)} \\
&\ge \Big(1 - \frac{1}{d+1}\Big)^{d} > \frac{1}{e}, \qquad \text{(since } r = |S_1| \le d\text{)}
\end{aligned}$$

Putting this together with the bound on the numerator, we obtain that

$$\Pr\Big[\mathcal{A}_i \,\Big|\, \bigcap_{j \in S} \neg \mathcal{A}_j\Big] \le \frac{p}{1/e} = ep \le \frac{1}{d+1},$$

as desired. ∎

To prove Lemma 5.2, we apply the above lemma repeatedly:

$$\begin{aligned}
\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big]
&= \prod_{i=1}^{n} \Pr\Big[\neg \mathcal{A}_i \,\Big|\, \bigcap_{j=i+1}^{n} \neg \mathcal{A}_j\Big] \\
&= \prod_{i=1}^{n} \Big(1 - \Pr\Big[\mathcal{A}_i \,\Big|\, \bigcap_{j=i+1}^{n} \neg \mathcal{A}_j\Big]\Big) \\
&\ge \prod_{i=1}^{n} \Big(1 - \frac{1}{d+1}\Big) = \Big(1 - \frac{1}{d+1}\Big)^{n} > 0.
\end{aligned}$$

Note that the above proof is not algorithmic. After around 20 years of intense study, Moser and Tardos managed to give an algorithmic proof of the lemma, which we will discuss in the next lecture.

5.4 General Lovász Local Lemma

In some settings it is useful to have a more flexible version of the Local Lemma, which allows large differences in the probabilities of the ”bad” events. We state this next.

Lemma 5.7 (General Lovász Local Lemma).

Let $\mathcal{A}_1,\ldots,\mathcal{A}_n$ be a set of "bad" events, and let $\Gamma_i \subseteq \{\mathcal{A}_1,\ldots,\mathcal{A}_n\}$ denote the "dependency set" of $\mathcal{A}_i$ (i.e., $\mathcal{A}_i$ is mutually independent of all events not in $\Gamma_i$). If there exists a set of real numbers $x_1,\ldots,x_n \in [0,1)$ such that

$$\Pr[\mathcal{A}_i] \le x_i \prod_{j \in \Gamma_i} (1 - x_j), \qquad 1 \le i \le n,$$

then

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big] \ge \prod_{i=1}^{n} (1 - x_i) > 0.$$
Exercise 5.8.

It is left as a straightforward (and strongly recommended) exercise to prove this general version by mimicking the proof of Lemma 5.2. Also, you should check that applying Lemma 5.7 with $x_i = 1/(d+1)$ yields Lemma 5.2 as a special case.
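The second part of the exercise boils down to the standard inequality $(1 - \frac{1}{d+1})^d \ge 1/e$; a quick numerical check of ours:

```python
import math

# With x_i = 1/(d+1), the condition of Lemma 5.7 asks for
# p <= x*(1-x)^d.  Since (1 - 1/(d+1))^d >= 1/e, this is implied by
# the symmetric condition p <= 1/(e*(d+1)) of Lemma 5.2.
for d in range(1, 200):
    x = 1 / (d + 1)
    assert x * (1 - x) ** d >= 1 / (math.e * (d + 1))
```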

Corollary 5.9 (Asymmetric Lovász Local Lemma).

In the same scenario as in Lemma 5.7, if $\sum_{j \in \Gamma_i} \Pr[\mathcal{A}_j] \le 1/4$ for all $i$, then

$$\Pr\Big[\bigcap_{i=1}^{n} \neg \mathcal{A}_i\Big] \ge \prod_{i=1}^{n} \big(1 - 2\Pr[\mathcal{A}_i]\big) > 0.$$
Proof.

The result follows easily by applying Lemma 5.7 with $x_i = 2\Pr[\mathcal{A}_i]$. ∎
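To spell out the one-line computation: with $x_i = 2\Pr[\mathcal{A}_i]$, the condition of Lemma 5.7 reduces to $\prod_{j \in \Gamma_i}(1 - 2\Pr[\mathcal{A}_j]) \ge 1/2$, which follows from the inequality $\prod_j (1 - a_j) \ge 1 - \sum_j a_j$. A randomized spot check of ours:

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    # probabilities over a dependency set, with sum at most 1/4
    probs = [random.uniform(0, 0.25 / n) for _ in range(n)]
    prod = 1.0
    for q in probs:
        prod *= 1 - 2 * q
    # prod(1 - 2q) >= 1 - 2*sum(q) >= 1/2, so x_i = 2*Pr[A_i] works
    assert prod >= 1 - 2 * sum(probs) >= 0.5
```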