  (Week 5)

Theory

We open the box. First-order logic gives us the language for talking about cooperation. Nelson-Oppen gives us the protocol.

Where We Left Off in Practice

Three demos. Two theories agreeing on a counterexample (Act 1, the swap bug). A formula where each piece is satisfiable but the combination is unsatisfiable (Act 2, with the disjunction x = 1 ∨ x = 2 hanging in the air). A live propagation chain (Act 3, three notes passed).

Now we make the cooperation precise. Five questions to answer:

  1. What is a theory, formally? Practice talked about reasoners "knowing things." We need a precise account.
  2. What's the contract for a theory solver? L03 said each one "decides a conjunction." We sharpen that.
  3. When can two theory solvers be combined honestly? Three restrictions.
  4. How does the combination work? Purification, plus equality propagation.
  5. Why doesn't simple equality propagation suffice for the Act 2 formula? Convexity, and what to do without it.

The catchphrase still applies: theories pass notes through equality. Today we say what a note is, who can send one, and what makes one valid.

1. First-Order Logic Semantics

L03 said informally that a theory is a signature and some axioms. We now make this formal. The work pays off in the next section, where the theory-solver interface drops out as a one-liner.

A concrete structure

Take a universe with two elements: U = {◦, •}. Two elements, nothing more.

Now pin down the meaning of a small vocabulary: two constants x and y, a unary function symbol f, and a binary predicate symbol p.

I[x] = ◦, I[y] = •
I[f]: ◦ ↦ •, • ↦ ◦
I[p] = {(◦, ◦), (•, •)}

That data is what we mean by an interpretation I. Together with the universe U, it is a first-order structure, written ⟨U, I⟩. The structure has just enough furniture for any formula in this vocabulary to have a definite truth value.

Structures, in general

The general shape: a first-order structure is a pair ⟨U, I⟩ where

  1. U is a non-empty set, the universe (also called the domain);
  2. I is an interpretation: it assigns to each constant an element of U, to each n-ary function symbol a function from Uⁿ to U, and to each n-ary predicate symbol a relation over Uⁿ.

The two-element data above instantiates each clause: a finite U, explicit constants, an explicit function table, an explicit relation.

Evaluation rules

Terms denote elements of U. Atoms and formulas evaluate to true or false. Both are defined inductively over the syntax.

Terms. A constant c is given by I directly. A function application is the function applied to its arguments:

I[f(t1, …, tn)] = I[f](I[t1], …, I[tn])

Atoms. A predicate application is true exactly when the tuple of arguments lies in the relation:

I ⊨ p(t1, …, tn)  iff  (I[t1], …, I[tn]) ∈ I[p]

Formulas. The connectives:

I ⊨ ⊤ and I ⊭ ⊥
I ⊨ ¬F iff I ⊭ F
I ⊨ F1 ∧ F2 iff I ⊨ F1 and I ⊨ F2
I ⊨ F1 ∨ F2 iff I ⊨ F1 or I ⊨ F2

When I ⊨ F, we say the structure is a model of F, or that F is true under I.

We restrict ourselves to quantifier-free, ground formulas: no ∀ or ∃, and no free variables. Quantifiers are L06.

Evaluating a formula on the structure

Back to the two-element structure from above. Question: does I ⊨ p(f(y), f(f(x)))?

Compute the terms first, inside out: I[f(y)] = I[f](I[y]), and I[f(f(x))] = I[f](I[f](I[x])).

So the question is whether the pair (I[f(y)], I[f(f(x))]) lies in I[p]. It does.

I ⊨ p(f(y), f(f(x))). The structure is a model of the atom.

This example is small enough to evaluate by hand. Studio Exercise 1 is exactly this evaluator written in Python. Read the rules, type the missing case, predict the truth values, run, check.
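The evaluator really is a few lines. Here is a minimal sketch in plain Python, with terms as nested tuples and constants as strings; the interpretation tables below are illustrative (mine, not necessarily the studio's), chosen to match the shape of the example above.

```python
# Minimal evaluator for ground, quantifier-free FOL over a finite structure.
# The interpretation below is illustrative, not the studio's exact tables.
I = {
    'x': 'a', 'y': 'b',                    # constants denote elements
    'f': {('a',): 'b', ('b',): 'a'},       # unary function, table keyed by argument tuples
    'p': {('a', 'a'), ('b', 'b')},         # binary relation as a set of tuples
}

def eval_term(t):
    """Terms denote elements: constants via I directly, applications recursively."""
    if isinstance(t, str):                 # a constant
        return I[t]
    f, *args = t                           # ('f', t1, ..., tn)
    return I[f][tuple(eval_term(a) for a in args)]

def eval_formula(F):
    """Formulas evaluate to True/False, inductively over the connectives."""
    op, *rest = F
    if op == 'not':
        return not eval_formula(rest[0])
    if op == 'and':
        return all(eval_formula(G) for G in rest)
    if op == 'or':
        return any(eval_formula(G) for G in rest)
    # otherwise an atom: predicate applied to terms, tuple-in-relation test
    return tuple(eval_term(t) for t in rest) in I[op]

# p(f(y), f(f(x))): both terms come out to 'a', and ('a', 'a') is in the relation
print(eval_formula(('p', ('f', 'y'), ('f', ('f', 'x')))))  # True
```

The studio version adds the missing connective case and a few more test formulas; the recursion shape is exactly the two inductive definitions above.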

Satisfiability and validity, modulo nothing yet

The duality from L01 carries over. F is satisfiable iff some structure is a model of F. F is valid iff every structure is a model of F.

F is valid iff ¬F is unsatisfiable. The same counter-example search you have been doing all along; the trick scales up.

2. Theories as Restricted Structures

So far the universe and interpretation are arbitrary. I[+] can do whatever it likes; I[≤] too. But when we write x + y = y + x, we don't want structures where + means something weird. We want structures where + means actual addition.

You have already met four theories that pin down what their symbols mean:

| Theory | ΣT | What gets fixed |
| --- | --- | --- |
| T= (equality + UF) | =, plus arbitrary functions and constants | reflexivity, symmetry, transitivity, function and predicate congruence |
| TR (linear real arithmetic) | rational constants, +, * (by constants), ≤ | the standard reals |
| TZ (linear integer arithmetic) | integer constants, +, * (by constants), ≤ | the standard integers |
| TA (arrays) | select, store, = | read-over-write axioms |

Each row is a restriction on which structures count. TR admits only structures whose universe is the reals and whose +, *, ≤ act normally. T= admits any universe but forces equality to act like equality. TA admits any element type but forces select/store to satisfy the array axioms.

Now name the abstraction. A theory T is a pair

T = (ΣT, MT)

where ΣT is a signature and MT is a class of ΣT-structures, the T-models.

Each row of the table picks a ΣT and a class MT. The four theories above are the four you have already used.

Modulo a theory

Once we have a theory, satisfiability and validity modulo the theory are the obvious thing. Drop "every structure" and say "every T-model."

F is T-valid iff ¬F is T-unsatisfiable. The counter-example search still works: assert the negation, check T-satisfiability, recover a witness if there is one. Z3 has been doing exactly this every time you wrote Not(P) and called check().

Uninterpreted symbols

A formula can contain symbols outside ΣT. These are uninterpreted. The theory does not constrain them; T-models agree with the theory on ΣT but assign whatever they like to symbols outside.

This is what Function('f', IntSort(), IntSort()) and DeclareSort('Point') were doing all along. Z3 declares f to be a function symbol whose interpretation is unconstrained except by the equality axioms (which are part of T=, the always-on core theory). The L03 sq/sqabs trick depended on this exactly: declare umul uninterpreted, give it a single axiom, watch the solver succeed without bit-blasting.

3. The Theory-Solver Interface

A theory solver for T is a procedure that takes as input a conjunction of ΣT-literals and decides whether that conjunction is T-satisfiable.

Three things to notice.

  1. Conjunction only. Not arbitrary boolean combinations. The solver gets a list of literals it must satisfy simultaneously. Disjunctions, if there are any, are someone else's job.
  2. ΣT-literals only. No symbols from outside the theory's signature. Mixed-theory atoms have to be split apart first; that's purification.
  3. Decides T-satisfiability. The procedure terminates and gives the right answer.

This is the contract you have been seeing since L03. We just made it precise. Z3 has theory solvers for T=, TR, TZ, TA, bitvectors, strings, datatypes, and a few more; each one satisfies this interface in its own theory.

4. Nelson-Oppen Restrictions

You have two theory solvers, one for T1 and one for T2, each meeting the interface above. You want a solver for the combined theory T1 ∪ T2: conjunctions over Σ1 ∪ Σ2, looking for a single (T1 ∪ T2)-model.

We only ever combine two theories at a time. If you have three or more, say T= ∪ TR ∪ TA, you fold: combine T= with TR first, then combine the result with TA. Nelson-Oppen is the binary operation; the n-ary case is just a reduce.

Nelson-Oppen does this combination, but only under three restrictions. Each will feel a little abstract on first read; §8 walks through an algorithm that uses all three, and at that point you can look back and see exactly what each restriction is buying.

Why not just merge T1 and T2 into one big theory and call its solver? Two answers. First, the union of decidable theories is often undecidable (a classic result). Second, even when it stays decidable, a monolithic solver tends to be much slower than two specialists exchanging notes. Cooperation lets each theory solver keep its specialized data structures and algorithms.

Restriction 1: each Ti is decidable, quantifier-free, conjunctive

Each Ti already has a decision procedure for conjunctions of Σi-literals. This is the §3 interface. It is the prerequisite for everything that follows.

Restriction 2: signatures share only equality

Σ1 ∩ Σ2 = {=}

The two theories have disjoint vocabularies except for =. No symbol other than equality is shared.

This restriction makes the cooperation tractable. When T1's solver wants to tell T2's solver something about a shared constant, the only language they have in common is equality. The note has to be of the form x=y. Equality is the postal service from Practice; here we are saying it is the only postal service.

Restriction 3: each Ti is stably infinite

T is stably infinite iff every T-satisfiable formula has a T-model with infinite universe.

It is the metagame restriction: the correctness proof would not go through without it. Stable infiniteness was added in 1980 to fix a hole in Nelson and Oppen's original 1979 procedure. The fix was real: without the restriction, non-stably-infinite theories can be combined unsoundly.

The intuition is that the cooperation procedure sometimes needs room to invent fresh witnesses. If you have ever used gensym in Lisp or Symbol() in JavaScript, you know the affordance: conjure a fresh name nobody else is using. That is what stably-infinite buys the proof. The procedure may need to gensym a fresh element into the universe to make the cooperation go through. If a theory's models are forced to be finite, the procedure can run out of room and report SAT when the right answer is UNSAT.

A concrete counterexample makes the failure tangible. Combine TBV1 (one-bit bitvectors, only two values) with T=:

bvnot(x) ≠ bvnot(y) ∧ f(x) ≠ f(z) ∧ f(y) ≠ f(z)

This is unsatisfiable by pigeonhole. bvnot(x) ≠ bvnot(y) forces x ≠ y, and the two f-disequalities force x ≠ z and y ≠ z, so {x, y, z} needs three distinct values; TBV1 has only two. Nelson-Oppen does not see this. F1 (the bitvector half) is satisfiable on its own, F2 (the equality half) is satisfiable on its own, and there are no equalities to propagate, so the procedure returns SAT. Wrong. TBV1's finite universe is exactly the gap that the stably-infinite restriction closes off.

Theories that are stably infinite: T=, TR, TZ, TA. Most theories of interest.

Theories that are not stably infinite: fixed-width bitvectors (only 2^n elements per BV_n sort), the toy theory with axiom ∀x. x = a ∨ x = b (universe forced to size at most 2). For non-stably-infinite theories, Nelson-Oppen does not apply out of the box; specialized cooperation procedures handle them.

5. Purification

Restriction 2 says no shared symbols except =. But a typical mixed-theory formula has terms that mix vocabularies. Take

f(x + g(y)) ≤ g(a) + f(b)

The function symbols f and g are in Σ=. The operators + and ≤ are in ΣR. But x + g(y) applies a ΣR operator to a Σ= term; f(x + g(y)) does the reverse. Neither term lives purely in one signature.

Purification is the rewrite that splits a mixed formula into two pure conjunctions, one per theory. It is structurally identical to Tseitin from L01: introduce fresh names for cross-theory subterms, and pin those names with equalities.

The rewrite rule

Apply this rule to fixpoint:

Whenever a subterm t appears inside a context that belongs to a different theory's signature, introduce a fresh constant u, replace t with u in place, and conjoin u=t to the formula.

The rule covers three syntactic shapes uniformly: a foreign subterm inside a function application (f(t)), inside a predicate application (p(t)), or sitting on the other side of a cross-theory equality (c=t where c and t belong to different signatures). All three abstract t to a fresh u and conjoin u=t. The worked example below exercises only the function case; predicates and bare equalities work identically.

Each rewrite shortens the depth of theory-mixing. The rewrite terminates because the formula's syntax tree has finite depth and each step strictly reduces the mixing.

The fresh constants are called shared constants and act as the mailbox for cross-theory facts. Original constants from the formula (here a, b, x, y) may also be shared if they appear in both halves after the rewrite. Purification is the post office being installed; the equality propagation phase will run the mail.

A point that trips up first-time readers: shared constants are not shared terms. After purification every term belongs to one theory's signature only. That is the whole point. What gets shared is the constants that appear in both halves' literals, and those constants are how the two solvers will end up referring to the same objects. The next phase exchanges equalities over those constants, never over compound terms.

Warmup: a one-step purification

Before the bigger example, see the rule on a tiny formula. Take f(x+1)=y over T=TR. The subterm x+1 mixes signatures: a ΣR term sitting inside f, which is in Σ=. One application of the rule:

f(u) = y ∧ u = x + 1

Now separate.

Σ=: f(u) = y
ΣR: u = x + 1

Shared constant: u. Local: y in Σ=, x in ΣR. One mechanical step, one fresh name, two pure conjunctions joined by an equality. That's the whole rule. The worked example below repeats it five times on a denser formula.

Worked example

The whole transformation in one picture: one mixed formula goes in, two pure conjunctions come out, joined only by the shared constants that purification introduces.

graph TD
    accTitle: Purification splits a mixed formula into two pure halves
    accDescr: Input formula f(x + g(y)) ≤ g(a) + f(b) flows into a Purification box, which produces two outputs labeled Σ_R and Σ_=, joined by shared constants u_1 through u_5.
    F["f(x + g(y)) ≤ g(a) + f(b)"]
    P["Purification
(introduce u_1, …, u_5)"]
    R["Σ_R: u_4 = x + u_1 ∧ u_5 ≤ u_2 + u_3"]
    E["Σ_=: u_1 = g(y) ∧ u_2 = g(a) ∧ u_3 = f(b) ∧ u_5 = f(u_4)"]
    M["shared mailbox
{ u_1, u_2, u_3, u_4, u_5 }"]
    F --> P
    P --> R
    P --> E
    R -.- M
    E -.- M

Now the steps. Start with:

f(x + g(y)) ≤ g(a) + f(b)

The mixed subterms are g(y) (a Σ= term inside +), g(a) (similar), f(b) (similar), x + g(y) (a ΣR term inside f), and f(x + g(y)) (a Σ= term inside ≤). Five abstractions, in any order. Here is one pass:

  1. Abstract g(y) with u1: f(x + u1) ≤ g(a) + f(b) ∧ u1 = g(y)
  2. Abstract g(a) with u2: f(x + u1) ≤ u2 + f(b) ∧ u2 = g(a)
  3. Abstract f(b) with u3: f(x + u1) ≤ u2 + u3 ∧ u3 = f(b)
  4. Abstract x + u1 with u4: f(u4) ≤ u2 + u3 ∧ u4 = x + u1
  5. Abstract f(u4) with u5: u5 ≤ u2 + u3 ∧ u5 = f(u4)

Now every atom belongs to exactly one signature. Separate:

Σ=: u1 = g(y) ∧ u2 = g(a) ∧ u3 = f(b) ∧ u5 = f(u4)
ΣR: u4 = x + u1 ∧ u5 ≤ u2 + u3

Shared constants: u1, u2, u3, u4, u5 (all introduced by purification; each is defined in one half and used in the other, so each appears in both). Original a, b, x, y also live somewhere, but each appears in only one half here.

Tseitin escaped exponential CNF blowup by naming subformulas. Purification escapes mixed-theory atoms by naming cross-theory subterms. Same trick, different target. Same correctness story too: the result is not equivalent to the input (we added fresh constants the original didn't mention), but it is equi-satisfiable. Every model of the original lifts to a model of the purified pair by reading off the value of each ui from its definition, and every model of the purified pair projects back to a model of the original by ignoring those fresh constants.

6. Convexity

A theory T is convex iff: whenever a conjunction of T-literals F implies a finite disjunction of equalities, F implies one of those equalities individually.

F ⊨T (x1 = y1 ∨ ⋯ ∨ xn = yn)  implies  F ⊨T xi = yi for some i

If the theory can prove the disjunction, it can pick a specific disjunct. No genuine ambiguity.

Examples

Predict before reading on. Of TR, T=, and TZ, which are convex? Two are; one is not.

TR (linear real arithmetic) is convex. Geometric reason: the set of solutions to a conjunction of linear constraints over the reals is a convex polytope. If the polytope is contained in a union of finitely many hyperplanes, it is contained in one of them. The name "convex" comes from this geometry: a convex theory is one whose satisfying assignments form a convex set in the natural ambient space, and convex sets have the property that finitely many hyperplanes covering them must include one that already covers them. T= and TZ inherit the name even when no obvious geometry is in play.

A common trap: doesn't x·y = 0 imply x = 0 ∨ y = 0 in the reals, with neither disjunct alone implied, making TR non-convex? Yes, but x·y (variable times variable) is not an LRA term. LRA permits multiplication only by rational constants. Once x·y is in your formula, you have left LRA for nonlinear arithmetic, which is in fact non-convex. The convexity claim depends precisely on the signature being linear.

T= (equality with uninterpreted functions) is convex. Combinatorial reason: a conjunction of equalities and disequalities partitions the set of terms into equivalence classes. The implied equalities are exactly the equalities within a class. There is no "either-or" the theory can prove.

TZ (linear integer arithmetic) is not convex. The canonical witness is from Practice Act 2:

1 ≤ x ∧ x ≤ 2 ⊨TZ (x = 1 ∨ x = 2)

The conjunction implies the disjunction. But neither equality alone is implied. x could be 1; x could be 2. The integer lattice has gaps that allow this kind of split.

The theory of bitvectors is also not convex (similar reason: finite domain forces disjunctive consequences).

Why this matters

A convex theory propagates single equalities. A non-convex theory can imply genuine disjunctions of equalities, with no individual equality implied. The cooperation algorithm has to handle these two cases differently. The convex case is simpler; the non-convex case needs a search step.

We start with the convex case.

7. Equality Propagation: The Convex Algorithm

The full algorithm in pseudocode:

def NO_convex(F):
    F1, F2 = purify(F)            # split into pure conjunctions
    while True:
        if not T1_solver.sat(F1):
            return UNSAT
        if not T2_solver.sat(F2):
            return UNSAT
        new_eq = find_implied_equality(F1, F2)
        if new_eq is None:
            return SAT
        F1.append(new_eq)
        F2.append(new_eq)

find_implied_equality(F1, F2) looks for a pair of shared constants x,y such that one half implies x=y but the other half does not yet have it. If both halves already agree on all implied equalities, the algorithm returns SAT.

Three things to verify.

Soundness. If the algorithm returns UNSAT, the original F is genuinely unsatisfiable. Short argument: if a sub-solver returns unsat, that pure conjunction has no Ti-model, so the combined formula has no (T1T2)-model.

Termination. The number of shared constants is finite. The number of equalities between them is at most n(n−1)/2, where n is that count. Each iteration either returns or adds a new equality, so the loop runs at most n(n−1)/2 times. Complexity falls out: if both sub-solvers run in polynomial time, the convex Nelson-Oppen combination runs in polynomial time. If both are NP, the combination is NP. The non-convex extension in §9 stays in the same broad class but pays an additional factor for case-split branching.

Completeness, for convex theories. If the algorithm returns SAT, the original formula is genuinely satisfiable. The argument needs convexity: when no more equalities are implied by either side, the union of models for F1 and F2 (modulo agreement on shared constants) yields a (T1T2)-model. Stably infinite makes the universe-size accounting work; convexity makes the equalities-on-shared-constants accounting work.

Without convexity, the procedure is sound and terminates but is incomplete. It may return SAT when the formula is unsatisfiable. That gap is what §9 fills.

8. Worked Example

The same formula students saw in Practice Act 3:

f(f(x) − f(y)) ≠ f(z) ∧ x ≤ y ∧ y + z ≤ x ∧ 0 ≤ z

Over the reals, with f uninterpreted. Convex theories on both sides; the convex algorithm should decide it.

Purification

Three abstractions: u = f(x), v = f(y), w = u − v. Then f(f(x) − f(y)) becomes f(w). Split:

ΣR: x ≤ y ∧ y + z ≤ x ∧ 0 ≤ z ∧ w = u − v
Σ=: f(w) ≠ f(z) ∧ u = f(x) ∧ v = f(y)

Shared constants: x,y,z,u,v,w.

The chain

The big picture: four notes pass between the two reasoners. Three equalities cross the boundary; the fourth note is a contradiction.

sequenceDiagram
    accTitle: Nelson-Oppen propagation chain on the worked example
    accDescr: Σ_R and Σ_= exchange three equalities, then Σ_= reports unsat.
    autonumber
    participant R as Σ_R (LRA)
    participant E as Σ_= (EUF)
    Note over R,E: each side satisfiable on its own
    R->>E: x = y
    Note right of E: f(x) = f(y) by congruence, so u = v
    E->>R: u = v
    Note left of R: w = u − v = 0, and z = 0, so w = z
    R->>E: w = z
    Note right of E: f(w) = f(z) by congruence, contradicts f(w) ≠ f(z)
    Note over R,E: UNSAT

Sanity. Each side alone is satisfiable. Neither solver bails.

Step 1. ΣR implies x = y. From x ≤ y and y + z ≤ x, we get y + z ≤ y, so z ≤ 0. With 0 ≤ z, we have z = 0. Then y ≤ x ≤ y, so x = y.

ΣR derives x=y. Pass to Σ=.

Step 2. Σ=, now with x=y, implies u=v. Function congruence: u=f(x), v=f(y), x=y, so u=v.

Σ= derives u=v. Pass to ΣR.

Step 3. ΣR, now with u = v, implies w = z. We have w = u − v = 0 and (from Step 1's reasoning) z = 0, so w = z.

ΣR derives w=z. Pass to Σ=.

The third note arrives. Theories pass notes through equality.

Step 4. Σ=, now with x = y, u = v, w = z, is unsatisfiable. The constraint f(w) ≠ f(z) plus w = z plus function congruence gives f(w) = f(z) and f(w) ≠ f(z), contradiction.

Σ= is unsatisfiable. The original formula is UNSAT.

Running state

A scratchpad view: each row is one step of the sequence diagram above. The ★ marks the side that derived the equality on that step; the middle column shows which way the note flowed; the other side records the same equality when the algorithm appends it to both halves.

| Step | ΣR | dir | Σ= |
| --- | --- | --- | --- |
| 0 | x ≤ y, y + z ≤ x, 0 ≤ z, w = u − v | | f(w) ≠ f(z), u = f(x), v = f(y) |
| 1 | x = y ★ | → | x = y |
| 2 | u = v | ← | u = v ★ |
| 3 | w = z ★ | → | w = z |
| 4 | | | ⊥: f(w) = f(z) by congruence contradicts f(w) ≠ f(z) |

After step 3 each side's full conjunction has the original literals from step 0 plus all three propagated equalities x = y, u = v, w = z. Step 4 is then Σ= noticing that w = z plus f(w) ≠ f(z) is unsatisfiable.

The two columns stay in sync without ever sharing a non-equality literal. Theories pass notes through equality.

03-no-trace.py from Practice runs each step as a Z3 query of the form "does this conjunction with the negated conclusion become unsatisfiable?" and prints a check mark per step. The chain is mechanically verified: the page does not have to trust the prose.

A predicate-congruence cameo

The main worked example exercises function congruence (the f chain). Predicate congruence is the dual axiom: equal inputs produce equal truth values. Tiny example, pure T=:

p(f(x)) ∧ ¬p(f(y)) ∧ x = y

Function congruence on x = y gives f(x) = f(y). Predicate congruence on f(x) = f(y) gives p(f(x)) ↔ p(f(y)). But the formula says p(f(x)) and ¬p(f(y)), contradiction. UNSAT.

No purification needed; the formula is already pure equality. Quick demonstration of the axiom that the main worked example never gets to use.

9. Beyond Convex: Case Splits on Disjunctions

Now back to Practice Act 2. The formula:

1 ≤ x ∧ x ≤ 2 ∧ f(x) ≠ f(1) ∧ f(x) ≠ f(2)

with x ranging over the integers. Try the convex algorithm.

Purification

Introduce shared constants z1=1 and z2=2 so the f(1) and f(2) terms become f(z1) and f(z2).

ΣZ: 1 ≤ x ∧ x ≤ 2 ∧ z1 = 1 ∧ z2 = 2
Σ=: f(x) ≠ f(z1) ∧ f(x) ≠ f(z2)

Shared constants: x,z1,z2.

The convex algorithm gets stuck

ΣZ is satisfiable on its own (any x ∈ {1, 2}). Σ= is satisfiable on its own. Neither implies any single equality between shared constants, so find_implied_equality returns None. The algorithm returns SAT.

But this is wrong. The formula is unsatisfiable: if x = 1 then function congruence forces f(x) = f(z1), contradicting f(x) ≠ f(z1); the same goes for x = 2. Both possible x values close.

The algorithm got stuck because TZ implied a disjunction:

ΣZ ⊨TZ (x = z1 ∨ x = z2)

But TZ is non-convex, so neither disjunct is implied alone. The convex algorithm has no way to communicate the disjunction across the boundary.

The fix: case split

When a non-convex theory implies a disjunction of equalities, try each disjunct in turn. The combined formula is unsat iff every branch is unsat.

Pseudocode for the extended algorithm:

def NO(F):
    F1, F2 = purify(F)
    if convex_loop_unsat(F1, F2):
        return UNSAT
    # Convex loop done. Look for a disjunction of equalities.
    disjunction = find_implied_disjunction(F1, F2)
    if disjunction is None:
        return SAT
    for x_eq_y in disjunction:
        if NO(F + [x_eq_y]) == SAT:
            return SAT
    return UNSAT

The recursion bottoms out because each branch fixes one more equality, and the number of equalities between shared constants is finite.

Tracing both branches

Back to our formula. Branch on x = z1 ∨ x = z2.

Branch A: x=z1.

Add x = z1 to both halves. Σ= now has f(x) ≠ f(z1) and x = z1. Function congruence gives f(x) = f(z1), contradicting f(x) ≠ f(z1). UNSAT.

Branch B: x=z2.

Add x = z2. Σ= now has f(x) ≠ f(z2) and x = z2. Same congruence argument: f(x) = f(z2), contradiction. UNSAT.

Both branches close. The original formula is UNSAT.

This is the resolution we promised at the end of Practice Act 2. The ΣZ reasoner could not pass any single equality to the Σ= reasoner; it could only pass the disjunction. The non-convex extension says: try each disjunct, all branches must close. Both did.

10. Closing Bridge: What's Left

We have an algorithm. Three restrictions met (decidable conjunctive, signatures share only equality, stably infinite). Purification. Equality propagation, convex case. Case split on disjunctions of equalities, non-convex case. End to end, conjunctions over T1 ∪ T2 are decidable.

Today's piece sits in the middle of a larger SMT picture:

graph TD
    accTitle: Where Nelson-Oppen sits inside an SMT solver
    accDescr: A CDCL/DPLL(T) layer hands conjunctions to a Nelson-Oppen layer, which dispatches to per-theory solvers for T_=, T_R, T_Z, and T_A.
    Top["Boolean envelope: CDCL / DPLL(T)
(L06)"]
    NO["Nelson-Oppen
(today)"]
    EQ["T_= solver
(L03)"]
    LR["T_R / T_Z solvers
(L04)"]
    AR["T_A solver
(L04)"]
    Top -- "conjunction of T-literals" --> NO
    NO -- "Σ_= conjunction" --> EQ
    NO -- "arith conjunction" --> LR
    NO -- "array conjunction" --> AR

But there are two pieces L04 promised that we have not delivered.

The boolean envelope

Nelson-Oppen takes a conjunction. Real formulas are not always conjunctions. Take

(x = y ∧ f(x) ≠ f(y)) ∨ (a + b > 5 ∧ a + b < 3)

Each disjunct is unsatisfiable: function congruence kills the left, arithmetic kills the right. But Nelson-Oppen cannot see the OR. The missing piece is DPLL(T): the CDCL engine from Week 2 manages the boolean structure on top, dispatching conjunctions of theory literals to Nelson-Oppen (or directly to a single theory solver) at the leaves. That is L06.

ForAll returns

You have already used universal quantifiers, possibly without noticing. The EUF axioms from L03:

∀x. x = x
∀x, y. x = y → y = x
∀x, y, z. x = y ∧ y = z → x = z
∀x1, …, xn, y1, …, yn. (x1 = y1 ∧ ⋯ ∧ xn = yn) → f(x1, …, xn) = f(y1, …, yn)

are all universally quantified. Z3 has been instantiating them behind the scenes every time you wrote a formula in T=. Today's worked example used the function-congruence axiom three times (Steps 2, 3, 4). We just did not name it.

Quantifiers come back explicitly in L06. The mechanisms are E-matching (instantiate ∀x. φ(x) at terms that already appear in the formula's congruence closure) and MBQI (model-based quantifier instantiation: try to build a model where the quantified formula holds, and instantiate where the model fails). A small foreshadowing example, the kind of thing Z3 just handles:

foreshadowing.py
from z3 import Function, IntSort, Ints, ForAll, Solver, Not

f = Function('f', IntSort(), IntSort())
x, y = Ints('x y')

s = Solver()
s.add(ForAll([x], f(x) >= 0))         # universal axiom about f
s.add(Not(f(y) >= 0))                 # negation of an instance
print(s.check())                      # unsat (by E-matching)

Z3 returns UNSAT. The instantiation x ↦ y of the universal axiom contradicts the negation. E-matching is the mechanism that finds the right instance. Next week, in detail.

What we built today

Two notes for your future self.

  1. Theories pass notes through equality. The catchphrase is literal. Equality is the only shared symbol; everything else stays in its own world. The mailbox is the set of shared constants.
  2. A reduction is happening here too. The course catchphrase still applies. Purification is a reduction from a mixed-theory formula to a pair of pure formulas linked by shared equalities. Get the purification wrong and the solver gives you a correct answer to the wrong question.

Studio after the break: implement the FOL evaluator, predict truth values, work through some mixed-theory predict-then-check formulas, and purify by hand.

Source

This page follows Aaron Bradley and Zohar Manna, The Calculus of Computation (Springer, 2007), Chapter 10. Purification is §10.2 (p. 265–269); the convex equality-propagation algorithm is §10.3 (p. 272–276); the worked example is Example 10.13 with Figure 10.1 (p. 279–280); the non-convex case-split extension is §10.5 (p. 285–287). Convexity intuition and the stably-infinite restriction are presented in this page's own framing; both are covered in Bradley and Manna §10.4 and §10.6.