CSE 312 – Section 3 Solutions
Spring 2026
Review of Main Concepts
- Conditional Probability: \(\Pr (A|B) =\dfrac {\Pr (A \cap B)}{\Pr (B)}\)
- Independence: Events \(E\) and \(F\) are independent iff \(\Pr (E \cap F)\) = \(\Pr (E)\Pr (F)\), or equivalently \(\Pr (F) =\Pr (F|E)\), or equivalently \(\Pr (E) =\Pr (E|F)\)
- Bayes Theorem: \(\Pr (A|B) =\dfrac {\Pr (B|A)\Pr (A)}{\Pr (B)}\)
- Partition: Nonempty events \(E_1,\dots ,E_n\) partition the sample space \(\Omega \) iff
- \(E_1,\dots ,E_n\) are exhaustive: \(E_1 \cup E_2 \cup \dots \cup E_n=\bigcup _{i=1}^n E_i =\Omega \), and
- \(E_1,\dots ,E_n\) are pairwise mutually exclusive: \(\forall i \neq j, E_i \cap E_j=\emptyset \)
- Law of Total Probability (LTP): Suppose \(A_{1},\ldots ,A_{n}\) partition \(\Omega \) and let \(B\) be any event. Then
\(\Pr ( B ) = \sum _{i = 1}^{n}{\Pr (B \cap A_{i})} = \sum _{i = 1}^{n}{\Pr ( B \mid A_{i})\Pr (A_{i})}\)
- Bayes Theorem with LTP: Suppose \(A_{1},\ldots ,A_{n}\) partition \(\Omega \) and let \(B\) be any event. Then \( \Pr ( A_1 \mid B) =\dfrac {\Pr ( B \mid A_1 ) \Pr (A_1)}{\sum _{i = 1}^{n}{\Pr ( B \mid A_{i})\Pr (A_{i})}} \). In particular, \( \Pr ( A \mid B) = \dfrac {\Pr ( B \mid A )\Pr ( A ) }{\Pr ( B \mid A)\Pr ( A ) + \Pr ( B \mid A^{C})\Pr (A^{C})} \)
- Chain Rule: Suppose \(A_1,...,A_n\) are events. Then, \[\Pr (A_1\cap ...\cap A_n)=\Pr (A_1)\Pr (A_2|A_1)\Pr (A_3|A_1\cap A_2)...\Pr (A_n|A_1\cap ...\cap A_{n-1})\]
- Conditional Independence: Events \(E\) and \(F\) are independent conditioned on event \(G\) iff \(\Pr (E \cap F|G)\) = \(\Pr (E|G)\Pr (F|G)\).
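The LTP and Bayes formulas above can be sanity-checked numerically. Below is a minimal sketch with a two-event partition; the priors \(1/3, 2/3\) and likelihoods \(1/2, 1/4\) are arbitrary numbers chosen only for illustration:

```python
from fractions import Fraction as F

# Hypothetical two-event partition: priors Pr(A1), Pr(A2) and
# likelihoods Pr(B | A1), Pr(B | A2), chosen only for illustration.
priors = [F(1, 3), F(2, 3)]
likelihoods = [F(1, 2), F(1, 4)]

# Law of Total Probability: Pr(B) = sum_i Pr(B | Ai) Pr(Ai)
pr_b = sum(l * p for l, p in zip(likelihoods, priors))

# Bayes' Theorem with LTP: Pr(A1 | B) = Pr(B | A1) Pr(A1) / Pr(B)
post_a1 = likelihoods[0] * priors[0] / pr_b

print(pr_b)     # 1/3
print(post_a1)  # 1/2
```

Using `fractions.Fraction` keeps every intermediate value exact, so the check is not clouded by floating-point round-off.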
Section Plan
Administrivia
- PSet 1 grades have been released on Gradescope. Check your submissions to read the comments. Regrade requests open 24 hours after grades are released and close after a week.
- PSet 2 was due yesterday, 4/15, at 11:59 PM. The late deadline is Saturday at 11:59 PM (max of 3 late days per problem set).
- PSet 3 is out and due next Wednesday (April 22). Submit on Gradescope.
- PSet 3 has a coding part! You will be working partly in Ed, but see the instructions on the PSet.
- For an introduction to the Naive Bayes Classifier and details on implementation, see the video and slides (as well as some self-check questions) linked at the edstem lesson we have set up for this task. Also check out Section 9.3 of the book.
Questions about material covered so far?
Problems to be covered in section
- Problem 1
- Problem 3
- Problem 5
- Problem 9
- Time permitting, Problems 10 and 11.
Be sure to check out all the remaining problems before you do your homework.
1 Content Review
- a)
- True or False: It is always the case that \(\Pr (A \mid B) = \Pr (B \mid A)\).
False. It is not true in general that \[ \Pr (A \mid B) = \frac {\Pr (A \cap B)}{\Pr (B)} = \frac {\Pr (A \cap B)}{\Pr (A)} = \Pr (B \mid A)\;. \] Any pair of events \(A\) and \(B\) such that \(\Pr (A \cap B) > 0\) and \(\Pr (A) \neq \Pr (B)\) would be a valid counterexample.
- b)
- Select one: Suppose \(A\) and \(B\) are independent events. Then
- \(\Pr (A \cap B) = \Pr (A)\Pr (B)\)
- \(\Pr (A \mid B) = \Pr (A)\)
- Both are true
- Both are false
Both are true, as we have seen in lecture.
- c)
- Select one: For any two events \(A\) and \(B\),
- \(\Pr (A \mid B) = \Pr (A \cap B)\)
- \(\Pr (A \mid B) = \frac {\Pr (B \mid A)\Pr (A)}{\Pr (B)}\)
- \(\Pr (A \mid B) = \frac {\Pr (B \mid A)\Pr (B)}{\Pr (A)}\)
\(\Pr (A \mid B) = \frac {\Pr (B \mid A)\Pr (A)}{\Pr (B)}\) by Bayes’ Theorem.
- d)
- True or False: Let \(A\) be the event that a roll of a six-sided die shows a number that is at most 3, and let \(B\) be the event that it shows a number more than 4. Then \(A\) and \(B\) form a partition of the sample space.
False. \(A\) and \(B\) are mutually exclusive (it’s not possible to roll a number that is at most 3 and more than 4 at the same time). However, \(A\) and \(B\) are not exhaustive: the number 4 is not in either event.
- e)
- Select one: Let \(T\) be the event that Alice tests positive for the flu. Let \(C\) be the
event that Alice actually has the flu. Suppose the probability that Alice
tests positive given that she has the flu is \(0.8\). Then the probability she tests
negative given that she has the flu is
- \(0.8\)
- \(0.2\)
- Not enough information
First, note that for two events \(A\) and \(B\), \[ \Pr (A \mid B) + \Pr (\overline {A} \mid B) = \frac {\Pr (A \cap B)}{\Pr (B)} + \frac {\Pr (\overline {A} \cap B)}{\Pr (B)} = \frac {\Pr (A \cap B) + \Pr (\overline {A} \cap B)}{\Pr (B)} = \frac {\Pr (B)}{\Pr (B)} = 1 \;, \] where the first part uses the definition of conditional probability, and the last equality comes from using the Law of Total Probability in the numerator (since \(A\) and \(\overline {A}\) partition the sample space). Now, we are looking for \(\Pr (\overline {T} \mid C)\). Then \[ \Pr (T \mid C) + \Pr (\overline {T} \mid C) = 1 \implies \Pr (\overline {T} \mid C) = 1 - 0.8 = \boxed {\text {0.2}} \;. \]
- f)
- True or False: Suppose \(A_1,\ldots ,A_n\) are events. Then \[ \Pr (A_1 \cap A_2 \cap \cdots \cap A_n) = \Pr (A_1) \cdot \Pr (A_2 \mid A_1) \Pr (A_3\mid A_1 \cap A_2) \cdots \Pr (A_n \mid A_1 \cap \cdots \cap A_{n-1})\;. \]
True; this is exactly the Chain Rule (which follows from repeatedly applying the definition of conditional probability).
2 Flipping Coins
A coin is tossed twice. The coin comes up “heads” one quarter of the time. You can assume that the second toss is independent of the first toss.
- a)
- What is the probability that the second toss is “heads” given that the
first toss is “tails”?
Sample Space: \(\Omega = \{HH, TT, HT, TH\}\). Because heads comes up \(1/4\) of the time and tails \(3/4\) of the time, the probabilities of the outcomes are:
- \(\Prob {HH} = 1/4 \times 1/4 = 1/16\)
- \(\Prob {HT} = \Prob {TH} = 3/4 \times 1/4 = 3/16\)
- \(\Prob {TT} = 3/4 \times 3/4 = 9/16\)
Let \(A\) be the event that the first coin is tails: \(A = \{TT, TH\}\). Let \(B\) be the event that the second coin is heads: \(B = \{HH, TH\}\).
We want the conditional probability \(\Prob {B | A} = \frac { \Prob {A \cap B}}{\Prob {A}}\). \(\Prob {A} = \Prob {TT} + \Prob {TH} = 9/16 + 3/16 = 12/16 = 3/4\). \(\Prob {A \cap B} = \Prob {TH} = 3/16\).
Therefore, \(\Prob {B | A} = \frac {3/16}{3/4} = \) \(1/4\). (Note: This is exactly what we expect, as the coins are modeled to be independent!)
- b)
- What is the probability that the second toss is “heads” given that at least
one of the tosses is “tails”?
Let \(A\) and \(B\) be the same events as in a). Let \(C\) be the event that at least one toss is tails: \(C = \{TH, HT, TT\}\). Note that HH is excluded since it contains no tails. We want the conditional probability \(\Prob {B|C}\).
\(\Prob {C} = 1 - \Prob {HH} = 1 - 1/16 = 15/16\). The intersection \(B \cap C\) is the event that the second toss is heads AND at least one toss is tails, which is \(\{HH, TH\} \cap \{TH, HT, TT\} = \{TH\}\). Therefore, \(\Prob {B \cap C} = \Prob {TH} = 3/16\).
Therefore, \(\Prob {B | C} =\frac {\Prob {B \cap C}}{\Prob {C}} = \frac {3/16}{15/16} = \frac {3}{15} = \) \(\frac {1}{5}\).
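Both answers can be double-checked by enumerating the four-outcome sample space directly. This is a sketch: the helper `pr` and the event sets are ours, not course-provided code.

```python
from fractions import Fraction as F
from itertools import product

# Two tosses of a coin with Pr(heads) = 1/4; tosses are independent,
# so each outcome's probability is the product of per-toss probabilities.
p_h = F(1, 4)
outcomes = {
    "".join(o): (p_h if o[0] == "H" else 1 - p_h) * (p_h if o[1] == "H" else 1 - p_h)
    for o in product("HT", repeat=2)
}

def pr(event):
    # Total probability of a set of outcomes.
    return sum(outcomes[o] for o in event)

A = {"TT", "TH"}        # first toss tails
B = {"HH", "TH"}        # second toss heads
C = {"TH", "HT", "TT"}  # at least one toss tails

print(pr(A & B) / pr(A))  # part a): 1/4
print(pr(B & C) / pr(C))  # part b): 1/5
```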
- c)
- With respect to the probability space of this problem, give an example of
two events that are disjoint but not independent.
Let \(E_1 = \{TT\}\) and \(E_2 = \{HH\}\). These are disjoint (mutually exclusive) because their intersection is the empty set (\(\Prob {E_1 \cap E_2} = \Prob {\emptyset } = 0\)). They are not independent because both events occur with positive probability, meaning \(\Prob {E_1} \cdot \Prob {E_2} > 0\), which does not equal \(\Prob {E_1 \cap E_2}\).
- d)
- With respect to the probability space of this problem, give an example of
two events that are independent but not disjoint.
Let \(E_1 = \{TH, HH\}\) (second coin is heads) and \(E_2 = \{TH, TT\}\) (first coin is tails). These are not disjoint because they share the outcome \(TH\). They are independent because the outcome of the first toss does not affect the outcome of the second toss, as verified in part a).
3 Balls from an Urn
Say an urn contains three red balls and four blue balls. Imagine we draw three balls without replacement. (You can assume every ball is uniformly selected among those remaining in the urn.)
- a)
- What is the probability that all three balls are of the same color?
The experiment is modeled with \(\Omega = \{r, b\}^3\). By assuming every draw is uniform among the remaining balls, and letting \(r_i\) (resp. \(b_i\)) be the event that we draw a red (resp. blue) ball on the \(i\)-th draw, we apply the chain rule:
- Using the chain rule, we have \(\Prob {r_1r_2r_3} = \Prob {r_1}\cdot \Prob {r_2 | r_1} \cdot \Prob {r_3 | r_1r_2}=\frac {3}{7} \cdot \frac {2}{6} \cdot \frac {1}{5} = \frac {1}{35}\)
- Similarly, \(\Prob {b_1b_2b_3} = \frac {4}{7} \cdot \frac {3}{6} \cdot \frac {2}{5} = \frac {4}{35}\)
Because these are mutually exclusive events, the probability they all have the same color is \(\frac {1}{35} + \frac {4}{35} =\) \(\frac {1}{7}\).
- b)
- What is the probability that we get more than one red ball given the first
ball is red?
Let \(R\) be the event that the first ball is red. Since the first draw is uniform, \(\Prob {R} = \frac {3}{7}\). Let \(M\) be the event that more than one ball is red. We need to compute the conditional probability \(\Prob {M|R} = \frac {\Prob {M \cap R}}{\Prob {R}}\).
By the Law of Total Probability: \(\Prob {M \cap R} = \Prob {R} - \Prob {M^c \cap R}\)
\(M^c \cap R\) is the event that the first ball is red, and we do NOT get more than one red ball (meaning the remaining two MUST be blue). We can calculate this directly: \(\Prob {M^c \cap R} = \Prob {r_1b_2b_3}=\frac {3}{7} \cdot \frac {4}{6} \cdot \frac {3}{5} = \frac {6}{35}\)
Thus, \(\Prob {M \cap R} = \frac {3}{7} - \frac {6}{35} = \frac {9}{35}\). Finally, \(\Prob {M|R} = \frac {9/35}{3/7} =\) \(\frac {3}{5}\).
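Both parts can be verified by summing chain-rule probabilities over ordered color sequences. A minimal sketch; `seq_prob` is a helper we define here:

```python
from fractions import Fraction as F
from itertools import product

# Urn contents: 3 red (R), 4 blue (B); draws are without replacement.
URN = "RRRBBBB"

def seq_prob(colors):
    # Chain-rule probability of drawing this exact ordered color sequence.
    remaining = list(URN)
    p = F(1)
    for c in colors:
        p *= F(remaining.count(c), len(remaining))
        remaining.remove(c)
    return p

# Part a): all three draws the same color.
same_color = seq_prob("RRR") + seq_prob("BBB")

# Part b): Pr(more than one red | first is red), summing over sequences.
pr_R = F(3, 7)
pr_M_and_R = sum(
    seq_prob(seq)
    for seq in ("".join(t) for t in product("RB", repeat=3))
    if seq[0] == "R" and seq.count("R") > 1
)

print(same_color)         # 1/7
print(pr_M_and_R / pr_R)  # 3/5
```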
4 Random Grades?
Suppose there are three possible teachers for CSE 312: Martin Tompa, Anna Karlin, and Adam Blank. Suppose the probability of getting an \(A\) in Tompa’s class is \(\frac {5}{15}\), in Karlin’s class is \(\frac {3}{15}\), and in Blank’s class is \(\frac {1}{15}\). Suppose you are assigned a grade randomly according to the given probabilities when you take a class from one of these professors, irrespective of your performance. Furthermore, suppose Blank teaches your class with probability \(\frac {1}{2}\) and Karlin and Tompa have an equal chance of teaching if Blank isn’t. Use Bayes’ Rule to compute the probability you had Blank, given that you received an \(A\). Compare this to the unconditional probability that you had Blank.
Let \(T, K, B\) be the events you take CSE 312 from Tompa, Karlin, and Blank, respectively. Let \(A\) be the event you get an \(A\). Because Blank teaches with probability \(1/2\), we have \(\Pr (B) = 1/2\). Because Karlin and Tompa split the remaining probability equally, \(\Pr (K) = 1/4\) and \(\Pr (T) = 1/4\).
We use Bayes’ Theorem with the Law of Total Probability (LTP), conditioning on each of \(T,K,B\) since those events partition our sample space: \[\Pr ( B \mid A ) = \frac {\Pr ( A \mid B ) \Pr ( B ) }{\Pr ( A \mid T ) \Pr ( T ) + \Pr ( A \mid K ) \Pr ( K ) + \Pr ( A \mid B ) \Pr ( B ) } \] \[ = \frac {\frac {1}{15} \cdot \frac {1}{2}}{\left (\frac {5}{15} \cdot \frac {1}{4}\right ) + \left (\frac {3}{15} \cdot \frac {1}{4}\right ) + \left (\frac {1}{15} \cdot \frac {1}{2}\right )} \] \[ = \frac {\frac {1}{30}}{\frac {5}{60} + \frac {3}{60} + \frac {2}{60}} = \frac {\frac {2}{60}}{\frac {10}{60}} = \frac {2}{10} = \boxed {\frac {1}{5}}\]
The unconditional probability that you had Blank was \(\Pr (B) = \frac {1}{2}\). Because getting an \(A\) in his class is relatively rare compared to the other professors, observing that you received an \(A\) significantly lowers the probability that you were in his section (down to \(\frac {1}{5}\)).
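The Bayes computation above can be replicated in a few lines of exact arithmetic, a quick sketch using the numbers from the problem:

```python
from fractions import Fraction as F

# Priors on the teacher and Pr(A | teacher), taken from the problem.
prior = {"Tompa": F(1, 4), "Karlin": F(1, 4), "Blank": F(1, 2)}
pr_A_given = {"Tompa": F(5, 15), "Karlin": F(3, 15), "Blank": F(1, 15)}

# Law of Total Probability for Pr(A), then Bayes for Pr(Blank | A).
pr_A = sum(pr_A_given[t] * prior[t] for t in prior)
pr_blank_given_A = pr_A_given["Blank"] * prior["Blank"] / pr_A

print(pr_A)              # 1/6
print(pr_blank_given_A)  # 1/5
```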
5 Marbles in Pockets
Aleks has 5 blue and 3 white marbles in her left pocket, and 4 blue and 4 white marbles in her right pocket. If she transfers a randomly chosen marble from left pocket to right pocket without looking, and then draws a randomly chosen marble from her right pocket, what is the probability that it is blue?
Let \(W_{\_}, B_{\_}\) denote the event that we choose a white marble or a blue marble respectively, with subscripts \(L, R\) indicating from which pocket we are picking – left and right.
We know that we will pick from the left pocket first, and the right pocket second. We can use the Law of Total Probability, conditioning on the color of the transferred marble (the marble picked from the left pocket): \[\Pr (B_R) = \Pr (W_L) \cdot \Pr (B_R \mid W_L) + \Pr (B_L) \cdot \Pr (B_R \mid B_L)\] If a white marble is transferred (\(\Pr (W_L) = \frac {3}{8}\)), the right pocket now contains 4 blue and 5 white marbles, so \(\Pr (B_R \mid W_L) = \frac {4}{9}\). If a blue marble is transferred (\(\Pr (B_L) = \frac {5}{8}\)), the right pocket now contains 5 blue and 4 white marbles, so \(\Pr (B_R \mid B_L) = \frac {5}{9}\).
Plugging these in: \[\Pr (B_R) = \left (\frac {3}{8} \cdot \frac {4}{9}\right ) + \left (\frac {5}{8} \cdot \frac {5}{9}\right ) = \frac {12}{72} + \frac {25}{72} = \boxed {\frac {37}{72}} \]
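The same LTP computation, written out as a short sketch in exact fractions:

```python
from fractions import Fraction as F

# Left pocket: 5 blue, 3 white. Right pocket: 4 blue, 4 white.
# Condition on the color of the transferred marble (LTP).
pr_move_blue, pr_move_white = F(5, 8), F(3, 8)

# After the transfer, the right pocket holds 9 marbles.
pr_blue_after_blue = F(5, 9)   # 5 blue, 4 white remain on the right
pr_blue_after_white = F(4, 9)  # 4 blue, 5 white remain on the right

pr_blue = pr_move_blue * pr_blue_after_blue + pr_move_white * pr_blue_after_white
print(pr_blue)  # 37/72
```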
6 Game Show
Corrupted by their power, the judges running the popular game show America’s Next Top Mathematician have been taking bribes from many of the contestants. During each of two episodes, a given contestant is either allowed to stay on the show or is kicked off. If the contestant has been bribing the judges, she will be allowed to stay with probability 1. If the contestant has not been bribing the judges, she will be allowed to stay with probability 1/3, independent of what happens in earlier episodes. Suppose that 1/4 of the contestants have been bribing the judges. The same contestants bribe the judges in both rounds.
- a)
- If you pick a random contestant, what is the probability that she is
allowed to stay during the first episode?
Let \(S_i\) be the event that she stayed during the \(i\)-th episode. Let \(B\) be the event that she bribes the judges (so \(\Pr (B) = 1/4\) and \(\Pr (\overline {B}) = 3/4\)). By the Law of Total Probability, conditioning on whether the contestant bribed the judges, we get: \[\Pr (S_1) = \Pr (B) \cdot \Pr (S_1 \mid B) + \Pr (\overline {B}) \cdot \Pr (S_1 \mid \overline {B})\] \[ = \left (\frac {1}{4} \cdot 1\right ) + \left (\frac {3}{4} \cdot \frac {1}{3}\right ) = \frac {1}{4} + \frac {1}{4} = \boxed {\frac {1}{2}}\]
- b)
- If you pick a random contestant, what is the probability that she is allowed
to stay during both episodes?
Staying during both episodes is equivalent to the contestant staying in episodes \(1\) and \(2\), so the event \(S_1 \cap S_2\). By the Law of Total Probability: \begin {equation} \Pr (S_1 \cap S_2) = \Pr (B) \cdot \Pr (S_1 \cap S_2 \mid B) + \Pr (\overline {B}) \cdot \Pr (S_1 \cap S_2 \mid \overline {B}) \label {Prob4:LTP} \end {equation} We know a contestant is guaranteed to stay on the show given that they are bribing the judges, hence: \[\Pr (S_1 \cap S_2 \mid B) = 1\] On the other hand, if they have not been bribing the judges, then the probability they stay on the show is \(1/3\), independent of what happens on earlier episodes. The statement that her chance of staying is "independent of what happens in earlier episodes" means that the events \(S_1\) and \(S_2\) are conditionally independent given \(\overline {B}\). Therefore, we have: \[\Pr (S_1 \cap S_2 \mid \overline {B}) = \Pr (S_1 \mid \overline {B}) \cdot \Pr (S_2 \mid \overline {B}) = \frac {1}{3} \cdot \frac {1}{3} = \frac {1}{9}\] Plugging the results of these calculations back into the Law of Total Probability expression above gives us: \[\Pr (S_1 \cap S_2)= \left (\frac {1}{4} \cdot 1\right ) + \left (\frac {3}{4} \cdot \frac {1}{9}\right ) = \frac {3}{12} + \frac {1}{12} = \frac {4}{12} = \boxed {\frac {1}{3}}\]
- c)
- If you pick a random contestant who was allowed to stay during the first
episode, what is the probability that she gets kicked off during the second
episode?
By the definition of conditional probability and the Law of Total Probability: \[\Pr (\overline {S_2} \mid S_1) = \frac {\Pr (S_1 \cap \overline {S_2})}{\Pr (S_1)} = \frac {\Pr (S_1 \cap \overline {S_2} \mid B) \Pr (B) + \Pr (S_1 \cap \overline {S_2} \mid \overline {B}) \Pr (\overline {B})}{\Pr (S_1)}\] We have already computed \(\Pr (S_1) = 1/2\) in part (a). We compute the numerator term by term.
Given that a contestant is bribing the judges, they are guaranteed to stay on the show, so the probability of being kicked off is 0: \[\Pr (S_1 \cap \overline {S_2} \mid B) = \Pr (S_1 \mid B) \cdot \Pr (\overline {S_2} \mid B) = 1\cdot 0 = 0 \] If they have not been bribing judges, the probability they leave the show is \(2/3\) (by complementing). We can then write (by conditional independence): \[\Pr (S_1 \cap \overline {S_2} \mid \overline {B}) = \Pr (S_1 \mid \overline {B}) \cdot \Pr (\overline {S_2} \mid \overline {B}) = \frac {1}{3} \cdot \frac {2}{3} = \frac {2}{9}\] We can now evaluate our initial expression: \[\Pr (\overline {S_2} \mid S_1) = \frac {0 \cdot \frac {1}{4} + \frac {2}{9} \cdot \frac {3}{4}}{\frac {1}{2}} = \frac {1/6}{1/2} = \boxed {\frac {1}{3}}\]
- d)
- If you pick a random contestant who was allowed to stay during
the first episode, what is the probability that she was bribing the
judges?
Let \(B\) be the event that she bribed the judges. By Bayes’ Theorem: \[\Pr (B \mid S_1) = \frac {\Pr (S_1 \mid B)\Pr (B)}{\Pr (S_1)} = \frac {1 \cdot \frac {1}{4}}{\frac {1}{2}} = \boxed {\frac {1}{2}}\]
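All four parts can be checked at once by enumerating the eight joint outcomes (bribing status and the two stay/leave results) with their exact probabilities. A sketch; the helpers `outcomes` and `pr` are ours:

```python
from fractions import Fraction as F
from itertools import product

# Enumerate (bribing?, stayed episode 1?, stayed episode 2?); staying is
# conditionally independent across episodes given the bribing status.
def outcomes():
    for bribe in (True, False):
        p_bribe = F(1, 4) if bribe else F(3, 4)
        p_stay = F(1, 1) if bribe else F(1, 3)
        for s1, s2 in product((True, False), repeat=2):
            p = p_bribe
            p *= p_stay if s1 else 1 - p_stay
            p *= p_stay if s2 else 1 - p_stay
            yield bribe, s1, s2, p

def pr(pred):
    return sum(p for b, s1, s2, p in outcomes() if pred(b, s1, s2))

pr_s1 = pr(lambda b, s1, s2: s1)
print(pr_s1)                                        # a) 1/2
print(pr(lambda b, s1, s2: s1 and s2))              # b) 1/3
print(pr(lambda b, s1, s2: s1 and not s2) / pr_s1)  # c) 1/3
print(pr(lambda b, s1, s2: b and s1) / pr_s1)       # d) 1/2
```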
7 Parallel Systems
A parallel system functions whenever at least one of its components works. Consider a parallel system of \(n\) components and suppose that each component works with probability \(p\) independently.
- a)
- What is the probability the system is functioning?
Let \(C_i\) be the event that component \(i\) is working, and let \(F\) be the event that the system is functioning. For the system to function, it is sufficient for any component to be working. This means that the only case in which the system does not function is when none of the components work. We can use Complementary Probability to compute \(\Pr (F)\), knowing that \(\Pr (C_i) = p\). We get: \[\Pr (F) = 1 - \Pr (F^C) = 1 - \Pr \left (\bigcap _{i=1}^n{C_i^C}\right ) = 1 - \prod _{i=1}^n{\Pr (C_i^C)} \]
\[ = 1 - \prod _{i=1}^n{\left (1 - \Pr (C_i)\right )} = 1 - \prod _{i=1}^n{(1 - p)} = \boxed {1-(1-p)^n}\]
Note that \(\Pr (\bigcap _{i=1}^n{C_i^C}) = \prod _{i=1}^n{\Pr (C_i^C)}\) due to the independence of \(C_i\) (components failing independently of each other). Also note that \(\prod _{i=1}^n{a} = a^n\) for any constant \(a\).
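The closed form can be cross-checked by brute-force enumeration of which components work, for small values of \(n\) and \(p\) chosen only to keep the enumeration easy to inspect:

```python
from fractions import Fraction as F
from itertools import product

# Brute-force check of Pr(system functions) = 1 - (1-p)^n for n = 3,
# p = 1/3 (small illustrative values). Components fail independently.
n, p = 3, F(1, 3)

pr_functioning = F(0)
for pattern in product((True, False), repeat=n):
    if any(pattern):  # at least one component works
        weight = F(1)
        for works in pattern:
            weight *= p if works else 1 - p
        pr_functioning += weight

print(pr_functioning)    # 19/27
print(1 - (1 - p) ** n)  # 19/27, matching the closed form
```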
- b)
- If the system is functioning, what is the probability that component 1 is
working?
We want to find \(\Pr (C_1 \mid F)\). We know that for the system to function, only one component needs to be working, so \(\Pr (F \mid C_1) = 1\). Using Bayes’ Theorem, we get: \[\Pr (C_1 \mid F) = \frac {\Pr (F \mid C_1)\Pr (C_1)}{\Pr (F)} = \frac {1 \cdot p}{1-(1-p)^n} = \boxed {\frac {p}{1-(1-p)^n}}\]
- c)
- If the system is functioning and component 2 is working, what is the
probability that component 1 is also working?
We want to find \(\Pr (C_1 \mid C_2,F)\). \[\Pr (C_1 \mid C_2,F) = \Pr (C_1 \mid C_2) = \Pr (C_1) = \boxed {p}\] The first equality holds because knowing \(C_2\) and \(F\) is just as good as knowing \(C_2\) (since if component 2 works, the system \(F\) is guaranteed to work). The second equality holds because the components working are independent of each other.
More formally, we can use the definition of conditional probability along with a careful application of the chain rule to get the same result: \[\Pr (C_1 \mid C_2, F) = \frac {\Pr (C_1, C_2, F)}{\Pr (C_2, F)} = \frac {\Pr (F \mid C_1, C_2) \cdot \Pr (C_1 \mid C_2) \Pr (C_2)}{\Pr (F \mid C_2) \cdot \Pr (C_2)}\] We note that the system is guaranteed to work if any one component is working, so \(\Pr (F \mid C_1, C_2) = \Pr (F \mid C_2) = 1\). We also note that components work independently of each other, hence \(\Pr (C_1 \mid C_2) = \Pr (C_1) = p\). With that in mind, we can rewrite our expression so that: \[\Pr (C_1 \mid C_2, F) = \frac {1\cdot \Pr (C_1) \cdot \Pr (C_2)}{1 \cdot \Pr (C_2)} = \Pr (C_1) = p\]
8 Allergy Season
In a certain population, everyone is equally susceptible to colds. The number of colds suffered by each person during each winter season ranges from 0 to 4, with probability 0.2 for each value (see table below). A new cold prevention drug is introduced that, for people for whom the drug is effective, changes the probabilities as shown in the table. Unfortunately, the effects of the drug last only the duration of one winter season, and the drug is only effective in 20% of people, independently.
| number of colds | no drug or ineffective | drug effective |
| 0 | 0.2 | 0.4 |
| 1 | 0.2 | 0.3 |
| 2 | 0.2 | 0.2 |
| 3 | 0.2 | 0.1 |
| 4 | 0.2 | 0.0 |
- a)
- Sneezy decides to take the drug. Given that he gets 1 cold that winter,
what is the probability that the drug is effective for Sneezy?
Let \(E\) be the event that the drug is effective for Sneezy, and \(C_i\) be the event that he gets \(i\) colds the first winter. From the prompt, \(\Pr (E) = 0.2\) and \(\Pr (\overline {E}) = 0.8\). By Bayes’ Theorem: \[\Pr (E \mid C_1) = \frac {\Pr (C_1 \mid E)\Pr (E)}{\Pr (C_1 \mid E)\Pr (E) + \Pr (C_1 \mid \overline {E})\Pr (\overline {E})} \] \[ = \frac {0.3 \times 0.2}{(0.3 \times 0.2) + (0.2 \times 0.8)} = \frac {0.06}{0.06 + 0.16} = \frac {0.06}{0.22} = \boxed {\frac {3}{11}}\]
- b)
- The next year he takes the drug again. Given that he gets 2 colds in this
next year (and one cold in the first year), what is the probability that the
drug is effective for Sneezy?
Let \(D_i\) be the event that he gets \(i\) colds the second winter. We want to find \(\Pr (E \mid C_1, D_2)\). By Bayes’ Theorem: \[ \begin {aligned} \Pr (E \mid C_1, D_2) &= \frac {\Pr (E, C_1, D_2)}{\Pr (C_1, D_2)} \\ &=\frac {\Pr (D_2 \mid C_1, E)\Pr (C_1 \mid E)\Pr (E)}{\Pr (D_2 \mid C_1, E)\Pr (C_1 \mid E)\Pr (E) + \Pr (D_2 \mid C_1, \overline {E})\Pr (C_1 \mid \overline {E})\Pr (\overline {E})} \end {aligned} \] Because the number of colds Sneezy gets in year 2 is conditionally independent of the number of colds in year 1, \(\Pr (D_2 \mid C_1, E) = \Pr (D_2 \mid E) = 0.2\). Similarly, \(\Pr (D_2 \mid C_1, \overline {E}) = \Pr (D_2 \mid \overline {E}) = 0.2\).
\[ \begin {aligned} &= \frac {0.2\times \Pr (C_1 \mid E) \Pr (E)}{0.2 \times \Pr (C_1 \mid E) \Pr (E) + 0.2 \times \Pr (C_1 \mid \overline {E})\Pr (\overline {E})} \\ &= \frac {\Pr (C_1 \mid E) \Pr (E)}{\Pr (C_1 \mid E) \Pr (E) + \Pr (C_1 \mid \overline {E})\Pr (\overline {E})} = \Pr (E \mid C_1) = \boxed {\frac {3}{11}} \end {aligned} \]
- c)
- Why is the answer to (b) the same as the answer to (a)?
The probability of getting exactly two colds is \(0.2\) regardless of whether the drug is effective or not (\(\Pr (D_2 \mid E) = \Pr (D_2 \mid \overline {E}) = 0.2\)). Hence, knowing that Sneezy got exactly two colds in the second year provides absolutely no new information about the drug’s underlying effectiveness for his immune system, leaving our probability unchanged from year 1.
9 What’s the chance it was raining?
Suppose that you can commute to campus via bike or public transit. If it’s raining, you’ll bike to campus \(20\%\) of the time, and if it’s not raining, you’ll bike to campus \(60\%\) of the time. You know that it rains 7 days out of 10 during Autumn quarter. If you biked to campus today, what is the probability that it was raining?
Let \(B\) be the event that you biked to campus, and \(R\) be the event that it rained. We want to calculate \(P(R|B)\). By Bayes’ Rule, we have \[P(R|B) = \frac {P(B|R)\cdot P(R)}{P(B)}\] To solve for the denominator, we use the law of total probability: \[P(B) = P(R)\cdot P(B|R) + P(R^C)\cdot P(B|R^C)\] So, we have \[P(R|B) = \frac {P(B|R)\cdot P(R)}{P(R)\cdot P(B|R) + P(R^C)\cdot P(B|R^C)}\] We are given in the problem the probability of biking given raining or not raining, and the probability of it raining. So, we know the following: \[P(B|R) = 0.2\] \[P(B|R^C)=0.6\] \[P(R) = 0.7\] We know that \[ \begin {aligned}P(R^C) &= 1 - P(R)\\ &= 1 - 0.7\\ &= 0.3 \end {aligned} \] Plugging in these values, we get that \[ \begin {aligned} P(R|B) &= \frac {P(B|R)\cdot P(R)}{P(R)\cdot P(B|R) + P(R^C)\cdot P(B|R^C)}\\ &=\frac {0.2\cdot 0.7}{0.2\cdot 0.7 + 0.6 \cdot 0.3}\\ &= \frac {0.14}{0.14 + 0.18}\\ &= 0.4375 \end {aligned} \]
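The same Bayes' Rule calculation as a short sketch; exact fractions are used so the answer comes out as \(7/16\) rather than a rounded float:

```python
from fractions import Fraction as F

# Given quantities from the problem, as exact fractions.
pr_bike_given_rain = F(2, 10)
pr_bike_given_dry = F(6, 10)
pr_rain = F(7, 10)

# LTP for the denominator, then Bayes' Rule.
pr_bike = pr_rain * pr_bike_given_rain + (1 - pr_rain) * pr_bike_given_dry
pr_rain_given_bike = pr_bike_given_rain * pr_rain / pr_bike

print(pr_rain_given_bike)         # 7/16
print(float(pr_rain_given_bike))  # 0.4375
```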
10 A game
Howard and Jerome are playing the following game: A 6-sided die is thrown and each time it’s thrown, regardless of the history, it is equally likely to show any of the six numbers.
- If it shows 5, Howard wins.
- If it shows 1, 2, or 6, Jerome wins.
- Otherwise, they play a second round and so on.
What is the probability that Jerome wins on the 4th round?
Let \(J_i\) be the event that Jerome wins on the \(i\)-th round, and let \(N_i\) be the event that nobody wins on the \(i\)-th round. Then we are interested in the event: \[N_1\cap N_2 \cap N_3 \cap J_4\] Using the chain rule, we have: \[ \begin {aligned} \Pr (N_1, N_2, N_3, J_4) &= \Pr (N_1)\cdot \Pr (N_2 \mid N_1)\cdot \Pr (N_3 \mid N_1, N_2)\cdot \Pr (J_4 \mid N_1, N_2, N_3) \\ & = \frac {1}{3}\cdot \frac {1}{3}\cdot \frac {1}{3}\cdot \frac {1}{2} = \boxed {\frac {1}{54}} \end {aligned} \] In the final step, we used the fact that if the game hasn’t ended, then the probability that it continues for another round is the probability that the die comes up 3 or 4, which has probability \(2/6 = 1/3\). The probability Jerome wins a given round is the probability it lands on 1, 2, or 6, which is \(3/6 = 1/2\).
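The chain-rule product is short enough to compute directly:

```python
from fractions import Fraction as F

# Per-round probabilities for a fair six-sided die.
pr_jerome = F(3, 6)  # die shows 1, 2, or 6
pr_nobody = F(2, 6)  # die shows 3 or 4, so play continues

# Chain rule: no winner in rounds 1-3, then Jerome wins round 4.
print(pr_nobody ** 3 * pr_jerome)  # 1/54
```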
11 Another game
Alice and Alicia are playing a tournament in which they stop as soon as one of them wins \(n\) games. Alicia wins each game with probability \(p\) and Alice wins with probability \(1-p\), independently of other games. What is the probability that Alicia wins and that when the match is over, Alice has won \(k\) games?
Since the match is over when someone wins their \(n^{th}\) game, and Alicia won the
overall match, Alicia must have won the very last game.
Before this final game, Alicia must have won exactly \(n-1\) games, and Alice must have
won exactly \(k\) games.
Therefore, the probability that we reach a point in time when Alicia has won \(n-1\)
games and Alice has won \(k\) games is: \[p^{n-1} \cdot (1-p)^{k} \cdot \binom {n-1+k}{k}\] The binomial coefficient counts the number
of ways of distributing the \(k\) games that Alice won among the \(n-1+k\) total games played
so far.
At that point, in the event of interest, Alicia wins the next game. This happens with probability \(p\), independent of previous outcomes. Therefore, our final probability is: \[\left [p^{n-1} \cdot (1-p)^{k} \cdot \binom {n-1+k}{k}\right ] \cdot p = \boxed {p^{n} \cdot (1-p)^{k} \cdot \binom {n-1+k}{k}}\]
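The final formula can be checked against a brute-force enumeration of game sequences for small parameters; \(n = 3\), \(k = 2\), \(p = 1/3\) below are arbitrary illustrative choices:

```python
from fractions import Fraction as F
from itertools import product
from math import comb

# Brute-force check of p^n (1-p)^k C(n-1+k, k) for n = 3, k = 2, p = 1/3.
# In each sequence, True means Alicia wins that game.
n, k, p = 3, 2, F(1, 3)

total = F(0)
for seq in product((True, False), repeat=n + k):
    alicia = alice = 0
    length = 0
    for won in seq:
        alicia += won
        alice += not won
        length += 1
        if alicia == n or alice == n:
            break  # the match stops as soon as someone has n wins
    # Event of interest: Alicia reaches n wins exactly when Alice has k,
    # i.e. the match lasts all n + k games and Alicia takes the last one.
    if alicia == n and alice == k and length == n + k:
        weight = F(1)
        for won in seq:
            weight *= p if won else 1 - p
        total += weight

print(total)                                       # 8/81
print(p ** n * (1 - p) ** k * comb(n - 1 + k, k))  # 8/81
```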
12 Dependent Dice Duo
This problem demonstrates that independence can be “broken” by conditioning. Let \(D_1\) and \(D_2\) be the outcomes of two independent rolls of a fair die. Let \(E\) be the event “\(D_1 = 1\)”, \(F\) be the event “\(D_2 = 6\)”, and \(G\) be the event “\(D_1 + D_2 = 7\)”. Even though \(E\) and \(F\) are independent, show that \[ \mathbb {P}(E \cap F \mid G) \neq \mathbb {P}(E \mid G) \cdot \mathbb {P}(F \mid G) \]
When we condition on \(G\), our sample space \(\Omega \) becomes the restricted set \[\{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}\] where the first number in the pair is the \(D_1\) outcome and the second number is the \(D_2\) outcome.
From here we can see that \(\mathbb {P}(E \mid G) = \mathbb {P}(D_1 = 1 \mid D_1 + D_2 = 7) = 1/6\), as \((1,6)\) is exactly 1 of the 6 possible rolls that sum to 7, and each roll is equally likely. We also see that \(\mathbb {P}(F \mid G) = \mathbb {P}(D_2 = 6 \mid D_1 + D_2 = 7) = 1/6\). Finally, \(\mathbb {P}(E \cap F \mid G) = \mathbb {P}(D_1 = 1 \cap D_2 = 6 \mid D_1 + D_2 = 7) = 1/6\) using the same logic.
Now we calculate the product: \(\mathbb {P}(E \mid G) \cdot \mathbb {P}(F \mid G) = \frac {1}{6} \cdot \frac {1}{6} = 1/36\). Notice that \(\frac {1}{36} \neq \frac {1}{6}\). Therefore, we have shown that \(\mathbb {P}(E \cap F \mid G) \neq \mathbb {P}(E \mid G) \cdot \mathbb {P}(F \mid G)\), proving that independence can be “broken” by conditioning.
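The conditional probabilities can be confirmed by counting over the 36 equally likely outcomes. A sketch; `Frac` is aliased so the event name `F` stays free:

```python
from fractions import Fraction as Frac
from itertools import product

# All 36 equally likely outcomes of two fair dice.
rolls = list(product(range(1, 7), repeat=2))

def pr(pred):
    return Frac(sum(1 for r in rolls if pred(r)), len(rolls))

E = lambda r: r[0] == 1         # D1 = 1
F = lambda r: r[1] == 6         # D2 = 6
G = lambda r: r[0] + r[1] == 7  # D1 + D2 = 7

pr_G = pr(G)
lhs = pr(lambda r: E(r) and F(r) and G(r)) / pr_G
rhs = (pr(lambda r: E(r) and G(r)) / pr_G) * (pr(lambda r: F(r) and G(r)) / pr_G)

print(lhs, rhs, lhs != rhs)  # 1/6 1/36 True
```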
13 Infinite Lottery
Suppose we randomly generate a number from the natural numbers \(\mathbb {N} = \{ 1,2,\ldots \}\). Let \(A_{k}\) be the event we generate the number \(k\), and suppose \(\Pr ( A_{k} ) = \left ( \frac {1}{2} \right ) ^{k}\). (Note that \(\sum _{k=1}^\infty 2^{-k} = 1\).)
Once we generate a number \(k\), that is the maximum we can win. That is, after generating a value \(k\), we can win any amount in \([k]=\{1,\ldots ,k\}\) dollars. Suppose the probability that we win \(\$ j\) for \(j\in [k] \) is “uniform”, that is, each has probability \(\frac {1}{k}\). Let \(B\) be the event we win exactly \(\$ 1\). Given that we win exactly one dollar, what is the probability that the number generated was also \(1\)?
You may use the fact that \(\sum _{j = 1}^{\infty }\frac {1}{j \cdot a^{j}} = \ln \left ( \frac {a}{a - 1} \right ) \) for \(a > 1\).
The probability that we are looking for is \(\Pr ( A_{1} \mid B )\). By Bayes’ Theorem and the Law of Total Probability, we see that: \[\Pr ( A_{1} \mid B ) = \frac {\Pr ( B \mid A_{1} ) \Pr (A_{1})}{\sum _{j = 1}^{\infty }{\Pr ( B \mid A_{j} ) \Pr (A_{j})}}\] Using the formulas given in the problem statement (\(\Pr (A_k) = 2^{-k}\) and \(\Pr (B \mid A_k) = 1/k\)), we get: \[\Pr ( A_{1} \mid B ) = \frac {\frac {1}{1}\left (\frac {1}{2^{1}}\right )}{\sum _{j = 1}^{\infty }{\frac {1}{j}\left (\frac {1}{2^{j}}\right ) }} \] Using the provided logarithm identity with \(a = 2\), the denominator evaluates to \(\ln \left (\frac {2}{2-1}\right ) = \ln (2)\). \[\Pr ( A_{1} \mid B ) = \frac {\frac {1}{2}}{\ln 2} = \boxed {\frac {1}{2\ln 2}} \approx 0.7213\]
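A quick numerical sketch: the partial sums of \(\sum _j \frac {1}{j\,2^j}\) converge very fast, so 60 terms already recover \(\ln 2\) to machine precision and confirm the posterior:

```python
import math

# Partial sum of the denominator sum_j 1/(j 2^j); the tail beyond j = 59
# is smaller than machine precision, so this matches ln 2.
denom = sum(1 / (j * 2**j) for j in range(1, 60))
posterior = 0.5 / denom

print(abs(denom - math.log(2)) < 1e-12)  # True
print(round(posterior, 4))               # 0.7213
```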
14 The mysteries of independence (10 points)
Suppose that a uniformly random card is selected from a standard 52 card deck of cards. Let \(E\) be the event that the card is a king, let \(F\) be the event that the card is a heart, and let \(G\) be the event that the card is black (that is, a spade or a club).
- a)
-
- (i)
- Are \(E\) and \(F\) independent? Provide a short proof of your claim.
- (ii)
- Are \(G\) and \(F\) independent? Provide a short proof of your claim.
- (iii)
- Are \(E\) and \(G\) independent? Provide a short proof of your claim.
(i) Yes.
We have \(\Pr (E \cap F) = \frac {1}{52}\) since there is only 1 king of hearts in the deck, \(\Pr (E) = \frac {4}{52}\) since there are 4 kings in the deck, and \(\Pr (F) = \frac {13}{52} = \frac {1}{4}\) since there are 13 hearts in the deck. Then we get \(\Pr (E) \cdot \Pr (F) = \frac {4}{52} \cdot \frac {1}{4} = \frac {1}{52} = \Pr (E \cap F)\).
(ii) No.
We have \(\Pr (G \cap F) = 0\) since a card can’t be both a heart and black, \(\Pr (G) = \frac {26}{52} = \frac {1}{2}\) since half of the deck is black, and \(\Pr (F) = \frac {1}{4}\). So \(\Pr (G \cap F) \neq \Pr (G) \cdot \Pr (F)\).
(iii) Yes.
We have \(\Pr (E \cap G) = \frac {2}{52}\) since there are only two black kings (the king of spades and the king of clubs), \(\Pr (E) = \frac {4}{52}\), and \(\Pr (G) = \frac {1}{2}\). Then we get \(\Pr (E) \cdot \Pr (G) = \frac {4}{52} \cdot \frac {1}{2} = \frac {2}{52} = \Pr (E \cap G)\).
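All three claims for part (a) can be verified by counting over an explicit 52-card deck. A sketch; `Frac` is aliased so the event name `F` stays free:

```python
from fractions import Fraction as Frac

# The 52-card deck as (rank, suit) pairs; rank "K" marks the kings.
deck = [(rank, suit) for rank in "A23456789TJQK" for suit in "SHDC"]

def pr(pred):
    return Frac(sum(1 for c in deck if pred(c)), len(deck))

E = lambda c: c[0] == "K"   # king
F = lambda c: c[1] == "H"   # heart
G = lambda c: c[1] in "SC"  # black (spade or club)

print(pr(lambda c: E(c) and F(c)) == pr(E) * pr(F))  # True:  E, F independent
print(pr(lambda c: G(c) and F(c)) == pr(G) * pr(F))  # False: G, F dependent
print(pr(lambda c: E(c) and G(c)) == pr(E) * pr(G))  # True:  E, G independent
```

Rerunning the same check with a 53rd suitless, rankless card appended to `deck` breaks all three equalities, matching part (b).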
- b)
- Now assume that an additional green card (with no suit and no rank) is
added to the deck and a uniformly random card is selected from this
enlarged deck of 53 cards. Let \(E'\) be the event that the card is a king, let \(F'\) be
the event that the card is a heart, and let \(G' \) be the event that the card is
black (that is a spade or a club).
- (i)
- Are \(E'\) and \(F'\) independent? Provide a short proof of your claim.
- (ii)
- Are \(G'\) and \(F'\) independent? Provide a short proof of your claim.
- (iii)
- Are \(E'\) and \(G'\) independent? Provide a short proof of your claim.
(i) No.
We have \(\Pr (E' \cap F') = \frac {1}{53}\) since there are now 53 cards, \(\Pr (E') = \frac {4}{53}\), and \(\Pr (F') = \frac {13}{53}\). Then we get \(\Pr (E') \cdot \Pr (F') = \frac {4}{53} \cdot \frac {13}{53} \neq \Pr (E' \cap F')\).
(ii) No.
We still have \(\Pr (G' \cap F') = 0\) since a heart card cannot be black, \(\Pr (G') = \frac {26}{53}\), and \(\Pr (F') = \frac {13}{53}\). So \(\Pr (G' \cap F') \neq \Pr (G') \cdot \Pr (F')\).
(iii) No.
We have \(\Pr (E' \cap G') = \frac {2}{53}\), \(\Pr (E') = \frac {4}{53}\), and \(\Pr (G') = \frac {26}{53}\). Then we get \(\Pr (E') \cdot \Pr (G') = \frac {4}{53} \cdot \frac {26}{53} \neq \Pr (E' \cap G')\).
A proof that two events \(A\) and \(B\) are independent typically consists of showing that \(\Pr (A \cap B) = \Pr (A) \cdot \Pr (B)\), whereas a proof that they are not independent consists of showing that \(\Pr (A \cap B) \neq \Pr (A) \cdot \Pr (B)\).
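Part (b) admits the same kind of exhaustive check; a sketch assuming, purely for illustration, that the green card is represented by a rankless, suitless placeholder tuple:

```python
from fractions import Fraction
from itertools import product

# A 52-card deck plus one green card with no rank and no suit.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "spades", "clubs"]
deck = list(product(ranks, suits)) + [("none", "green")]  # 53 cards

def pr(event):
    """Exact probability of an event under a uniform draw from the deck."""
    return Fraction(sum(1 for c in deck if event(c)), len(deck))

E = lambda c: c[0] == "K"                  # king
F = lambda c: c[1] == "hearts"             # heart
G = lambda c: c[1] in ("spades", "clubs")  # black

# All three pairs fail to be independent once the 53rd card is added:
print(pr(lambda c: E(c) and F(c)), pr(E) * pr(F))  # 1/53 vs 52/2809
print(pr(lambda c: G(c) and F(c)), pr(G) * pr(F))  # 0 vs 338/2809
print(pr(lambda c: E(c) and G(c)), pr(E) * pr(G))  # 2/53 vs 104/2809
```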
15 CSE 312 Learning Assistant Usage
There is a population of \(n\) students in the class. The number of students using the 312 Learning Assistant among these \(n\) students is \(i\) with probability \(p_i\), where, of course, \(\sum _{0\le i \le n}p_i = 1\). We take a random sample of \(k\) students from the class (without replacement). Use Bayes Theorem to derive an expression for the probability that there are \(i\) students using the 312 Learning Assistant in the population conditioned on the fact that there are \(j\) such students in the sample. Let \(G_i\) be the event that there are \(i\) students using the 312 Learning Assistant in the population and let \(S_j\) be the event that there are \(j\) such students in the sample. Your answer should be written in terms of the \(p_\ell \)’s, \(i,j,n\) and \(k\).
\[ \begin {aligned} \Pr (G_i \mid S_j) &= \frac {\Pr (S_j \mid G_i) \Pr (G_i)}{\Pr (S_j)}\quad \quad \quad \quad \text { by Bayes Theorem}\\ &= \frac {\frac {\binom {i }{ j}\binom {n-i }{ k-j}}{\binom {n }{ k}}\cdot p_i} {\sum _{\ell =0}^n \Pr (S_j \mid G_\ell ) \Pr (G_\ell )}\\ &= \frac {\frac {\binom {i }{ j}\binom {n-i }{ k-j}}{\binom {n }{ k}}\cdot p_i} {\sum _{\ell =0}^n \frac {\binom {\ell }{ j}\binom {n-\ell }{ k-j}}{\binom {n }{ k}}\cdot p_\ell } = \frac {{\binom {i }{ j}\binom {n-i }{ k-j}}\cdot p_i} {\sum _{\ell =0}^n \binom {\ell }{ j}\binom {n-\ell }{ k-j}\cdot p_\ell }. \end {aligned} \] Above, we used the fact that \(\Pr (G_\ell ) = p_\ell \) and the fact that \(\Pr (S_j \mid G_\ell )\) is the probability of choosing a subset of size \(k\), where \(j\) of the selected students are from the subset of \(\ell \) students using the 312 Learning Assistant and \(k-j\) are from the remaining \(n-\ell \) students not using the assistant.
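The final expression can be evaluated directly; a minimal sketch in Python, where the uniform prior \(p_\ell = \frac {1}{n+1}\) and the values \(n = 10\), \(k = 4\), \(j = 2\) are assumptions chosen only for illustration:

```python
from fractions import Fraction
from math import comb

# Illustrative parameters (not from the problem statement):
n, k = 10, 4                                    # class size, sample size
prior = [Fraction(1, n + 1) for _ in range(n + 1)]  # assumed uniform prior p_l

def posterior(i, j):
    """Pr(G_i | S_j) = C(i,j) C(n-i,k-j) p_i / sum_l C(l,j) C(n-l,k-j) p_l.
    math.comb returns 0 when the top is smaller than the bottom, which
    matches the convention that impossible sample counts have probability 0."""
    denom = sum(comb(l, j) * comb(n - l, k - j) * prior[l] for l in range(n + 1))
    return comb(i, j) * comb(n - i, k - j) * prior[i] / denom

j = 2
print(sum(posterior(i, j) for i in range(n + 1)))  # 1 (posteriors sum to one)
```

Working in exact fractions makes the sanity check \(\sum_i \Pr (G_i \mid S_j) = 1\) hold exactly rather than up to floating-point error.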
16 The Monty Hall Problem
The Monty Hall problem is a famous, seemingly counter-intuitive probability puzzle named after Monty Hall, the host of the show "Let’s Make a Deal". This problem emphasizes the importance of using given information to make decisions. Assume you are a contestant on this game show. In the original problem, there are \(3\) doors, one (randomly selected) hiding a car and the other two hiding goats. At first, you randomly pick a door, hoping to win the car. Since Monty knows exactly which door hides the car, he purposefully opens a door different from your pick that is guaranteed to reveal a goat. With \(2\) doors left, Monty then asks if you want to stick with your current door or switch to the other door.
In the beginning, when there is no information about these \(3\) doors, every door is equally likely to hide the car. However, once you account for the fact that Monty will only open a door that definitely reveals a goat, it turns out that switching to the other door yields a higher probability of winning than sticking with your current door. Thus, the best strategy is to switch.
For this problem, you have to determine the best strategy when there are \(4\) doors, again a random one hiding a car and the other 3 hiding one goat each. As a contestant, you first randomly choose a door. Monty opens one of the \(3\) other doors, which reveals a goat, and asks if you want to stick to your current choice or switch to a different door. After you make your pick, Monty opens another door (other than your current pick) which also reveals a goat. This time, you have to make the final pick: sticking to the current door in the previous pick or switching to the other door. Perform a thorough analysis of all possible strategies and explain which one is the best.
There are four possible strategies a contestant can employ. Let’s analyze the probability of winning the car for each.
Conceptual Solution: The optimal strategy is \(S_2\): stick with your first choice, and switch doors only on the very last move. Here is an intuitive way to see this.
When you make your first choice (out of 4 doors), you have a \(\frac {1}{4}\) chance of selecting the correct door. This probability holds up throughout the entire game, even as more doors with goats are opened, because at the moment you selected it, you only had a \(\frac {1}{4}\) chance of success. So if you stick with this door throughout the game, you have a \(\frac {1}{4}\) chance of winning.
When you choose your first door, there is a \(\frac {3}{4}\) chance one of the other 3 doors holds the car. So when the host eliminates one of these doors by revealing the first goat, there is now a \(\frac {3}{4}\) chance of the car being behind one of 2 remaining unpicked doors. Each of these 2 doors has an equal probability of holding the car, so a probability of \(\frac {3}{8}\) each.
When the first goat is revealed, we are given the opportunity to switch doors. If we switch doors, we will have a \(\frac {3}{8}\) chance of selecting the correct one. So should we switch? Not so fast. If we switch, it means the other two doors have a combined \(\frac {5}{8}\) chance of holding the car (since we just selected the winning door with probability \(\frac {3}{8}\)). The host will then reveal a second goat, leaving us with 2 choices of doors. Our current door wins with probability \(\frac {3}{8}\), and the other door wins with probability \(\frac {5}{8}\). So the best we can do if we switch early is win with probability \(\frac {5}{8}\).
But what if we never switched doors after the first goat was revealed? In this case, our current door only has a \(\frac {1}{4}\) chance of winning, and when the host reveals a second goat, the only other remaining door absorbs that entire \(\frac {3}{4}\) probability mass. This is a better chance of winning than any of the strategies above. Therefore, we should wait until the very last phase, and then switch to win with probability \(\frac {3}{4}\).
Formal Probabilistic Proof: We rigorously calculate the probability of winning given that we play with a certain strategy. We use \(R\) and \(W\) to indicate when you pick the Right and Wrong door at a specific pick, respectively. Note that we use semicolon notation \(\mathbb {P}(P_1 = R; S_1)\) to indicate the probability under strategy \(S_1\), because a strategy is not an event we can condition on. For each strategy \(S_i\), we calculate \(\mathbb {P}(P_3 = R; S_i)\).
- (a)
- \(S_1\): Stick-and-stick strategy. We only need to calculate the case \(RRR\) (since you stick, your first pick determines the rest).
- \(\mathbb {P}(P_1 = R; S_1) = \frac {1}{4}\).
- \(\mathbb {P}(P_2 = R \mid P_1 = R; S_1) = 1\), because you must stick to your first pick.
- \(\mathbb {P}(P_3 = R \mid P_2 = R, P_1 = R; S_1) = 1\), because you must stick to your second pick.
\(\mathbb {P}(\text {win};S_1) = \frac {1}{4} \cdot 1 \cdot 1 = \mathbf {\frac {1}{4}}\)
- (b)
- \(S_2\): Stick-and-switch strategy. We only need to calculate the probability for the case \(WWR\) (if your first pick is right, switching at the end guarantees a loss).
- \(\mathbb {P}(P_1 = W; S_2) = \frac {3}{4}\).
- \(\mathbb {P}(P_2 = W \mid P_1 = W; S_2) = 1\), because you must stick to your first pick.
- \(\mathbb {P}(P_3 = R \mid P_2 = W, P_1 = W; S_2) = 1\). Conditioned on two previous wrong doors, there is only one right door left out of 2. Monty will show the wrong door in his second reveal, so you’re guaranteed to pick the right door if you switch.
\(\mathbb {P}(\text {win};S_2) = \frac {3}{4} \cdot 1 \cdot 1 = \mathbf {\frac {3}{4}}\)
- (c)
- \(S_3\): Switch-and-stick strategy. We only need to calculate the probability for \(WRR\).
- \(\mathbb {P}(P_1 = W; S_3) = \frac {3}{4}\).
- \(\mathbb {P}(P_2 = R \mid P_1 = W; S_3) = \frac {1}{2}\). Conditioned on the first wrong door, Monty will show another wrong door, leaving two doors to switch to, one of which will be correct.
- \(\mathbb {P}(P_3 = R \mid P_1 = W, P_2 = R; S_3) = 1\), because if you stick to a correct pick, you are guaranteed to be right.
\(\mathbb {P}(\text {win};S_3) = \frac {3}{4} \cdot \frac {1}{2} \cdot 1 = \mathbf {\frac {3}{8}}\)
- (d)
- \(S_4\): Switch-and-switch strategy. We must calculate the disjoint probabilities for \(RWR\) and \(WWR\).
For \(RWR\):
- \(\mathbb {P}(P_1 = R; S_4) = \frac {1}{4}\).
- \(\mathbb {P}(P_2 = W \mid P_1 = R; S_4) = 1\), because if your first pick is right, switching guarantees a wrong door.
- \(\mathbb {P}(P_3 = R \mid P_1 = R, P_2 = W; S_4) = 1\), because Monty has opened two wrong doors, so switching from your current wrong door guarantees the right one.
\(\mathbb {P}(RWR) = \frac {1}{4} \cdot 1 \cdot 1 = \frac {1}{4}\)
For \(WWR\):
- \(\mathbb {P}(P_1 = W; S_4) = \frac {3}{4}\).
- \(\mathbb {P}(P_2 = W \mid P_1 = W; S_4) = \frac {1}{2}\), because there is \(1\) wrong door to switch to out of the \(2\) remaining doors.
- \(\mathbb {P}(P_3 = R \mid P_1 = W, P_2 = W; S_4) = 1\), because conditioned on a second wrong pick and \(2\) wrong doors opened by Monty, switching guarantees the right door.
\(\mathbb {P}(WWR) = \frac {3}{4} \cdot \frac {1}{2} \cdot 1 = \frac {3}{8}\)
\(\mathbb {P}(\text {win};S_4) = \mathbb {P}(RWR) + \mathbb {P}(WWR) = \frac {1}{4} + \frac {3}{8} = \mathbf {\frac {5}{8}}\)
In conclusion, \(S_2\) (Stick-and-Switch) yields the highest probability of winning at \(3/4\).
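The four winning probabilities can be sanity-checked by Monte Carlo simulation; a sketch in Python, where the door encoding, seed, and trial count are arbitrary choices made for this illustration:

```python
import random

def play(switch_first, switch_second, rng):
    """Play one round of the 4-door game; return True if the final pick wins."""
    doors = [0, 1, 2, 3]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a goat door that is neither the car nor your pick.
    opened = rng.choice([d for d in doors if d not in (car, pick)])
    if switch_first:
        pick = rng.choice([d for d in doors if d not in (pick, opened)])
    # Monty opens a second goat door (never the car, never your current pick).
    opened2 = rng.choice([d for d in doors if d not in (car, pick, opened)])
    if switch_second:
        # Exactly one unopened door other than your current pick remains.
        pick = next(d for d in doors if d not in (pick, opened, opened2))
    return pick == car

rng = random.Random(312)  # fixed seed for reproducibility
trials = 100_000
strategies = {"stick-stick": (False, False), "stick-switch": (False, True),
              "switch-stick": (True, False), "switch-switch": (True, True)}
estimates = {name: sum(play(s1, s2, rng) for _ in range(trials)) / trials
             for name, (s1, s2) in strategies.items()}
for name, p in estimates.items():
    print(f"{name}: {p:.3f}")  # expect roughly 1/4, 3/4, 3/8, 5/8
```

With \(10^5\) trials each estimate should land within a few thousandths of the exact values \(\frac {1}{4}\), \(\frac {3}{4}\), \(\frac {3}{8}\), and \(\frac {5}{8}\) derived above.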
17 More independence
Prove that if events \(E\) and \(F\) are independent, then so are \(E\) and \(\overline {F}\).
We are given that \(E\) and \(F\) are independent, which means by definition: \[\Pr (E \cap F) = \Pr (E)\Pr (F)\]
We want to show that \(E\) and \(\overline {F}\) are independent. To do this, we need to prove: \[\Pr (E \cap \overline {F}) = \Pr (E)\Pr (\overline {F})\]
First, note that the event \(E\) can be partitioned into two mutually exclusive events: the part of \(E\) that intersects with \(F\), and the part of \(E\) that intersects with the complement of \(F\) (\(\overline {F}\)). By the Law of Total Probability: \[\Pr (E) = \Pr (E \cap F) + \Pr (E \cap \overline {F})\]
Rearranging this equation to solve for the term we are interested in, we get: \[\Pr (E \cap \overline {F}) = \Pr (E) - \Pr (E \cap F)\]
Now, we substitute the independence assumption (\(\Pr (E \cap F) = \Pr (E)\Pr (F)\)) into the equation: \[\Pr (E \cap \overline {F}) = \Pr (E) - \Pr (E)\Pr (F)\]
Next, we factor out \(\Pr (E)\) on the right side: \[\Pr (E \cap \overline {F}) = \Pr (E)(1 - \Pr (F))\]
Finally, by the complement rule, we know that \(1 - \Pr (F) = \Pr (\overline {F})\). Substituting this back in yields: \[\Pr (E \cap \overline {F}) = \Pr (E)\Pr (\overline {F})\]
Since the probability of their intersection equals the product of their individual probabilities, \(E\) and \(\overline {F}\) are independent.
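A quick numeric instance of this fact, using a fair six-sided die with \(E\) = "roll is even" and \(F\) = "roll is at most 2" (a pair chosen for this illustration because they happen to be independent):

```python
from fractions import Fraction

# Sample space: a fair six-sided die.
omega = range(1, 7)

def pr(event):
    """Exact probability of an event under a uniform roll."""
    return Fraction(sum(1 for w in omega if event(w)), 6)

E = lambda w: w % 2 == 0   # even roll: Pr = 1/2
F = lambda w: w <= 2       # roll is 1 or 2: Pr = 1/3
not_F = lambda w: not F(w)

# E and F are independent: Pr(E n F) = 1/6 = (1/2)(1/3).
assert pr(lambda w: E(w) and F(w)) == pr(E) * pr(F)
# As the proof guarantees, E and the complement of F are then independent too:
print(pr(lambda w: E(w) and not_F(w)) == pr(E) * pr(not_F))  # True
```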