Probability
Probability theory is the study of uncertainty and random events. It provides a framework for predicting the likelihood of different outcomes in a given situation. The foundation of probability theory begins with the concept of an experiment, which is any process that leads to an uncertain result—such as flipping a coin, rolling a die, or drawing a card. The sample space is the set of all possible outcomes of the experiment, and an event is any subset of this sample space.
Sample Space
The sample space of an experiment, e.g. rolling a die once, is the set of all possible outcomes. For a single roll of a six-sided die, the sample space is \(\Omega = \{1,2,3,4,5,6\}\), and every event is a subset \(E\subseteq\Omega\). The probability that an event \(E\) occurs is denoted \(\mathbb{P}\left(E\right)\), where \(0 \leq \mathbb{P}\left(E\right) \leq \mathbb{P}\left(\Omega\right) = 1\).
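For a finite sample space with equally likely outcomes, \(\mathbb{P}(E) = |E|/|\Omega|\), so these definitions can be made concrete in a few lines of code. A minimal sketch (the function and event names are illustrative):

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
omega = {1, 2, 3, 4, 5, 6}

def prob(event, sample_space=omega):
    """P(E) = |E| / |Omega|, assuming all outcomes are equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

even = {2, 4, 6}      # the event "roll an even number"
print(prob(even))     # → 1/2
print(prob(omega))    # → 1  (the certain event)
print(prob(set()))    # → 0  (the impossible event)
```

Using `Fraction` keeps the probabilities exact, so identities like \(\mathbb{P}(\Omega) = 1\) hold without floating-point error.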
Probability Function
Probability is assigned to events as a number between \(0\) and \(1\), where \(0\) indicates impossibility and \(1\) indicates certainty: \(\mathbb{P}: \mathcal{A} \to [0, 1]\), where \(\mathcal{A}\) is the collection of events \(E \subseteq \Omega\). For example, the probability of rolling a \(4\) (\(E = \{4\}\)) on a fair six-sided die is \(1/6\). Some basic rules of probability include: the probability of the entire sample space \(\Omega\) is \(1\); if two events are mutually exclusive (cannot happen at the same time), the probability that either occurs is the sum of their individual probabilities. Further rules follow below.
\[ \begin{aligned} \mathbb{P}\left(E^\mathrm{C}\right) &= 1 - \mathbb{P}\left(E\right) \\ \mathbb{P}\left(E_1 \setminus E_2\right) &= \mathbb{P}\left(E_1\right) - \mathbb{P}\left(E_1 \cap E_2\right) \\ \mathbb{P}\left(E_1 \cup E_2\right) &= \mathbb{P}\left(E_1\right) + \mathbb{P}\left(E_2\right) - \mathbb{P}\left(E_1 \cap E_2\right) \end{aligned} \]
\[ \mathbb{P}\left(E \cap E^\mathrm{C}\right) = 0 \]
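Over a finite sample space these identities can be verified exhaustively, checking every pair of events \(E_1, E_2 \subseteq \Omega\). A brute-force sketch:

```python
from fractions import Fraction
from itertools import combinations

omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event), len(omega))

# Enumerate all 2^6 = 64 subsets of omega
events = [set(c) for r in range(len(omega) + 1)
          for c in combinations(omega, r)]

for e1 in events:
    assert prob(omega - e1) == 1 - prob(e1)          # complement rule
    assert prob(e1 & (omega - e1)) == 0              # E and E^C are disjoint
    for e2 in events:
        # difference rule
        assert prob(e1 - e2) == prob(e1) - prob(e1 & e2)
        # inclusion-exclusion for the union
        assert prob(e1 | e2) == prob(e1) + prob(e2) - prob(e1 & e2)

print("all identities hold")
```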
Conditional Probability
\[ \mathbb{P}\left(E_1 \mid E_2\right) = \frac{\mathbb{P}\left(E_1 \cap E_2\right)}{\mathbb{P}\left(E_2\right)} \]
Two events are said to be independent if \(\mathbb{P}\left(E_1 \cap E_2\right) = \mathbb{P}\left(E_1\right)\mathbb{P}\left(E_2\right)\); equivalently, if \(\mathbb{P}\left(E_2\right) > 0\), then \(\mathbb{P}\left(E_1 \mid E_2\right) = \mathbb{P}\left(E_1\right)\), i.e. knowing that \(E_2\) occurred does not change the probability of \(E_1\).
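The definition of conditional probability and the independence check translate directly to code for the die-roll example; the event names below are illustrative:

```python
from fractions import Fraction

# One roll of a fair six-sided die
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event), len(omega))

def cond_prob(e1, e2):
    """P(E1 | E2) = P(E1 ∩ E2) / P(E2), defined for P(E2) > 0."""
    return prob(e1 & e2) / prob(e2)

even = {2, 4, 6}
at_least_4 = {4, 5, 6}
at_most_2 = {1, 2}

print(cond_prob(at_least_4, even))   # (2/6) / (3/6) = 2/3

# "even" and "at least 4" are NOT independent: 1/3 != (1/2)(1/2)
print(prob(even & at_least_4) == prob(even) * prob(at_least_4))  # False

# "even" and "at most 2" ARE independent: 1/6 == (1/2)(1/3)
print(prob(even & at_most_2) == prob(even) * prob(at_most_2))    # True
```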
Bayes Formula
\[ \mathbb{P}\left(E_2 \mid E_1\right) = \frac{\mathbb{P}\left(E_1 \mid E_2\right)\mathbb{P}\left(E_2\right)}{\mathbb{P}\left(E_1\right)},\quad \mathbb{P}\left(E_1\right) \neq 0 \]
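Bayes' formula lets one "reverse" a conditional probability: starting from \(\mathbb{P}(E_1 \mid E_2)\) and the marginals, it recovers \(\mathbb{P}(E_2 \mid E_1)\). A quick numeric check on the die-roll example (event choices are illustrative):

```python
from fractions import Fraction

# One roll of a fair six-sided die
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event), len(omega))

def cond_prob(e1, e2):
    """P(E1 | E2) = P(E1 ∩ E2) / P(E2)."""
    return prob(e1 & e2) / prob(e2)

e1 = {2, 4, 6}   # "even"
e2 = {4, 5, 6}   # "at least 4"

# Bayes: P(E2 | E1) = P(E1 | E2) P(E2) / P(E1)
bayes = cond_prob(e1, e2) * prob(e2) / prob(e1)

# Agrees with the direct computation from the definition
assert bayes == cond_prob(e2, e1)
print(bayes)  # → 2/3
```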