Streams Course Blog

Probability theory reference

Posted in Uncategorized by amitc1 on October 6, 2009

By now, we have introduced the idea of randomized algorithms, and to analyse their performance, we have used some basic probability theory. To recap, the important concepts are:

  • Probability spaces, events, union bound
  • Random variables, expectation, linearity of expectation
  • Independent events, independent random variables
  • Variance, standard deviation

Most importantly, we have introduced the notion of large deviation inequalities, or tail inequalities. Briefly, these inequalities assert that a certain random variable is not too likely to deviate much from its expected value. The important inequalities (so far) are:

  1. Markov’s inequality: If X \ge 0 is a random variable and t > 0 a constant, then
    \Pr[X \ge t] ~\le~ \mathbb{E}[X]/t.
  2. Chebyshev’s inequality: If \mu = \mathbb{E}[X], \sigma = \sqrt{\mathrm{Var}[X]}, then
    \Pr[|X - \mu| \ge t\sigma] ~\le~ 1/t^2.
  3. Chernoff bound: If X = X_1 + \cdots + X_n, \mu = \mathbb{E}[X], and \{X_i\} are independent and identically distributed indicator random variables, then for 0 < \varepsilon \le 1,
    \Pr[X - \mu \ge \varepsilon\mu] ~\le~ e^{-\mu\varepsilon^2/3}; \Pr[X - \mu \le -\varepsilon\mu] ~\le~ e^{-\mu\varepsilon^2/3}.
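A quick way to internalize these bounds is to check them empirically. The sketch below (in Python, with hypothetical parameters n, p, and the thresholds t, k, eps chosen for illustration) simulates a sum of independent Bernoulli(p) indicators and compares the observed tail probabilities against what each inequality guarantees:

```python
# Empirical sanity check of Markov, Chebyshev, and Chernoff bounds
# on X = X_1 + ... + X_n, where the X_i are independent Bernoulli(p)
# indicators. Parameters here are illustrative, not from the post.
import math
import random

random.seed(0)
n, p, trials = 200, 0.1, 5000
mu = n * p                           # E[X] = np
sigma = math.sqrt(n * p * (1 - p))   # sqrt(Var[X]) = sqrt(np(1-p))

samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

# Markov: Pr[X >= t] <= E[X]/t, with t = 2*mu.
t = 2 * mu
markov_emp = sum(x >= t for x in samples) / trials
print(f"Markov:    empirical {markov_emp:.4f}  <=  bound {mu / t:.4f}")

# Chebyshev: Pr[|X - mu| >= k*sigma] <= 1/k^2, with k = 3.
k = 3
cheb_emp = sum(abs(x - mu) >= k * sigma for x in samples) / trials
print(f"Chebyshev: empirical {cheb_emp:.4f}  <=  bound {1 / k**2:.4f}")

# Chernoff: Pr[X - mu >= eps*mu] <= exp(-mu*eps^2/3), with eps = 0.5.
eps = 0.5
chern_emp = sum(x - mu >= eps * mu for x in samples) / trials
chern_bound = math.exp(-mu * eps**2 / 3)
print(f"Chernoff:  empirical {chern_emp:.4f}  <=  bound {chern_bound:.4f}")
```

Note how much smaller the Chernoff bound is than the Markov and Chebyshev bounds at comparable deviations: the exponential decay in \mu\varepsilon^2 is exactly why it is the workhorse inequality for analyzing randomized streaming algorithms.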

Going forward, it will be vital to have these inequalities at your fingertips. An excellent reference covering all of this material is the early chapters of the book Randomized Algorithms, by Motwani and Raghavan.
