Probability: Finding the Expectation and Variance of Runs

Note: this post assumes familiarity with intermediate probability and calculus.

Last week, I completed a summer course at Stanford: CS109, Probability for Computer Scientists. This 8-week class was intense and challenging, but it was one of the most rewarding classes I’ve taken at Stanford so far. A few weeks ago, I had an idea for approaching a homework problem, but I couldn’t quite figure out how to implement it. The problem is as follows:


Below are two sequences of 300 “coin flips” (H for heads, T for tails). One of these is a true sequence of 300 independent flips of a fair coin. The other was generated by a person typing out H’s and T’s and trying to seem random. Which sequence is the true sequence of coin flips? Make an argument that is justified with probabilities calculated on the sequences. Both sequences have 148 heads, two less than the expected number for a 0.5 probability of heads.

Sequence 1:
TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHH
TTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHH
TTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHT
THHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHT
HTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTT
HHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT
Sequence 2:
HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTH
THTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHH
TTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTT
THTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTH
HHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHH
HTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT

There’s actually a straightforward approach to solving this problem that involves the geometric distribution, but while researching ways to approach it, I stumbled across something known as the Wald–Wolfowitz runs test (or simply the runs test). According to Wikipedia, this is a “statistical test that checks a randomness hypothesis for a two-valued data sequence. More precisely, it can be used to test the hypothesis that the elements of the sequence are mutually independent.” In layman’s terms, it can be used to test the randomness of a sequence made up of only two possible outcomes (heads and tails, for example).

Intuitively, if someone flips a coin 10 times and gets 5 heads and 5 tails, an ordering like H, T, T, H, T, H, H, H, T, T is more plausibly random than something like H, H, H, H, H, T, T, T, T, T. The runs test can analyze both experiments to see which one is more likely to be a truly random sequence of flips. We use the word “runs” to denote maximal blocks of consecutive outcomes of the same result. By calculating the expectation (mean) and variance of the number of runs in this type of experiment, we can use the normal distribution to approximate the probability of observing a given number of runs.
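Here’s a quick Python sketch of that definition, counting runs as 1 plus the number of switches between consecutive flips (this snippet, like the other code sketches in this post, is just an illustrative check):

    def count_runs(seq):
        # Number of runs: 1 + the number of switches between consecutive flips.
        runs = 1
        for j in range(len(seq) - 1):
            if seq[j] != seq[j + 1]:
                runs += 1
        return runs

    print(count_runs("HTTHTHHHTT"))  # the plausibly random ordering -> 6 runs
    print(count_runs("HHHHHTTTTT"))  # the suspicious ordering -> 2 runs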

I was on a mission to find the expectation and variance for this problem. The expectation formula took only a matter of minutes to derive, but for some reason, I couldn’t find a proof of the variance anywhere on the internet (after an hour of searching). Every source I found cited textbooks that I do not have easy access to (this should be remedied once I’m an official Master’s student this fall). I did, however, find a PDF of a paper written in 1940 by the very people the runs test is named after: “On a Test Whether Two Samples are from the Same Population”, by Wald and Wolfowitz. Excited to finally have the source of this test right at my fingertips, I began skimming the pages. To my disappointment, the authors decided not to reveal the “tedious calculations”:

[Image: excerpt from the paper in which the authors omit the “tedious” calculations.]

Wald, A.; Wolfowitz, J. On a Test Whether Two Samples are from the Same Population. Ann. Math. Statist. 11 (1940), no. 2, 147–162. doi:10.1214/aoms/1177731909. https://projecteuclid.org/euclid.aoms/1177731909.

*Blank stare*. I’m really curious to find out where exactly they proved these equations. Sigh. Thus I was left on my own to figure it out. I tried over several days to wrap my head around the problem, and it wasn’t until after finishing the class that I was finally able to approach it with a clear head. Here is my approach.

Let R be the number of runs (consecutive outcomes of the same result) in a series of coin tosses that result in H heads and T tails. We will find the equations for the expectation and variance of R.

First, we will find the expected number of runs. Define I_j to be an indicator variable for the event that the j^{th} coin flip result does not equal the (j+1)^{th} coin flip result. For any positive number of coin tosses, we can count the number of times the outcome switches to a different result. Every switch begins a new run, but the first run is not preceded by a switch, so we add 1 to the equation below to account for it. For example, {\color{DarkOrchid} H}, {\color{DarkOrchid} H}, {\color{RoyalBlue} T}, {\color{DarkOrchid} H}, {\color{RoyalBlue} T} has 4 runs: 1 run of two heads followed by 3 switches (heads to tails, tails to heads, and heads to tails). We can now express R as 1 plus the sum of the indicator variables:

    \begin{align*} R=1+\sum_{j=1}^{H+T-1}{I_j} \end{align*}

The summation starts at the first flip and ends at the second-to-last flip, since we compare each j^{th} flip to the flip that comes after it; this way, every adjacent pair of flips is compared exactly once. We can now express the expected number of runs when there are H heads and T tails as follows:

    \begin{align*} E[R] &= E \left[ 1 + \sum_{j=1}^{H+T-1}{I_j} \right] \end{align*}

(Via linearity of expectation)   \begin{align*} {\color{white}{E[R]}} &= E[1] + E \left[ \sum_{j=1}^{H+T-1}{I_j} \right]  \end{align*}

(Via linearity of expectation)   \begin{align*} {\color{white}{E[R]}} &= 1 + \sum_{j=1}^{H+T-1}{E \left[ I_j \right]}  \end{align*}

Since the expected value of an indicator variable is simply the probability that it equals 1, we have:

    \begin{align*} E[R] &= 1 + \sum_{j=1}^{H+T-1}{P \left( I_j = 1 \right)} \\ &= 1 + \sum_{j=1}^{H+T-1}{P \left( j^{\text{th}} \text{ flip} \neq (j+1)^{\text{th}} \text{ flip} \right)} \\ &= 1 + \sum_{j=1}^{H+T-1}{2 \left( \frac{H}{H+T} \cdot \frac{T}{H+T-1} \right)} \\ \end{align*}

We need the j^{th} flip to have a different outcome from the flip that comes after it. If the j^{th} flip is heads, there are H ways to pick heads out of H+T outcomes, giving a probability \scriptstyle\frac{H}{H+T} that the j^{th} flip is heads. Having used up one heads, there are T ways to choose tails out of the H+T-1 remaining flips. Lastly, we double this result, since the probability of tails on the j^{th} flip followed by heads is the same. Simplifying gives:

    \begin{align*} E[R] &= 1 + \sum_{j=1}^{H+T-1}{ \frac{2HT}{(H+T)(H+T-1)} } \end{align*}

(Since the body of the summation didn’t depend on j)   \begin{align*} {\color{white}{E[R]}} &= 1 + (H+T-1) \frac{2HT}{(H+T)(H+T-1)}  \end{align*}

    \begin{align*} {\color{white}{E[R]}} &= {\color{MidnightBlue}{1 + \frac{2HT}{H+T} }} \end{align*}

Thus we have the expected number of runs when we have H heads and T tails.
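As a sanity check, here’s a short Python sketch that computes the exact mean of R by brute force, averaging over every (equally likely) arrangement of H heads and T tails; H = 4 and T = 3 are arbitrary small values:

    from itertools import permutations

    def count_runs(seq):
        return 1 + sum(seq[j] != seq[j + 1] for j in range(len(seq) - 1))

    def exact_mean_runs(H, T):
        # Every distinct arrangement of H heads and T tails is equally likely.
        arrangements = set(permutations("H" * H + "T" * T))
        return sum(count_runs(a) for a in arrangements) / len(arrangements)

    H, T = 4, 3
    print(exact_mean_runs(H, T))    # 4.42857...
    print(1 + 2 * H * T / (H + T))  # 1 + 24/7 = 4.42857...

Both lines print the same value, as expected.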

To find the variance, we must compute Var(R)=E[R^2]-{(E[R])}^2. First, compute {(E[R])}^2:

    \begin{align*} {(E[R])}^2 &= {\left( 1 + \frac{2HT}{H+T} \right)}^2 \\ &= 1 + \frac{4HT}{H+T} + {\left( \frac{2HT}{H+T} \right)}^2 \\ \end{align*}

To compute E[R^2], we’ll first find R^2. Recall that R=1+\sum_{j=1}^{H+T-1}{I_j}. Thus:

    \begin{align*} R^2 &= {\left( 1+ \sum_{j=1}^{H+T-1}{I_j} \right)}^2 \\ &= 1 + \sum_{j=1}^{H+T-1}{I_j} + \sum_{j=1}^{H+T-1}{I_j} + {\left( \sum_{j=1}^{H+T-1}{I_j} \right)}^2 \end{align*}

(Substitution using equation for R)   \begin{align*} {\color{white}{R^2}} &= R + (R-1) + {\left( \sum_{j=1}^{H+T-1}{I_j} \right)}^2  \end{align*}

(Equation 1: Expansion of square of summation)   \begin{align*} \ {\color{white}{R^2}} &= 2R - 1 + \sum_{j=1}^{H+T-1}{{I_j}^2} + \sum_{j \neq k}^{H+T-1}I_j I_k  \end{align*}

We can split the square of the summation this way because squaring a summation produces all (j, k) pairs in the range 1 to H+T-1, inclusive. There are two cases to consider: the case where j=k (represented by the first summation) and the case where j \neq k (represented by the second summation). To show why this works, consider a simple example:

    \begin{align*} { \left( \sum_{j=1}^{3} j \right) }^2 &= {(1+2+3)}^2 = 6^2 = 36 \end{align*}

Note that we can compute the square by multiplying out the product term by term, FOIL-style:

    \begin{align*} { \left( \sum_{j=1}^{3} j \right) }^2 &= {(1+2+3)}^2 \\ &= (1+2+3)(1+2+3) \\ &= (1\times1) + (1\times2) + (1\times3) + (2\times1) + (2\times2) + (2\times3) + (3\times1) + (3\times2) + (3\times3)\\ \end{align*}

We can rearrange the terms to get the following:

(Equation 2)   \begin{align*} { \left( \sum_{j=1}^{3} j \right) }^2 &= 1^2 + 2^2 + 3^2 + (1\times2) + (1\times3) + (2\times1) + (2\times3) + (3\times1) + (3\times2)  \\ &= 1^2 + 2^2 + 3^2 + 2(1\times2) + 2(1\times3) + 2(2\times3) \end{align*}

Notice that we can express these terms as the sum of two summations: a summation of squares and a summation of products of different-valued integer pairs:

    \begin{align*} { \left( \sum_{j=1}^{3} j \right) }^2 &= \sum_{j=1}^{3}j^2 + \sum_{j \neq k}^{3}jk \\ \end{align*}

This is exactly the split we used in Equation 1. At this point, we can also use another summation property to expand the latter summation:

    \begin{align*} \sum_{j \neq k}^{3} jk &= \sum_{j < k}^{3} jk + \sum_{j > k}^{3} jk \end{align*}

(Equation 3)   \begin{align*} {\color{white}\sum_{j=1}^{3} jk} &= 2 \sum_{j < k}^{3} jk  \end{align*}

Notice that each of the last three terms at the end of Equation 2 is twice the product of an integer and a larger integer. Thus we can represent these products with the summation shown in Equation 3, where we double the summation to account for symmetry (e.g. 1\times3 = 3\times1). We could also have reversed the inequality sign to achieve the same result; I arbitrarily chose j<k.
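A quick numeric check of this decomposition in Python, using the same n = 3 example:

    n = 3
    square = sum(range(1, n + 1)) ** 2            # (1+2+3)^2 = 36
    diag = sum(j * j for j in range(1, n + 1))    # the j = k terms: 14
    off = 2 * sum(j * k for j in range(1, n + 1)
                  for k in range(j + 1, n + 1))   # twice the j < k terms: 22
    print(square == diag + off)                   # True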

We can now use this approach in our equation for R^2 (Equation 1):

    \begin{align*} \ R^2 &= 2R - 1 + \sum_{j=1}^{H+T-1}{{I_j}^2} + \sum_{j \neq k}^{H+T-1}{I_j I_k} \end{align*}

    \begin{align*} {\color{white}{R^2}} &= 2R - 1 + \sum_{j=1}^{H+T-1}{{I_j}^2} + 2\sum_{j<k}^{H+T-1}{I_j I_k} \end{align*}

(Since {I_j}^2 = I_j for an indicator variable)   \begin{align*} \ {\color{white}{R^2}} &= 2R - 1 + \sum_{j=1}^{H+T-1}{I_j} + 2\sum_{j<k}^{H+T-1}I_j I_k  \end{align*}

(Substitution using equation for R)   \begin{align*} \ {\color{white}{R^2}} &= 2R - 1 + (R-1) + 2\sum_{j<k}^{H+T-1}I_j I_k  \end{align*}

    \begin{align*} \ {\color{white}{R^2}} &= 3R - 2 + 2\sum_{j < k}^{H+T-1}I_j I_k \end{align*}

Now we can compute E[R^2]:

    \begin{align*} \ E[R^2] &= E \left[ 3R - 2 + 2\sum_{j<k}^{H+T-1}I_j I_k \right] \end{align*}

(Via linearity of expectation)   \begin{align*} \ {\color{white}{E[R^2]}} &= 3E[R] - E[2] + 2E \left[ \sum_{j<k}^{H+T-1}I_j I_k \right]  \end{align*}

(Via linearity of expectation & substitution using the equation for E[R])   \begin{align*} \ {\color{white}{E[R^2]}} &= 3 \left( 1 + \frac{2HT}{H+T} \right) - 2 + 2\sum_{j<k}^{H+T-1}E \left[ I_j I_k \right]  \end{align*}

    \begin{align*} \ {\color{white}{E[R^2]}} &= 1 + \frac{6HT}{H+T} + 2\sum_{j<k}^{H+T-1} P \left( I_j I_k = 1 \right) \end{align*}

(Both variables must = 1 in order for their product to = 1)   \begin{align*} \ {\color{white}{E[R^2]}} &= 1 + \frac{6HT}{H+T} + 2\sum_{j<k}^{H+T-1} P \left( I_j = 1, I_k = 1 \right)  \end{align*}

In order to find the probability that both indicator variables equal 1, let’s further split the summation into two cases. Either the indicator variables I_j and I_k refer to pairs of flips that overlap (k = j+1, so the second flip of I_j, the \scriptstyle(j+1)^{th} flip, is also the first flip of I_k), or the two indicator variables represent pairs of flips that have no flips in common (k \geq j+2). Since these two cases have different probabilities, we split them into summations that cover each case separately. Let’s explore these cases in more depth.

Case 1: We consider all indicator variables that share a flip, starting with indicator variables that compare the first three flips and incrementing by one until we end the summation with indicator variables that compare the last three flips. For example, say we have the following outcomes of 6 coin flips: T, H, T, H, H, T. The following highlighted outcomes represent indicator variables that share an outcome, where the first two purple-colored outcomes are the outcomes compared in the first indicator variable and the last two purple-colored outcomes are the ones compared in the second indicator variable:

    \[{\color{DarkOrchid} T}, {\color{DarkOrchid} H}, {\color{DarkOrchid} T}, H, H, T\]

    \[T, {\color{DarkOrchid} H}, {\color{DarkOrchid} T}, {\color{DarkOrchid} H}, H, T\]

    \[T, H, {\color{DarkOrchid} T}, {\color{DarkOrchid} H}, {\color{DarkOrchid} H}, T\]

    \[T, H, T, {\color{DarkOrchid} H}, {\color{DarkOrchid} H}, {\color{DarkOrchid} T}\]

This example makes it easier to see that the summation bounds go from j=1 to j=H+T-2 so we always have three outcomes to consider.
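In code, the overlapping pairs of indicator indices for this 6-flip example are exactly the four windows pictured above:

    n = 6  # total number of flips (H + T)
    overlapping = [(j, j + 1) for j in range(1, n - 1)]  # j = 1 .. n-2
    print(overlapping)  # [(1, 2), (2, 3), (3, 4), (4, 5)]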

Case 2: We consider all indicator variables that do not share a flip, thus four flips are considered at a time. Let’s visualize this with our 6 coin flips example:

    \[{\color{DarkOrchid} T}, {\color{DarkOrchid} H}, {\color{RoyalBlue} T}, {\color{RoyalBlue} H}, H, T\]

    \[{\color{DarkOrchid} T}, {\color{DarkOrchid} H}, T, {\color{RoyalBlue} H}, {\color{RoyalBlue} H}, T\]

    \[{\color{DarkOrchid} T}, {\color{DarkOrchid} H}, T, H, {\color{RoyalBlue} H}, {\color{RoyalBlue} T}\]

    \[T, {\color{DarkOrchid} H}, {\color{DarkOrchid} T}, {\color{RoyalBlue} H}, {\color{RoyalBlue} H}, T\]

    \[T, {\color{DarkOrchid} H}, {\color{DarkOrchid} T}, H, {\color{RoyalBlue} H}, {\color{RoyalBlue} T}\]

    \[T, H, {\color{DarkOrchid} T}, {\color{DarkOrchid} H}, {\color{RoyalBlue} H}, {\color{RoyalBlue} T}\]

In this case, we can use a nested summation to ensure that I_j and I_k never refer to the same flip. The outer summation starts from j=1. It ends at j=H+T-3 to ensure there are always four outcomes to consider (two for each indicator variable). The inner summation starts at k=j+2 (to ensure that it doesn’t consider one of the two flips that I_j considers), and it ends at k=H+T-1 (to allow I_k to consider the last two flips).
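The corresponding nested loops confirm the six non-overlapping configurations pictured above:

    n = 6  # total number of flips (H + T)
    non_overlapping = [(j, k) for j in range(1, n - 2)  # j = 1 .. n-3
                       for k in range(j + 2, n)]        # k = j+2 .. n-1
    print(non_overlapping)  # [(1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (3, 5)]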

Notice that these two cases along with the case we saw earlier (where an indicator variable is squared) are mutually exclusive and exhaustive: we’ve accounted for all possible pairs of flips that the two indicator variables can compare. Now that we understand the cases, let’s update our equation:

    \begin{align*} \ E[R^2] &= 1 + \frac{6HT}{H+T} + 2\sum_{j<k}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \frac{6HT}{H+T} + 2\left( \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right) + \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) } \right) \\ \end{align*}

(Equation 4)   \begin{align*} {\color{white}{E[R^2]}} &= 1 + \frac{6HT}{H+T} + 2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right) + 2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }  \end{align*}

Let’s first expand the probability for the first case, recalling that I_j and I_{j+1} share a flip. For both I_j=1 and I_{j+1}=1 to be true at the same time, flip j must be different from flip j+1, and flip j+1 must be different from flip j+2 (thus flip j and j+2 must have the same outcome). This means that these three sequential flips are either HTH or THT, as expressed below:

    \begin{align*} 2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right) &= 2 \sum_{j = 1}^{H+T-2} P \left(\text{flips } j, \, j+1, \text{ and } j+2 \text{ are either HTH or THT} \right) \\ \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right)}} &= 2 \sum_{j = 1}^{H+T-2} \textstyle{ \left[ \left( \frac{H}{H+T} \cdot \frac{T}{H+T-1} \cdot \frac{H-1}{H+T-2} \right) + \left( \frac{T}{H+T} \cdot \frac{H}{H+T-1} \cdot \frac{T-1}{H+T-2} \right) \right]} \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right)}} &= 2 \sum_{j = 1}^{H+T-2} \textstyle{ \frac{HT(H-1) + HT(T-1)}{(H+T)(H+T-1)(H+T-2)} } \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right)}} &= 2 \sum_{j = 1}^{H+T-2} \textstyle{ \frac{HT(H+T-2)}{(H+T)(H+T-1)(H+T-2)} } \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right)}} &= 2 \sum_{j = 1}^{H+T-2} \textstyle{ \frac{HT}{(H+T)(H+T-1)} } \end{align*}

(Since summation body didn’t depend on j)   \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-2} P \left( I_j = 1, I_{j+1} = 1 \right)}} &= \textstyle{ \frac{2HT(H+T-2)}{(H+T)(H+T-1)} }  \end{align*}

Plugging this back into Equation 4 gives us:

(Equation 5)   \begin{align*} E[R^2] &= 1 + \frac{6HT}{H+T} + \frac{2HT(H+T-2)}{(H+T)(H+T-1)} + 2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }  \end{align*}
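Before moving on, here’s a brute-force check of the overlapping-case probability \scriptstyle\frac{HT}{(H+T)(H+T-1)} we just derived, again using the arbitrary small values H = 4 and T = 3:

    from itertools import permutations

    H, T = 4, 3
    arrangements = list(set(permutations("H" * H + "T" * T)))

    # P(I_1 = 1, I_2 = 1): the first three flips form HTH or THT
    # (positions are 0-indexed in the code).
    hits = sum(a[0] != a[1] and a[1] != a[2] for a in arrangements)
    print(hits / len(arrangements))         # 0.285714...
    print(H * T / ((H + T) * (H + T - 1)))  # 12/42 = 0.285714...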

Now let’s consider the case where the two indicator variables do not overlap. We compute the probability that each indicator variable’s pair of flips contains one H and one T:

    \begin{align*} 2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) } &= 2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} \textstyle{ \left( 2 \cdot \frac{H}{H+T} \cdot \frac{T}{H+T-1} \right) \left( 2 \cdot \frac{H-1}{H+T-2} \cdot \frac{T-1}{H+T-3} \right) } } \end{align*}

We multiply the probability of each pair of flips having different outcomes by 2 to account for either HT or TH being the outcome. Let’s simplify, first by pulling out the constants that don’t depend on j or k:

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &= {\textstyle{ 2 \left( 2 \cdot \frac{H}{H+T} \cdot \frac{T}{H+T-1} \right) \left( 2 \cdot \frac{H-1}{H+T-2} \cdot \frac{T-1}{H+T-3} \right) }} \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} 1 } \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &={\textstyle{ \left( \frac{8HT(H-1)(T-1)}{(H+T)(H+T-1)(H+T-2)(H+T-3)} \right)}} \sum_{j = 1}^{H+T-3} {\textstyle{ \left( H+T-1-(j+2)+1 \right) }} \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &={\textstyle{ \left( \frac{8HT(H-1)(T-1)}{(H+T)(H+T-1)(H+T-2)(H+T-3)} \right)}} \sum_{j = 1}^{H+T-3} {\textstyle{ \left( H+T-2-j \right) }} \end{align*}

At this point, we can split the summation:

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &={\textstyle{ \left( \frac{8HT(H-1)(T-1)}{(H+T)(H+T-1)(H+T-2)(H+T-3)} \right)}} \left( \sum_{j = 1}^{H+T-3} {\textstyle{ (H+T-2) }} - \sum_{j = 1}^{H+T-3} {\textstyle{ j }} \right) \end{align*}

Now let’s simplify by using the arithmetic series formula:

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &={\textstyle{ \left( \frac{8HT(H-1)(T-1)}{(H+T)(H+T-1)(H+T-2)(H+T-3)} \right)}} \left( {\scriptstyle{ (H+T-3)(H+T-2)}} {\textstyle{ - \frac{(H+T-3)(H+T-2)}{2} }} \right) \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &={\textstyle{ \left( \frac{8HT(H-1)(T-1)}{(H+T)(H+T-1)(H+T-2)(H+T-3)} \right)}} \left( {\textstyle{ \frac{(H+T-3)(H+T-2)}{2} }} \right) \end{align*}

    \begin{align*} {\color{white}{2 \sum_{j = 1}^{H+T-3} { \sum_{k = j+2}^{H+T-1} P \left( I_j = 1, I_k = 1 \right) }}} &= \textstyle{ \frac{4HT(H-1)(T-1)}{(H+T)(H+T-1)} } \end{align*}
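Before plugging this back into Equation 5, the same brute-force approach can confirm the closed form (positions in the code are 0-indexed, so the summation bounds shift down by one):

    from itertools import permutations

    H, T = 4, 3
    n = H + T
    arrangements = list(set(permutations("H" * H + "T" * T)))

    def p_both(j, k):
        # P(I_j = 1, I_k = 1) by enumeration, with j and k 0-indexed.
        hits = sum(a[j] != a[j + 1] and a[k] != a[k + 1] for a in arrangements)
        return hits / len(arrangements)

    double_sum = 2 * sum(p_both(j, k) for j in range(n - 3)
                         for k in range(j + 2, n - 1))
    print(double_sum)                                     # 6.85714...
    print(4 * H * T * (H - 1) * (T - 1) / (n * (n - 1)))  # 288/42 = 6.85714...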

Let’s plug this back into Equation 5:

    \begin{align*} E[R^2] &= \textstyle{ 1 + \frac{6HT}{H+T} + \frac{2HT(H+T-2)}{(H+T)(H+T-1)} + \frac{4HT(H-1)(T-1)}{(H+T)(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \textstyle{ \frac{6HT(H+T-1) + 2HT(H+T-2) + 4HT(H-1)(T-1)}{(H+T)(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \textstyle{ \frac{2HT ( 3(H+T-1) + H+T-2 + 2(H-1)(T-1) )}{(H+T)(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \textstyle{ \frac{2HT ( 3H+3T-3 + H+T-2 + 2(HT-H-T+1) )}{(H+T)(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \textstyle{ \frac{2HT ( 4H+4T-5 + 2HT-2H-2T+2 )}{(H+T)(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{E[R^2]}} &= 1 + \textstyle{ \frac{2HT (2HT+2H+2T-3)}{(H+T)(H+T-1)} } \end{align*}

Whew, that’s a lot of beautiful simplifying! Now we can plug this into our variance formula:

    \begin{align*} Var(R) &= E[R^2]-{(E[R])}^2 \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= 1 + \textstyle{ \frac{2HT (2HT+2H+2T-3)}{(H+T)(H+T-1)} - \left( 1 + \frac{4HT}{H+T} + {\left( \frac{2HT}{H+T} \right)}^2 \right) } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT (2HT+2H+2T-3)}{(H+T)(H+T-1)} - \frac{4HT}{H+T} - \frac{{(2HT)}^2}{{(H+T)}^2} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT+2H+2T-3) - 4HT(H+T-1)}{(H+T)(H+T-1)} - \frac{{(2HT)}^2}{{(H+T)}^2} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT+2H+2T-3 - 2(H+T-1))}{(H+T)(H+T-1)} - \frac{{(2HT)}^2}{{(H+T)}^2} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT+2H+2T-3 - 2H-2T+2)}{(H+T)(H+T-1)} - \frac{{(2HT)}^2}{{(H+T)}^2} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT-1)}{(H+T)(H+T-1)} - \frac{{(2HT)}^2}{{(H+T)}^2} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT-1)(H+T)}{{(H+T)}^2(H+T-1)} - \frac{{(2HT)}^2(H+T-1)}{{(H+T)}^2(H+T-1)} } \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2HT-1)(H+T) - {(2HT)}^2(H+T-1)}{{(H+T)}^2(H+T-1)}} \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT((2HT-1)(H+T) - 2HT(H+T-1))}{{(H+T)}^2(H+T-1)}} \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= \textstyle{ \frac{2HT(2H^2T+2HT^2-H-T - 2H^2T-2HT^2+2HT)}{{(H+T)}^2(H+T-1)}} \end{align*}

    \begin{align*} {\color{white}{Var(R)}} &= {\color{MidnightBlue}{\textstyle{ \frac{2HT(2HT-H-T)}{{(H+T)}^2(H+T-1)}} }} \end{align*}

Thus for R, which represents the number of runs when there are H heads and T tails, we have:

    \begin{align*} {\color{MidnightBlue}{E[R] = 1 + \frac{2HT}{H+T} }} && \text{ and} && {{\color{MidnightBlue}{Var(R) = \frac{2HT(2HT-H-T)}{{(H+T)}^2(H+T-1)} }} \enspace \blacksquare} \end{align*}
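For the skeptical reader, here’s one last brute-force Python sketch that verifies both formulas against an exhaustive enumeration (H = 5 and T = 4 are arbitrary):

    from itertools import permutations

    def count_runs(seq):
        return 1 + sum(seq[j] != seq[j + 1] for j in range(len(seq) - 1))

    def exact_moments(H, T):
        arrangements = list(set(permutations("H" * H + "T" * T)))
        rs = [count_runs(a) for a in arrangements]
        mean = sum(rs) / len(rs)
        var = sum((r - mean) ** 2 for r in rs) / len(rs)
        return mean, var

    H, T = 5, 4
    n = H + T
    mean, var = exact_moments(H, T)
    print(mean, 1 + 2 * H * T / n)                              # both 5.4444...
    print(var, 2 * H * T * (2 * H * T - n) / (n**2 * (n - 1)))  # both 1.9135...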

You can verify on the Wikipedia page that this is indeed the formula for the variance of the number of runs. This is probably the most rigorous math I’ve ever written for a proof! Whew! I hope this is useful, and if you stumbled across this page because you were looking for a proof of the variance of the number of runs, please give me credit! :)

At this point, one can use the normal distribution to see which sequence in the homework problem I provided is the true sequence of coin flips, but I’ll leave that as an exercise for the reader!
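If you want to check your answer after trying it yourself, here’s a sketch of how such a comparison might look, using the formulas above with the normal approximation (the sequences are pasted from the problem statement; whichever one yields the smaller p-value is the less plausibly random):

    from math import erf, sqrt

    def count_runs(seq):
        return 1 + sum(seq[j] != seq[j + 1] for j in range(len(seq) - 1))

    def runs_test_p_value(seq):
        # Two-sided p-value from the normal approximation to the number of runs.
        H, T = seq.count("H"), seq.count("T")
        n = H + T
        mean = 1 + 2 * H * T / n
        var = 2 * H * T * (2 * H * T - n) / (n**2 * (n - 1))
        z = (count_runs(seq) - mean) / sqrt(var)
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    seq1 = ("TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHH"
            "TTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHH"
            "TTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHT"
            "THHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHT"
            "HTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTT"
            "HHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT")
    seq2 = ("HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTH"
            "THTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHH"
            "TTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTT"
            "THTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTH"
            "HHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHH"
            "HTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT")

    print(runs_test_p_value(seq1))
    print(runs_test_p_value(seq2))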


