Here’s a link to the problems and a link to the code on Github.

**Riddler Express**

A fun way to solve this problem is with a Monte Carlo simulation. The code for this is relatively straightforward:

```
from __future__ import division, print_function
from multiprocessing import Pool
import sys

import numpy as np


def simulate(n_kids):
    # create array to track kids + teacher
    had_potato = np.zeros(n_kids + 1, dtype=bool)
    pos = n_kids // 2
    # move the position by either -1 or 1 until complete
    while np.sum(had_potato) < n_kids:
        had_potato[pos] = True
        pos += np.random.randint(2) * 2 - 1
        pos %= had_potato.size
    return np.argmin(had_potato)


if __name__ == "__main__":
    if len(sys.argv) >= 2:
        n_kids = int(sys.argv[1])
    else:
        n_kids = 30
    # initialize random number generator
    np.random.seed()
    # simulate for "enough" times
    n_sim = int(1e6)
    # reseed each worker, since forked processes would otherwise
    # inherit (and replay) the parent's RNG state
    pool = Pool(processes=8, initializer=np.random.seed)
    winners = pool.map(simulate, [n_kids] * n_sim)
    probs = np.bincount(winners) / float(n_sim)
    print("Win probabilities")
    print(probs)
```

For each simulation run, I maintain an array of positions that the potato has “touched”. At each iteration, we select a random direction to pass the potato, encoded as -1 or +1. This repeats until only one child has never held the potato; that child is the winner. To speed up the simulation, I used a pool^{1} of worker processes to run several distinct simulations in parallel. The result of a single run is below:

```
[ 0.034032 0.03448 0.033488 0.03324 0.0334 0.033272 0.033864
0.03368 0.033328 0.032984 0.032872 0.033432 0.032656 0.032552
0.033936 0. 0.033576 0.033336 0.03268 0.034328 0.033424
0.03312 0.032728 0.033016 0.03312 0.032848 0.033464 0.033424
0.033296 0.033552 0.032872]
```

Running this for a million iterations, we see that each child actually has **the same** win probability, i.e. **1/30**! The teacher (index 15 in the code above) can never win, since that’s the position the potato starts off at.
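The uniform answer is in fact a classic property of a symmetric random walk on a cycle: the last new vertex visited is equally likely to be any vertex other than the start. The key step is a gambler’s-ruin argument: once the potato first reaches a neighbor of child v, that child wins only if the potato then travels all the way around to v’s other neighbor without touching v, which happens with probability 1/(N-1) no matter which child v is. A quick numerical sketch (my own check, not part of the original code) confirms that hitting probability:

```python
import numpy as np

N = 31     # 30 kids + 1 teacher on the circle
m = N - 1  # cut the circle at child v: a path with m segments

# h[k] = P(reach position m before position 0 | start at k), with
# absorbing boundaries h[0] = 0 and h[m] = 1 and the interior
# relation h[k] = (h[k - 1] + h[k + 1]) / 2 for a symmetric walk
A = np.eye(m + 1)
b = np.zeros(m + 1)
b[m] = 1.0
for k in range(1, m):
    A[k, k - 1] = -0.5
    A[k, k + 1] = -0.5
h = np.linalg.solve(A, b)

# the walk starts one step away from v, so child v wins with h[1]
print(h[1], 1.0 / (N - 1))  # both are 1/30
```

The solve recovers the familiar linear gambler’s-ruin profile h[k] = k/m, so h[1] = 1/30, matching the simulated win probabilities above.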

**Riddler Classic**

This problem can also be solved with a Monte Carlo simulation. I took a stab at an analytical solution first, drawing out the graph of bathroom-user interactions as a finite state machine. I got stuck trying to determine the “long-term” probability of each state, though I am fairly certain there is indeed an analytical solution (I seem to have forgotten all the material I learned in my stats classes at Stanford ¯\_(ツ)_/¯). Back to Monte Carlo simulations!

For the time being, we can ignore the fact that the bathroom is occupied exactly 50% of the time – that just helps normalize our prior probabilities. Instead, simply encoding the states of the bathroom (user, sign state, actual state) during each function call is enough:

```
from __future__ import division, print_function
import sys

import numpy as np


def simulate(prob, state):
    # make new user if bathroom is unoccupied
    if not state[1]:
        state[0] = np.random.random()
    # update actual state
    state[1] = not state[1]
    # update sign state
    if state[0] < prob:
        # user totally ignores the sign
        pass
    elif state[0] < 2 * prob:
        # user flips the sign to "occupied" but forgets it upon leaving
        state[2] = True
    else:
        # conscientious user keeps the sign accurate
        state[2] = state[1]


if __name__ == "__main__":
    if len(sys.argv) >= 2:
        prob_co = float(sys.argv[1])
    else:
        prob_co = 1 / 3.0
    # initialize random number generator
    np.random.seed()
    # prob := probability that person totally ignores the sign
    #       = probability that person forgets the sign upon leaving
    prob = (1 - prob_co) / 2
    # (user type, actual state, sign state)
    state = [0, False, False]
    # track sign state counts and # of times the sign is correct
    n_sign = [0, 0]
    n_valid = [0, 0]
    # simulate for "enough" times
    n_sim = int(1e6)
    for _ in range(n_sim):
        simulate(prob, state)
        n_sign[state[2]] += 1
        if state[1] == state[2]:
            n_valid[state[2]] += 1
    print("P(occ|sign_occ) = " + str(n_valid[1] / n_sign[1]))
    print("P(vac|sign_vac) = " + str(n_valid[0] / n_sign[0]))
```

As shown, the `simulate` function simply runs one bathroom entrance or exit per iteration, keeping track of the user type between two consecutive runs as necessary. I ran the code with `n_sim = int(1e9)` and `xrange` instead of `range` to avoid unnecessary memory consumption (Python 2 is still my preferred interpreter):

```
P(occ|sign_occ) = 0.624993716521
P(vac|sign_vac) = 0.749989913933
```

With some obvious rounding, we arrive at probabilities of 0.625 and 0.75. Hence our final answers of **5/8** and **3/4**.
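As a sanity check on those fractions, the “long-term” probabilities can also be worked out exactly; here is my own sketch, using the same three equally-likely user types as the simulation. When the bathroom is vacant, the sign reads “occupied” with some stationary probability q satisfying q = 1/3 + q/3 (forgetful users always leave it occupied, conscientious users always clear it, oblivious users leave it unchanged), giving q = 1/2. Since entrances and exits alternate, conditioning on the sign with Bayes’ rule then recovers 5/8 and 3/4 exactly:

```python
from fractions import Fraction as F

third = F(1, 3)  # each user type is equally likely

# q = P(sign reads "occupied" | bathroom vacant), from
# q = 1/3 (forgetful) + 0 (conscientious) + q/3 (oblivious)
q = third / (1 - third)  # = 1/2

# when occupied, conscientious and forgetful users both set the
# sign to "occupied"; oblivious users leave the vacant-state sign
p_occ_sign = 2 * third + third * q  # P(sign occ | occupied) = 5/6

# enters and leaves alternate, so occupied and vacant samples are
# equally likely; combine, then condition on each sign reading
p_occ_and_sign = F(1, 2) * p_occ_sign
p_vac_and_sign = F(1, 2) * q
p_sign_occ = p_occ_and_sign + p_vac_and_sign  # = 2/3

print(p_occ_and_sign / p_sign_occ)             # P(occ|sign_occ) = 5/8
print(F(1, 2) * (1 - q) / (1 - p_sign_occ))    # P(vac|sign_vac) = 3/4
```

These are exactly the values the billion-iteration run converges to.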

^{1} I have a 4-core processor with hyperthreading, bringing the total “CPU count” to 8, hence `processes=8`; you can check yours with `python -c "import multiprocessing; print(multiprocessing.cpu_count())"`.