7.7 Jane Street Capital Second Round Interview

How can you be so sure?
To solve this problem, you first need to find the optimal stopping time using a dynamic programming approach. Start by calculating the average of the last roll, which is 50.5. Then you go backwards. You know that to get to the last roll, you needed to have rolled 50 or less (otherwise, why stop?); you would have stopped if you had rolled greater than 50. The average of 51, 52, ..., 100 is 75.5, so to get to the second-to-last roll, you needed to have rolled 75 or less, and so on. Now you just need to calculate the expected value minus the cost (number of rolls - 1) at each node. The optimal stopping time is the one that maximizes this quantity.
 

There is no optimal stopping time, since you're both maximizing profit and minimizing loss simultaneously.
 

How can I be so sure? Well, what you're describing is equivalent to the recursion equation I posted above, whose solution is 1223/14.
 

The way I understand it, what he's describing is a bit different from what you've used. What he is suggesting is that there exists a node (determined by the number of retries) where the expected value of this game hits its maximum and starts to decrease thereafter, and therefore one should only retry up to that node. This is not what your recursion formula assumes. Your recursion formula, the way I understand and agree with it, does not care about an optimal number of tries. For example, if you had the misfortune of hitting the number 1 a thousand times in a row, it would still make sense to go for the 1,001st try, and it made sense on every previous try, since in this scenario you are trying to minimize your losses. The way the other guy would go about it, he'd figure out an optimal node, and if he hit 1 on that node, he'd stop, which is sub-optimal.
 
I see what you mean. Upon closer inspection of RRodriguez's argument, I think the misstep is where he says "to get to the last roll, you needed to have rolled 50 or less (otherwise why stop?)" This doesn't take into account the cost of all the rolls up to that point.
 
Perhaps it will be useful to think about a much simpler brainteaser that is a classic: You roll a die up to three times. Each time you roll, you can either take the number showing in dollars, or roll again. What are the expected winnings?
What method would you use to solve this?
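The classic three-roll brainteaser is solved by backward induction. Here is a minimal sketch (the function name is my own, not from the thread): with one roll left you must keep it, so the value is 3.5; with more rolls left, you keep the current face only if it beats the value of rolling again.

```python
# Backward induction for the 3-roll die game: at each stage you may keep
# the face showing (in dollars) or reroll, with up to 3 rolls total.
def expected_winnings(rolls_left: int) -> float:
    """Expected winnings with `rolls_left` rolls remaining, before rolling."""
    if rolls_left == 1:
        # Forced to keep the final roll: mean of 1..6.
        return 3.5
    continue_value = expected_winnings(rolls_left - 1)
    # Keep the roll only if it beats the value of rolling again.
    return sum(max(face, continue_value) for face in range(1, 7)) / 6

print(expected_winnings(1))  # 3.5
print(expected_winnings(2))  # 4.25
print(expected_winnings(3))  # 4.666... = 14/3
```

So you would roll again on 1-4 with two rolls left, and on 1-3 (anything below 4.25) with three rolls left, giving expected winnings of 14/3.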

In the interview question there is no "expiration date" (the three rolls in the brainteaser), the die has 100 numbers, and there is a cost of playing (if there were no cost, it would be optimal to roll until you get 100).
It is the same idea, though. Interviewers ask these sorts of questions to test your knowledge of dynamic programming. The tricky part about optimal stopping problems in general is that the time you stop playing is random: when you stop depends on what you have rolled.
Just think of how you price an American option using a tree.
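The American-option analogy uses exactly the same keep-or-continue comparison at each node of a tree. This is not part of the interview question itself, just a sketch of a Cox-Ross-Rubinstein binomial tree for an American put; the parameters in the usage line are illustrative assumptions.

```python
import math

def american_put_crr(S0, K, r, sigma, T, steps):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree.

    Backward induction: at each node the option is worth the larger of
    immediate exercise and the discounted expected continuation value --
    the same stop-or-continue decision as in the die game.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs at maturity; j counts up-moves.
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # Step backwards through the tree.
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)   # optimal stopping decision
    return values[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
```

The early-exercise comparison at each node plays the same role as "keep this roll or pay to roll again."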
 
Yes, but it also depends on the cost of playing, not just what you've rolled...
 
Yes, you take into account the cost of playing...every time you roll you just subtract $1 from the expected value in the dynamic programming procedure.
 
I get what you're saying. However, in your outline above you said "You know that to get to the last roll, you needed to have rolled 50 or less (otherwise, why stop?)." Can you explain this? It seems incorrect to me, as it still disregards the cost of playing.

Can you demonstrate that the e.v. of your strategy is larger than 1223/14?
 
Incorporating the cost of play is pretty straightforward: subtract $1 from the expected earnings for each roll (the number of rolls being random). If you understand the dynamic programming principle, then it should be clear how to incorporate it.
To demonstrate that my strategy is possibly different from your 1223/14, I would need to carry out the dynamic programming scheme and write out all the details here, which requires more time than I want to invest in this.
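One way to settle the comparison without writing out the full scheme is a quick Monte Carlo sketch of the fixed-threshold strategy "reroll while the roll is 86 or less" (the threshold implied by the 1223/14 answer), paying $1 per reroll:

```python
import random

def play_once(rng, threshold=86):
    """One game: reroll (at $1 each) until a roll exceeds the threshold."""
    cost = 0
    while True:
        roll = rng.randint(1, 100)   # fair 100-sided die
        if roll > threshold:
            return roll - cost
        cost += 1                    # each reroll costs $1

rng = random.Random(0)
trials = 200_000
mean = sum(play_once(rng) for _ in range(trials)) / trials
print(mean)   # should come out close to 1223/14 = 87.357...
```

The simulated average sits right at 1223/14, so the fixed-threshold strategy and the recursion agree; any strategy that stops earlier at some "optimal node" can only do worse.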
 
Let's say we stop if we roll something > x...
Then, E(game) = P(>x) * (101+x)/2 + P(<=x) * (E - 1)
(the average of x+1, x+2, ..., 100 is (101+x)/2)

suppose E = f(x)

f(x) = (100-x)/100 * (101+x)/2 + x/100 * (f(x) - 1)
f(x) = [(100-x)/100 * (101+x)/2 - x/100]/(1 - x/100) = (101+x)/2 - x/(100-x)

Treating x as continuous, the maximum of f(x) = 101.5 - 10*sqrt(2) = 87.357... occurs at x = 100 - 10*sqrt(2) = 85.858... Since the threshold must be an integer, the best choice is x = 86, which gives f(86) = 187/2 - 86/14 = 1223/14, matching the value above.
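As a sanity check on the threshold analysis, one can scan every integer threshold with exact rational arithmetic, using f(x) = (101+x)/2 - x/(100-x): the mean of the stopping roll less the expected reroll cost x/(100-x). A minimal sketch:

```python
from fractions import Fraction

def f(x: int) -> Fraction:
    """Exact expected value of the strategy 'reroll while the roll is <= x'."""
    mean_kept_roll = Fraction(101 + x, 2)      # average of x+1, ..., 100
    expected_cost = Fraction(x, 100 - x)       # mean rerolls at $1 each
    return mean_kept_roll - expected_cost

best = max(range(100), key=f)
print(best, f(best))   # 86 1223/14
```

So the best integer threshold is 86 (stop on 87 or higher), and the exact value 1223/14 drops out directly.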
 