Quantitative Interview Questions and Answers

From what I understand so far, an option is a transfer of risk from the buyer to the seller. There is no obligation for the buyer to exercise the option if it is out of the money, whereas in the futures and forward markets the buyer is obligated.
Please feel free to comment:))


Umm, let me see if I can understand it from this perspective...

When the buyer, who pays a premium, transfers the risk to the seller's book, the seller tries to hedge it (read: delta hedge). Being short gamma, the delta hedging causes bigger losses as vol increases, and the price of the option is equal to the cost of the hedge, so higher vol again translates into a higher option price.

For a future, we both have the same risk on our books, just in opposite directions: a zero-sum game, which I am not sure how I would hedge, hence the zero cost. An increase in vol won't change the dynamics in this scenario.

:)
 
Answer is 7/12.
X_r, X_b and X_g are the positions at which the last red, blue and green candies, respectively, come out of the jar. The required event (E) is that the reds run out first: X_r < X_g < X_b or X_r < X_b < X_g. Let A be the event that a blue candy is drawn last (X_b largest) and B the event that a green candy is drawn last (X_g largest).
Using the law of total probability (the case where a red is drawn last contributes nothing, since E is then impossible):
P(E) = P(A)*P(E|A) + P(B)*P(E|B)
P(A) = ( 59!/ (10!*19!*30!) )/ (60!/ (10!*20!*30!) ) = 1/3
P(B) = ( 59!/ (10!*20!*29!) )/ (60!/ (10!*20!*30!) ) = 1/2
P(E|A) = P(X_r < X_g) = P( getting X_r < X_g out of forty candies (10 Reds and 30 Greens) ) = (39!/(10!29!)) / (40! / (10!30!)) = 3/4
similarly P(E|B) = 2/3

So, P(E) = (1/3)*(3/4) + (1/2)*(2/3) = 7/12
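The 7/12 can be sanity-checked with a quick Monte Carlo over the 60 candies (10 red, 20 blue, 30 green, as in the factorials above). A minimal sketch:

```python
import random

def red_runs_out_first(trials=50_000, seed=0):
    """Estimate P(the last red is drawn before the last blue and the last green)."""
    rng = random.Random(seed)
    jar = ["r"] * 10 + ["b"] * 20 + ["g"] * 30
    hits = 0
    for _ in range(trials):
        rng.shuffle(jar)
        last = {}
        for i, colour in enumerate(jar):
            last[colour] = i  # ends up holding each colour's final position
        if last["r"] < last["b"] and last["r"] < last["g"]:
            hits += 1
    return hits / trials

print(red_runs_out_first())  # close to 7/12 ≈ 0.5833
```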

Quant Crack, can you please better explain the following passage:
\(P(E|A) = P(X_r < X_g < X_b \cap X_r < X_b < X_g | X_r < X_g < X_b) = P(X_r < X_g)\)

In the next one:
\(P(X_r < X_g) = P\left( \text{getting $X_r < X_b$ out of forty candies (10 Reds and 30 Green)}\right)\)
I guess there is a misspelling and you were looking to write:
\(P(X_r < X_g) = P\left( \text{getting $X_r < X_g$ out of forty candies (10 Reds and 30 Green)}\right)\)
 
Last edited:
5. A unit length is broken into 3 pieces. What is the probability of them forming a triangle?

I know this has been discussed over the years but wanted to bring a new thought process to it.
My answer is somewhere around 0.1933 using a Monte Carlo simulation. I also assumed the length is broken one break at a time, and the breaks are uniformly distributed.

The length of part A is just a uniform random between 0-1, and then B is the product of (1-A)*(random uniform between 0-1), and then C is 1-A-B. I did 10k iterations and looked for when all of A, B, & C are less than 0.5.

I'm playing around for an exact solution, but because A, B, and C are dependent it is getting a little tricky.
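Here's a minimal sketch of that simulation (sequential model: first cut uniform on the stick, second cut uniform on the remainder). For what it's worth, integrating the triangle condition over the first cut in this model gives ln 2 - 1/2 ≈ 0.1931, consistent with the ≈ 0.1933 estimate:

```python
import math
import random

def triangle_prob(trials=100_000, seed=0):
    """Sequential-break model: A ~ U(0,1), B = (1-A)*U(0,1), C = 1-A-B.
    The three pieces form a triangle iff every piece is shorter than 1/2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.random()
        b = (1 - a) * rng.random()
        c = 1 - a - b
        hits += (a < 0.5 and b < 0.5 and c < 0.5)
    return hits / trials

print(triangle_prob())        # near 0.193
print(math.log(2) - 0.5)      # exact value for this sequential model: 0.19314...
```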
 
The length of part A is just a uniform random between 0-1, and then B is the product of (1-A)*(random uniform between 0-1).

That's only the case if the first cut is random along the line, and the second MUST be after the first one.

The length of part A would be uniformly distributed between 0 and 1 only when a single random cut is made.
 
That's only the case if the first cut is random along the line, and the second MUST be after the first one.

The assumptions we make produce a huge difference in the final answer. I assumed you cut it, then cut again from the remainder. You are assuming you create 2 marks first, and then cut on those marks.

I hadn't thought of your way... gives me something to do on my flight tomorrow.
 
Suppose you have a random number generator that generates random numbers between (0,1) with a uniform distribution, and two consecutive generations are independent of each other.
You generate 2 random numbers x, y from this random number generator. What is the probability that xy < 0.5?

Never saw a reply to this one, so just for kicks...

Solution: 1/2*[1 - ln(1/2)], or 0.846574, confirmed by Monte Carlo.

Method:
xy < 0.5 is equivalent to y < 0.5/x. Now integrate the area where this condition holds within the event space x in (0,1) & y in (0,1).
For x in (0, 0.5), the curve y = 0.5/x lies above the unit square, so the whole strip counts.
For x in (0.5, 1), the area under the curve is the integral of 0.5/x, whose antiderivative is 1/2*ln(x).

Probability = area of the event space over x in (0, 0.5) + area under the curve 0.5/x over x in (0.5, 1)
Probability = 1/2 + [0 - 1/2*ln(1/2)]
Probability = 1/2*[1 - ln(1/2)]
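A quick Monte Carlo sketch of the same event, alongside the exact value:

```python
import math
import random

def prob_xy_below_half(trials=100_000, seed=0):
    """Estimate P(x*y < 0.5) for independent x, y ~ U(0,1)."""
    rng = random.Random(seed)
    hits = sum(rng.random() * rng.random() < 0.5 for _ in range(trials))
    return hits / trials

exact = 0.5 * (1 + math.log(2))   # = 1/2*[1 - ln(1/2)] ≈ 0.846574
print(prob_xy_below_half(), exact)
```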
 
Question: A square has four corners A, B, C, D. Suppose you start from corner A and have an equal chance of going to each neighboring corner, B or D; after reaching a new corner, you again have an equal chance of going to each of its two neighboring corners. The time consumed travelling each edge is 1. What is the mean time to return to A?

Answer: 4, verified by Monte Carlo with 100M iterations. Solving it manually, I had to use Abel's Lemma (summation by parts).

Someone earlier responded with the correct setup of \(\sum_{k=1}^{\infty} \frac{2k}{2^k}\) but got the incorrect answer of 2.
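The 4 can also be checked directly; here's a minimal simulation sketch (corners numbered 0-3 around the square, walk starting at A = 0):

```python
import random

def mean_return_time(trials=100_000, seed=0):
    """Average number of unit-time steps for the walk to first return to corner A."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while True:
            pos = (pos + rng.choice((1, -1))) % 4  # hop to a random neighbour
            steps += 1
            if pos == 0:
                break
        total += steps
    return total / trials

print(mean_return_time())  # near 4
```

Incidentally, 4 also drops straight out of Kac's formula: the stationary distribution of this walk is uniform over the four corners, so the mean recurrence time of A is 1/(1/4) = 4.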
 
Answer: 4, verified by Monte Carlo with 100M iterations. Solving it manually, I had to use Abel's Lemma (summation by parts).

Someone earlier responded with the correct setup of \(\sum_{k=1}^{\infty} \frac{2k}{2^k}\) but got the incorrect answer of 2.

Nice question! I used brute force to compute \( E(T)=4 \). Is there any other approach to solving this problem?
 
Nice question! I used brute force to compute \( E(T)=4 \). Is there any other approach to solving this problem?

rewrite the summation as \[ 2\sum_{k=1}^\infty k\cdot r^k \] and write the total as 2*S

let S = 1*r^1 + 2*r^2 + 3*r^3 + ...
then S/r = 1*r^0 + 2*r^1 + 3*r^2 + ...
now take S/r - S = 1 + (2-1)*r^1 + (3-2)*r^2 + ...
clean it up to S*(1/r - 1) = 1 + r^1 + r^2 + r^3 + ...
the right-hand side is now a geometric series: S*(1/r - 1) = 1 + r/(1-r)
since r is 0.5, S*(2 - 1) = 1 + 1
so S = 2, and because we pulled the "2" out at the beginning we multiply by 2 to get 4.

This method only works cleanly for a summation of k*r^k. Once k appears squared or higher, the method gets much messier and is not feasible.
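As a numerical sanity check on the algebra (a small sketch; the closed form S = r/(1-r)^2 follows from the same telescoping):

```python
def two_S(r, terms=60):
    """Partial sum of 2 * sum_{k>=1} k * r^k."""
    return 2 * sum(k * r**k for k in range(1, terms + 1))

def closed_form(r):
    """Closed form 2S = 2r/(1-r)^2."""
    return 2 * r / (1 - r) ** 2

print(two_S(0.5), closed_form(0.5))  # both ≈ 4.0
```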
 
rewrite the summation as \[ 2\sum_{k=1}^\infty k\cdot r^k \] and write the total as 2*S

let S = 1*r^1 + 2*r^2 + 3*r^3 + ...
then S/r = 1*r^0 + 2*r^1 + 3*r^2 + ...
now take S/r - S = 1 + (2-1)*r^1 + (3-2)*r^2 + ...
clean it up to S*(1/r - 1) = 1 + r^1 + r^2 + r^3 + ...
the right-hand side is now a geometric series: S*(1/r - 1) = 1 + r/(1-r)
since r is 0.5, S*(2 - 1) = 1 + 1
so S = 2, and because we pulled the "2" out at the beginning we multiply by 2 to get 4.

This method only works cleanly for a summation of k*r^k. Once k appears squared or higher, the method gets much messier and is not feasible.

Yeah, arithmetico-geometric series!
 
Unless they throw smoke with the man's honesty (trick question), the way I read the question is: what is the probability that the man reports 6 and the die actually is 6?

The die being 6 is 1/6.
The man reports 6 when he sees 6 with probability 3/4, so it goes 3/4 x 1/6 = 1/8.

Of course, we assume a fair die. If they are two independent events, then I agree it stays 1/6.

Rekindling this old question. The man speaks truth \(3 \) times in \( 4 \) trials, so \( P(M_{6}|D_{6}) = 3/4 = P(M_{6}^C|D_{6}^C) \). Given information that \( M_6 \) has occurred, what's the chance that \( D_{6} \) occurred? Clearly, what's asked is, \( P(D_{6}|M_{6}) = ?\).

Say the die is rolled \( 24 \) times. In \( 24 \) trials, \( D_{6}=4 \) throws are sixes and \( D_{6}^C=20 \) throws are not six. On \( M_{6},D_{6}=3 \) occasions, the man reports six and a six is thrown. On \( M_{6}^C,D_{6}=1 \) occasion, the man lies and a six is thrown. On \( M_{6},D_{6}^C=5 \) occasions, the man lies and no six is thrown. On \( M_{6}^C,D_{6}^C=15 \) occasions, the man reports no six and no six is thrown.

\( P(D_{6}|M_{6}) = \frac{3}{3+5}=\frac{3}{8} \)
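For anyone who wants to check the 3/8 numerically, a small simulation sketch (assuming, as in the counting above, that a lie about a non-six means reporting six):

```python
import random

def p_six_given_reported_six(trials=200_000, seed=0):
    """Estimate P(die is 6 | man reports 6), truth probability 3/4."""
    rng = random.Random(seed)
    six_and_said_six = said_six = 0
    for _ in range(trials):
        die = rng.randint(1, 6)
        truthful = rng.random() < 0.75
        # a truthful report, or a lie that flips six <-> not-six
        says_six = (die == 6) if truthful else (die != 6)
        if says_six:
            said_six += 1
            six_and_said_six += (die == 6)
    return six_and_said_six / said_six

print(p_six_given_reported_six())  # near 3/8 = 0.375
```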
 
Calculate exp(5) to two decimal places by pencil and paper.

By hand multiplication, it appears we'd have to carry a rather large number of digits to get the desired precision. I took \( e=\textbf{2.7182}81828459045 \) up to 4 decimal places and got \( 147.9118 \), still off from the answer. The Maclaurin series doesn't seem feasible either.
 
A coin is flipped repeatedly until you or I win. If the sequence \( HHT \) appears, I win. If the sequence \( THH \) appears, you win. Which sequence is more likely?

My brute-force attempt:
\( \begin{aligned} P(\text{Sequence ending in HHT}) &= P(HHT) + P(HHHT) + P(HHHHT) + \ldots \\ &= \frac{1}{2^{3}} + \frac{1}{2^{4}} + \frac{1}{2^{5}} + \ldots \\ &=\frac{1}{4} \end{aligned} \)

My trouble is with finding \( P(\text{Sequence ending in THH}) \).
\[ \begin{aligned} P(\text{Sequence ending in THH}) &= \{P(THH) + P(TTHH) + \ldots\} \\&+ \{P(THTHH) + P(THTHTHH) + \ldots\} \\&+ \{P(HTHH) + P(HTTHH) + \ldots\} \\&+ \{P(HTHTHH) + P(HTHTHTHH) + \ldots\} \\ &= \frac{1}{4} + \frac{1}{24} + \frac{1}{8} + \frac{1}{48} \\ &=\frac{7}{16} \end{aligned}. \]
I think the chance that you win should be \( 3/4 \).

What do you guys think?
 
A coin is flipped repeatedly until you or I win. If the sequence \( HHT \) appears, I win. If the sequence \( THH \) appears, you win. Which sequence is more likely?

I think the chance that you win should be \( 3/4 \).

What do you guys think?

In this series of binary events, I drew out a tree of all the possible pathways and quickly came to the same conclusion as you. One way to explain it is as follows:

If my first flip is tails (p=0.5), then you eventually win regardless of subsequent events.
When I flip heads then tails (p=0.25), you again eventually win.
But when I flip heads then heads (p=0.25), I will eventually win.
 
In this series of binary events, I drew out a tree of all the possible pathways and quickly came to the same conclusion as you. One way to explain it is as follows:

If my first flip is tails (p=0.5), then you eventually win regardless of subsequent events.
When I flip heads then tails (p=0.25), you again eventually win.
But when I flip heads then heads (p=0.25), I will eventually win.

And I realised that I am still missing a few sequences in my brute-force approach to the second part of the question. Brute force quickly becomes unwieldy.

Like you said, if you get a \( T \) in the first two coin tosses, you must complete \( THH \) before you can ever get \( HHT \). That's \( 0.25 + 0.5 = 0.75 \). Cool!
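A quick simulation sketch of the race between the two patterns (tracking only the three most recent flips):

```python
import random

def p_thh_beats_hht(trials=100_000, seed=0):
    """Estimate P(THH appears before HHT) in a fair-coin sequence."""
    rng = random.Random(seed)
    thh_wins = 0
    for _ in range(trials):
        recent = ""
        while True:
            recent = (recent + rng.choice("HT"))[-3:]  # keep the last three flips
            if recent == "THH":
                thh_wins += 1
                break
            if recent == "HHT":
                break
    return thh_wins / trials

print(p_thh_beats_hht())  # near 3/4
```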
 
Calculate exp(5) to two decimal places by pencil and paper.

Like Quasar said, the Maclaurin/Taylor series looks arduous. I cheated and used Excel to test it this way: it finally stabilized to 2 digits with the 17th term. I don't think my potential employer would be happy waiting for me to compute and sum all the terms through \[ \frac{5^{17}}{17!} \] I have memorized a few constants to a few digits, and this one I know through 2.718281828, which does produce 2-digit accuracy. But by the time I finished hand-computing multiplications of this 10-digit number a few times, my interviewer would probably have shown me the door. So is this question more about the minimum number of decimals needed to still achieve 2-digit accuracy?
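A small sketch confirming the term count: summing 5^k/k! until a new term no longer moves the second decimal place stops at the 17th term and reproduces exp(5) ≈ 148.41:

```python
import math

def exp5_partial_sums(tol=0.005):
    """Sum the Maclaurin series for exp(5) until a term drops below tol."""
    total, term, k = 1.0, 1.0, 0   # start from the k = 0 term
    while term >= tol:
        k += 1
        term *= 5 / k              # term is now 5**k / k!
        total += term
    return total, k

value, terms_used = exp5_partial_sums()
print(round(value, 2), terms_used, round(math.exp(5), 2))
```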
 