
Interview Questions at JPMorgan Sales and Trading

The payment per unit time is the same no matter which frequency you pick. So the option with the highest time value and the least credit risk is the same one: the highest possible frequency.
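The original question isn't quoted in this thread, but the time-value half of that claim is easy to sanity-check: for the same pay rate, the more frequent schedule is worth more today and leaves less owed to you at any moment. A rough sketch (the 5% rate, 10-year horizon and $100k/year figure are made up for illustration):

```python
# Numeric check: same amount per year, different payment frequency.
# The discount rate, horizon, and salary are illustrative assumptions only.

def pv_of_stream(total_per_year, payments_per_year, years, annual_rate):
    """Present value of equal installments of total_per_year / payments_per_year,
    paid at the end of each period, discounted at a flat annual rate."""
    per_period = total_per_year / payments_per_year
    period_rate = annual_rate / payments_per_year
    n = payments_per_year * years
    return sum(per_period / (1 + period_rate) ** k for k in range(1, n + 1))

print(pv_of_stream(100_000, 1, 10, 0.05))    # paid once a year:  ~772k
print(pv_of_stream(100_000, 12, 10, 0.05))   # paid monthly:      ~786k (higher)
```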
 
@tylor
This isn't adequate.
Suppose that instead of 100 dollars, the interviewer offered to double your net worth if you win or take all your money if you lose. According to your logic, that is fair. However, no one would take that: a) because they're risk-averse, and b) because the marginal utility of money decreases.

you could definitely be right. iirc, i was least confident with this answer. i've never gambled in my life so i know nothing about odds, spreads, etc.
 

What do you think about my solution to 2?

For question 2:
you would pay anything that gives you a positive expectation.
So, for the first part, prob of H = prob of T = 1/2.

RV X:    x0     100
f(x):    1/2    1/2

The maximum you are willing to pay is 100 (i.e., x0 = -100), at which E[X] = 100 x 0.5 - 100 x 0.5 = 0.

For the second part:
Prob of at least one H in two tosses = 1/2 + 1/2 x 1/2 = 3/4

RV X:    x0     100
f(x):    1/4    3/4

The maximum you are willing to pay is 100 x (3/4)/(1/4) = 300 (i.e., x0 = -300).
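A quick sketch of the zero-expectation calculation above, under this interpretation (you collect 100 when a head shows up and pay |x0| otherwise); the function name and layout are just for illustration:

```python
def break_even_payment(p_win, payout):
    """Largest amount |x0| you can pay on a loss so that
    E[X] = p_win * payout - (1 - p_win) * |x0| = 0."""
    return p_win * payout / (1 - p_win)

# Part 1: single toss, P(head) = 1/2                  ->  |x0| = 100
print(break_even_payment(0.5, 100))    # 100.0

# Part 2: at least one head in two tosses,
# P = 1/2 + 1/2 * 1/2 = 3/4                           ->  |x0| = 300
print(break_even_payment(0.75, 100))   # 300.0
```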
 
@AlexandreH

Let me refresh your memory with the question itself ;)

2) I flip a coin. If it's heads you pay me £100. What should I pay you to play this game? What about if I only have to get 1 heads in two tosses, what is the new price?
There's no way that the employer will pay more than £100... that's a certain (think 100%) loss for them all the time, lol!

EDIT: I see what you are writing, and even under your interpretation (that is, employer: "how much do I have to pay if it's tails?") you are not accounting for your risk aversion.
 

It's a simple probability/expectation problem.
For the first part, the most you should be willing to pay is 100. Anything less gives you a positive expectation, anything more a negative one.
For the second part, it is the same problem over two flips, where the second flip is only needed if you miss the first one. Again, you should be willing to pay up to the point of zero expectation. You cannot account for risk aversion in problems like these. All you can do to answer this is run the calculations I have done and add that you are not accounting for risk aversion.
By the way, where did you get the 100% loss for them all the time? If you lose you pay $300; if you win you get $100, but you have a 75% probability of winning, which gives an expectation of zero.
 
Again, in the first part of my reply (everything _before_ "EDIT"), I interpreted the question the way it was asked... how much should I pay to play this game, the game being that the employer gets $100 on heads. Note that the employer pays _before_ the game... he is paying for the right to play.
Suppose the employer pays $100...

Then if it's heads (winning $100), he has lost $0 net. If it's not (winning nothing), he has lost $100 net. No-win scenario.

However, as _you_ interpreted it (that is, he pays on tails, rather than paying to play), I said: OK, it's better, but it's not enough to have an expected return of 0. You still must adjust for risk aversion.

Here's an example of why logic such as yours is not adequate.
You may have heard of it. It's called the St. Petersburg paradox.

Taken from Wikipedia:
Consider the following game of chance: you pay a fixed fee to enter and then a fair coin is tossed repeatedly until a tail appears, ending the game. The pot starts at 1 dollar and is doubled every time a head appears. You win whatever is in the pot after the game ends. Thus you win 1 dollar if a tail appears on the first toss, 2 dollars if a head appears on the first toss and a tail on the second, 4 dollars if a head appears on the first two tosses and a tail on the third, 8 dollars if a head appears on the first three tosses and a tail on the fourth, etc. In short, you win 2^(k−1) dollars if the coin is tossed k times until the first tail appears.
What would be a fair price to pay for entering the game? To answer this we need to consider what would be the average payout: With probability 1/2, you win 1 dollar; with probability 1/4 you win 2 dollars; with probability 1/8 you win 4 dollars etc. The expected value is thus
E = 1/2 x 1 + 1/4 x 2 + 1/8 x 4 + 1/16 x 8 + ...
  = 1/2 + 1/2 + 1/2 + 1/2 + ...
  = ∞

This sum diverges to infinity, and so the expected win for the player of this game, at least in its idealized form, in which the casino has unlimited resources, is an infinite amount of money. This means that the player should almost surely come out ahead in the long run, no matter how much he pays to enter; while a large payoff comes along very rarely, when it eventually does it will typically be far more than the amount of money that he has already paid to play. According to the usual treatment of deciding when it is advantageous and therefore rational to play, one should therefore play the game at any price if offered the opportunity.
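A quick Monte Carlo sketch of the game described above; the trial count is arbitrary, and the sample mean stays noisy precisely because the rare huge payoffs dominate:

```python
import random

def st_petersburg_payout():
    """Toss a fair coin until the first tail; win 2**(k-1) dollars,
    where k is the number of tosses."""
    k = 1
    while random.random() < 0.5:   # head with probability 1/2, pot doubles, keep tossing
        k += 1
    return 2 ** (k - 1)

trials = 1_000_000
sample_mean = sum(st_petersburg_payout() for _ in range(trials)) / trials
print(sample_mean)   # keeps creeping upward as trials grows: the true expectation is infinite
```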
OF COURSE NOBODY WILL PLAY ABOVE A CERTAIN PRICE. Why? Because of _risk aversion_.

Here is the mathematician Bernoulli's solution (also stolen from Wiki ;) )

Expected utility theory

The classical resolution of the paradox involved the explicit introduction of a utility function, an expected utility hypothesis, and the presumption of diminishing marginal utility of money.
In Daniel Bernoulli's own words:
"The determination of the value of an item must not be based on the price, but rather on the utility it yields…. There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount."

A common utility model, suggested by Bernoulli himself, is the logarithmic function u(w) = ln(w) (known as "log utility"). It is a function of the gambler's total wealth w, and the concept of diminishing marginal utility of money is built into it. By the expected utility hypothesis, expected utilities can be calculated the same way expected values are. For each possible event, the change in utility ln(wealth after the event) − ln(wealth before the event) will be weighted by the probability of that event occurring. Let c be the cost charged to enter the game. The expected utility of the lottery now converges to a finite value:
ΔE(U) = Σ_{k=1..∞} (1/2^k) [ln(w + 2^(k−1) − c) − ln(w)] < ∞
My note: Bernoulli used u(w) = ln(w) because u'(w) = 1/w... the marginal utility is literally inversely proportional to wealth. Risk aversion can logically be deduced from such a u(w).
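A small numerical sketch of that resolution: for a given initial wealth w, find the largest entry fee c at which the expected change in log utility is still non-negative. Truncating the series and bisecting over [0, w] are shortcuts of mine, fine for the wealth levels shown:

```python
import math

def expected_log_utility_change(w, c, k_max=200):
    """E[ln(w + 2**(k-1) - c) - ln(w)]: sum over the first k_max outcomes,
    where outcome k (probability 1/2**k) pays 2**(k-1) and the fee c was paid up front."""
    return sum((0.5 ** k) * (math.log(w + 2 ** (k - 1) - c) - math.log(w))
               for k in range(1, k_max + 1))

def max_fair_fee(w):
    """Bisect for the fee c at which the expected utility change crosses zero."""
    lo, hi = 0.0, float(w)
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_log_utility_change(w, mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

print(max_fair_fee(1_000_000))   # roughly $20 for a millionaire
print(max_fair_fee(1_000))       # roughly $11 with $1,000 of wealth
```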

Finally, here is an excellent resource to look into: http://econweb.umd.edu/~beuermann/Teaching/Chapter_06.pdf (in fact, it is the most relevant to the discussion ;) )

Basically, our utility can be modeled as some multiple of our expected return, less some multiple of the variance of returns.
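A common textbook form of this (and, if I recall correctly, the one the linked chapter works with) is U = E(r) − ½·A·σ², where A is the investor's risk-aversion coefficient. A minimal sketch with made-up numbers:

```python
def mean_variance_utility(expected_return, std_dev, risk_aversion):
    """U = E(r) - 0.5 * A * sigma**2 -- utility rises with expected return, falls with variance."""
    return expected_return - 0.5 * risk_aversion * std_dev ** 2

# Illustrative example: an 8%-expected-return, 20%-volatility bet vs. a sure 3%.
risky = mean_variance_utility(0.08, 0.20, risk_aversion=4)   # 0.08 - 0.5*4*0.04 = 0.00
safe  = mean_variance_utility(0.03, 0.00, risk_aversion=4)   # 0.03
print(risky, safe)   # a risk-averse investor with A = 4 prefers the sure 3% here
```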



There is no reason that "you cannot account for risk aversion in problems like these" and that all you can do is "run the calculations I have done and add that you are not accounting for risk-aversion." It is what _you_ want the employer to pay. There is no "mathematically correct" answer (it depends on your utility function), but you should add a reasonable sum to your figures.
 
@euroazn

The first game you mentioned has no meaning: you will never play a game for which the best scenario has an expected value of zero. So I think the problem is probably the way I explained it.
Now, as for your analysis of utility and all that: again, this is an interview and not a paper; I doubt you were expected to talk about all of that. I still think it is a simple probability/expectation problem. You are not running this infinitely; it is just one flip and two flips. What do you want to answer when you get something like this in an interview? You don't want to start asking the interviewer questions like what the employee's wealth is and all that...
 
I was illustrating a point... this is a Trading interview. You must absolutely talk about risk-aversion. It's too simple a concept to not mention it.
 

I'm pretty sure traders use the mathematical concept of expected log utility on a daily basis *sarcasm* :))
 
They don't use the mathematical model of log-utility, but they do use common sense and are (at least somewhat) risk-averse.... that's what I'm getting at.
 
the use of equations below is mind-boggling.

Oh really. So according to you, if I offer you 1x your money every 1 year you'd pick that because you get money sooner? But wait, 1x your money every 1 year means I don't have to increase your salary. Fantastic.
 

right. as if 2x your salary every 2 years means an increase in your salary LOL
 