Category: Math

Portfolio Insurance and Black Monday, October 19, 1987

On the thirtieth anniversary of Black Monday, the stock market crash of October 19th and 20th, 1987, there have been mentions of “portfolio insurance” as having possibly exacerbated the crash.

 

Portfolio insurance, in principle, is exactly what you might expect it to be: if you own a stock, Stock A, you insure it by buying a put option on Stock A.  Until the put option expires, your payoff is then the same as that of a call option on Stock A plus a riskless bond paying the strike price (put-call parity), and the cost of this insurance is the premium of the put option when you bought it.

If you are managing a portfolio on behalf of clients, though, and you just need to insure the portfolio up to a certain date, after which, say, you hand over the portfolio, then to buy American put options to insure the portfolio would be unnecessary.  European put options would suffice.  So let’s suppose that we are only interested in European options.

In the article that I cite at the bottom (Abken, 1987), it seems that at the time, buying put options as insurance had a few issues.  This is assuming that the portfolio we want to insure is a stock index: the S&P 500 index.  The issues were:

  • Exchange-traded index options only had maturities up to nine months
  • Exchange-traded index options had a limited number of strike prices
  • It’s implied that only American options were available (which we would expect to trade at a premium over European options).

Thus, instead of buying put options to insure the portfolio, the combined position of portfolio plus put options is replicated synthetically: we hold some of the money in the stock index and the rest in bonds (Treasury bills), which we assume earn the risk-free rate.

Without worrying about the math, the Black-Scholes formula gives us a way to represent the value of our stock index S plus a European put option P as:

$$S + P = S \cdot N_1 + K \cdot DF \cdot N_2$$

where \(K\) is the strike price (the floor we want to insure down to), \(DF = e^{-rT}\) is the risk-free discount factor over the life of the insurance, and \(N_1 = N(d_1)\) and \(N_2 = N(-d_2)\) are standard normal probabilities with

$$d_1 = \frac{\ln(S/K) + \left(r + \sigma^2/2\right)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}.$$

The right-hand side is the replication recipe: hold \(S \cdot N_1\) dollars in the stock index and \(K \cdot DF \cdot N_2\) dollars in Treasury bills, rebalancing as these quantities change.
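
To make the replication recipe concrete, here is a quick Python sketch of the idea.  The numbers are illustrative placeholders, not figures from the Abken article, and the standard normal CDF comes from Python's statistics module.

from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf   # standard normal CDF

def synthetic_put_weights(S, K, T, r, sigma):
    """Dollar amounts to hold in the index and in T-bills so that the total
    position tracks S + P = S*N(d1) + K*exp(-r*T)*N(-d2)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    in_index = S * N(d1)                    # dollars kept in the stock index
    in_tbills = K * exp(-r * T) * N(-d2)    # dollars parked in Treasury bills
    return in_index, in_tbills

# Illustrative numbers only (not from the article): insure an index at 100 with
# a floor of 95 for one year, at a 5% risk-free rate and 20% volatility.
stock, bills = synthetic_put_weights(S=100, K=95, T=1.0, r=0.05, sigma=0.20)
print(f"hold {stock:.2f} in the index and {bills:.2f} in T-bills "
      f"(total {stock + bills:.2f} = index value + put premium)")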

 

 

Source:

Abken, Peter A.  “An Introduction to Portfolio Insurance.”  Economic Review, November/December 1987: 2-25.

Link to article (archived).

 

Testing MathJax-LaTeX

At first, we sample $$f(x)$$ at \(N\) ($N$ is odd) equidistant points around \(x^*\):

\[
f_k = f(x_k),\: x_k = x^*+kh,\: k=-\frac{N-1}{2},\dots,\frac{N-1}{2}
\]

where \(h\) is some step.

Then we interpolate the points \((x_k,f_k)\) by the polynomial

\begin{equation}
\label{eq:poly} \tag{1}
P_{N-1}(x)=\sum_{j=0}^{N-1}{a_jx^j}
\end{equation}

Its coefficients \(a_j\) are found as the solution of a system of linear equations:
\begin{equation} \label{eq:sys} \tag{2}
\left\{ P_{N-1}(x_k) = f_k\right\},\quad k=-\frac{N-1}{2},\dots,\frac{N-1}{2}
\end{equation}
Here are references to existing equations: \ref{eq:poly}, \eqref{eq:sys}.
Here is a reference to a non-existing equation: \eqref{eq:unknown}.
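
As a quick numerical check of the scheme above (this snippet is my addition, not part of the MathJax test itself), the sketch below samples an assumed example function, solves the linear system (2) for the coefficients \(a_j\) of the polynomial (1), and evaluates the interpolant at \(x^*\).

import numpy as np

f = np.cos                    # example function, assumed just for this check
x_star, h, N = 0.3, 0.1, 5    # N is odd

k = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
x_k = x_star + k * h          # the equidistant sample points
f_k = f(x_k)

# System (2): P_{N-1}(x_k) = f_k, i.e. a Vandermonde system for the a_j in (1)
A = np.vander(x_k, N, increasing=True)    # columns are x_k^0, x_k^1, ..., x_k^{N-1}
a = np.linalg.solve(A, f_k)

print(np.allclose(np.polyval(a[::-1], x_k), f_k))   # True: interpolant hits the samples
print(np.polyval(a[::-1], x_star), f(x_star))       # both approximately cos(0.3)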

Brainteaser: The Monty Hall Problem

You are on a game show and presented with 3 doors.  Behind one is a car and behind the other 2 are goats.  You want to choose the door with a car behind it, as if you do so, you win the car.  You choose one door.  Then, the host opens one of the other doors, which reveals a goat behind it.  The host gives you a choice to either switch your door to the other one that’s still closed or keep your original choice.  Should you switch doors?

 


 

Answer:

If your strategy is to stick to your original choice, your probability of choosing the door with the car behind it is 1/3.  Let’s see what happens if you switch.  So you choose a door, the host reveals one of the other doors with a goat behind it, and asks if you want to switch.  What has happened up to this point?  There’s a 1/3 chance that you picked the door with the car behind it, which means that if you switch, you are switching to a door with a goat behind it.  There’s a 2/3 chance that you picked a door with a goat behind it, which means that if you switch, you are switching to a car behind it.  So if your strategy is to always switch, there’s a 1/3 chance you get a goat in the end (because you happened to choose a car on your first choice, which has a probability of 1/3) and a 2/3 chance you get a car in the end (because you happened to choose a goat on your first choice, which has a probability of 2/3).  So the best strategy is to switch.

The host revealing one of the doors gives you additional information.  Switching lets you use that information, and it pays off because your original choice was more likely (2/3) to be a goat than the car.

Perhaps a more intuitive answer comes from imagining 100 doors.  One has a car behind it and 99 of them have goats behind them.  You choose one door, the host reveals another door with a goat behind it, and asks if you want to switch.  If you don’t switch, there’s a 1/100 chance that you chose the door with the car behind it.  But if you switch, assuming that you probably didn’t choose the right door on your first try (because 1/100 is small), you now have a 1/98 chance of choosing the right door (because the host has revealed one door with a goat behind it and you’re giving up your original door).  Of course 1/98 is better than 1/100.  The exact probability of getting the right door with the switching strategy is 99/100 × 1/98 (the probability that you chose the wrong door on the first try times the probability of choosing the right door after accepting the offer to switch).  99/100 × 1/98 = 1/100 × (99/98) > 1/100, where 1/100 is the probability of getting the car without switching, and so switching is better than not switching.
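
If the argument still feels slippery, a quick Monte Carlo check makes the 1/3 versus 2/3 split easy to see.  This is just an illustrative sketch of the game as described above; for the 100-door variant, the edge from switching is real but tiny, so it is computed exactly rather than simulated.

import random

def play(n_doors, switch, doors_opened_by_host=1):
    """One round: the host opens `doors_opened_by_host` goat doors (never the
    contestant's door, never the car), then the contestant either stays or
    switches to a uniformly random remaining closed door."""
    car = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    goat_doors = [d for d in range(n_doors) if d != choice and d != car]
    opened = set(random.sample(goat_doors, doors_opened_by_host))
    if not switch:
        return choice == car
    still_closed = [d for d in range(n_doors) if d != choice and d not in opened]
    return random.choice(still_closed) == car

def estimate(n_doors, switch, trials=200_000):
    return sum(play(n_doors, switch) for _ in range(trials)) / trials

random.seed(0)
print("3 doors, stay:  ", estimate(3, switch=False))   # about 1/3
print("3 doors, switch:", estimate(3, switch=True))    # about 2/3

# The 100-door variant described above (host opens only ONE goat door, and you
# switch to a random other closed door) has an edge too small to see with a
# modest number of trials, so compute it exactly instead:
print("100 doors, stay:  ", 1 / 100)                   # 0.0100
print("100 doors, switch:", (99 / 100) * (1 / 98))     # about 0.0101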

Brainteaser: 100 Prisoners in a Line and Extension

There are 100 prisoners.  An executioner tells them that tomorrow morning, he will line them up so that each of them is facing the back of the head of another prisoner, except for one prisoner at the end of the line.  In other words, prisoner 1 sees the back of the head of prisoner 2 as well as the backs of the heads of prisoners 3-100, prisoner 2 sees the backs of the heads of prisoners 3-100, …, prisoner 99 only sees the back of the head of prisoner 100, and prisoner 100 doesn’t see any prisoners in front of him.  The executioner tells them that he will put either a red or blue hat on each prisoner, then starting with prisoner 1 (the one who can see 99 other prisoners in front of him), will ask him what color hat he is wearing.  The prisoner says a color, and if he is wrong, he will silently kill that prisoner (prisoner 1 would be killed in a way that prisoners 2-100 won’t know if he was killed or not).  If he is right, he will keep him alive.  Then, the executioner will move to prisoner 2, ask the same question, and kill him if he’s wrong, keep him alive if he is right.  The executioner keeps doing this for every prisoner up to prisoner 100.  The prisoners are allowed to discuss together in the night what to do for the next day.  What should their plan be in order to maximize the number of survivors?  For clarity, what should their plan be in order to maximize the number of survivors in the worst case scenario (any random guess by a prisoner ends up being wrong)?

 


 

Not the answer:

A sort of baseline answer is that prisoner 1 says the color of the hat worn by the prisoner right in front of him or her, thus sacrificing his or her life with a guess.  Prisoner 2 is guaranteed to live.  Repeat this for every pair of prisoners, giving us at least 50 prisoners alive at the end.  With 2 colors of hats, it makes intuitive sense that this would be an answer.  Unintuitively, this is far from the answer :-/

 


 

Answer:

One key, or hint, that may remain unemphasized when this brainteaser is presented, is that when a prisoner says his guess for his own hat color, that guess is heard by all the other prisoners.  A well-chosen guess can therefore carry valuable information to the prisoners who answer later.

Let’s say there are only 3 prisoners and we are the middle prisoner.  We see only one prisoner in front of us, and say he is wearing a red hat.  From the perspective of the prisoner behind us, either 1 or 2 red hats are seen.  So it’s possible for the prisoner behind us to announce this through some code (e.g. “Red” = there is 1 red hat in front of me, “Blue” = there are 2 red hats in front of me).  This allows us to answer our own hat color correctly.  Additionally, the prisoner in front of us will have gained two pieces of information: how many red hats there are among the last two prisoners (us and him) and what hat the middle prisoner is wearing.  In other words, initially, there were either 1 or 2 red hats worn by the last two prisoners.  The middle prisoner has the ability to answer correctly after the first prisoner sacrifices himself or herself by announcing the code.  If the first prisoner announces that there are 2 red hats in front of him, the middle prisoner will definitely say that he himself is wearing a red hat, leaving 1 red hat for the last prisoner.  If the first prisoner announces that there is 1 red hat in front of him, and the middle prisoner then says “Red,” the last prisoner knows that they are Blue, while if the middle prisoner says “Blue,” the last prisoner knows that they are Red.

Let’s say there are 4 prisoners in a line.  The first prisoner sees 0, 1, 2, or 3 red hats in front of him or her.  But as long as the second prisoner announces his or her own hat color correctly, that will provide information for the later prisoners.  So how can the first prisoner announce information so that at least the second prisoner will get his or her own hat color correct?  The second prisoner sees 2 hats in front of him or her, so 0, 1, or 2 red hats.  The answer is that the first prisoner announces the oddness or evenness of the number of red hats he or she sees.  Whatever the second prisoner sees in front of him and whatever the first prisoner (behind him) sees in front of him can only differ by 0 red hats or 1 red hat (whatever hat the second prisoner is wearing).  Thus, the key is, when there is only a difference of one hat at each step, oddness and evenness conveys enough information to tell us what has changed.  So the first prisoner sacrifices himself by announcing, say, “Red” for an even number of red hats and “Blue” for an odd number of red hats that he sees in front of him.  This allows the second person to say his hat color correctly.  The third person then knows whether the number of red hats among the last 3 people (the second, third, and fourth prisoners) was odd or even, plus the exact hat color of the second person (just announced), plus, of course, the exact hat color of the fourth person, the person in front of him.  Effectively, the third person knows the hat colors of all 3 people at the end of the line except his own, together with the oddness or evenness of the number of red hats for those 3 people that the first person announced.  This is enough information for the third person to figure out what color hat he has.  It’s the same with the last person.

So with 100 people, the first person sacrifices himself by announcing, in code, the oddness or evenness of one of the colors (say red) among the 99 people in front of him.  The second person has exact knowledge of the colors of the 98 people in front of him, plus the oddness or evenness of red for all 99 people excluding the first person (i.e. the 98 people in front of him plus himself), giving him correct knowledge of his own color.  The third person now has exact knowledge of the color of the person behind him (just announced) and the colors of the 97 people in front of him, plus the oddness or evenness of red for the 99 people that include him, giving him enough information to figure out his own color.  This continues until the whole line is finished.  Thus, at least 99 out of 100 people can be saved with this strategy.
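
Here is a small simulation of the parity strategy described above (my own sketch; encoding the colors as 0 = blue and 1 = red is just a convention).  In every random trial, at least 99 of the 100 prisoners answer correctly, and the first prisoner survives only when he happens to guess his own hat.

import random

def parity_strategy_survivors(hats):
    """hats[0] is the prisoner at the back of the line (asked first, sees
    everyone in front of him); hats[-1] is at the front.  Colors: 0 = blue,
    1 = red.  Prisoner 0 announces the parity of the red hats he sees; every
    prisoner after him can then deduce his own hat exactly."""
    n = len(hats)
    announced = []
    survivors = 0
    for i in range(n):
        reds_seen = sum(hats[i + 1:])            # hats prisoner i can see
        if i == 0:
            guess = reds_seen % 2                # code: 1 = "odd number of reds in front"
        else:
            parity_in_front_of_first = announced[0]
            reds_heard = sum(announced[1:i])     # earlier answers, all correct by induction
            # reds among prisoners 1..n-1  =  reds_heard + my own hat + reds_seen
            guess = (parity_in_front_of_first - reds_heard - reds_seen) % 2
        announced.append(guess)
        survivors += (guess == hats[i])          # prisoner 0 survives only by luck
    return survivors

random.seed(0)
results = [parity_strategy_survivors([random.randint(0, 1) for _ in range(100)])
           for _ in range(1000)]
print(min(results), max(results))                # 99 100: at least 99 always survive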

 


 

Extension:

What if the executioner uses more colors?

In our above case, we had 2 colors, and we sacrificed 1 prisoner at the beginning of the line to announce the oddness or evenness of one of the colors for the 99 people he sees in front of him.  Since all prisoners know the number of prisoners that the first prisoner sees (99), everyone only needs to keep track of one of the colors, say red.  The first prisoner announces the oddness or evenness of red, and each subsequent prisoner counts how many reds there are in the 99 to see if they also have a red hat or not.

If we have 3 colors, the first prisoner that can be saved (the third in line) sees the 97 prisoners in front of him wearing up to 3 different colors and needs to figure out what color hat he has on.  Extending the strategy from above, if we sacrifice the two prisoners before him, they can each announce, in code, the oddness or evenness of one of two of the colors, counted over the 98 prisoners that both of them can see (everyone except the two sacrificed prisoners).  This is enough information for the first prisoner we save to figure out what color hat he has: he compares each announced parity with what he sees in front of him; if one of the two doesn’t match, that is his color, and if both match, his hat must be the third color.  All subsequent prisoners will then have exact knowledge of the hat colors of all prisoners that can be saved except their own, which they deduce from the oddness or evenness of the 2 colors that the two sacrificed prisoners announced.  So in this case, we sacrifice 2 prisoners at the start and the 98 subsequent prisoners can be saved.

Let us apply the same logic to more colors.  If the executioner uses y different colors where 1 ≤ y ≤ 100, the first y – 1 prisoners sacrifice themselves by announcing the oddness or evenness of y – 1 of the colors, counted over the prisoners to be saved.  The remaining 100 – (y – 1) = 101 – y prisoners will have enough information to correctly state their hat colors.  If the executioner uses more colors than there are prisoners, we don’t have enough prisoners to sacrifice to convey the oddness or evenness of enough colors to the prisoners at the end.  In addition, we can always default back to the “baseline” solution, where each pair works together by sacrificing one prisoner (who simply announces the color of the hat in front of him) and saving the other one (who simply says the color that was announced by the prisoner before him), and guarantee at least 50 prisoners saved.  Thus, for 1 ≤ y ≤ 50, the “sacrifice for odd or even” strategy saves 101 – y people, i.e. 100 down to 51.  For y = 51, it saves 50 people, which is the same as the result for the “default pair sacrifice” strategy.  For y ≥ 52 (and even if y > 100), the “default pair sacrifice” strategy can always save 50 people and becomes the better of the two.
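
A tiny helper (again just a sketch, using the counts worked out above) compares the worst-case guarantees of the two strategies as the number of colors grows.

def guaranteed_survivors(n_prisoners, n_colors):
    """Worst-case number saved under the two strategies discussed above."""
    sacrifices = n_colors - 1
    parity_strategy = max(n_prisoners - sacrifices, 0)   # 101 - y for 100 prisoners
    pair_strategy = n_prisoners // 2                     # one saved per pair
    return parity_strategy, pair_strategy

for y in (2, 3, 50, 51, 52, 150):
    parity, pairs = guaranteed_survivors(100, y)
    print(f"{y} colors: parity strategy saves {parity}, pair strategy saves {pairs}")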

Brainteaser: Blue Foreheads

100 people are in a room.

 

  1. All 100 of them are perfect logicians.
  2. They are told that at least one person in the room has blue paint on their forehead.
  3. They are told that once you deduce that you have blue paint on your forehead, the next time that the lights are turned off, leave the room.

 

All 100 people have actually had their foreheads painted blue (but of course, none of them knows this at this point; they can only see the other people’s foreheads).  The light is turned off, then on, then off, on, etc.  What happens?

 


 

Answer:

So each person sees 99 other people with blue paint on their heads.  While this is the situation we begin with, it doesn’t seem to help with solving the problem at all.  The key for this problem is to start as small as possible and then expand.

 

Start with 1 person.  1 person in a room sees 0 other people.  Thus, if there is at least 1 person in the room with blue paint, he or she must be it.  The light goes off, and then on, and we see 0 people in the room, as the person has left.

 

Let’s say we have 2 people.  Put ourselves into the shoes of one of them.  They see 1 person in the room with blue paint on their forehead, and don’t know if there is blue paint on their own forehead.  But if there were no blue paint on their forehead, then the other person should deduce that they must be the one with blue paint on their forehead, and will be gone by the next light.  The light is turned off, then on.  Since both people see the other person with blue paint, both remain.  Now, each person knows that the other person looked at their forehead and saw blue paint, and so each person knows that they have blue paint on their own forehead.  The light turns off and on, and there are 0 people in the room.

 

I think you know where this is going (although I find the logic the most difficult from here).  3 people in the room.  Each person sees 2 other people with blue paint on their foreheads.  The additional key here is that each person needs to think, “What if I don’t have blue paint?  If what happens then is a contradiction, then I must have blue paint.”  Choosing one person’s perspective, our “first” person, we first posit that we don’t have blue paint.  In that case, each of the other 2 people sees 1 person without blue paint and 1 person with blue paint.  Our existence as someone without blue paint doesn’t matter in their calculations.  Each of them thinks, “There is one other person in this room with blue paint.  If they see me without blue paint as well, then they should disappear by the next light.”  The light turns off, then on.  All 3 people are still there.  So each of the other 2 people thinks, “Since that other person didn’t leave, I must have blue paint.  So I will leave by the next light.”  The light turns off and on.  But since the truth is that all 3 people have blue paint, the other 2 people won’t disappear.  Instead, each of them is thinking the same thing about the other 2 people they see with blue paint on their foreheads.  Everyone waited two turns to see if the other people would make a move.  Since they didn’t, everyone has found a contradiction to “I don’t have blue paint,” and thus everyone deduces that they have blue paint on their own forehead.  Thus, the third time that the light goes off and on, all 3 people have left the room.

 

4 people in the room.  Assume you don’t have blue paint, so your being there doesn’t affect the others’ logic.  There are 3 people wondering if they have blue paint, and they each see 2 other people with blue paint.  After 3 turns of the light going off and on, they should all leave.  If they don’t, we have a contradiction, so we have blue paint.  So on the 4th light, all 4 people leave.

 

5 people in the room.  Described another way: Let’s say we don’t have blue paint.  There are 4 other people with blue paint.  Let’s label them A, B, C, and D.  D is wondering if he or she has blue paint, looking at A, B, and C.  D first assumes he has no paint and is thinking, “C is thinking if he doesn’t have blue paint, then after 2 turns, A and B will disappear.”  After 2 turns, A and B remain.  D is thinking, “So now, C will conclude that he has blue paint.  So on the 3rd turn, A, B, and C should leave.”  After the 3rd turn, A, B, and C remain.  D is thinking, “OK, so there’s a contradiction to the assumption that I don’t have blue paint.  Thus, I have blue paint, and will disappear on the 4th turn.”  On the 4th turn, we see that A, B, C, and D still remain.  Thus, we have a contradiction to our first assumption that we have no blue paint.  We have blue paint, so on the 5th turn, we leave.  Everyone else also has the same logic process, so on the 5th turn, everyone leaves.

 

If there are 100 people in the room, all with blue paint on their foreheads, first assume that you don’t have blue paint on your forehead.  So then, your existence shouldn’t matter to the other 99 people’s logic.  Let’s label us A.  There are 100 people in the room: A, B, C, …, X, Y, Z, AA, AB, …, CV.  Person A first assumes they have no paint, and thinks, “B must be thinking, if I don’t have paint, then, C would think, if I don’t have blue paint… etc.”  Basically, we are testing the assumption that everyone first assumes that they themselves don’t have blue paint on their forehead.  It doesn’t make intuitive sense since anyone can see that there are at least 99 other people with paint, but it’s the key step.  Assume, what if everyone from A to CV thought that they didn’t have blue paint?  Or rather that A assumes they don’t have blue paint and that B assumes that B doesn’t have blue paint and B assumes that … CU assumes CU doesn’t have blue paint and that CV assumes that they don’t have blue paint?  Well, this is a contradiction, because at least 1 person must have blue paint.  Now, let’s assume A to CU thinks that they don’t have blue paint and CU sees CV has blue paint and must assume that CV sees everyone else with no paint.  After 1 turn, CV doesn’t leave (because it’s not true that the other 99 people don’t have blue paint), and thus we have a contradiction and CU must believe that they have blue paint on their forehead as well.  After turn 2, CU doesn’t leave though (because it’s not true that the 98 other people other than CV and CU don’t have blue paint), so we have a contradiction and CT must believe that they have blue paint.  Keep going until turn 99, where B doesn’t leave because it’s not true that A doesn’t have blue paint (if B saw that A doesn’t have blue paint, B should have left on turn 99).  We have a contradiction, so A concludes that they have blue paint, and so on turn 100, everyone leaves.

 

It’s a lot easier to rely on the formula we built from the smaller examples: “with a room of x people, all with blue paint, they all leave at once after x turns.”  But I find the intuition disappears with large numbers.  The above paragraph is an attempt to describe the intuition, the key being that we assume that all x people assume that they don’t have blue paint, and then contradict those assumptions one by one (because in reality, everyone has blue paint), until we’ve contradicted all cases down to 1 person assuming they have no paint.  Once the final assumption is contradicted, everyone leaves at once on the xth turn, since everyone has run through the same logic.
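
To check the “everyone leaves on turn x” formula, here is a short sketch (my addition, not part of the original puzzle) in which each person mechanically applies the decision rule derived above: count the blue foreheads you can see, call it k, and leave on round k + 1 unless someone has already left, in which case you know you are unpainted and stay.

def departure_schedule(painted):
    """painted[i] is True if person i has blue paint.  Each person applies the
    rule derived above: with k = the number of blue foreheads you can see,
    leave on round k + 1 unless someone has already left."""
    n = len(painted)
    k = [sum(painted[j] for j in range(n) if j != i) for i in range(n)]
    schedule = {}
    left = set()
    for rnd in range(1, n + 2):
        if left:          # someone already left: everyone remaining now knows
            break         # they are unpainted and stays put
        leavers = [i for i in range(n) if k[i] + 1 == rnd]
        if leavers:
            schedule[rnd] = leavers
            left.update(leavers)
    return schedule

# All 100 people painted: everyone sees 99 blue foreheads, nobody moves for 99
# rounds, and all 100 leave together on round 100.
sched = departure_schedule([True] * 100)
print({rnd: len(people) for rnd, people in sched.items()})    # {100: 100}

# Sanity check with 3 painted and 2 unpainted people: the painted three leave
# on round 3, and the unpainted two correctly stay.
print(departure_schedule([True, True, True, False, False]))   # {3: [0, 1, 2]}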

Brainteaser: Forehead Numbers

There are 3 people placed in a room.  They all have perfect logic.  The 3 people are told by a host that a number has been written on each of their foreheads.  Each of the 3 numbers is unique, they are all positive, and they relate to each other such that A + B = C (i.e. one number is the sum of the other two).  In the room, each person can only see the other two people’s numbers, as they cannot see their own foreheads.

 

Suppose you are one of the 3 and you see one person with “20” on their forehead and the other person with “30.”  The host asks you, then the person with “20,” and then the person with “30” what number is on their heads and all 3 say that they don’t know.  The host then asks, again, you, then the person with “20,” and then the person with “30” what number is on their head, and all 3 answer correctly.  How does this happen?

 


 

Answer:

The key to this brainteaser is to calculate the logic of each person’s point of view, i.e. put yourself in each of their shoes.  The annoying part of solving this brainteaser, then, is having to keep track of 3 different points of view.

“First” person: If you see “20” and “30,” that means you are either 10 or 50.  So you don’t know what’s on your forehead among these two numbers.

The “20” person: You see either 1.) “30” and “10” or 2.) “30” and “50.”  In case 1.) you are either 20 or 40.  In case 2.) you are either 20 or 80.  So you don’t know.

 

The “30” person: You see either 1.) “20” and “10” or 2.) “20” and “50.”  In case 1.) you must be 30, because you cannot also be 10 like the “First” person (the numbers are unique).  So the key here is that if you see one person with number “x” and another with number “2x,” you know you cannot also have “x” on your forehead.  You must be “3x.”  So in this case, the “30” person would know that he or she has 30 on his or her head.  In case 2.) the “30” person has either 30 or 70, and so he or she wouldn’t know.

 

Since after the first round, everyone answered that he or she did not know, that means that we cannot have the “30” person’s case 1.), which is that he sees “20” and “10.”  In other words, our “First” person cannot have 10.  He has 50 on his forehead.  So when the host asks the “First” person the second time, he or she will answer 50.

 

The most illuminating and clean part of the problem is just up to here, but for the sake of completeness, I kept going.

 

From the “20” person’s point of view, we assume that he or she is able to figure out the above sort of logic on his or her own.  What the “20” person sees is “30” and “50,” which means that he or she is either 20 or 80.  Somehow, the “50” person figured out on his or her own on the second round of questioning that they have 50 on their head.  The logic is that in order to find out what your number is on the second round, you are using someone’s “I don’t know” answer in the first round of questioning.  So if the “20” person indeed has 20 on his or her head, they can deduce that the “50” person is able to figure out all the above and that his or her number is 50 on the second round.  If the “20” person has 80 instead, the “First” person sees “80” and “30” and is thus wondering if his or her number is 50 or 110 and the “30” person sees “80” and 50 and wondering if they’re 30 or 130.  In none of these cases is a person announcing that they are not seeing an “x” and “2x” situation (which is what the “First” person experiences: seeing a “2x” and “3x” situation, and then seeing that the “3x” person doesn’t immediately say that he or she knows that his or her number is “3x.”).  If the “20” person has 20, then, again, the “First” person sees that the “30” person is announcing that they aren’t seeing an “x” and “2x” situation, which means that the “First” person can’t have 10 and must have 50.  This causes the “20” person to know that his or her number is 20.

 

Similarly, the “30” person sees “50” and “20” and initially doesn’t know if he or she is 30 or 70.  If it’s 70, then one of the other people sees “70” and “50” and the other sees “70” and “20,” which doesn’t allow the situation described above of someone announcing that he or she doesn’t see an “x” and “2x” situation.  If it’s 30, then everything that’s been discussed happens, and so it must be 30.

 

The key basically is that if someone sees “x” and “2x,” they should know immediately that they are 3x.  If someone sees “2x” and “3x,” they are immediately on high alert to see if the “3x” person immediately knows that he or she is 3x.  If the “3x” person doesn’t know, that is an announcement that the “3x” person did not see an “x” and “2x” situation, which means that the person we started with must be “5x.”  So, in an “x” and “2x” situation, you know immediately that you are 3x.  In a “2x” and “3x” situation, if everyone says that he or she doesn’t know in the first round, that announces that no one saw (and the “3x” person in particular did not see) an “x” and “2x” situation, which means that you must be 5x.
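
For anyone who would rather verify the whole question-and-answer sequence than trace it by hand, here is a small possible-worlds sketch (my own construction, not part of the original write-up).  A world is a triple of the three people’s numbers; each “I don’t know” or correct answer eliminates the worlds in which that speaker would have said something else, and a person “knows” once only one candidate world remains consistent with what they see.

from functools import lru_cache

# A "world" is a triple (a, b, c): the numbers on persons 0, 1, 2, where in the
# actual world person 0 is "you" (50), person 1 has 20 and person 2 has 30.

def valid(w):
    a, b, c = w
    return len({a, b, c}) == 3 and min(w) >= 1 and (a == b + c or b == a + c or c == a + b)

def candidates(w, i):
    """Worlds person i cannot tell apart a priori: same as w except position i,
    which must be either the sum or the difference of the other two numbers."""
    others = [w[j] for j in range(3) if j != i]
    result = []
    for v in {sum(others), abs(others[0] - others[1])}:
        cand = list(w)
        cand[i] = v
        if valid(tuple(cand)):
            result.append(tuple(cand))
    return result

# Public history: in round 1 everyone says "I don't know" (False); in round 2
# everyone answers correctly, i.e. knows (True).
HISTORY = [(0, False), (1, False), (2, False), (0, True), (1, True), (2, True)]

@lru_cache(maxsize=None)
def possible(w, t):
    """Is world w consistent with the first t announcements?"""
    if not valid(w):
        return False
    if t == 0:
        return True
    speaker, said_know = HISTORY[t - 1]
    return possible(w, t - 1) and knows(speaker, w, t - 1) == said_know

@lru_cache(maxsize=None)
def knows(i, w, t):
    """Does person i know their own number in world w after t announcements?"""
    return len([c for c in candidates(w, i) if possible(c, t)]) == 1

actual = (50, 20, 30)
labels = {0: 'the "50" person', 1: 'the "20" person', 2: 'the "30" person'}
for t, (speaker, _) in enumerate(HISTORY):
    remaining = [c for c in candidates(actual, speaker) if possible(c, t)]
    if len(remaining) == 1:
        print(f"question {t + 1}: {labels[speaker]} answers {remaining[0][speaker]}")
    else:
        print(f"question {t + 1}: {labels[speaker]} says I don't know")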