Month: June 2016

Cyberpunk Images: Japan and China

Nail House in Chongqing, China. http://www.theatlantic.com/photo/2015/04/and-then-there-was-one/390501/

Jiutian International Plaza, Zhuzhou, Hunan, China

Caiyuanba Bridge on-ramp, Chongqing. Elevated Roads Encroaching Farmhouses Chongqing, Mark Homs, Getty Images

IMHO, Japan is “fantasized cyberpunk.”  People (especially in the West) look at the night lights of Japanese cities and fantasize about cyberpunk, conjuring images from Neuromancer, Blade Runner, various anime, and The Matrix (the green, Japanese-looking code on the screens).  China is real “high tech, low life” cyberpunk, because there’s actual low life in the midst of development and high tech, and it’s an actual dystopia.  That last part isn’t an exaggeration – it really is a dystopia.  It is utopian in its amazing, unprecedented growth out of poverty since 1979, but dystopian in its government, economic inequality, and environmental issues.  Japan, on the other hand, is a clean, environmentally minded developed country (the EPI has ranked Japan in the 20s, and 39th in 2016) (first link archived) with economic equality comparable (archived) to developed Western Europe.  There are some who don’t join the rat race and feel alienated (e.g. NEETs, hikikomori, or just plain otaku geeks, I guess – and they are probably overrepresented in anime-related media), and that can certainly be cyberpunk, but in terms of inequality, it doesn’t compare to China’s.

Brainteaser: The Monty Hall Problem

You are on a game show and presented with 3 doors.  Behind one is a car and behind the other 2 are goats.  You want to choose the door with the car behind it; if you do, you win the car.  You choose one door.  Then, the host opens one of the other doors, revealing a goat behind it.  The host gives you a choice: either switch to the other door that’s still closed or keep your original choice.  Should you switch doors?

If your strategy is to stick to your original choice, your probability of choosing the door with the car behind it is 1/3.  Let’s see what happens if you switch.  So you choose a door, the host reveals one of the other doors with a goat behind it, and asks if you want to switch.  What has happened up to this point?  There’s a 1/3 chance that you picked the door with the car behind it, which means that if you switch, you are switching to a door with a goat behind it.  There’s a 2/3 chance that you picked a door with a goat behind it, which means that if you switch, you are switching to the door with the car behind it.  So if your strategy is to always switch, there’s a 1/3 chance you get a goat in the end (because you happened to choose the car on your first pick, which has a probability of 1/3) and a 2/3 chance you get the car in the end (because you happened to choose a goat on your first pick, which has a probability of 2/3).  So the best strategy is to switch.

The host revealing one of the doors gives you additional information.  Switching lets you use that information: since your original choice was more likely wrong (2/3) than right (1/3), switching wins more often than it loses.

Perhaps a more intuitive answer is if there are 100 doors.  One has a car behind it and 99 of them have goats behind them.  Choose one door, the host reveals another door with a goat behind it, and asks if you want to switch.  If you don’t switch, there’s a 1/100 chance that you chose the door with the car behind it.  But if you switch, assuming that you probably didn’t choose the right door on your first try (because 1/100 is small), you now have a 1/98 chance of choosing the right door (because the host has revealed one door with a goat behind it and you’re giving up your original door).  Of course 1/98 is better than 1/100.  The exact probability of getting the right door with the switching strategy is 99/100 × 1/98 (probability that you chose the wrong door on the first try × probability of choosing the right door after accepting the offer to switch).  99/100 × 1/98 = 1/100 × (99/98) > 1/100, where 1/100 is the probability of getting the car without switching, and so switching is better than not switching.
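For the skeptical, the argument is easy to check empirically.  A minimal Monte Carlo sketch in Python (the function name, trial count, and host tie-breaking rule are my own choices):

```python
import random

def monty_hall(trials=100_000, switch=True, seed=0):
    """Estimate the win probability of the stay/switch strategies."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that is neither the pick nor the car
        # (if both remaining doors hide goats, the choice between
        # them doesn't affect the result).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(round(monty_hall(switch=False), 2))  # ~0.33
print(round(monty_hall(switch=True), 2))   # ~0.67
```

The estimates converge on 1/3 for staying and 2/3 for switching, matching the argument above.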

Brainteaser: 100 Prisoners in a Line and Extension

There are 100 prisoners.  An executioner tells them that tomorrow morning, he will line them up so that each of them is facing the back of the head of another prisoner, except for one prisoner at the end of the line.  In other words, prisoner 1 sees the back of the head of prisoner 2 as well as the backs of the heads of prisoners 3-100, prisoner 2 sees the backs of the heads of prisoners 3-100, …, prisoner 99 only sees the back of the head of prisoner 100, and prisoner 100 doesn’t see any prisoners in front of him.  The executioner tells them that he will put either a red or blue hat on each prisoner, then, starting with prisoner 1 (the one who can see 99 other prisoners in front of him), will ask him what color hat he is wearing.  The prisoner says a color, and if he is wrong, the executioner will silently kill that prisoner (prisoner 1 would be killed in a way that prisoners 2-100 won’t know whether he was killed or not).  If he is right, he will keep him alive.  Then, the executioner will move to prisoner 2, ask the same question, kill him if he’s wrong, and keep him alive if he is right.  The executioner keeps doing this for every prisoner up to prisoner 100.  The prisoners are allowed to discuss together in the night what to do for the next day.  What should their plan be in order to maximize the number of survivors?  For clarity, what should their plan be in order to maximize the number of survivors in the worst-case scenario (where any random guess a prisoner makes turns out wrong)?

A sort of baseline answer is that prisoner 1 says the color of the hat worn by the prisoner right in front of him or her, thus sacrificing his or her life on a guess.  Prisoner 2 is then guaranteed to live.  Repeat this for every pair of prisoners, giving us at least 50 prisoners alive at the end.  With 2 colors of hats, it makes intuitive sense that this would be the answer.  Unintuitively, this is far from the best answer :-/

One key, or hint, that may remain unemphasized when this brainteaser is presented to people, is that when a prisoner announces his guess for his own hat color, that guess is heard by all the other prisoners.  If each guess is correct, that provides valuable information to the later prisoners.

Let’s say there are only 3 prisoners and we are the middle prisoner.  We see only one prisoner in front of us – suppose he is wearing a red hat.  From the perspective of the prisoner behind us, either 1 or 2 red hats are visible.  So it’s possible for the prisoner behind us to tell us this through some code (e.g. “Red” = there is 1 red hat in front of me, “Blue” = there are 2 red hats in front of me).  This allows us to answer our own hat color correctly.  Additionally, the prisoner in front of us will have gained two pieces of information: how many red hats the last two prisoners wear and what hat the middle prisoner is wearing.  In other words, initially, either 1 or 2 red hats were worn by the last two prisoners.  The middle prisoner can answer correctly after the first prisoner sacrifices himself or herself by announcing the code.  If the first prisoner announces that there are 2 red hats in front of him, the middle prisoner will definitely say that he himself is wearing a red hat, leaving 1 red hat for the last prisoner.  If the first prisoner announces that there is 1 red hat in front of him, and then the middle prisoner says “Red,” the last prisoner knows that they are Blue, while if the middle prisoner says “Blue,” the last prisoner knows that they are Red.

Let’s say there are 4 prisoners in a line.  The first prisoner sees 0, 1, 2, or 3 red hats in front of him or her.  But as long as the second prisoner announces his or her own hat color correctly, that will provide information for the later prisoners.  So how can the first prisoner announce information so that at least the second prisoner will get his or her own hat color correct?  The second prisoner sees 0, 1, or 2 red hats in front of him or her.  The answer is that the first prisoner announces the oddness or evenness of the number of red hats he or she sees.  From the second prisoner’s perspective, what he sees in front of him and what the first prisoner sees can only differ by 0 red hats or 1 red hat (whatever hat the second prisoner is wearing).  Thus, the key is, when there is only a difference of one change at each increment, oddness and evenness conveys enough information to tell us what has changed.  So the first prisoner sacrifices himself by announcing, say, “Red” for an even number of red hats and “Blue” for an odd number of red hats that he sees in front of him.  This allows the second person to say his hat color correctly.  The third person now has three pieces of information: whether the number of red hats among the last 3 people is odd or even, the exact hat color of the second person, and, of course, the exact hat color of the person in front of him, the fourth person.  Effectively, the third person knows the hat colors of all 3 people at the end of the line except his own, plus the oddness or evenness of the number of red hats among those 3 people that the first person provided.  This is enough information for the third person to figure out what color hat he has.  It’s the same with the fourth and last person.

So with 100 people, the first person sacrifices himself by announcing, in code, the oddness or evenness of one of the colors.  The second person has exact knowledge of the colors of the 98 people in front of him plus the oddness or evenness of one of the colors for all 99 people excluding the first person (i.e. the 98 people in front of him plus himself), giving him correct knowledge of his own color.  The third person now has exact knowledge of the color of the second person (who answered correctly) and the colors of the 97 people in front of him, plus the oddness or evenness of one of the colors for the 99 people that include him, giving him enough information to figure out his own color.  This continues until the whole line is finished.  Thus, at least 99 out of 100 people can be saved with this strategy.
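The parity strategy can be sketched as a short simulation (the setup and names are my own; `hats[0]` is the first prisoner to speak, at the back of the line):

```python
import random

def run_line(hats):
    """Simulate the parity strategy on a line of 'red'/'blue' hats.
    hats[0] is the prisoner at the back, who sees all the others."""
    n = len(hats)
    # Prisoner 0 sacrifices himself: "red" means he sees an even
    # number of red hats in front of him, "blue" means odd.
    announced = sum(h == "red" for h in hats[1:]) % 2
    guesses = ["red" if announced == 0 else "blue"]
    heard = 0  # red hats correctly announced by prisoners behind me
    for i in range(1, n):
        seen = sum(h == "red" for h in hats[i + 1:])
        # My hat is red exactly when the parity I can account for
        # (heard + seen) disagrees with the announced parity.
        mine = "red" if (heard + seen) % 2 != announced else "blue"
        guesses.append(mine)
        heard += (mine == "red")
    return guesses

rng = random.Random(1)
hats = [rng.choice(["red", "blue"]) for _ in range(100)]
guesses = run_line(hats)
# Prisoners 2..100 are always right; prisoner 1 is right only by luck.
print(sum(g == h for g, h in zip(guesses[1:], hats[1:])))  # 99
```

However the hats are assigned, everyone after the first prisoner is guaranteed correct.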

Extension:

What if the executioner uses more colors?

In our above case, we had 2 colors, and we sacrificed 1 prisoner at the beginning of the line to announce the oddness or evenness of one of the colors for the 99 people he sees in front of him.  Since all prisoners know the number of prisoners that the first prisoner sees (99), everyone only needs to keep track of one of the colors, say red.  The first prisoner announces the oddness or evenness of red, and each subsequent prisoner counts how many reds there are in the 99 to see if they also have a red hat or not.

If we have 3 colors, the first prisoner that can be saved would see x prisoners in front of him with 3 different colors and needs to figure out what color hat he has on.  Extending the strategy from above, if we sacrifice the two prisoners before him, they can announce the oddness or evenness of two of the colors.  This is enough information for the first saved prisoner to deduce what color hat he has.  All subsequent prisoners will then have exact knowledge of the hat colors of all savable prisoners except their own, which they deduce from the oddness or evenness of the 2 colors that the two sacrificed prisoners announced.  So in this case, we sacrifice 2 prisoners at the start and the 98 subsequent prisoners can be saved.

Let us apply the same logic to more colors.  If the executioner uses y different colors where 2 ≤ y ≤ 100, the first y – 1 prisoners sacrifice themselves by announcing the oddness or evenness of y – 1 of the colors.  The remaining 100 – (y – 1) = 101 – y prisoners will have enough information to correctly state their hat color.  (With y = 1, everyone trivially knows their color.)  If the executioner uses more colors than there are prisoners, we don’t have enough prisoners to sacrifice to convey accurate information about the oddness or evenness of the colors to the prisoners at the end.  In addition, we can always default back to the “baseline” solution, where each pair works together by sacrificing one prisoner (who simply announces the color of the hat in front of him) and saving the other one (who simply repeats the color that was just announced), guaranteeing at least 50 prisoners saved.  Thus, for 2 ≤ y ≤ 50, the “sacrifice for odd or even” strategy saves 99 down to 51 people.  For y = 51, the strategy saves 50 people, which is the same as the result for the “default pair sacrifice” strategy.  For y > 51 (and even if y > 100), the “default pair sacrifice” strategy can always save 50 people and becomes better than the “sacrifice for odd or even” strategy.
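The y-color generalization can also be simulated, under the same reading of the strategy as above (the sacrificed prisoners announce, in code, the parity of one color each among the savable group; all names and the y = 3 example are my own):

```python
import random

def run_line(hats, colors):
    """Simulate the y-color strategy: the first y-1 prisoners each
    announce (in code) the parity of one color among the last
    n-(y-1) prisoners; everyone after that deduces his own hat.
    hats[0] is at the back of the line and sees everyone in front.
    Assumes y >= 2 so a parity can be encoded as one of two colors."""
    n, y = len(hats), len(colors)
    k = y - 1                    # number of sacrificed prisoners
    saved = hats[k:]             # the group whose hats get deduced
    # Parities of colors[0..y-2] in the saved group (0=even, 1=odd),
    # spoken aloud as colors[0] for even and colors[1] for odd.
    parities = [sum(h == colors[j] for h in saved) % 2 for j in range(k)]
    guesses = [colors[p] for p in parities]      # sacrificial guesses
    heard = {c: 0 for c in colors}               # correct saved guesses
    for i in range(k, n):
        seen = hats[i + 1:]
        mine = colors[k]         # default: the one untracked color
        for j in range(k):
            count = heard[colors[j]] + sum(h == colors[j] for h in seen)
            if count % 2 != parities[j]:
                mine = colors[j]  # exactly one tracked parity flips
                break
        guesses.append(mine)
        heard[mine] += 1
    return guesses

colors = ["red", "blue", "green"]
rng = random.Random(2)
hats = [rng.choice(colors) for _ in range(100)]
guesses = run_line(hats, colors)
print(sum(g == h for g, h in zip(guesses[2:], hats[2:])))  # 98
```

With 3 colors, the 2 sacrificed prisoners guess blindly and the remaining 98 are always correct, as claimed.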

100 people are in a room.

1. All 100 of them are perfect logicians.
2. They are told that at least one person in the room has blue paint on their forehead.
3. They are told that once a person deduces that they have blue paint on their forehead, that person must leave the room the next time the lights are turned off.

All 100 people have actually had their foreheads painted blue (but of course, none of them knows this at this point – they can only see the other people’s foreheads).  The light is turned off, then on, then off, then on, etc.  What happens?

So each person sees 99 other people with blue paint on their heads.  While this is the situation we begin with, it doesn’t seem to help with solving the problem at all.  The key for this problem is to start as small as possible and then expand.

Start with 1 person.  1 person in a room sees 0 other people.  Thus, if there is at least 1 person in the room with blue paint, he or she must be it.  The light goes off, and then on, and we see 0 people in the room, as the person has left.

Let’s say we have 2 people.  Put ourselves into the shoes of one of them.  They see 1 person in the room with blue paint on their forehead, and don’t know if there is blue paint on their own forehead.  But if there were no blue paint on their forehead, then the other person should deduce that they must be the one with blue paint, and will be gone by the next light.  The light is turned off, then on.  Since both people see the other person with blue paint, both remain.  Now, each person knows that the other person looked at their forehead and saw blue paint, and so each person knows that they have blue paint on their own forehead.  The light turns off and on, and there are 0 people in the room.

I think you know where this is going (although I find the logic the most difficult from here).  3 people in the room.  Each person sees 2 other people with blue paint on their foreheads.  The additional key here is, each person needs to think, “What if I don’t have blue paint?  If what happens then is a contradiction, then I must have blue paint.”  Choosing one person’s perspective – our “first” person – we first posit that we don’t have blue paint.  In that case, each of the other 2 people sees 1 person without blue paint and 1 person with blue paint.  Our existence as someone without blue paint doesn’t matter in their calculations.  Each of them thinks, “There is one other person in this room with blue paint.  If they see me without blue paint as well, then they should disappear by the next light.”  The light turns off, then on.  All 3 people are still there.  So each of the other 2 people should think, “Since that other person didn’t leave, I must have blue paint.  So I will leave by the next light.”  The light turns off and on.  But since the truth is that all 3 people have blue paint, the other 2 people don’t disappear.  Instead, each of them is thinking the same thing about the other 2 people in the room that they see with blue paint on their foreheads.  Everyone waited two turns to see if the other people would make a move.  Since they didn’t, everyone has found a contradiction to “What if I don’t have blue paint?”, and thus everyone deduces that they have blue paint on their own forehead.  Thus, the third time that the light goes off and on, the 3 people have left the room.

4 people in the room.  Assume you don’t have blue paint, so your being there doesn’t affect the others’ logic.  There are 3 people wondering if they have blue paint, and they each see 2 other people with blue paint.  After 3 turns of the light going off and on, they should all leave.  If they don’t, we have a contradiction, so we have blue paint.  So on the 4th light, all 4 people leave.

5 people in the room.  Described another way: Let’s say we don’t have blue paint.  There are 4 other people with blue paint.  Let’s label them A, B, C, and D.  D is wondering if he or she has blue paint, looking at A, B, and C.  D first assumes he has no paint and is thinking, “C is thinking if he doesn’t have blue paint, then after 2 turns, A and B will disappear.”  After 2 turns, A and B remain.  D is thinking, “So now, C will conclude that he has blue paint.  So on the 3rd turn, A, B, and C should leave.”  After the 3rd turn, A, B, and C remain.  D is thinking, “OK, so there’s a contradiction to the assumption that I don’t have blue paint.  Thus, I have blue paint, and will disappear on the 4th turn.”  On the 4th turn, we see that A, B, C, and D still remain.  Thus, we have a contradiction to our first assumption that we have no blue paint.  We have blue paint, so on the 5th turn, we leave.  Everyone else also has the same logic process, so on the 5th turn, everyone leaves.

If there are 100 people in the room, all with blue paint on their foreheads, first assume that you don’t have blue paint on your forehead.  Then your existence shouldn’t matter to the other 99 people’s logic.  Let’s label us A.  There are 100 people in the room: A, B, C, …, X, Y, Z, AA, AB, …, CV.  Person A first assumes they have no paint, and thinks, “B must be thinking, ‘If I don’t have paint, then C would think, if I don’t have blue paint…’” and so on.  Basically, we are testing the assumption that everyone first assumes that they themselves don’t have blue paint on their forehead.  It doesn’t make intuitive sense, since anyone can see that there are at least 99 other people with paint, but it’s the key step.  What if everyone from A to CV assumed that they didn’t have blue paint?  Or rather, that A assumes they don’t have blue paint, and that B assumes that B doesn’t have blue paint, and B assumes that … CU assumes CU doesn’t have blue paint and that CV assumes that they don’t have blue paint?  Well, this is a contradiction, because at least 1 person must have blue paint.  Now, let’s assume A to CU think that they don’t have blue paint; then CU sees that CV has blue paint and must assume that CV sees everyone else with no paint.  After 1 turn, CV doesn’t leave (because it’s not true that the other 99 people don’t have blue paint), and thus we have a contradiction, and CU must believe that they have blue paint on their forehead as well.  After turn 2, CU doesn’t leave either (because it’s not true that the 98 people other than CV and CU don’t have blue paint), so we have a contradiction and CT must believe that they have blue paint.  Keep going until turn 99, where B doesn’t leave, because it’s not true that A doesn’t have blue paint (if B saw that A didn’t have blue paint, B should have left on turn 99).  We have a contradiction, so A concludes that they have blue paint, and so on turn 100, everyone leaves.

It’s a lot easier to rely on the formula we built from the smaller examples that “With a room of x people, they all leave at once after x turns.”  But I find the intuition disappears with large numbers.  The above paragraph is an attempt to describe the intuition, the key being that we assume that all x people assume that they don’t have blue paint, and then one by one contradict that (because in reality, everyone has blue paint), until we’ve contradicted all cases down to 1 person assuming they have no paint.  Once that is contradicted on the xth turn, after that, everyone leaves at once, since everyone has the same logic process.
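To regain some confidence in the “x people leave after x turns” formula, the shared reasoning rule can be simulated directly: a person who sees b painted foreheads expects those b people to be gone after round b if the person themselves is clean, and so leaves on round b + 1 if they are still there.  A sketch (the modeling choices are my own):

```python
def leave_round(painted):
    """painted[i] is True if person i has blue paint.  Return a dict
    {person: round they leave}, applying the shared rule: a person who
    still sees b painted people present leaves on round b + 1 (i.e.
    when the round count exceeds the paint count they can explain)."""
    n = len(painted)
    gone = set()
    departed = {}
    t = 0
    while len(departed) < sum(painted):  # run until all painted leave
        t += 1
        leavers = [i for i in range(n)
                   if i not in gone
                   and sum(painted[j] for j in range(n)
                           if j != i and j not in gone) == t - 1]
        for i in leavers:
            gone.add(i)
            departed[i] = t
    return departed

departed = leave_round([True] * 100)
print(sorted(set(departed.values())))  # [100] -- all leave on round 100
```

With a mix, say 3 painted and 2 clean people, the 3 painted leave together on round 3, and the clean ones (who then see the painted people gone) never leave, matching the logic above.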

There are 3 people placed in a room.  They all have perfect logic.  The 3 people are told by a host that a number has been written on each of their foreheads.  Each of the 3 numbers is unique, they are all positive, and they relate to each other such that A + B = C (i.e. one is the sum of the other two).  In the room, each person can only see the other two people’s numbers, as they cannot see their own foreheads.

Suppose you are one of the 3 and you see one person with “20” on their forehead and the other person with “30.”  The host asks you, then the person with “20,” and then the person with “30” what number is on their heads and all 3 say that they don’t know.  The host then asks, again, you, then the person with “20,” and then the person with “30” what number is on their head, and all 3 answer correctly.  How does this happen?

The key to this brainteaser is to calculate the logic of each person’s point of view, i.e. put yourself in each of their shoes.  The annoying part of solving this brainteaser, then, is having to keep track of 3 different points of view.

“First” person: If you see “20” and “30,” that means you are either 10 or 50.  So you don’t know what’s on your forehead among these two numbers.

The “20” person: You see either 1.) “30” and “10” or 2.) “30” and “50.”  In case 1.) you are either 20 or 40.  In case 2.) you are either 20 or 80.  So you don’t know.

The “30” person: You see either 1.) “20” and “10” or 2.) “20” and “50.”  In case 1.) you must be 30, because you cannot also be 10 like the “First” person (the numbers are unique).  So the key here is that if you see one person with number “x” and another with number “2x,” you know you cannot also have “x” on your forehead.  You must be “3x.”  So in this case, the “30” person would know that he or she has 30 on his or her head.  In case 2.) the “30” person has either 30 or 70, and so he or she wouldn’t know.

Since after the first round, everyone answered that he or she did not know, that means that we cannot have the “30” person’s case 1.), which is that he sees “20” and “10.”  In other words, our “First” person cannot have 10.  He has 50 on his forehead.  So when the host asks the “First” person the second time, he or she will answer 50.

The most illuminating and clean part of the problem is just up to here, but in an attempt for completeness, I kept going.

From the “20” person’s point of view, we assume that he or she is able to figure out the above sort of logic on his or her own.  What the “20” person sees is “30” and “50,” which means that he or she is either 20 or 80.  Somehow, the “50” person figured out on his or her own, on the second round of questioning, that they have 50 on their head.  The logic is that in order to find out your number on the second round, you must be using someone’s “I don’t know” answer from the first round of questioning.  So if the “20” person indeed has 20 on his or her head, they can deduce that the “50” person is able to figure out all the above and learns that his or her number is 50 on the second round.  If the “20” person has 80 instead, the “First” person sees “80” and “30” and is thus wondering if his or her number is 50 or 110, and the “30” person sees “80” and “50” and is wondering if they’re 30 or 130.  In none of these cases is a person announcing that they are not seeing an “x” and “2x” situation (which is what the “First” person experiences: seeing a “2x” and “3x” situation, and then seeing that the “3x” person doesn’t immediately say that he or she knows that his or her number is “3x”).  If the “20” person has 20, then, again, the “First” person sees that the “30” person is announcing that they aren’t seeing an “x” and “2x” situation, which means that the “First” person can’t have 10 and must have 50.  This allows the “20” person to know that his or her number is 20.

Similarly, the “30” person sees “50” and “20” and initially doesn’t know if he or she is 30 or 70.  If it’s 70, the other people see either “70” and “50” or “70” and “20,” which doesn’t allow the situation described above of someone announcing that he or she doesn’t see an “x” and “2x” situation.  If it’s 30, then everything that’s been discussed happens, and so it must be 30.

The key basically is that if someone sees “x” and “2x,” they should know immediately that they are 3x.  If someone sees “2x” and “3x,” they are immediately on high alert to see if the “3x” person immediately knows that he or she is 3x.  If the “3x” person doesn’t know, that is an announcement that the “3x” person did not see an “x” and “2x” situation, which means that the person we started with must be “5x.”  So, in an “x” and “2x” situation, you know immediately that you are 3x.  In a “2x” and “3x” situation, if everyone says that he or she doesn’t know in the first round, that announces that no one saw (and the “3x” person in particular did not see) an “x” and “2x” situation, which means that you must be 5x.
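Puzzles like this can also be ground out mechanically with a possible-worlds model: enumerate candidate triples, then let each “I don’t know” announcement delete the worlds in which that player would have known their number.  A sketch (the generation bound is an arbitrary choice of mine, large enough that edge effects don’t touch the states that matter here):

```python
from itertools import permutations

def all_states(bound=200):
    """Ordered triples (p1, p2, p3) of distinct positive integers in
    which one number is the sum of the other two."""
    out = set()
    for a in range(1, bound):
        for b in range(a + 1, bound):
            out.update(permutations((a, b, a + b)))
    return out

def prune(live, i):
    """Player i announces 'I don't know': drop every state in which the
    two numbers player i sees pin down player i's own number uniquely."""
    seen = {}
    for s in live:
        view = s[:i] + s[i + 1:]
        seen.setdefault(view, set()).add(s[i])
    return {s for s in live if len(seen[s[:i] + s[i + 1:]]) > 1}

live = all_states()
for i in range(3):   # round 1: players 1, 2, 3 each say "don't know"
    live = prune(live, i)

# What can the first player (who sees 20 and 30) now conclude?
print({s[0] for s in live if s[1:] == (20, 30)})  # {50}
```

After the three first-round announcements, the world (10, 20, 30) is gone (the “30” person would have known in it), so the first player’s number must be 50, exactly as argued above.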

Arendt, Action, and Psychological Stuff

I know next to nothing about Hannah Arendt except what I’ve read on Wikipedia and Reddit (archived).  Nevertheless, it sounds really cool.

She defined the three human activities as labor, work and action, with two mutually exclusive spheres: the political and everything else.

Arendt introduces the term “vita activa” (active life) by distinguishing it from “vita contemplativa” (contemplative life), which represents her understanding of Western society. There are only three human activities: labor, work and action. They correspond to the three basic conditions under which humans live. Action corresponds to the political actions of anyone…

What’s striking to me (and I agree with it) is the stark division between the active life and the contemplative life.  There’s action and there’s talk (so I’m already disregarding Arendt’s actual philosophical definition of “action,” but oh well.)

According to Arendt, modern life is divided between two realms: that of the public in which “action” is performed, and that of the private, site of family life where the father ruled. It is in the public realm where one distinguishes oneself through “great words and great deeds” in the same way as personal glory is attained on the battlefield.

Again, emphasizing how what you do matters.  Your private life is just for yourself.  Your public life affects you and can bring you things (reputation, resources, i.e. money, connections) that don’t really exist or have meaning with just yourself in your private, home life.

Arendt claims that her distinction is unusual and new as it has not been attempted previously by the thinkers who concerned themselves with the subject of ‘human activity’, e.g. Karl Marx. She goes on to explain that “labor” is one of the only three fundamental forms of activity that are the human condition. It is repetitive and only includes the activities that are necessary to mere living, such as the production of food and shelter as well as any material production, with nothing beyond that. The condition to which ‘labor’ corresponds is sheer biological life.

So “labor” is stuff like eating, bathing, cleaning and hygiene, and perhaps some other health and maintenance-related activities.

“Work”, on the other hand, has a clearly defined beginning and end. It leaves behind a durable object, such as a tool.

I don’t grasp the meaning of this.  I’d like to think it means the same as “work” in our normal sense, although that’s probably wrong.  But anyway, surely there’s something other than the “life maintenance” things we do (like eat, sleep, and bathe in private at home) and the political life, which I’d guess, is our normal definition of “work.”  And the point of “work” is basically accumulating resources that allow us to enrich our lives further than what bare life maintenance provides us.

On the other hand, exercise improves mood (archived), and not in a new age-y way but physiologically:

Looking deeper, Lehmann and his colleagues examined the mice’s brains. In the stimulated mice, they found evidence of increased activity in a region called the infralimbic cortex, part of the brain’s emotional processing circuit. Bullied mice that had been housed in spartan conditions had much less activity in that region. The infralimbic cortex appears to be a crucial component of the exercise effect. When Lehmann surgically cut off the region from the rest of the brain, the protective effects of exercise disappeared. Without a functioning infralimbic cortex, the environmentally enriched mice showed brain patterns and behavior similar to those of the mice who had been living in barebones cages.

Humans don’t have an infralimbic cortex, but we do have a homologous region, known as cingulate area 25 or Brodmann area 25. And in fact, this region has been previously implicated in depression. Helen Mayberg, MD, a neurologist at Emory University, and colleagues successfully alleviated depression in several treatment-resistant patients by using deep-brain stimulation to send steady, low-voltage current into their area 25 regions (Neuron, 2005). Lehmann’s studies hint that exercise may ease depression by acting on this same bit of brain.

I couldn’t find anything on the following from a bit of low-effort Googling, but I think there are some who say that action helps to alleviate depression or improve moods as well.  And when I say action, I don’t mean Arendt’s political action nor physical exercise, but just general action, like doing things, instead of sitting and contemplating about doing.

Walking helps with thinking (archived), especially creative thinking.  Dipping into fanciful evolutionary psychology headcanon (or I might’ve also read this somewhere else), if we were all persistence hunters once (or simply had to walk a lot to gather berries and water for our hunter-gatherer tribe), we walked a lot daily, and the body uses this time and monotonous activity to let the brain think.  Many people talk about their subconscious helping them figure out problems – problems that they couldn’t when they were face-to-face with it at a desk – while they were doing something completely unrelated, even really smart people (archived):

Poincaré deliberately cultivated a work habit that has been compared to a bee flying from flower to flower. He observed a strict work regime of 2 hours of work in the morning and two hours in the early evening, with the intervening time left for his subconscious to carry on working on the problem in the hope of a flash of inspiration. He was a great believer in intuition, and claimed that “it is by logic that we prove, but by intuition that we discover”.

Of course, I don’t know what exactly Poincare did during his off-hours and it may not have been persistence hunting.  But even if it wasn’t, as long as it’s something habitual (archived):

To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants’ thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.

It seems that action improves mood, and I assume an improved mood improves action.  But why might the opposite also be true – why does inaction worsen mood and a worsened mood worsen action?  Again dipping into fanciful evolutionary psychology, perhaps it’s a survival mechanism.  When circumstances are bad, you want to conserve energy and action and stay away from possibly taxing or dangerous situations.  Basically, sit tight and wait out the night/rain/drought.  Of course, when this occurs not because of a lack of resources but for some internal mental reason (which is much more likely in modern life), it’s much harder, or at least more mentally complex, to get out of that spiral.  A modern person’s way out of inaction or depression is not the same as a hunter-gatherer seeing food for the first time in a while (some external thing happening to him) that might quickly improve his mood, spur him into action, and so on and so forth.  The tricky thing is that while action may improve mood, if an internal mental reason is what caused mood to worsen to begin with, the action isn’t targeting the source of depression.  Action in this instance is a solution that’s unrelated to the source of the depression/bad mood.  Action and its positive effects on mood might still be good enough to overcome internally-caused depression.  But I imagine that this disconnect between action and the source of depression is why modern depression isn’t easily solved by action and exercise, even if they undoubtedly help physiologically.  It’s interesting and crazy that we’re such contemplative beings with such big brains, but we’re still meat and water bags that are heavily influenced by our physical, biological, neurochemical existence.

The Theory of Interstellar Trade, by Paul Krugman (1978)

Archived

Assume we have two planets, Earth and Trantor, separated by a large distance, the traversal of which necessitates travel at velocities comparable to the speed of light.  Assume that Earth and Trantor are in the same inertial reference frame, i.e. they are not accelerating with respect to each other.

Assume that a spaceship traveling between the two planets travels at a constant velocity $v$.

Let’s say that from the perspective of an observer on one of the planets, the time it takes for a spaceship to make the trip is $n$.

Then, the time it takes for a spaceship to make the trip from the perspective of someone on the spaceship is

$\overline{n}=n\sqrt{1-\frac{v^2}{c^2}}$

(which is shorter than $n$).  The factor on the right is a well-known result from relativity, derived mathematically from the Lorentz transformation, that gives us “time dilation” from traveling at relativistic speeds.  Krugman demonstrates the above relation by representing the voyage in Minkowski space-time in a figure using imaginary axes.
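As a quick numerical sanity check of the formula (the function name and the convention $c = 1$ are my own, not Krugman’s):

```python
import math

def ship_time(n, v, c=1.0):
    """Convert a trip duration n, as measured on the planets, into the
    duration experienced aboard a ship moving at constant speed v
    (Lorentz time dilation).  Uses units where c = 1 by default."""
    return n * math.sqrt(1 - (v / c) ** 2)

# At v = 0.8c, a trip lasting 10 years of planet time is experienced
# as 6 years aboard the ship, since sqrt(1 - 0.64) = 0.6.
print(ship_time(10, 0.8))
```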

Let

$p_E\textup{, } p_T$ be the price of Earth goods and the price of Trantorian goods on Earth, respectively.

$p_E^*\textup{, } p_T^*$ be the price of Earth goods and the price of Trantorian goods on Trantor, respectively.

$r\textup{, }r^*$ be the interest rates on Earth and Trantor, respectively.

The first question Krugman asks is what is the correct interest rate on a planet with regard to interstellar trade, since depending on whether you’re on a planet or on a spaceship engaging in trade, the passage of time is different.  So putting ourselves on Trantor, we compare what happens to a Trantorian trader who can engage in investing in a Trantorian bond or engage in interstellar trade with Earth.

Interstellar trade with Earth would involve buying goods on Trantor, traveling to Earth, selling the Trantorian goods there and buying Earth goods with the proceeds, then traveling back to Trantor and selling the Earth goods there.  Let $c$ be the cost of outfitting the spaceship.  The cost of buying goods on Trantor is $q_T^* p_T^*$, where we define $q_T^*$ as the quantity of Trantorian goods to be traded.  Thus, the initial expenditure of the Trantorian trader while still on Trantor is

$c + q_T^* p_T^*$.

The trader then travels to Earth and then sells its (not his or her) $q_T^*$ goods at price $p_T$.  The trader now has money $q_T^*p_T$ and will buy Earth goods with this money to bring back to Trantor.  Earth goods cost $p_E$ so the quantity of Earth goods that can be bought is $q_T^*p_T/p_E$.  Let us call this quantity of Earth goods to be brought over to Trantor $q_E^*$.  So we have

$q_E^* = \frac{q_T^*p_T}{p_E}$

When the trader arrives back on Trantor, it will sell these Earth goods at price $p_E^*$, resulting in a revenue of

$\frac{q_T^* p_T p_E^*}{p_E}$
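The venture’s bookkeeping up to this point can be sketched in a few lines (a minimal sketch; the function and variable names are my own, not Krugman’s):

```python
def trading_venture(q_T_star, p_T_star, p_T, p_E, p_E_star, c=0.0):
    """One round trip by a Trantorian trader: buy q_T_star Trantorian
    goods at p_T_star (plus outfitting cost c), sell them on Earth at
    p_T, spend the proceeds on Earth goods at p_E, and sell those back
    on Trantor at p_E_star.  Returns (initial expenditure, final
    revenue), both from the Trantorian side of the ledger."""
    expenditure = c + q_T_star * p_T_star
    earth_cash = q_T_star * p_T    # proceeds from selling on Earth
    q_E_star = earth_cash / p_E    # quantity of Earth goods bought
    revenue = q_E_star * p_E_star  # Trantorian Dollars after returning
    return expenditure, revenue
```

For example, with $q_T^* = 10$, $p_T^* = 1$, $p_T = 2$, $p_E = 4$, $p_E^* = 3$, and $c = 0$, the trader spends 10, raises 20 on Earth, carries back 5 units of Earth goods, and sells them for 15 on Trantor.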

But the Trantorian trader also had the choice of investing its money in a Trantorian bond before leaving for Earth.  The venture expenditure of $c + q_T^* p_T^*$, if invested into a Trantorian bond instead, would:

after time $2n$, which is the time it takes for a round trip from the point of view of a trader who decided to stay on Trantor for that duration instead of going to Earth and back, become

$(c + q_T^* p_T^*)(1+r^*)^{2n}$

or

after time $2\overline{n}=2n\sqrt{1-\frac{v^2}{c^2}}$, which is the time it takes for a round trip from the point of view of a trader who actually makes the round trip and is thus on a spaceship for that duration (maybe he puts some money into a Trantorian bond and some money into the trading venture that he personally goes on), become

$(c + q_T^* p_T^*)(1+r^*)^{2n\sqrt{1-\frac{v^2}{c^2}}}$

So which perspective is right?  The perspective from Trantor or from the spaceship?  Krugman answers this by reminding us of the reasoning behind present value calculations, which is that of opportunity cost – any money that you choose not to possess today (by delaying possession of it until the future in exchange for a larger amount) is money that you could have invested in a riskless bond today that would have also grown in value by that future date.  And so the way I’ve framed it here sort of answers the question in advance: the correct value of the bond comes from investing it risklessly with some riskless bond issuer, like the Trantorian government.  The opportunity cost of spending money to outfit a trading venture to Earth is the lost opportunity of buying a Trantorian bond with that money and then receiving the proceeds by the time the trader would return from the venture, which according to the bond issuer, the Trantorian government, is $2n$ time.  Thus, we have Krugman’s First Fundamental Theorem of Interstellar Trade:

When trade takes place between two planets in a common inertial frame, the interest costs on goods in transit should be calculated using time measured by clocks in the common frame, and not in the frames of trading spacecraft.

Another way to think about it is that a bond earns interest because the bond issuer gets to possess cash now, invest it during some time, and afterwards will have more cash as a result.  The location that this is occurring is at the bond issuer’s location, which is Trantor.  The trader can always come back to Trantor and earn his bond interest and principal (assuming it’s a bond that automatically renews after maturing every time).  All this occurs on Trantor, and so the bond’s time progression is according to Trantor’s time progression.

Now, for simplification Krugman assumes that perfect competition reduces the profits of Earth-trading to 0.  In other words, we force the revenue earned from a trading venture to Earth that started with expenditure $(c + q_T^* p_T^*)$ to equal the revenue that would have been earned had we invested that same expenditure into a Trantorian bond and waited out the time it takes to make a round trip to Earth and back:

$\frac{q_T^* p_T p_E^*}{p_E} = (c + q_T^* p_T^*)(1+r^*)^{2n}$

Also for simplification, Krugman assumes away $c$, the cost of outfitting a spaceship:

$\frac{q_T^* p_T p_E^*}{p_E} = ( q_T^* p_T^*)(1+r^*)^{2n}$

$\frac{ p_E^*}{p_E} = \frac{p_T^*}{p_T}(1+r^*)^{2n} \hfill \textup{(Relative goods prices)}$
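Under these zero-profit assumptions, the relation pins down the price of Earth goods on Trantor given the other three prices and the Trantorian interest rate.  A small sketch (names are mine):

```python
def zero_profit_p_E_star(p_E, p_T, p_T_star, r_star, n):
    """Price of Earth goods on Trantor implied by the zero-profit
    'Relative goods prices' condition
        p_E*/p_E = (p_T*/p_T) * (1 + r*)^{2n},
    with the outfitting cost c assumed away."""
    return p_E * (p_T_star / p_T) * (1 + r_star) ** (2 * n)
```

With this price in place, a trading venture’s revenue exactly equals what the same expenditure would have earned sitting in a Trantorian bond for $2n$ time.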

So pursuing option 2, what price could this Trantorian bond fetch on Earth?  The Earthling has a choice of either buying and then bringing Earth goods to Trantor for sale or buying this Trantorian bond instead, which can be redeemed when he or she arrives on Trantor.  The value of a one T$ (Trantorian Dollar) bond on arrival at Trantor will be $(1+r^*)^{2n}$ in Trantorian Dollars.  Thus, we want to know what value of Earth goods, when sold on Trantor, would net the same amount of Trantorian Dollars, for that is the Earth price of the Trantorian bond.

Suppose the Earthling buys $q$ amount of goods on Earth to bring to Trantor.  The cost of that purchase is $p_E q$ in Earth Dollars.  Once on Trantor, those $q$ goods are sold for a total of $p_E^* q$ Trantorian Dollars.  Let this equal the amount that would have been earned if a Trantorian bond was redeemed instead.  Thus, we have

$p_E^* q = (1+r^*)^{2n}$

$q = \frac{1}{p_E^*}(1+r^*)^{2n}$

The original Earth price for this transaction was

$p_E q = \frac{p_E}{p_E^*}(1+r^*)^{2n}$

Thus, $\frac{p_E}{p_E^*}(1+r^*)^{2n}$ is the fair price in Earth Dollars for a Trantorian bond that can be redeemed on Trantor for $(1+r^*)^{2n}$.

If the Trantorian trader went with option 1, that means investing that one Trantorian Dollar in buying Trantorian goods to sell on Earth instead of buying a Trantorian bond and bringing it over.  One Trantorian Dollar buys $\frac{1}{p_T^*}$ quantity of Trantorian goods, which on Earth will sell for a total of $\frac{p_T}{p_T^*}$ Earth Dollars.  But from the “Relative goods prices” equation, we have that

$\frac{p_T}{p_T^*} = \frac{p_E}{p_E^*}(1+r^*)^{2n}$

Thus, one Trantorian Dollar invested in Trantorian goods and brought over to Earth can be sold for an amount of Earth Dollars that is the same as the fair price in Earth Dollars that a Trantorian bond will fetch on Earth.
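This equivalence between carrying goods and carrying a bond can be checked numerically (a sketch under the zero-profit price relation; the function names are mine):

```python
def bond_earth_price(p_E, p_E_star, r_star, n):
    """Fair Earth-Dollar price of a 1 T$ Trantorian bond that redeems
    for (1 + r*)^{2n} Trantorian Dollars on arrival at Trantor."""
    return (p_E / p_E_star) * (1 + r_star) ** (2 * n)

def goods_earth_value(p_T, p_T_star):
    """Earth Dollars fetched by selling 1 T$ worth of Trantorian
    goods on Earth."""
    return p_T / p_T_star
```

Whenever the four prices satisfy the “Relative goods prices” equation, the two functions return the same number, so the traveler is indifferent between the two cargoes.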
So as long as there is at least one Earthling who also wants to make a one-way trip to Trantor and is open to buying a Trantorian bond instead of buying and bringing over Earth goods to sell, the Trantorian trader is indifferent between bringing over Trantorian goods to sell or bringing over Trantorian bonds to sell – both will earn the same profit.  This shows that the First Fundamental Theorem of Interstellar Trade holds as long as, for one migrant going one way, we have another migrant going the other way.

What we seem to have is that as long as there is an “effective” round trip (either made by one Trantorian trader, or by one Trantorian and one Earthling migrant) and the assumption of no arbitrage holds (so on each leg of the trip, the traveler is indifferent between carrying goods or bonds), we have a relation between the Trantorian interest rate and the prices of goods, and none of this challenges the statement that the Trantorian bond’s interest gains run according to Trantorian time (the First Fundamental Theorem).  For Earth, we can construct the same scenarios except with the trip originating from Earth.  Krugman writes that he proves the theorem in the presence of transportation costs in a paper from the future.

Krugman then asks whether interest rates on Earth and Trantor will be the same or not.  The “Relative goods prices” equation (and its associated assumptions) is kept.  Even though transportation costs, as in $c$, the cost of outfitting a spaceship, were assumed away earlier, the “Relative goods prices” equation shows that there is an effective cost to transporting goods across stars that comes from the transportation time needed.
(If you are on Earth and you desire Trantorian goods, even if it costs no money for the Trantorian trader to outfit its spaceship to make the journey to Earth and sell its goods to you, the profit that the Trantorian trader makes by the time it returns to Trantor after its round trip needs to match the profit it would have made from just investing in Trantorian bonds and sitting at home for the same duration.  Thus, that gets built into the cost of Trantorian goods on planets that are interstellar distances away from Trantor.)

Could interstellar distances cause differences in the planets’ interest rates?  We imagine a scenario where a Trantorian trader buys Trantorian goods, travels to Earth, sells the goods there, invests the proceeds into Earth bonds, spends $k$ time on Earth, redeems the bonds and buys Earth goods with that money, travels back to Trantor, and sells the goods there.  This scenario involves the prices of goods and Earth’s interest rate; and there is always the alternative option: invest in Trantorian bonds at the beginning, sit at home for $2n+k$ time, and redeem after that.  By forcing a no arbitrage condition, we are then able to obtain a relation between the prices of goods, Earth’s interest rate, and Trantor’s interest rate.

In the first option, the trader buys $p_T^* q_T^*$ worth of Trantorian goods (where $q_T^*$ is the quantity of Trantorian goods bought), travels to Earth, sells these goods for $p_T q_T^*$, invests that in Earth bonds for $k$ time, earns $p_T q_T^* (1+r)^k$ after that, buys $\frac{p_T q_T^* (1+r)^k}{p_E}$ quantity of Earth goods, travels back to Trantor, and then sells these goods for a total value of $\frac{p_E^* p_T q_T^*}{p_E}(1+r)^k$ Trantorian Dollars.  Forcing a no arbitrage condition, we require that this revenue equal the revenue from investing the initial expenditure in Trantorian bonds for the same amount of total time, giving us:

$p_T^* q_T^* (1+r^*)^{2n+k} = \frac{p_E^* p_T q_T^*}{p_E}(1+r)^k$

$(1+r^*)^{2n+k} = \frac{p_E^*}{p_T^*}\frac{p_T}{p_E}(1+r)^k$

$\frac{p_E^*}{p_E} = \frac{p_T^*}{p_T}(1+r^*)^{2n} \hfill \textup{(Relative goods prices)}$

Putting the two equations together gives us:

$(1+r^*)^{2n+k} = (\frac{p_E}{p_T}(1+r^*)^{2n})\frac{p_T}{p_E}(1+r)^k = (1+r^*)^{2n}(1+r)^k$

$(1+r^*)^{k} = (1+r)^k$

$r = r^*$
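Numerically, once the “Relative goods prices” relation holds, the Earth rate implied by the no-arbitrage condition always collapses back to the Trantorian rate (a sketch; the function name is mine):

```python
def implied_earth_rate(p_E, p_E_star, p_T, p_T_star, r_star, n, k):
    """Solve the no-arbitrage condition
        (1 + r*)^{2n+k} = (p_E*/p_T*)(p_T/p_E)(1 + r)^k
    for Earth's interest rate r."""
    lhs = (1 + r_star) ** (2 * n + k)
    price_factor = (p_E_star / p_T_star) * (p_T / p_E)
    return (lhs / price_factor) ** (1.0 / k) - 1.0
```

For instance, with $r^* = 0.05$, $n = 3$, $k = 4$, and prices chosen to satisfy the relative-price relation (e.g. $p_E = p_T = p_T^* = 1$ and $p_E^* = (1.05)^6$), the implied Earth rate comes out to $0.05$ as well, matching the Second Fundamental Theorem.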

Thus, we have the Second Fundamental Theorem of Interstellar Trade:

If sentient beings may hold assets on two planets in the same inertial frame, competition will equalize the interest rates on the two planets.

In his conclusion, Krugman writes

I have not even touched on the fascinating possibilities of interstellar finance, where spot and forward exchange markets will have to be supplemented by conditional present markets.

I was thinking this was a light-hearted paper on a light-hearted topic, but it looks like there’s a lot more out there.  Thirty-five years later, Krugman gave the topic a mention (archived).