## “In r/badhistory, the view that technology is linear gets poked fun at every once in a while. Why is the view wrong?”

In r/badhistory, the view that technology is linear gets poked fun at every once in a while. Why is the view wrong?  Isn’t technology linear? Also, why is the West so dominant when compared to other once-great civilizations?

Are you familiar with the idea of local maxima? Imagine you are standing in a low spot between two hills. You can climb either hill. Regardless of which hill you climb, you are increasing your elevation. But one of the hills will not reach as high as the other.

In evolution, it’s called a fitness landscape. Paths of technological development can be similar.

A recent question on AskHistorians involved the development of iron weapons. The answer involved metallurgy – iron is frequently inferior to bronze, and it was only the development of consistent alloys and better techniques that gave iron an edge (a better edge, in this case). Focusing on bronze is advantageous – until another civilization manages to alloy a superior form of iron. Local maxima.

What I think is interesting is that once people discover that a new technique for metallurgy creates iron that is better than bronze, everyone starts to adopt it. This is some sort of direction to technology, is it not? One society goes in one direction and discovers a local maximum and another society goes in another direction and discovers a different local maximum. But if these two societies intermingle (whether it’s a peaceful or violent intermingling, take your pick), if the people in the two societies discover that one of the local maxima is higher than the other, then they’ve discovered a local maximum of a larger region and have made a step in the direction of the global maximum. Keep repeating this and as your map gets larger and more societies intermingle, your greatest local maximum gets higher and higher.
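
To make the hill metaphor concrete, here is a toy hill-climbing sketch in Python (the landscape, step size, and starting points are all invented for illustration): two greedy climbers start in different valleys, each gets stuck on its nearest peak, and comparing their end points picks out the higher of the two local maxima – the analogue of two societies intermingling and adopting the better technique.

```python
import math

def landscape(x):
    """Two hills: a lower peak near x = 2 (height ~3), a higher one near x = 8 (~5)."""
    return 3.0 * math.exp(-(x - 2) ** 2) + 5.0 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: keep moving uphill until no neighbor is higher."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=landscape)
        if best == x:       # stuck on a local maximum
            break
        x = best
    return x

# Two "societies" start in different valleys and each climbs its nearest hill.
peak_a = hill_climb(0.0)    # ends up near the lower hill at x ~ 2
peak_b = hill_climb(6.0)    # ends up near the higher hill at x ~ 8

# When the societies "intermingle", the higher local maximum gets adopted.
adopted = max(peak_a, peak_b, key=landscape)
print(round(peak_a, 1), round(peak_b, 1), round(adopted, 1))
```

Neither climber alone can cross the valley between the hills; only comparing results across climbers (societies) reveals which local maximum is higher.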

At the risk of summoning the hatred of Jared Diamond critics, I think he mentioned in one of his works that one way he “measures” a civilization is by its ability to dominate other civilizations and avoid being dominated by them (whether it’s violent, peaceful, economic, diplomatic, or cultural domination). What I’m saying is, when two societies – each having discovered a different local maximum – intermingle, if one dominates the other, it will have succeeded in that domination for some reason. That “some reason” is its local maximum. Once news of this knowledge spreads (if that “reason” can be identified and turned into knowledge), other societies will pursue that local maximum as well. As long as societies have enough knowledge of the past to pursue higher and higher maxima, they will be acquiring more and more things that enable them to “dominate other civilizations or avoid being dominated by other civilizations.” Could this suffice as a kind of definition of “technological progress”? If there is a “linear” component to it, it’s that as our “map” expands, the local maximum of that map increases, or at least doesn’t decrease.

Of course, this assumes that knowledge of the past remains and there are no catastrophes that set back civilizations in general. But I think catastrophes setting back technologies aren’t something that goes against the belief in “linear technological progress.” I think even staunch “linear progress” believers allow for the fact that unforeseen catastrophes can cause setbacks, or even that scientific knowledge is not a smooth process. I think the key question that both sides wrestle with is how progress seems to happen over generations, even if key inventions seem to happen by chance (like fermented foods or penicillin being discovered fortuitously by leaving things too long or less hygienic than intended). So could “societal domination leads to the adoption of more ‘dominating’ technologies” be a satisfactory explanation? This does mean that if societies that get dominated had wonderful, advanced technologies that get lost to time, it may be a long time until those technologies are rediscovered, if at all. That’s just a matter of us not finding those directions in the map that could lead to undiscovered local maxima. The key is that as more people interact and more societies intermingle, as long as we would rather dominate than be dominated, “dominating” technologies are going to be adopted more and more. We could try to describe what exactly counts as a more “dominating” technology (e.g. faster, cheaper mobility for people, more crops produced sustainably per area of land, etc.), but I think it’s easier to fall into incorrect claims with details like that. The key is “What technologies help you dominate other societies (or people) or avoid being dominated by other societies (or people)?”

## Variations of the Trolley Problem

Variations taken from this comment on Reddit.  I’d like to give my amateurish comments on each variation.

The Fat Person Variation

Note that in all of these variations, the six people at risk are tied down onto the tracks.  This is different from versions in which the six people are on the tracks of their own free will (perhaps workers, perhaps just taking a walk on the tracks) but simply cannot hear the trolley running towards them.  Thus, we can assume that the six people have been tied to the tracks against their will.

Most people would say not to push the fat person over the bridge to stop the trolley, betraying some sort of morality that is not strictly utilitarian.  In this variation, I think reciprocity (a very common theme in morality, I’m sure) gets highlighted.  Although the six people may be tied down against their will, the fact here is that you and the fat person, observing from the bridge, have more in common than with any of the people being tied down.  You would not want the fat person to be thinking of pushing you off the bridge (next time the situation occurs, perhaps the trolley is much smaller and lighter, so your body on the tracks would suffice, even if this time only the fat person’s body would be big enough).  Thus, you don’t push the fat person off because you don’t want to be pushed off next time.  However, I do think the assumption that “you and the fat person, observing from the bridge, have more in common than with any of the people being tied down” is needed.  If this wasn’t the case, it’s less clear.  In the extreme, say this is a war situation and you are in your base.  The six people tied down are your fellow soldiers (perhaps enemy spies came in the night and tied them down), and the fat man is an enemy prisoner-of-war (imprisoned in the base, perhaps under your guard, perhaps you captured him) who just happens to be beside you at this moment.  If you judge that you have more in common with the six tied down, pushing the fat prisoner-of-war off may be more likely: though you may be committing a war crime, you are saving six of your soldiers.  The theme is again reciprocity – you have more in common with your fighting mates than with an enemy.  If you’re in trouble next time, you’d want your fellow soldiers to sacrifice an enemy if it’ll save you.

Another aspect (still related to reciprocity) is that of shame or later social consequences.  If the situation happened in regular life (not war), pushing the fat man off the bridge would likely mean having to try to proclaim your innocence in trial or something.  I think there’s an assumption that it’s more likely that you’ll be tried for murder for pushing the fat man off than you’ll be tried for being a bad Samaritan for not doing anything on the bridge.  And even if you win your trial in the former case, no one would want to walk on a bridge with you ever again.  But if you were a king, or a god, or playing a computer game, and this situation happened to you, I think pushing the fat person off the bridge becomes more likely.  Or even simply if you’re a farmer and the fat person and the six people tied down instead are all livestock, like cattle or pigs.  So, in the end, it’s reciprocity, again.  Shame and social consequences occur because of reciprocity – people see that you won’t reciprocate, so they shame, ostracize, harass, or physically harm you.  But if you’re removed from that expectation and pressure of reciprocity, it is much less repugnant to be strictly utilitarian.  It seems that we’re first reciprocal, then utilitarian.

The Victim Variation

A different theme rears its head in this case, which is self-preservation.  If we allow the desire for self-preservation and the subsequent actions that it leads to (and it is likely that in a society where we reciprocate, if I want to preserve myself, I allow others to preserve themselves as long as it doesn’t endanger my self-preservation), then we allow the people that have been tied down to want the trolley to run over the people on the other side if it will save them.  But we can’t expect society to care for every person’s preservation – at some point, we leave people to preserve themselves.  We don’t hire security guards to guard every single street, intersection, door, hallway, and window in our society, or produce the best bear-suit armor for everyone to wear.  Instead, we just hire enough police, and have enough laws regarding helmets for motorcycle riders, seat belts for car passengers, and airbags for car makers.  We don’t know what caused the six to get tied to the tracks.  Maybe it was purely chance – an act of god (the six truly don’t deserve it).  Or maybe it’s wartime and these are enemy spies or traitors from your side that have been captured and are supposed to be humanely executed tomorrow, but a renegade official particularly angry at them has decided to execute them more gruesomely a day early in secret.  In any case, if it isn’t an act of god, something caused them to be there.  Maybe they weren’t careful enough.  Maybe they couldn’t buy bear suits for their protection in time.  Anything could be the reason, but the people on the bridge were somehow able to self-preserve enough to not be tied down, and the people tied down were not able to self-preserve enough to avoid being in their situation.  At some point, we ask people to preserve themselves.
A person tied to the tracks is allowed to wish for his or her self-preservation over others, and the people on the bridge are allowed to not have to risk their own self-preservation just because they’re on a bridge seeing the trolley bear down on those six.

The “Kantian” Variation

If this is implying that the moral duty might be to pull the lever, it seems to say that if you are selfless enough, then the moral duty is the utilitarian answer of saving five people instead of one (despite it being called the “Kantian” variation).  If we allow people to desire self-preservation, we allow the one person to let the trolley kill the other five people.  If not, it seems (at least according to the author of the graphic’s title) that we default to utilitarianism (a sort of “Kantian utilitarianism”?  I have no idea).  So, first comes self-preservation, then comes reciprocity.  Then comes “default” utilitarianism.

The Veil of Ignorance Variation

If self-preservation comes first, then it is more likely that we’ll be one of the five tied down than the one tied down, so we’d pick for the trolley to run over the one person.  While we’re picking the same choice as the utilitarian choice, that’s not our reasoning – our reasoning is purely for self-preservation.

The “Hedonist” Variation

I think reciprocity is the name of the game here (and note how self-preservation is not, since if we’re standing at the lever, our self-preservation is not in danger).  If we feel something in common with the six people tied down, reciprocity makes us save them.  If we don’t feel the expectations and pressures of reciprocity – if we are a king, a god, or we’re playing a computer game – then we may be the hedonist and let the six die for our amusement.

The Game of Chicken Variation

Allowing self-preservation allows us to not pull the lever.  If we were kings or gods or playing a computer game, we might command the two to pull the levers so that we get a utilitarian outcome.  If both the red-dotted person and the black-dotted person were not standing on the tracks, it is possible for both to pull the lever for the utilitarian outcome.

My amateurish analysis is that it’s likely that most people choose their actions according to self-preservation, some form of reciprocity (depending on who you feel in common with – who are your allies and who are not), and utilitarianism in that order.

## Econopunk Fiction A.1.1

I make my living by hacking hospital records to find out what high-profile businessmen and politicians are in very poor health.  Then, I make bets that the financial securities of companies connected to those people will decrease in value.

All aspects of life that are imaginable to most people seem to be artificial.  Light, air (air conditioning), seats and cushions, clothes and doors.  They are built well, almost too well.  But there’s one thing about each of them that’s uncomfortable.  The light is too cool.  The air conditioning makes a steady, smooth whooshing sound of flowing air and an almost imperceptible yet unmistakably present draft, felt only partially on the body (only at the feet, or on the left shoulder and chest).  My seat is comfortable, but where my skin is in contact with my clothes and the seat, it gets a bit too hot, and so the rest of my body is a bit too cool from the air conditioning while my buttocks, thighs, and back are a bit too hot and damp with sweat.  The lights in the room feel a bit too bright.  Outside, through the blinds, if I peek out to see the night, it’s so dark.  When sleeping, I use an eye mask, so it can feel too dark, but in the mornings there’s a slit somewhere at my nose where light shines in through the mask way too harshly.  I play an internet game that, in order to play optimally, requires actions at unpredictable times of the day: 7 AM, 11 PM, 3 PM, 4 AM.  I usually don’t play optimally – just once or twice at regular times of the day when I have free time – but the option to play it optimally is there.  The setting of the game is medieval high fantasy, like Lord of the Rings.  I don’t know why I would play this game when life is the complete opposite of that setting.  Things like honor and honorific titles and elves and orcs are actual things in this game, but to play the game well, you need to optimize strategies with numerical calculations and simulations.  Excel of the Rings Online.

Back to the earlier descriptions of the setting of life – things that you would expect to depend on nature are man-made, and they are man-made very well – almost too well.  And there’s an aspect of them that is completely unintended and hard to predict, and that’s what makes them uncomfortable and feel artificial.  Like Hong Kong in the summer: sweltering outside, but your skin and dress shirt get drenched in cold sweat inside.  Plaza-like areas outside have too few people for a dense, populous city, and catwalks connecting buildings have too many people using them, making the plaza spaces below look way underused.  The plazas are shiny.  The ground is shiny and either silvery, white-ish, black-ish, blue-ish, or some combination of those colors.  The catwalks connect large buildings that are always a bit too big for comfort to the mind (e.g. 200 stories high) but normal inside (the interior is built like any normal building).

In my world, I’ve found that businesses invest a lot in IT security to protect their information.  That includes health care organizations and insurance companies.  But hospitals themselves don’t have the best IT security around, and doctors are relatively easy to phish information from.  So I arbitrage that difference.  Business information is expensive but hospital information is cheap.  So I “buy” hospital information and “sell” business information.  This is the life of an econopunk.

But there are many econopunks in an econopunk world.  A part of each of us needs to be a rational agent in order to survive in this rational existence.

The above is a work of fiction titled “Econopunk Fiction A.1.1.”

## Sleeping

Jordi Alba: “I sleep for 12 to 13 hours at night. Then I nap for two, three or four hours more. It’s one of the keys to my strength.”

The more you sleep, up to some limit (maybe 8 hours, maybe 12 hours, maybe 17 hours), the more effective you are when you are awake.  I wonder if there is an “optimal” sleep-to-awake ratio depending on what your daytime activities are.  For professional athletes (though it would depend on their sport), perhaps the optimal sleep-to-awake ratio is higher than average.  For intellectual activities (though this would also depend on what kind of intellectual activity: logical, creative, using knowledge you already know, or learning and then using knowledge that you never knew before), I wonder what the optimal ratio is.  (Of course there would be individual differences, but let’s say we can average those out.)

## Utilitarianism and Today

The basic idea of utilitarianism, taken from the Stanford Encyclopedia of Philosophy’s entry, The History of Utilitarianism, is:

Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good…  one ought to maximize the overall good — that is, consider the good of others as well as one’s own good.

Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone’s happiness counts the same. When one maximizes the good, it is the good impartially considered. My good counts for no more than anyone else’s good. Further, the reason I have to promote the overall good is the same reason anyone else has to so promote the good. It is not peculiar to me.

Economics uses utility as a numerical unit of “good,” and in general, as far as I can tell, the maximization of utility is the goal (while remaining agnostic and open to different approaches regarding the distribution of that utility among people).  Without putting too much thought into the philosophical implications, one can say that this certainly makes economics conducive to being a mathematical study.  And by restricting itself to being a mathematical study (and, depending on the situation, restricting itself to being positive (descriptive) instead of normative (making value judgments)), it is able to remain agnostic.  Mathematically and logically, 1+1=2, but no statement is made on whether what that implies is what we actually want for people.

So… is liberty, the right to do as one wants free from the interference of others, so long as what one wants does no harm to others. (And merely offending the moral sensitivities of others does not count as harm. Especially since others often confuse feelings of repugnance with feelings of moral disapprobation.)

The above famous position on liberalism, taken from the Stanford Encyclopedia of Philosophy’s entry on John Stuart Mill, is probably one sort of ideal that many liberal democracies work towards.  It’s also a position that’s conducive to a mathematical approach. As long as at least one person’s utility increases and every person’s utility doesn’t decrease, an action that causes that would be considered desirable.

$\textup{If } \exists\, i \textup{ such that } \Delta x_{i}>0 \textup{ and } \Delta x_{j} \geq 0\ \forall\, j \textup{, then the action is } \ddot\smile$

If there exists an action that would increase the utility of a person such that the utilities of all people don’t decrease, then that action is desirable.
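
A condition of this shape is usually called a Pareto improvement, and it is easy to check numerically. Here is a minimal sketch in Python (the function name and the example utility numbers are my own invention):

```python
def is_pareto_improvement(before, after):
    """True if no one's utility decreases and at least one person's increases."""
    deltas = [b - a for a, b in zip(before, after)]
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

# One person gains and nobody loses: desirable under the criterion above.
print(is_pareto_improvement([1, 2, 3], [1, 2, 4]))   # True
# Someone loses, even though total utility rises: not a Pareto improvement.
print(is_pareto_improvement([1, 2, 3], [0, 5, 5]))   # False
```

Note how the second example is rejected even though the sum of utilities goes up – the criterion says nothing about trading one person’s loss against another’s larger gain, which is exactly why it stays agnostic about distribution.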

However, this doesn’t say anything about the distribution of utility that we have in our world today.  Not that the below says much about the distribution of utility, but taken from the History of Utilitarianism entry:

Hutcheson was committed to maximization, it seems. However, he insisted on a caveat — that “the dignity or moral importance of persons may compensate numbers.” He added a deontological constraint — that we have a duty to others in virtue of their personhood to accord them fundamental dignity regardless of the numbers of others whose happiness is to be affected by the action in question.

Personally, I think the above is what we have today in times of peace, in societies not ruled by absolute autocrats.  There is some sort of attempt at utility maximization going on by almost everyone, almost everywhere.  But some minimum amount of dignity – or some minimum amount of utility that the society considers to be the minimum amount of dignity – is given to a person just by virtue of being a person.  What I mean are things like welfare, shelters for those in need, and minimum levels of trust we give to strangers, acquaintances, and family.  This does not say much about the distribution of utility except to say that everyone deserves some minimum level of utility.  However, it does cover an area that Mill’s famous position alone doesn’t cover.  If someone falls into misfortune, Mill’s position only says to not harm that person by your actions.  Francis Hutcheson’s position says that we have a duty to accord that person “fundamental dignity,” or in utility-speak, a minimum level of utility.

I am comfortable with this position.  It gives value to the state of simply existing as a human being, and it actually matches what we have in reality.  It is agnostic about what that minimum level of utility should be as well as about what a desirable distribution of utility should be.  These things, especially the former, change as technology develops anyway.  In my opinion, it is good for economics to be agnostic, just like math is agnostic.  In the end, value judgments are the realm of philosophy and politics, and there exists the field of political economy, on which positive economics does not rely.  Math makes economics hard for people, though, since math is logic and our ability to do logic depends on our physical make-up (i.e. our neurons, neurochemistry, and general physical health and stability).

Anyway, economics and math are agnostic about the distribution of utility, and a philosophical position (and a value judgment) stated by Hutcheson provides a simple, realistic position from which to start.  Everything beyond that is agnostic.