Anti-consequentialism

Consequentialism is awesome, except when it’s not. In particular, the whole idea that “inaction to prevent something from happening = action to make it happen, minus the cost of action” is a fantastic tool to improve our ethics, sanity-check extraordinary claims like “death or disease is good,” and generally be more efficient. However, it’s also a superweapon that should be banned under the Geneva Conventions or something. No one can possibly live up to this standard, and therefore this argument allows you to bulldoze anyone’s position and decisions. It denies any decent person who doesn’t want to be a murderer any choice and the right to be happy, because most of the things that make us happy aren’t exactly the most cost-effective ways to prevent deaths in the world. What’s worse, once people notice that you have a rhetorical bulldozer, they’ll just start ignoring it (don’t try this in the physical world), and you won’t be able to make any arguments at all.

But what’s even much, much worse is that if you succeed and convince people that they’re literally Voldemort for merely existing, they may as well give up and start doing other Voldemort things too. Unless you have really internalized this argument (in which case you’re not reading this, since you sold your PC and donated the money to charity – you wouldn’t kill a person to be able to browse Facebook, which means that it’s unethical not to give up facebooking to save a life either), you probably agree that murdering three or four innocent people to buy a new car, in the sense of not donating that money to an anti-malaria charity instead, isn’t as bad as murdering them in the sense of breaking into their houses, blowing their brains out, and taking the money. But imagine that you succeeded in persuading people that it’s basically the same thing. We already know that you can trick people into believing that cheating on their partner isn’t that big of a deal simply by telling them that they’re statistically likely to cheat. If there’s nothing one can do about being a villain, they will redefine evil. People rationalize even potential actions, and for darn sure they would rationalize actual inaction (even I’m doing that right now). Now imagine you convinced them that any rationalization of inaction they came up with also applies to action, and that it’s totally OK to shoot people as long as they can escape the criminal justice system. Oops.

OK, what about those people whose moral core prevents them from going Voldemort? Maybe they will start helping others so much that the fact that they’re suffering doesn’t even matter? No, they won’t. They will drown in depression and self-loathing, but still not become perfectly rational anti-death optimizers, BECAUSE THIS STANDARD IS FREAKING IMPOSSIBLE TO LIVE UP TO.

For practical reasons, we probably need an ethical framework that allows us to distinguish bad actions from good actions without labeling the whole peacefully living population murderers, and without concluding that murder is an a-OK thing as long as you’re not really enthusiastic about it. Oh, and this ethical framework definitely shouldn’t say that it’s OK for Bill Gates to go on a shooting rampage and still be a nice guy, while a regular Joe is a cold-blooded murderer because he hasn’t made every possible effort to maximize the amount of money donated to charity. And we can totally have all these nice things if we simply accept one irrational axiom: inaction doesn’t equal action.

11 thoughts on “Anti-consequentialism”

  1. I think there is a very common misconception that consequentialism says anything about which people are bad and which are good. What it actually determines is whether one action is better than another. A bad person in consequentialism is actually the same as a blameworthy one. That is, if blaming or punishing someone leads to good consequences, they’re a bad person. It’s completely possible that buying a car instead of donating to an efficient charity is as bad as killing four children instead of killing none. But that doesn’t actually mean the blame or punishment should be the same. Bad or good people are about virtue, and virtue determines whether you should cooperate with a person or not. Virtue ethics is mostly about what you should do so that other people will like you and cooperate with you. Always doing consequentialism::optimal actions isn’t easy, but such a person is a High Saint, and being a High Saint shouldn’t be easy under any ethical system, or it’s not strict enough to enable continuous improvement.

    1. Doesn’t that only make things worse, though? If you have a person who picks up a gun and shoots four children for evulz, it’s probably not very cost-efficient to reason with them. You actually need the whole criminal justice system to deal with them. On the other hand, a decent person, especially a highly scrupulous one, who is considering buying a car can probably be much more easily guilt-tripped into donating to charity instead. So again it turns out that yelling at the average Joe is a more efficient strategy than trying to eliminate all crime.

  2. You have a point there. I guess part of the problem is human psychology. Guilting a person about something everyone does usually creates some misery and resentment and then fails. Maybe it’s different if you’re a known High Saint, as it works in most religions, so that people know you actually walk your talk, but I’m really not sure the optimal campaign would be built on guilt. In the least convenient possible world, where guilting rich folk is actually the best tool possible to eliminate worldwide suffering, and where you can calculate all the externalities, I’d bite the bullet and say that yes, you’d better do that. Eliminating crime may look inefficient if you count only the obvious consequences (it may even look net negative), but its utility comes almost exclusively from the crimes that haven’t been committed because the legal system is working.
    I have another problem with preference consequentialism, though. If we can have preferences about the future and they actually matter, wouldn’t our moral calculations of today be dominated by the preferences of the dead?

    1. But we don’t really consider the preferences of the dead, do we? Except for the last will, which is only partially respected anyway, we usually don’t include in our calculations what the dead would have wanted, nor do we expect future people to do it with our preferences. Unless, of course, we persuade them that they should, but then those become their own preferences.

  3. That’s how we do it. And as far as I understand utilitarianism, preferences are a real-valued function on the set of 4D universe-states. So a person can have preferences about the state of the world long after that person’s death, and those preferences matter for the utilitarian calculation. But then why should we discount the preferences of dead people the way we do now? Isn’t that wrong (immoral)?
    There’s also another issue I’m confused about.

    1. I always interpreted preference as a property of a mind. That is, you only care about preferences insofar as they can cause satisfaction or dissatisfaction in those who can experience them. Thus, there’s no preference if there’s no one to experience it.

  4. I’m not sure if you’re saying that we can’t have preferences over anything other than what we feel – and I guess that’s more like hedonic utilitarianism, which I don’t endorse for obvious reasons (I don’t want to be wireheaded, and I don’t want to be locked in a superficial simulation).
    Or whether morality is a property only of the minds that exist right now – which probably means it’s dynamically inconsistent, and I believe that’s a sign of a bad theory. Say you have an artist, and she wants her pictures to last a hundred years; while she is alive, your burning these pictures _after her death_ is bad. But then she actually dies – and now it’s okay-ish.
    Or maybe utility is a real function of (mind, 4D state of the world)? Yes, I agree, but then we take all the minds (that matter, at least) and try to combine them into some sort of Utility = U(4D state of the world) objective morality (sketched below).
    There may even be a kind-of-crazy idea that a mind matters no matter whether it’s in the past or in the future – sort of like how in HPMOR (spoilers?) Harry thinks about what the children of their children will think about his actions (end spoilers?). And my inner model of Robin Hanson says that for the future people we can use… Guess what? Right, prediction markets. But I doubt the same trick would work for the past people, or at least it wouldn’t be as easy to implement.
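    A minimal sketch of the aggregation described above, with the per-mind utility u(m, s) and the weights w_m as illustrative notation rather than anything proposed in the thread:

    $$U(s) = \sum_{m \in M} w_m \, u(m, s), \qquad s \in \mathcal{S},$$

    where $\mathcal{S}$ is the set of 4D world-histories and $M$ is the set of minds that count. The open question in the thread is whether $M$ includes minds that no longer exist (or do not yet exist).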

    1. I guess what I’m saying is halfway between preference and hedonistic utilitarianism. I acknowledge the right of people to have preferences that by every single objective and subjective measurement lead only to their suffering and unhappiness (which normally isn’t really accommodated by hedonistic utilitarianism), but I think that the knowledge of the fulfillment of those preferences has to be at least theoretically reachable by the agent, even if only with a low probability – as with cryonics: if it doesn’t work, then the preferences about what to do with the dead body don’t matter, but if it works, then they do; therefore, overall, they matter.

      In the least convenient possible world, where aliens, previously unheard of, abduct me in my sleep and put me into a perfect simulation, I’ll bite the bullet and say that it’s OK. In every realistic world, you probably know about the development of simulation technologies and anticipate the possibility of being put there, which gives you anxiety and therefore goes against your utility. In the artist example, if she has zero followers who like her art, but on the other hand everyone assures her that the pictures won’t be burned, and then suddenly after her death everyone decides that it’s OK to burn them – I guess I’m fine with that. In realistic worlds, she either anticipates the possibility of the pictures being burned, which gives her anxiety and goes against her utility, or she has followers who will object to the burning even after her death based on their own utility, and after a while the pictures will gain historical significance, even if no one personally likes them very much.

      Or, well, just as a thought exercise – I won’t be seriously defending this idea – why don’t we say that this is yet another reason why death is bad? Once we fix this problem and remove death from the natural order of things:

      1. We may disregard the previously dead, and the population of newly dead people will be small enough for this not to be a huge problem (i.e. population growth will outpace the accumulation of the dead, so the preferences of the living will outweigh those of the dead, even if we count both at the same rate).

      2. People won’t normally be anticipating their deaths, and thus won’t really think of, or hold, preferences that reach beyond them.

      1. As for my preferences, here is a list of possible situations in order of diminishing utility:
        – a good, checkable, and exitable simulation
        – a poor but checkable and exitable simulation
        – no simulation
        – an uncheckable and unexitable simulation I know is being developed
        – an uncheckable and unexitable simulation I know nothing about.

        The perfect simulation is too easy a bullet to bite. Let me make it a little bit harder (hehe).
        The aliens you’ve never heard about – because they’re too smart to use spaceships with astronauts instead of tiny probes – abduct you while you’re sleeping and put you inside a simulation. You’re their Chosen One because of some of their alien values, and they’re very careful to fulfil your preferences, though as cheaply as possible. So the simulation is far from perfect. It’s just good enough to fool you for an indefinite time. Most humans are either scripted, or don’t exist, or are generated only when you are about to interact with them and are basically p-zombies. The world’s details, photos on the Internet, and land features are created dynamically when you look at them. Meanwhile, the aliens kill all the other real humans in the world by slowly and painfully devouring them with nanobots, as that is their most cost-effective method that doesn’t violate your preferences.
        The problem with the artist is not that you say you won’t burn her pictures and then you do. It’s that while she’s alive, the moral evaluation of burning her pictures after her death gives a negative result, and after she dies it gives a small positive one (you kinda like looking at flames and don’t care about the pictures). It changes without any additional information input. If her followers are not born yet (let’s assume her first true follower will be born a year after her death – that happens), their preferences don’t matter either (do they?), unless you care about them (and you don’t). The artist’s anxiety doesn’t really matter for our calculation either, as it doesn’t really depend on whether you decide to burn her pictures after her death or not. If she doesn’t want anxiety, this theory probably says you should lie to her, saying you’d never do that, and then burn the pictures anyway. Very much like the hedonic one, actually.
        An argument against death? It’s difficult for me to dislike any argument against death, as we sure need many more of them. Yay anti-deathist tribe. Seriously. But I still don’t really understand the idea. It’s not that we aren’t allowed to disregard the preferences of the dead. It’s that doing so may be wrong. Probably in the same way that not caring about people you don’t know is wrong. It’s especially evident when a person sacrifices their own life for a cause.

        1. OK, I see the point. Now that I think about it, there’s a simpler counterexample: if it’s unethical to instantly evaporate sleeping people, so that they’re guaranteed to feel nothing, then there has to be a way to extrapolate one’s utility beyond the reach of one’s feelings. Although the simulation is a stronger argument: in the case of evaporation, one may object that global utility has to be maximized and it’s therefore unethical to destroy happiness-generating entities, while in the case of the simulation, happiness continues to be generated.

          But that creates all sorts of wonderful problems, the preferences of the dead being only one of them. Now people are allowed to have preferences concerning the past, the insides of black holes, and – I’m not sure, but it could be the case – even non-existent entities like souls, etc.

  5. I guess that yes, you may. I have some preferences about souls, actually. I’d prefer them to exist. I’d prefer people to have souls and to actually be souls, and to be immortal, and to go through different lifetimes to learn and to improve, with all the memories preserved inside a soul – inaccessible during its mortal lives, but fully accessible for learning and introspection in the periods in between. I’d prefer all the evil people to get the same opportunity and walk a similar path, as well as all the good ones and the ones in between. I’d prefer there not to be any dead ends where a soul may be stuck forever. I’d prefer the soul to go through lots of cycles, learning and improving itself, before going a step higher and then higher again, until it becomes a lesser god, a creator of a universe – and I’d prefer this step to be only a beginning. I’d like it to have the capability to imagine and feel Graham’s number and to be able to recount something greater from its own real experience.
    The problem, actually, is that these preferences almost never inform my actions, as none of my actions are going to make this picture more real. I kind-of hope that it may be real, the way a person may hope that a number chosen randomly from the uniform distribution on [0; 1] is exactly 0.5. The probability is about the same, anyway (see the note after this comment). But the reality of this view doesn’t depend on my actions (well, a good Singularity is probably the closest we can get), so it doesn’t inform my actions and is morally irrelevant for me (and, unless you’re a god, for you as well).
    Your preferences about the past may be morally irrelevant for you and for contemporary people, but they are (were?) acausally relevant for the people of the past (that doesn’t mean those people always acted according to them, but that’s their moral failure). So yes, people can have preferences about anything, maybe even about pi being four, but the only ones that are morally relevant for you are the preferences about states of reality that you can change. Where “reality” includes brain states, of course. So the only preferences that are morally relevant for you are those about possible real things inside your light cone. For other people, though, your preferences are relevant when they’re about possible real things inside their light cones, whether you’re a part of those cones or not.
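    A quick note on the probability analogy above: for $X \sim \mathrm{Uniform}[0, 1]$, $P(X = 0.5) = 0$, since any single point of a continuous distribution has measure zero – so the probability being compared to is exactly zero.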
