I write stories. The deep-down reason I write stories is that there are stories I want to read—stories I need to read—that nobody has written or will write. I have a massive, detailed image in my head of the story I will someday hold in my hands, and I work every day to give it physical form.
This is the real reason. Whenever I speak of other reasons for writing, I am actually speaking of my justifications—the things I tell myself so I can feel OK about becoming an epic fantasy author instead of a physicist (my original intended career, from ages 8-13) or some world-improving self-sacrificing activist, struggling to fight world hunger, climate change, or harmful ideologies.
If we are being completely honest, spending one’s life laboring to mend the big problems of humanity is the most direct, effective, and efficient way to make a worthwhile, lasting impact on the world. Sure, becoming a world-famous fantasy author will give me a voice to enact change and the money to do it—but I have an accurate sense of my own potential, and I know that I could do more good if I got rid of the middleman and went straight to saving the world, in the style of Elon Musk or Mother Teresa.
But I’m not going to do that. I’m going to write fantasy books. In this post I am going to explore the thought processes that lead me to feel at least somewhat justified in doing so.
When I am trying to justify my choice of career to myself, to give myself rational, intelligent motives to mask my simple need to read the stories only I can write, I start by going through the list of the purposes of fiction:
First and foremost, fiction is entertainment. Whatever we may tell ourselves, we don’t read to find deeper meanings or gain wisdom from a story—we could do that much more efficiently by reading non-fiction. We read because it’s enjoyable, in the same way I don’t eat peanut butter for the protein or play video games to analyze the computer engineering that went into the game engine. And we shouldn’t have to apologize for this—after all, learning about new cultures or viewpoints is a form of entertainment (and a much more interesting form than gratuitous violence or Dan Brown-style thrillers).
Second, fiction increases our powers of empathy. Reading from another’s viewpoint—especially a wildly different viewpoint, such as we might encounter in fantasy—is a workout for our empathy muscle, practice in understanding other people and refusing to see them as soulless husks with whom it is acceptable to go to war, or whom it is OK to ignore or harm without good reason. In essence, fiction is weight-lifting that makes you a better person, instead of trivially making your muscles larger.
Third, fiction can have deeper meanings. I don’t mean this in the vague spiritual sense—I mean that written fiction is perhaps the most convincing medium for conveying ideas we are unlikely to come across in real life. Lessons about how to live your life. An example of how to recover from heartbreak. A new viewpoint on religion. Or complex moral dilemmas.
This last one is what I am here to talk about today. My brother, Brendan de Kenessey, is soon to be a philosophy professor, and over his six years of graduate school and five years of undergrad I have spent a great deal of time (because I call him once a day) talking with him about difficult moral problems (his area of expertise).
Moral dilemmas in fiction challenge readers to reconsider their assumptions about what’s right and what’s wrong, and how the characters react to moral problems can affect the course of the story. It’s a trend in modern fiction to have grey morality—where everyone is both good and bad in different ways, and you don’t really know how to feel—but writers who excel at this tend to simply show their characters doing both good and bad things, instead of (the more interesting route) doing bad things in order to accomplish good things.
It’s this issue—Do the ends justify the means?—that I want to delve into. And, spoiler alert: the answer is not simply yes or no.
I am going to present various scenarios, and there are two relevant things to keep in mind:
1. It is even more important to figure out why you think something is right or wrong than to figure out what you think is right or wrong.
2. Evading the scenarios with sarcastic answers along the lines of “I would just solve the problem in this other way” ignores the point of these exercises. Choose between the two options I give you—for the purposes of these questions, no other options are possible.
Scenario 1. You are wearing $400 shoes. You aren’t particularly wealthy, but for some inexplicable reason you really like fancy shoes, so you bought them. You are walking along the sidewalk, and you are the only one on that street. Nearby, you see a small child starting to drown in a fountain—your shoes are extremely tight, and you don’t have time to take them off, so your choices are to either let the child drown and keep your shoes, or ruin your shoes and save the child.
Which should you do?
Response: Obviously you should save the child. Nobody would say your shoes are worth more than a child. In fact, you seem obligated to save the child and ruin your shoes—you would be condemned if you chose your shoes over the child. You would be a terrible person.
This situation becomes interesting, however, when compared with the next scenario:
Scenario 2. There exists a charity (several, actually; the best is probably Heifer International) that can save a child’s life on another continent for $400. If you give the charity $400 (or, realistically, less than that), it will provide necessary antibiotics, build a well, give a homeless family a goat, or the like, and a child who otherwise would be dead within a month will have a good chance of living to adulthood.
Say there exists a hypothetical charity for which you can be just as certain that $400 will save a child’s life—a child with the same chance of living to old age—as you were certain that the child you rushed into the fountain to save would survive. Both children would have the same basic quality of life.
Are you obligated to give the charity $400 to save a child?
Response: Stated in bald mathematical terms, Scenarios 1 and 2 are identical: for $400, you can save a child’s life. Yet the first scenario seems much more obviously a yes, you should save the child, asshole situation than the second. After all, more than ten thousand children die from preventable causes every day (according to various sources I found by Googling “how many children die each day”), and you could save them for the price of a fancy pair of shoes.
There are clearly two possible answers to this question:
1. Yes, you are just as obligated to give the $400 to that charity as you are to save the child from the fountain…
2. …or no, you are not obligated to give the $400 to charity, even though it would be a good thing to do.
It seems the first answer comes from the view that the right thing to do is whatever will maximize the total happiness of the world. It doesn’t matter how you do it; it just matters that you take the action that will most efficiently improve the well-being of the sum total of humanity.
This view is consequentialism. The consequences are what matter.
The second answer, on the other hand, seems to place some emphasis on the nature of the action itself. It seems to say that there’s some distinct difference between rushing into a fountain to save a drowning child and building a well (for example) for a starving family in a faraway country.
This view is deontology. The action matters too.
I cannot think of a single example of a moral dilemma that does not come down to the distinction between deontology and consequentialism. They are the only two aspects to any action: what the action is, and what the results of the action will be.
But from these two simple ways of looking at morality—one putting the focus on what the effects of any given action will be, and the other focusing on the action itself—we can derive an extremely complex (and still unsolved in the field of professional philosophy) set of moral issues and questions.
Right now, you may have convinced yourself that one of the two views is clearly right and the other clearly wrong, and you’re about to click away from this article. You’re either thinking well, if there’s no difference between the fountain and the charity, I should clearly give all my money to charity, and I’m obviously a horrible person because I don’t, or the fountain was right there, so it’s clearly different—and also, why am I so obsessed with expensive shoes that I can’t remove them before wading into a fountain?
…so, behold a set of three scenarios that I hope will illustrate just how nonsensical your hastily made decision is:
The Trolley Problem
Scenario 3a. You are standing next to a train track, beside the lever that switches an oncoming train from one track to another. There are two tracks: on one, a single person is tied down; on the other, five people are tied down. You know nothing about these people, and you don’t have time to untie them. The only thing you can do is pull the lever to divert the train to the one-person track, or leave it fixed on the five-person track (where it was originally headed). The train will arrive in ten seconds. Should you let it kill five people, or make it kill only one?
Response: Obviously, you should switch the track and kill the one person instead of the five. It sucks to be the one person, but it’s better to sacrifice one person and save five than to sacrifice five and save one.
Scenario 3b. You are on a bridge above a single train track. Five people are tied to the track, about a hundred feet down from the bridge. An extremely fat man is leaning over the side of the bridge, and you know that if you push him over, he will land on the track and his mass will stop the train that’s about to pass. Should you push him over and save the five people, or let the train hit the five and let the fat man live? (Ignore the fact that he’s likely to die soon anyway because of his poor health.)
Response: This is trickier, but I still think it’s right to push him onto the track. He didn’t ask to be there, but neither did the people tied to the track. Still, because he’s a bystander rather than someone tied to a track, this is a harder decision than Scenario 3a.
Scenario 3c. You are a doctor, and you have five patients in your waiting room. They all have AB- blood, and they are going to die in two days if they don’t get organ transplants. (Each needs a different organ.) No organs are available for AB-, and none will be available within the next two days.
A random, unrelated person comes into your office for a routine checkup. Someone with AB- blood. You are secretly James Bond, and you know that you have the ability to kill this person and harvest the person’s organs without being caught by the police. You could kill the person, conduct five surgeries over the next two days, and your five patients will live a normal lifespan.
(Remember, we know nothing else about any of these people. Pretend they are all the same age, gender, and all have families and friends. Don’t complicate this with random other details that I haven’t provided you with.)
Response: Well, no, you should not kill the patient. It’s intuitively wrong—there’s some key difference between this and the first two scenarios.
The question is: What is the difference between this scenario and the first or second scenario?
Speaking from the consequentialist point of view, these three situations are identical. One person dies, but five survive—the other choice is for five people to die and only one to survive, which is simply mathematically worse. From the consequentialist point of view, you should still kill the patient, because doing so will increase the general happiness of the world more than letting the patient live and letting your other five patients die.
But if we look at the situation from the deontological point of view, everything gets much more complicated.
It seems that in the first situation, the choice was between letting one person die or letting five people die—there was no real difference between the two choices. An extreme deontologist might say that, because the train would hit the five people without your intervention, and would only hit the one person if you moved the train-track lever, there is a difference…but that seems like a very weak distinction, to me. Here, my inner deontologist and my inner consequentialist agree.
In the second situation, it’s a bit more difficult. You have to go out of your way to push this fat man onto the train track—in effect, you have to kill one person to save five. If you had a magic power that allowed you to stop a train by slitting a person’s throat, would you be justified in saving the five people tied to the train-track by murdering a random passerby? Are there significant differences among moving a train-track such that the train hits a person, pushing a fat person off a bridge onto the train-track, and directly murdering someone, so long as each of those actions has the same consequences?
I think there are differences between the three actions—meaningful differences that might change my decision if only one or two people were at stake instead of five. As it stands, those differences don’t matter enough to me to outweigh the lives of five others.
Now, the third scenario. Nobody would do this. Even if you convinced yourself that this was the right path—that killing your healthy patient and harvesting his/her organs would save five other patients’ lives, patients who didn’t choose to have organ problems, just as your healthy patient didn’t choose to have AB- blood and James Bond disguised as his/her doctor—you would still sense, on an intuitive, gut level, that you can’t just kill this patient to save five others.
And so we return to the question: What’s the difference?
I think it’s pretty damn clear that there is a difference. And I’m not sure that it comes down to the distinction between killing and letting die.
A Theory That Can Help Answer This Question
This is, more or less (definitely less), the doctoral thesis of my brother, Brendan de Kenessey.
Here’s another scenario, which will lead us in to the aforementioned theory, and will return to the Trolley Problem:
Scenario 4. You have an oracle who tells you that if you have an extramarital affair during February, your spouse will never find out. Won’t even suspect.
Furthermore, you have a very weak conscience. You and the person you have an affair with won’t feel particularly bad about the affair, because you’re sociopaths. But you still want to make the right choice.
Considering that this affair won’t affect the happiness of your spouse, but will raise the good feelings of both you and your fellow fornicator, shouldn’t you have the affair?
Response: The consequentialist says yes, it will improve the overall happiness of the world.
But it still feels wrong. Why does it feel wrong?
Here’s my Kenesseyian answer: It is wrong because it breaks the promises inherent in your relationship with your spouse. Even if neither of you noticed or cared, the relationship between you two would be undercut by this action.
You have relationships with everyone in the world. For the vast majority of people, it’s a minimal relationship: your relationship with Dmitry Shabalobakov (made-up person), who lives in Bumphuck (somewhere), is simply a mutual agreement not to kill each other, or do any bad things directly to each other. It’s the same relationship you have with basically everyone: I won’t steal your shoes, and you don’t steal mine.
For a smaller subset of people—the citizens of your country—your relationship is slightly more involved: You both agree to vote in elections and not commit crimes. In addition to the minimalist relationship you have with every person in the world, you agree to be a somewhat responsible citizen of your country: I won’t burn down the local coffee shop, and you don’t poison the donuts. It’s an upgraded version of your relationship with a random Chinese/Russian/Indian/African/other citizen you don’t interact with, and it’s upgraded because your actions typically more directly affect citizens of your own country than citizens of other countries. So you have more of a relationship.
For a much, much smaller subset of people (your friends), you have a much more in-depth relationship. You’ll hang out once in a while. You’ll respond to texts and emails. You’ll let them know of important things in your life, and you’ll listen to them talk about their own lives. You won’t lock them in a cage in your basement and feed them only banana peels.
And then you have your best friends, immediate family, romantic partner(s), and children (in that order, usually). Your best friends are an upgraded version of your friends; your immediate family members aren’t necessarily your friends, but you have a very deep relationship with them because there’s basically no way to avoid them unless you want to cut yourself off from everyone in your family; your romantic partners are upgraded versions of your best friends; and your children are people you created and who depend on you, so you have the greatest, most complex and relevant relationship of all with them.
Generally speaking, you have more of an obligation not to do something bad (murder, eat, steal from, recommend Eragon to) to someone if you have more of a relationship with them. It’s always bad to drop someone off a small cliff, but it’s clearly worse to drop your child or spouse off the cliff than to drop a random stranger off the cliff.
It is true that your child will suffer more from being dropped off a small (non-lethal) cliff by its parent than a random stranger will suffer from being dropped off that same cliff by another random stranger. In addition to the physical and emotional damage, the relationship between the two people has been greatly damaged—and so the loss is greater for the child than for the stranger.
That’s the Kenesseyian view of the two situations. They aren’t equal, because one is more harmful than the other, damaging both the relationship and the person, instead of merely the person.
The Kenesseyian view is one particularly convincing way to bridge the gap between what we intuitively believe to be the right action in each of the three Trolley Problem scenarios and what we can consciously reason our way into thinking.
Scenario 3a: Your relationships with the one person tied to the track and the five people tied to the other track are identical. There is no reason to choose the one over the five, so we save the five.
Scenario 3b: Sure, you have an implicit understanding with the fat man that you won’t push him off the bridge and use his mass to stop a train. But it seems that promise is outweighed by the promise to the other five strangers that you will do what you can to save them—the relationships aren’t identical (it would be wrong, for instance, to push the fat man off the bridge to save only one person), but the relationship with the fat man isn’t strong enough to justify letting the train kill the five strangers.
Scenario 3c: Here, it seems that your relationship with your one healthy patient, which includes the implicit contract I won’t kill you, is stronger than your relationships with your five dying patients. To put it in math terms: the promise I won’t kill or harm you > 5 × (I will do what I can to save you). So, even though harvesting one patient’s organs to save five lives would increase the overall happiness of the world more than letting the five patients die, it still isn’t the right action to take.
This way of thinking makes sense. These implicit promises between people are what allow society to function, instead of devolving into senseless anarchy. We go into a car dealership thinking I’m going to get a car, not My dismembered bones will strengthen the structural integrity of a car—else we wouldn’t go into a car dealership.
Shelly Kagan’s Dilemma
Kagan is one of my brother’s old professors (and I took his online Yale course on death), and he seems to be one of the most unapologetic, uncompromising consequentialists in the world—he will give you the argument in any and every circumstance for taking the route that will increase the overall happiness of the world, regardless of the toll it might take on any one person.
In his book, The Limits of Morality, he rephrases the philosophical conundrum above in terms of killing vs. letting die. The consequentialist view—that both acts result in the same thing, and therefore are the same—holds that there is no meaningful distinction between killing and letting die. Hence, failing to rush into the fountain and save the child at the cost of your $400 shoes is the same as murdering a child who is holding $400 shoes and stealing those shoes. Both have the same result: a child is dead, and you now have $400 shoes.
And, in that particular case, it may be so—it certainly seems almost as wrong not to save the child as it does to murder one. But the deontological view, which puts more emphasis on the action itself, holds the more intuitively pleasing position that killing is worse than letting die.
Kagan’s dilemma rephrases the question of killing vs. letting die in an interesting way:
Scenario 5. Terrorists have kidnapped some number N of random people and will kill them unless you kill Alex (another random person). This will be the last hostage situation of all time (so you don’t need to worry about the message this will send to future terrorists)—just focus on the here and now, on whether letting N people die outweighs killing Alex.
Response: It’s unclear, if we are going to make a distinction between killing and letting die, that it’s right to kill Alex instead of letting one or maybe even two people die. Some would say killing is so much worse than letting die that it would even be wrong to kill Alex to save as many as ten people from dying—but even those people would choose to kill Alex if the number N of people became high enough.
Choose the number you think would be the tipping point, at which it is right to kill Alex instead of letting that number of people die. That number is N.
By choosing this number, it seems to be the case that you are saying you are willing to kill one person to save N lives.
Scenario 6. Your uncle’s will is going to leave you a fortune upon his death. Clearly, it’s wrong to kill the uncle and take the money—you have to wait for the uncle to die.
But is it wrong to kill the uncle and take the money if you give the money to charity and save N people’s lives? Say your chosen N is 100. If you know your uncle will leave you a million dollars, you also know that donating a million dollars to Heifer International (or some similar charity) will save far more than 100 lives. If it is right to kill one person to save N lives, how is it not OK to kill your uncle to save N lives?
Response: This is the same as the previous scenario—the only difference lies in how the question is framed. Why does it seem worse to kill your uncle than to kill a random stranger (Alex)? In both cases, the choice is between killing one person and letting N people die, yet in the first case, you seem obligated to kill the one person, while in the second, you seem not only not obligated but also wrong to kill the uncle.
Or, maybe you don’t seem wrong to kill the uncle. Maybe the situation has convinced you that it’s right to kill the uncle to save N other lives. (And, by the way, the uncle is still a random, average other person—he has lots of money, but he isn’t running a charity or anything that is unusually good for the world. Killing him is just like killing Alex.)
Putting the question in terms of the intermediary factor of money makes it all seem less clear. Obviously, we should sacrifice our $400 shoes to save someone drowning in a fountain—we are obligated to do so, and we are bad people if we don’t.
But doesn’t this imply that we are also obligated to give most of our money to charity? Assuming you know the charity is perfectly efficient and non-corrupt, and that every $100 saves a life that will now have a good chance of reaching old age, how is it possible that we can work $80,000-a-year jobs (for example) and not give $79,000 to charity to save 790 lives? Is our own comfort of greater worth than the lives of those who don’t even have the opportunity to work jobs that would give them the money to survive?
In fact, even if we give $79,000 to charity, how can we justify keeping the last $1,000 for ourselves? Are we each worth ten starving children?
Bill Gates plans to give 95% of his wealth to charity before his death. While this is all well and good, how can he possibly justify keeping 5% (~$1.6 billion) for himself and his family?
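For the numerically inclined, here is a tiny sketch of the uncomfortable arithmetic above. The flat $100-per-life figure is this post’s simplifying assumption, not any real charity’s measured cost-effectiveness:

```python
# Toy model of the "salary vs. lives saved" arithmetic.
# ASSUMPTION: a perfectly efficient charity where a flat $100
# saves one life (the hypothetical figure used in this post).

COST_PER_LIFE = 100  # dollars, assumed

def lives_saved(donation: int, cost_per_life: int = COST_PER_LIFE) -> int:
    """Lives saved by a donation, under the flat-cost assumption."""
    return donation // cost_per_life

salary = 80_000          # the example $80,000/year job
kept = 1_000             # what you keep for yourself
donated = salary - kept  # $79,000 given away

print(lives_saved(donated))  # 790 lives saved
print(lives_saved(kept))     # 10 — the "ten starving children" you keep
```

The point of the sketch is only that, under the assumption, the numbers are linear and merciless: every dollar kept is trivially convertible into a fraction of a life, which is exactly what makes the "where do we draw the line?" question so hard.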
My point here is not to argue that we should be giving all our money away. That’s the mindset that leads to communism, which is wrong-headed and harmful on several levels (read Terry Goodkind’s Sword of Truth series if you want to understand the flaws of communism more fully), and the answer to the question Should we give all our money to charity? is still a resounding No. My point is to raise the questions so we can address them more fully, and so that we can figure out why we aren’t obligated to give all our money to charity, instead of simply deciding that we have no such obligation.
The way we seem to view it is this: Everyone is expected to be decent, but nobody is expected to be an angel. We all have to be minimally good and kind, but—even if it is a good thing to give large amounts of money to charity—we are not expected or obligated to do so. Just as we are not expected or obligated to devote our lives to helping others.
Some of these questions I feel I can resolve with the Kenesseyian perspective. We shouldn’t kill our uncle and give the money to charity, because our relationship with our uncle outweighs our relationships with starving people elsewhere. The bar for what is unacceptable to do to another person is whether you and that person could jointly decide to do that thing—and, just as you and your spouse cannot decide as a pair for you to have a secret affair, it isn’t possible for your uncle to agree to be murdered to save the lives of others. Even if you could convince him to give all the money to charity, you could not convince him to let himself die first.
Others of these questions are also answered by my brother’s theory of morality, but I am still left unsure about them. Yes, I think that my relationship with myself outweighs my relationship with hundreds of thousands of strangers—cold as that may sound—and so I feel justified in becoming a fantasy writer instead of a hybrid of Elon Musk and Mother Teresa. And, even though if I struck it rich and became a millionaire, I would give the vast majority of that money to charity, I still feel OK about drawing a line—I will keep at least a hundred thousand dollars for myself, and nobody would begrudge me that.
Yet I am not a professional philosopher. This post isn’t meant to argue for or against consequentialism or deontology (although I think the answer lies in some middle-ground), but instead to survey the different views and provide fodder for discussion. To quote Hoid in Brandon Sanderson’s The Way of Kings, “The purpose of a storyteller is not to tell you how to think, but to give you questions to think upon. Too often, we forget that.”
So, all the above questions are questions I would like you to respond to in the comments. (If you haven’t noticed, I respond to nearly every comment.) Comment sections are the bowels and bane of the Internet, but I’d like to change that—let’s have a serious discussion of consequentialism and deontology (and the specific thought experiments addressed above).
But, first, I do have one last thing to say:
In Stories: Do the Ends Justify the Means?
I am not a professional philosopher, but I am a professional fantasy writer. And I have something to say about consequentialism and deontology in storytelling.
I love the idea of a serious moral quandary in a story. Consider, for example, the movie Captain America: Civil War. I love the idea of a story where there are two or more sides on an issue, and it’s unclear who is in the right.
I have rarely read or seen one of those stories. Almost every story either chooses the deontological route (don’t murder someone to save others, don’t lie or deceive even if it’s for the greater good, etc.) or the consequentialist route only in extreme scenarios (yes, you can ruin Ender Wiggin’s life for the greater good, but only because it’s the only hope of saving the entirety of mankind and there’s no other way; yes, you can murder one person to get the hostages released, but only for a whole lot of hostages).
In Season One of Lost, there’s a moment when the villain says “Give me the pregnant woman or I will kill one of you every day,” and nobody even thinks of immediately giving him the pregnant woman. People are reluctant even when the villain does as he said he would and kills someone. Furthermore, the writers don’t make it seem OK to sacrifice the woman and her child for the good of all—and it isn’t necessarily OK, but it should have been a whole lot more unclear than it was.
In Captain America: Civil War, the deontological side (the Avengers shouldn’t be beholden to the government) is clearly shown to be the right side. And, what’s more, in the Marvel universe, it is the right side, because the government is corrupt. But, not only is Iron Man/Tony Stark portrayed as being wrong in accepting government supervision, but he is also portrayed as emotional, out of control, and possibly manipulative of a young Spider-Man in pursuit of his end. (Consider the final scenes, where he freaks out and attacks the Winter Soldier for a past crime that he wasn’t responsible for.)
Civil War wasn’t a genuine moral dilemma—it was a dilemma with a clearly portrayed right side, and the disagreements between the various characters ultimately boiled down to miscommunication. (The biggest cop-out in a moral dilemma is to withhold information from the characters and then, once everyone learns the whole situation, have them all agree—as if there were no real moral dilemmas once we have all the relevant data.)
Have you read The Stormlight Archive? If not, skip this paragraph. (I’m writing this post before Oathbringer comes out.) I worry that Sanderson will eventually show Taravangian to be in the wrong, instead of in the grey area he’s actually in (or even in the right). Either he didn’t have all the information, and the approach he took was needlessly brutal, or he did have all the information, but he still shouldn’t have done what he did. I don’t exactly expect Sanderson to portray him like this, considering I have an extremely high opinion of Sanderson as a storyteller, but I’ve never seen an author give us a similarly consequentialist situation and not eventually renounce the character’s actions.
Have you read Tigana? If not, skip this paragraph. Alessan d’Tigana enslaves a wizard in order to use his magic powers in the fight against the two dictators of the peninsula. This is the only time I’ve ever seen a true, acknowledged grey area—as Guy Gavriel Kay said in the afterword, “I wanted to…[show] a darker side to such a link: and that wish found an outlet in Alessan’s binding of Erlein. I hoped to explore, as part of the revolt would chronicle, the idea of the evils done by good men, to stretch the reader with ambiguities and divided loyalties in a genre that tended (and still tends) not to work that way.”
The vast majority of fiction fails to face the true ambiguities of certain moral dilemmas, all of which come down to whether the ends justify the means used to achieve them. We all say with our work: No, it is never acceptable to harm a minority to help the majority.
Is this one-sided position really the only one we want to acknowledge or explore? Fiction—and fantasy fiction in particular—is the realm of exploring the what if? questions, and yet we rarely even consider exploring the true moral dilemmas that permeate our world.
Life is full of grey areas. Even the divisive issues where nearly everyone holds one side or the other, and demonizes anyone who holds the opposite view, usually should be resolved with a middle ground. And it is time our fiction stopped sidestepping the issues.
Fantasy is the what if? genre. What if we stopped ignoring the grey areas in morality?