A block away from my workplace in the small town of Jaltenango de la Paz, Chiapas, a new community center just opened. It's called, in Spanish, "The Effect of Soma." If that seems like a funny name, it is: the literary function of Soma in Brave New World is that it aids in pacifying and isolating people to facilitate totalitarian control. But it is also pure pleasure, hence the name of the community center.
On a recent trip home my brother mentioned that he's skeptical of the possibility of an objective ethics. Along with the community center thing, these two little stimuli prompted the biggest breakthrough in my own thinking on the philosophy of ethics in a long time.
I apologize in advance to those with little background in philosophy for being technical. I apologize in advance to those with much background in philosophy for being sophomoric.
I've subscribed for years to a sort of adapted utilitarianism. The idea is that we can construct an objective ethical system using a function that takes as input the happiness and suffering of everyone concerned, across all future time. The biggest problem with this (other than that it's useless as a tool for making ethical decisions, ironically enough) is the Soma problem: there's no way to tally happiness and suffering such that a Soma-fueled authoritarian dystopia is unethical.
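The Soma problem can be made concrete with a toy tally. Everything here is a hypothetical illustration (the function, the numbers, the population sizes are all made up), but it shows how a sum of happiness minus suffering can rank a drugged dystopia above a free society:

```python
# Toy sketch of the utilitarian tally described above. All names and
# numbers are hypothetical illustrations, not a real model of welfare.

def utilitarian_score(people):
    """Sum net happiness (happiness minus suffering) across everyone."""
    return sum(p["happiness"] - p["suffering"] for p in people)

# A Soma-fueled dystopia: everyone drugged into near-constant bliss.
soma_dystopia = [{"happiness": 9.0, "suffering": 0.5} for _ in range(100)]

# A free society: real highs, but real lows too.
free_society = [{"happiness": 6.0, "suffering": 2.0} for _ in range(100)]

# The dystopia wins the tally -- the Soma problem in miniature.
assert utilitarian_score(soma_dystopia) > utilitarian_score(free_society)
```

No matter how you tune the numbers, as long as Soma reliably buys more pleasure than freedom does, this kind of tally endorses the Soma-prison.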
So what I'm drawn to now is a Teleology of Liberation. The idea is that an action is moral insofar as it increases the capacity of people to pursue self-actualization.
A few notes:
- To act out your own liberty is neither inherently moral nor immoral. The question is about the impact of your actions on the liberty of others.
- We can apply a progressive metric to tally liberty, much as we can for utilitarianism. i.e., the liberty to eat weighs much more heavily than the liberty to fly around the world.
- A moral social order is then an equitable one, one in which everyone has a similar capacity to pursue self-actualization.
- Whereas Nietzsche and Rand's liberty includes the liberty to dominate, this precludes it. Domination is immoral. A Soma-prison is immoral. Oppression is immoral. Check check check.
- I think it's healthy for ethical systems to be robust to a loss of the noumenon. That is, they should function similarly if we deny that an objective reality exists. I mostly like this stipulation because a lot of my friends are postmodern-ass anthropologists who think the idea of objective truth is colonialist. An intersubjective utilitarianism is easy to construct, since people have a sense of their happiness (and suffering) that they can communicate. I hear Sartre constructed an intersubjective deontology, but I can't tell you much about it. Anyway, an intersubjective liberation teleology also seems straightforward, since I have some sense of the constraints on my life. But it would be fair to point out that liberty is much squirrelier than happiness and suffering.
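The progressive metric in the second bullet can be sketched the same way. This is one made-up weighting scheme among many (the liberty list, the ranks, and the halving weights are all my own assumptions), but it shows how basic liberties can be made to dominate luxury ones:

```python
# Hypothetical progressive tally of liberties: basic liberties (low rank)
# weigh far more than luxury liberties (high rank), analogous to
# diminishing marginal utility. The list and weights are illustrative.

LIBERTIES = ["eat", "shelter", "speak", "travel", "fly around the world"]

def liberty_weight(rank):
    """Weight falls off quickly as liberties become less basic."""
    return 1.0 / (2 ** rank)

def liberty_score(person_liberties):
    """Tally one person's liberties with progressive weights."""
    return sum(liberty_weight(LIBERTIES.index(l)) for l in person_liberties)

# Someone who can eat but not jet-set outranks someone who can
# jet-set but not eat.
assert liberty_score(["eat"]) > liberty_score(["fly around the world"])
```

Under a scheme like this, an equitable order scores well precisely because spreading basic liberties around buys far more than concentrating luxury ones.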
I have no doubt that this idea has been thoroughly expounded, probably a long time ago. I'd be interested if anyone can point me to where I can read more about it (brandonistenes@gmail.com).
Some Background on Ethics
Don't read this if you already sort of get what I said above. It is not good.
One way to divide up thinking on ethics is into deontological ("it's what you do") and teleological ("it's what results"). A point of notable contrast is The Trolley Problem. They both get in trouble in certain tricky (or not-so-tricky) situations. Kant, deontology's MVP, would have told the Nazis where his Jewish neighbors were, because lying is bad, so don't do lying. John Stuart Mill, utilitarianism's most famous champion (utilitarianism being the Bacardi of teleology), would have pushed someone out of an airplane, provided doing so was necessary to ensure the arrival of medicine that would have saved two lives. Most people are uncomfortable with both of these.
This problem with deontology can be circumvented with what I call an "information hack." Kant's dictum was "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." So if your maxim is "don't lie," you're going to be a shithead snitch, and you deserve to get beat up. But if the maxim you apply to the above situation is "don't tell state-appointed murderers where the people they want to kill are," you'll be in much better shape. However, this might require an arbitrary amount of context to resolve: you might have to describe the present scenario in arbitrary detail to figure out what the moral thing to do is, and the moral decision itself might change depending on how much context is provided, as happens with Kant and the Nazis. And who's to say what's the right level of detail to put in a maxim?
This is similar to a problem that lies much closer to the heart of teleology, which is that you might have to know an arbitrary amount of stuff, to an arbitrary point in the future, in order to fully measure the ethical import of an action. This makes it pretty useless for figuring out what the moral thing to do is. But at least it's obvious how ethics should ultimately be measured: by taking the whole universe from that point forward, forever, into account. There's no such easy answer for deontology.
I don't really mind either of these problems, since I don't think this kind of ethical theory is actually useful for living ethically. I think normative ethics is a more relevant body of thought for that kind of stuff, but I'm not very familiar with it either. Mostly I just wing it.
But there's trouble! Some people don't think that there is a single, fixed, objective world that we can refer to. I complain about these people a lot; some of them are close friends of mine. For me it's a very productive tension. I'd rather not know exactly how they feel about it. In any case, I think it's productive to ask which of these theories stay standing when we pull out the rug of reality from under them.
The vanilla statement of utilitarianism presumes that there is an objective (or "noumenal") reality in which happiness exists in objective, if unmeasurable, amounts. What happens if we remove the objective, leaving the intersubjective? By the intersubjective I mean minds and the things they communicate with each other, allowing the possibility that they perceive genuinely different things and that neither perception is more "right" than the other. Utilitarianism survives in style – people have some sense of their own happiness and suffering that they can communicate with others.
How might we construct an intersubjective deontology? I'm sort of at a loss, but some folks say Sartre figured it out. I gotta read Sartre one of these days.