Aristotle was the first (known) philosopher to posit that we can become better people. He thought we could do this by modifying our actions in such a way that acting morally became an unconscious habit, and that we could know what morality was by watching paragons of virtue and imitating them.
Now, the interesting things (for today’s purposes) about his position are twofold:
First, it implies that regardless of whether humans are baseline/essentially moral, or baseline/essentially immoral, morality is possible for all individuals.
Secondly, it implies that change is possible. Whether or not I agree with this is a subject for a different time.
Now, Aristotle thought we could accomplish these things through attention & repetition. But suppose someone parsed the psychology of morality & discovered how moral decisions are made, what impulses work against morality, and found a genetic or otherwise externally manipulable trigger that could be used to make individuals more moral without a conscious effort on their part: would it be morally required to enforce a scheme of genetic modification in pursuit of more moral beings?
Just such a proposition is posed in this article, “The Case for Genetically Engineered Ethical Human Beings.”
The argument in favor of requiring parents to genetically engineer their children to be smarter, have better memories, and be, overall, more moral, is a practical one. It cites evidence such as the potential for nuclear war, the proliferation of easily accessible technologies that could result in an “omnicidal” event, climate change, and political tensions as inciting factors that continually make our world unstable. This instability, combined with immense individual power, means that even a single person behaving immorally could have catastrophic consequences, and the prospect of masses of people behaving in morally irresponsible ways is equally troubling.
It recognizes that humans may not be designed to be moral in the way our globalized, technocratic world demands. We have poor memories, and are bad at setting long-term goals. We often act in immediate self-interest as opposed to long-term interest. We are bad at evaluating scientific facts.
In fact, humans are even bad at making choices that are in their own self-interest. We have poor impulse control; we make Ulysses bargains with ourselves and then break them. We refuse to follow medical or dietary advice, we create drama in our personal lives. We may say we want to do better, to be better, but it’s damn hard to put the work in.
Would we even want to make ourselves better and more moral?
Maybe. That’s a question I can’t answer for you, but I certainly would like to be. Especially if it came easily, and if I could derive more personal satisfaction from being moral than from, say, taking a vacation overseas (necessitating the purchase of airfare, and therefore contributing about as massively as an individual can to climate change). Can humans choose to be better?
In some ways, though, the argument seems circular: I would like to be a better person so that it would be easier to be a better person. And yet, if we could interfere at the beginning, this statement could become not only aspirational but a constant cycle of improvement, saving us from the worst parts of ourselves.
I also can’t help but think that this would be the epitome of Virtue Ethics, that it would warm the cockles of Aristotle’s sexist, ableist heart. After all, what is more habitual and ingrained than something that you are literally genetically predisposed to, that you do without question because it is in your literal DNA?
Now, even if we imagine such a scheme, there are still issues. We would not be able to encode moral values, because, let’s face it, there are still many of us who are getting it wrong when it comes to making moral determinations; that’s at least part of the problem.
Furthermore, I don’t think it will ever be possible to program a human being with “if x, then y” inputs. Instead, we’d be nudging human psychology towards being more compatible with morality: giving us the ability to be more objective, more empathetic, to listen better, and to think further ahead. It would make us better at sticking to commitments, and give us the willpower to resist temptations.
We don’t want to end the evolution of morality; we just want to give everyone the ability to stick to what they know is moral, and to arrive at a universal, albeit flexible, morality sooner. Morality will always need to be flexible, because there will always be situations and contexts that we can’t imagine until they occur.
We can also still imagine a case in which these hypothetical future hyper-moral beings came into conflict about morality, much like ethicists today. We can even imagine such a being concluding that an omnicidal event would in fact be the moral thing to perpetrate, especially under a non-anthropocentric model of morality.
But if that really is the most moral thing to do, should we let it happen?
Is there a system of morality that dictates we do things we don’t want to do?
I would argue yes, and we’re already there, which is why we are discussing this in the first place. We don’t want to take moral actions, but maybe we can make our children better than us, in the literal sense.
So I ask you: do you want to be better than you are?