On Human and AI Ethics

By Peter Voss | 20 October 2016

Ethics is not an end in itself.

Right and Wrong, Good and Bad are not Platonic forms to be discovered.

But let’s start with another major source of confusion: Ignoring the distinction between descriptive and prescriptive ethics—how we actually behave, versus how we should ideally behave. I wrote on this some time ago.

Now what is the purpose of ethics or morality? Why do we need it, or want it?

We need it as a guide to survive and to optimize our lives. It is useful—no, crucial—for us to have generalized rules, or principles, by which to live. Life is too complex for us (or an AI) to figure out the best action for every micro decision we face: Should I lie or tell the truth? Cooperate or not? Pray for a solution, or work on one?

There are objective answers to such questions. We can and should treat ethics as a science. Currently, most people don’t even attempt that.

We all automatically acquire, develop, and internalize some principles. That is our moral compass. However, few people try to rationally explore how they might discover and learn the best principles: those that best optimize life and minimize moral conflict, both internal and external.

Good and bad only have meaning in terms of ‘good for whom?’, and ‘good to what end?’. In ethics it means good for the individual, and by extension good for society. The end is human flourishing.

Advanced general-purpose AIs (AGIs) will clearly need to understand and deal with actual individual human morality (descriptive ethics). They will also need to respond effectively to, and mediate between, different existing value systems. This is (just) knowledge and skill acquisition, as in any other domain. Crucially, it involves context, clarification, learning, and reasoning.

AGIs will also help us navigate and improve our morality (prescriptive). We’ll have the best personal psychologists and philosophers one could wish for. Their intelligence will help us discover the best principles to live by, and the best goals to pursue.

Reprinted with permission from the author.

Peter Voss is an entrepreneur, inventor, engineer, scientist, and AI researcher. In 2001 he co-coined the term ‘AGI’ (Artificial General Intelligence), and has been working towards achieving high-level AI since then. He also has a keen interest in the inter-relationship between philosophy, psychology, ethics, futurism, and computer science, and frequently writes and talks on these topics.


