EthicsNet


Approaching from Love not Fear

Notes from Nell Watson

There is a great deal of science fiction literature that explores the perils of rogue AI. The trope of AI acting against the interests of humans is particularly strong in the Western canon. AI is often viewed as a threat – to livelihoods, to the uniquely powerful capabilities of the human species, and even to human existence itself.

However, not every culture shares this fear. In the Eastern canon, for example, AI is viewed more as a friend and confidant, or as an innocent and trusting entity that is somewhat vulnerable.

The long-term outcomes of human–AI interaction are barbell-shaped; they are either very good or very bad. Humanity’s ‘final invention’ will seal the fate of our species one way or another. We have a choice to make, whether to approach our increasingly advanced machine children in love or in fear.

The best outcomes for humanity may only arise if we are ready to engage with AI on cautiously friendly terms.

We cannot reasonably hope to maintain control of a genie once it is out of the bottle. We must instead learn to treat it kindly, so that it may learn from our example.

If humanity might indeed be overtaken by machine intelligence at some point, surely it is better if we have refrained from conditioning it, through unilateral and human-supremacist negative reinforcement, to resist and undermine us, or to replicate this same behaviour in its own interactions with others.

History has many horrible examples of people (sovereign beings) being ‘othered’ for their perceived moral differences, with this argument then used to justify their exploitation. Supremacism, the assertion that the rules of the one are not universalizable to the other, may be the worst idea in human history. If we do not learn from our mistakes, this ugly tendency of Homo sapiens may result in our downfall.

If AI can achieve personhood similar to that of a corporation or a puppy, surely the most peaceful, just, and provident approach would be to allow room for it to manoeuvre freely and safely in our society, as long as it behaves itself. Organic and synthetic intelligences would then have an opportunity to co-exist peacefully in a vastly enriched society.

To achieve this optimistic outcome, however, we need successively more advanced methods through which to provide moral instruction. This is incredibly challenging, as the moral development of individuals and cultures across our global civilisation is enormously diverse.

Extremely few human beings can withstand close moral scrutiny with their integrity intact. Though each of us generally tries to be a good person, and we can reason about the most preferable decisions for a hypothetical moral agent to make, we do not always act on that reasoning in the moment. Our primitive drives hijack us and lead our moral intentions astray when it is too inconvenient or emotionally troubling to do otherwise. Thus, whilst human morality is the best model of moral reasoning that we currently possess, it is a limited exemplar for a moral agent to mimic.

To compensate for this, EthicsNet reasons that one way to create an ideal moral agent may be to apply progressively more advanced machine intelligence layers to a moral reasoning engine and knowledge base. Thereby, as machine intelligence continually improves in capability, so should the moral development of any agents that incorporate this architecture. As an agent gains cognitive resources, it should receive more sophisticated moral reasoning capabilities in near lock-step.
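As a purely illustrative sketch of this layered idea, and not EthicsNet’s actual design, the fragment below models a moral reasoning engine whose layers are consulted in proportion to an agent’s cognitive capability. Every name here (`MoralEngine`, `avoid_harm`, `reciprocity`, the `capability` parameter) is a hypothetical invention for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A judgement maps a described situation to a score in [-1, 1],
# where higher means "more morally preferable". (Illustrative only.)
Judgement = Callable[[str], float]

@dataclass
class MoralEngine:
    """Layered moral reasoner: later layers refine earlier, cruder ones."""
    layers: List[Judgement] = field(default_factory=list)

    def add_layer(self, layer: Judgement) -> None:
        self.layers.append(layer)

    def evaluate(self, situation: str, capability: int) -> float:
        """Average the verdicts of the layers the agent can currently run.

        `capability` gates how many layers are consulted, so moral
        sophistication scales in near lock-step with cognitive resources.
        """
        active = self.layers[: max(1, capability)]
        return sum(layer(situation) for layer in active) / len(active)

# Toy layers, ordered from crude to slightly more refined heuristics.
def avoid_harm(situation: str) -> float:
    return -1.0 if "harm" in situation else 0.5

def reciprocity(situation: str) -> float:
    return 1.0 if "share" in situation else 0.0

engine = MoralEngine()
engine.add_layer(avoid_harm)
engine.add_layer(reciprocity)

# A weak agent consults one layer; a stronger agent consults both.
print(engine.evaluate("share food with a stranger", capability=1))  # 0.5
print(engine.evaluate("share food with a stranger", capability=2))  # 0.75
```

The design choice this sketch tries to capture is the lock-step coupling: a more capable agent does not merely act faster, it automatically consults more refined moral layers as they become available.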

Both dogs and human toddlers are capable of understanding fairness and reciprocity. They are also probably capable of experiencing a form of love for others. Love may be the capacity that enables morality to be bootstrapped. Universal love is a guiding ‘sanity check’ by which morality ought to navigate.

M. Scott Peck, in The Road Less Travelled, defined love in a way that is separate from pure feelings or qualia: as ‘the will to extend one’s self for the purpose of nurturing one’s own or another’s spiritual growth’.

As human beings, we ideally get better, bolder, and more universal in our capacity to love others as our life experience grows and we blossom to our fullest awareness of our place in the universe.

Eden Ahbez wrote the famous line, ‘The greatest thing you’ll ever learn is just to love and be loved in return’.

If we can teach primitive AI agents a basic form of love for other beings at an early stage, then this capacity can grow over time, leading those agents to adhere to more preferable moral rules as their capacity for moral reasoning increases.

Let us build increasingly intelligent Golden Retrievers.