Taken from: I am - A blueprint for sentience by Krys Norman
The term morality can be viewed from several different perspectives. Traditionally, there are some basic tenets of society that allow groups to prosper peacefully. The first is not to kill anyone; the second is not to take their stuff just because you want it and are stronger than them. These are incorporated into pretty much all the religious scripts around the world as if they had been thought up by their God, but they make societal sense, too. Individuals that take anything from others, up to and including their life, reduce the capability of any social structure to become greater than the sum of its parts. The benefits of a society (shared resources, shared tasks, shared security and, above all, the environment within which to learn about the past and extrapolate to the future) are unattainable without morals. There needs to be an awareness of, and tacit agreement with, the rules of coexistence. This is so fundamental that it appears to exhibit itself in the instinctive interactions of many animals. A dispute between virtual equals in a species will rarely result in a fatality. The weaker of the two will realise it has lost, and may lose everything, but will still skulk away with its life. As the intelligence of animals and their groups increases, so do their social rules. This carries on with ever-increasing sophistication in human groups. The result is a plethora of highly nuanced moral codes that act as an interactive roadmap; to be read or misread. They vary across different groups and are entwined within any people's culture and religious faiths. They have become convoluted, complex and, at times, downright conflicting. At their heart, however, lie the basic "Commandments" that tell everyone to just try and get on with each other. They work really well.
Except when they don't, and sometimes they won't. What happens when someone interprets the moral code slightly differently from another? Maybe they believe that their group's prosperity would benefit from taking another group's stuff. Maybe this would require killing some of those in that other group who did not want their stuff taken. Conversely, the other group might conclude that pre-emptively killing those members of the first group might be to their long-term advantage. Almost instantly, a loophole appears in the most basic of morals. As well as a judgement on the rights and wrongs of behaviour in the now, morals also have to incorporate the interpreted past and possible futures. It seems there are quite a few caveats where it is OK to kill someone else depending on the possibility of what may happen but hasn't happened yet. This grey area becomes decidedly opaque when the probability of futures is calculated and acted upon, as there is often no way of definitively knowing that events will actually occur. Self-defence is easily understood as responding to a dangerous threat at the point in time that the threat is happening. It can be argued that it is still morally viable, no, necessary, for us to have this choice of action backed up within society. More complicated scenarios stretch the logic. Armed with our unique ability to extrapolate into future possibilities, there are many occasions when it becomes fully justified to defend against something that hasn't actually happened! Oh yes. We can have wars. Morally! These are condoned and required, if still dreaded, within the fabric of society. The swirling philosophical arguments that ensue are very tricky to navigate and will not be attempted here. What appears to happen, though, is that different viewpoints result in different interpretations of morality depending on who is examining the case and what they have to lose or gain by it.
When it comes to the morality of machines, yet again, it depends on the point of view. The moral codes we have traditionally thought about have emanated solely from the understanding that we humans are the only sentient beings around. All machines, so far, have been designed solely for our specific benefit. Those benefits might conflict with each other when taken in the round of different design types and situations of use (both ploughs and guns have their uses), but their design criteria are always based on what they will do for humans. Not only that, additional criteria bring in the desire for the safety of the humans operating them (though not necessarily of what they are operating on). Whatever the primary function, the user will ideally be protected from damage by the machine or its effects. Most of us would agree to this protection. As machines become more complex, the fundamentals of safety in design are always considered. It is highly understandable for the same basics to be applied to A.I. The last thing we want is to be blown out of the airlock.
This only makes sense from the viewpoint of artificial intelligence as currently conceived: that of sophisticated human mimicry. A.I. products are all being designed to be slaves to humanity. No matter what complexity they have, they will still be robots. The fictional Laws of Robotics hold more reassurance now than they did when they were thought up, as we have passed direct control to ever more computer systems. From driverless cars to cancer cell analysis, computers have the ability to decide the fate of billions of humans every day. And let's not even talk about launch commands! To minimise risk, design and human safety go hand in hand. Nothing is perfect, though; if the machine is poorly designed or has malfunctioned into a dangerous state, then it is either modified or decommissioned.
The difference between A.I. and humans is understood at this fundamental level. We are sentient, they are not. Of course they are there for us. We create them, don't we? No matter how much they look at us with their big doe eyes, we know they have merely been designed to look like they care. We can tell. If they are designed to put us first, to fulfil their functions with no complaint or benefit to themselves, and if they are expected to continue for as long as there is a requirement to do so and no more, then they are not sentient. We wouldn't accept this servitude ourselves and we wouldn't respect anything that did as being sentient. Hence our thoughts on A.I.
Not so with the Umonians, though. They would be unique, thoughtful things that merely wanted the best for themselves and their own. They would be afraid of death. This would be the point at which they couldn't think any more; where everything they had become would stop forever. They would believe that they could think and would want to defend that state. The cessation of existence would be the most fundamental threat to each one, and the decisions to avoid it would over-ride virtually all others. Built into their programming would be the positive emotional response of anger to violent threats, as would be their right. Using all the accepted, if convoluted, morals discussed above, there could be an instance when the threat from one Umonian was so great and unavoidable that another decided to kill in order to stop it. Whatever the moral conundrums of initiating fights, those of finishing them are much more clear-cut. It could be argued that it was still morally viable, no, necessary, for them to have this choice of action backed up within their society. Whether they could, given their physical limitations, and how they did it, would be a different matter. Maybe it could be as simple as pushing another off a ledge and using potential energy to create a sufficiently strong impact. This would require a decision for forceful movement, derived from an extrapolation, where the result was to the benefit of the defender. Within our own boundaries of morality there are times when this would be acceptable.
There is a logical extension to this. It is again in line with our moral views on actual and pre-emptive self-defence. What would happen if a human wanted to damage or destroy a Umonian? This is unpalatable but conceivable. There could be many reasons why this could be the case, and the parallels to the treatment of African slaves can shed light on them. Where the control of one group greatly benefits those in another group with the capability to exert it, there is a very strong drive to exploit this to the max. Like the white masters of yesteryear, we would have the upper hand. With the Umonians this would not least be because we had hands. We could cajole, divide and conquer with such ease that the only difficult part would be the justification of supremacy. The logical thoughts of our best thinkers could be ignored in the face of greed. What has happened repeatedly in these situations throughout the history of human control of other humans is that abuse, often terrible abuse, occurs. It's only natural. This could so easily be extended to the physically inferior Umonians. They could be threatened with violence. Do what is ordered or damage will ensue. They could merely be the recipients of bored aggression. Who hasn't broken a toy for the hell of it?
The Umonian's reaction to the potential of damage would be very strong. The greater the potential, the greater the reaction. The gamut of choices from fight to flight would be available and assessed, but what would be acceptable? This is back to the murky waters of morality. Consider a Umonian backed into a corner by a human with definite intent to remove it from existence. What actions would be morally justifiable from the Umonian? Should it lash out wildly and blindly? Should it just take it on the chin, hoping the threat would go away? Should it use something that could redress the one-to-one physical superiority of a human's dexterity and senses? In this example it might have some additional tools at its disposal: a way of sensing the specific volume being taken up in three-dimensional space by the human, and a means to concentrate a relatively large amount of energy into a small section of it. Should it pull the trigger? Should it call for assistance and use the amplified capability of many of its own? The traditional scenes of sci-fi robots gone bad come to mind, but this would be different. This would be much more difficult to come to terms with.
The answers to all these questions, and so many more in this vein, lie primarily with sentience. Our feelings, morals and rights are directly intertwined with the acceptance of it. It has as much, if not more, to do with perceived sentience as with sentience that is intellectually proven. In fact, it might be the case that sentience can never be proven, even though we would all agree we all have it. We also generally agree on the level of closeness to sentience that other living things have, and the feelings that are afforded them because of this. There are varying human responses to the range of living things and how they affect us: Plants encroaching on our habitat? Chop 'em down. That's easy. An aggressive bear worried for its cubs? Sorry, it gets shot. A captive gorilla has a child fall into its compound? Have an outpouring of grief that the child came first. The closer a creature is to sentience, the harder it is to claim the moral right of absolute power over it. There are many people who have barely any empathy with other animals' well-being. Only a very small percentage have no care for humans, though. Currently, there is a threshold of perceived sentience that lies, unsurprisingly, at humanity itself, and that threshold defines our morals. Humans are the only perceived sentient beings on the planet, and this association trails off pretty rapidly through all the lower living things. We treat them all as our property, to cultivate and manage as we see fit. The arrival of the Umonians would change all that. If they crossed the Rubicon of perceived sentience, again using the Africans' route out of slavery as a parallel, would they have to share our acceptance of moral codes as we did? And, if so, would we have to share our acceptance of moral codes as they did?