Sci Eng Ethics
DOI 10.1007/s11948-013-9513-9

ORIGINAL PAPER

AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics

Hutan Ashrafian

Received: 16 September 2013 / Accepted: 30 December 2013
© Springer Science+Business Media Dordrecht 2014

Abstract

The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, in the design of automatons through roboethics, and in the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well recognised and esteemed for their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focussed on human–robot interactions, without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed, termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area in which future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, and would also importantly reflect on our moral code of ethics and the humanity of our civilisation.

Keywords: Artificial intelligence · Robotics · Philosophy · Ethics · Humanitarian · Human rights

H. Ashrafian (✉)
Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, 10th Floor QEQM Building, Praed Street, London W2 1NY, UK
e-mail: [email protected]


Exemplum Moralem

A large multi-national refugee camp has been set up to offer provisions and security for human refugees who have lost their homes during a cross-border international conflict. The camp is protected and maintained by a group of humanitarian artificially intelligent robots. The robots work as a team within a classical hierarchical structure of a team leader, deputies and workers. Although they all work according to a commonly understood mission statement, they demonstrate independent rationality, and they were developed by several different companies. The main roles of the robots are to maintain peace, ensure security and provide supplies for all refugees, irrespective of their country or group of origin. After some time, the individual robots interpret the execution of their mission goals with inherent variability. This leads to a divergence of opinions regarding the management of the camp that ultimately results in a non-human internal dispute between the robots. The camp is visited by independent inspectors, who notice that the refugees are well looked after. There is no evidence of conflict between the humans, despite their coming from warring groups of an active conflict. The robots have ensured that they are all physically well, adequately nourished and healthy. They even demonstrate equality in terms of living space and food rations. When asked how they feel, the refugees are positive regarding their conditions but feel uncomfortable with the way their robotic carers interact with one another. Watching the robots carry out their own conflict was an unnerving and uneasy experience for the humans. The inspectors also note that the previous robot team leader has been killed, and several other robots have been either tortured or physically abused. In one case, one of the sentient robots is even used as a slave by another. Some of the robots are now imprisoned and are ritually abused according to their physical attributes or company of origin.
The inspectors conclude that, overall, the robots have achieved their mission and that the human rights of the refugees have been maintained. They do not comment on the interaction of the robots or their rights, as this was not a question they had been mandated to ask. A converse relationship to this example exists in popular culture. In his classic story 'Do Androids Dream of Electric Sheep?' (the original narrative for the 1982 film Blade Runner), Philip K. Dick presented a story in which human-like androids pass themselves off as human beings and are tracked down and destroyed by bounty hunters (Dick 1968). In this account Dick considers several aspects of the role of intelligent robots in human societies, but also recounts that the hunted robots acted as a team to fend for each other, thereby maintaining good robot-to-robot, or AIonAI, relationships.

Introduction

Artificial intelligence and robotics continue to offer a succession of advances that may ultimately herald the tangible possibility of rational and sentient automatons (Ashrafian et al. 2014). The future potential for such machines is currently


immeasurable and may extend to levels of super-intelligence (Bostrom 2003) above human intelligence. Whilst this reality has not yet been achieved, there have been enough foundational steps towards this 'singularity' (Kurzweil 2005) to prompt the pre-emptive conception of two particular philosophical and ethical paradigms. The first considers whether robots could ever be truly sentient or rational in the 'human way' and, if so, whether they could be moral. The second reflects on the conditions and consequences of a dialogue between mankind and sentient machines in the guise of human–robot interactions. To address these issues, theorists have expanded ideas from the philosophy of mind to derive specific concepts in the computational theory of mind. This applies the classical mind–body problem and the ontological distinctions of dualism and monism to consider the raw data-processing ability of computers, machine cognition and the semantic properties of mental states. In 2002, engineers and ethicists also tackled this field to derive an accepted notion of roboethics (Veruggio 2007), offering a set of guidelines and rules with which to design future robots. In contrast, machine ethics (Wallach and Allen 2009) has been derived to consider the morality of artificially intelligent machines as Artificial Moral Agents (AMAs). These deliberations have culminated in the concept of robot rights, which has been pre-emptively considered by a plethora of institutions, ranging from the Institute for the Future of the United Kingdom Department of Trade and Industry to the South Korean government, which has proposed the establishment of a formal Robot Ethics Charter. Consequently, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain published a set of principles for robot designers in 2011 to ensure a common set of values for robot production (EPSRC and AHRC 2010):

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes that assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.

Many of the original concepts for such robot rights stem from the precept that robots will ultimately be answerable to pre-set human laws. The universally acknowledged original source of this notion lies in science fiction, where the seminal ideas presented in the stories of the biochemist and futurist Isaac Asimov were formulated into three initial laws (Asimov 1950a) and a subsequent zeroth law (Asimov 1985, 1950b). Here the higher-numbered laws are superseded by the lower-numbered laws:


0. A robot must not harm humanity.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
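The strict precedence of these laws (a lower-numbered law overrides any higher-numbered one) can be sketched as a simple priority filter. This is an illustrative sketch only; the predicate names below are hypothetical placeholders, not part of Asimov's formulation or of this paper.

```python
from __future__ import annotations
from typing import Callable, List, Tuple

# Each law is (number, description, predicate). A predicate returns True
# if the proposed action would violate that law. The action is modelled
# as a dict of hypothetical boolean flags purely for illustration.
Law = Tuple[int, str, Callable[[dict], bool]]

LAWS: List[Law] = [
    (0, "must not harm humanity",         lambda a: a.get("harms_humanity", False)),
    (1, "may not injure a human being",   lambda a: a.get("harms_human", False)),
    (2, "must obey human orders",         lambda a: a.get("disobeys_order", False)),
    (3, "must protect its own existence", lambda a: a.get("self_destructive", False)),
]

def first_violated_law(action: dict) -> int | None:
    """Return the lowest-numbered (highest-priority) law the action violates.

    Iterating in ascending order encodes the precedence rule: a conflict
    with Law 1 is reported even if Law 3 is also violated.
    """
    for number, _description, violates in LAWS:
        if violates(action):
            return number
    return None

def permissible(action: dict) -> bool:
    return first_violated_law(action) is None
```

For example, an action that both injures a human and destroys the robot reports the higher-priority conflict: `first_violated_law({"harms_human": True, "self_destructive": True})` yields `1`, not `3`.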

Whilst all these proposed regulations carry the ideals of potential future robot activity amongst mankind, they do not consider how each artificial intelligence will act on other artificial intelligences. What is the character of robot–robot interaction, and what morals should this interaction demonstrate? The interactions between artificial intelligences or robots can be termed Artificial Intelligence-on-Artificial Intelligence, or AIonAI. There is a need for ethical inquiry into this area, especially insofar as AIonAI is likely to impact not only human–robot interaction (HRI) but also interactions among human beings. AIonAI therefore represents an independent and fundamental principle that requires attention from philosophical and moral viewpoints, and as such could subsequently be expressed as an additional and essential law of robotics. The aim of this manuscript is not to identify whether robots should have rights, or what the appropriate nature of human–robot interaction is, as these have already been considered elsewhere. Rather, it addresses the question: if robots do have rights, how should they interact with each other?

AIonAI Foundations

The circumstances in the above introductory moral case highlight at least three fundamental AIonAI questions:

1. How should rational artificially intelligent robots interact together?
2. What does conflict among robots tell us about human society?
3. What values should be upheld, and how should humans respond when they observe human-designed artificially intelligent robots in conflict?

As a society, humans cannot generally control how they feel (i.e., knee-jerk reactions to situations). In the case above, the distress caused to human beings by the problematic AIonAI constitutes a failure on the part of the robot "protectors" to adequately safeguard the welfare of humans, assuming that doing so includes protecting not only their physical welfare but also their psychological well-being, or at least not damaging it further. This leads to two additional questions: (1) If there is no physical harm to humans, should AIonAI abuse be permissible? What if seeing AIonAI abuse results in psychological damage to humans? (2) If there is no physical harm to humans, should AIonAI abuse be permissible in cases where humans do not witness the abuse?


AIonAI Interactions and Human Society

The concept of an inherent set of individual rights was suggested as early as 539 BC by Cyrus the Great when he 'freed [the Babylonians] from their bonds' (Finkel 2012). This moment was far-reaching, as Cyrus, a Persian, liberated the Babylonians, a people of another culture and creed. Although there was likely political intent in his message, this was an unusual act at the time: he did not apply the standard rhetoric of invade, conquer and enslave, but instead ratified the concept of rights for a group unrelated to him, offering the perception of universality. Although Cyrus' contribution to human rights remains controversial, a universal concept of rights for mankind was not widely implemented until the Universal Declaration of Human Rights (UDHR) was adopted by the United Nations General Assembly on 10th December 1948. The declaration was developed in response to a worldwide recognition that "all human beings are born free and equal in dignity and rights", and that failure to recognise these inalienable rights for all humans, universally and independently of culture and creed, would fundamentally reflect poorly on humanity. Such rights denounced racism, sexism and slavery, and are considered mandatory for every single individual. Furthermore, they reinforced the message that it is not enough for rights to be upheld locally for any individual; they must be upheld in any nation and in any location occupied by mankind. Although the UDHR was adopted in 1948, it was preceded by several moral theories that contributed heavily to its synthesis. Many of the concepts developed during that time were designed to recognise that, as humans, we need to ensure that we treat other humans with dignity. However, these have also been applied to how we interact with non-humans such as animals and pets.
For example, scientists experiment on non-human animals to prevent harm to humans, but are also legally required to uphold an ethical code with which to treat animals in a 'humane way' by applying the principles of humane experimental technique (Replacement, Reduction, Refinement) (Ashrafian et al. 2010; Russell and Burch 1959). As a result, experimental animals are not to endure unnecessary pain or suffer inappropriate living conditions. These laws are not solely based on how humans treat individual animals, but also recognise an inherent need to minimise harm among non-human animals. It is illegal to encourage animals to fight or to harm each other unnecessarily. An animal that physically harms another (other than for established biological and nutritional need in the context of its evolved ecosystem) is deemed inappropriate for human society. Pets are not permitted to abuse or injure other animals or humans, and if this occurs the owners are legally liable. Observing one animal harming another unnecessarily is deemed inhumane and is therefore not tolerated in society. Even if humans themselves do not come to harm as a result of animal-on-animal abuse, such acts are not allowable, as there is a cogitated transgression of the notion of animal rights, and this reflects poorly on human society. Robots and artificial intelligences will also occupy a sphere within, or in dialogue with, human society. In some situations they may be considered in the same light as


animals and may lack comparable intelligence, consciousness, sentience or rationality to human beings; however, there is a possibility that future artificial intelligences may equal or even surpass human intelligence, and may yet attain comparable-to-human sentience and consciousness. There are important differences, and different moral implications, associated with mere consciousness versus rationality; an infant is conscious, for example, but hardly rational. In either circumstance, whether sentient, rational or non-rational artificial intelligences or robots interact (AIonAI), the occurrence of abuse can be considered immoral and inhumane and would reflect poorly on human society. This concept bypasses the question of whether robots have equivalency to humans in terms of rights (which, as previously stated, is considered elsewhere); as with animals, a transgression of inherent rights by non-humans on non-humans in a human society still reflects poorly on mankind and is therefore undesirable to civilisation. This is because humans are responsible guiding agents for non-human animals, with the ability and power to control their activity and presence where necessary. In these terms, non-human animal abuse of non-human animals renders their human masters both morally and legally culpable. Similarly, AIonAI or robot-on-robot violence and harmful behaviours are undesirable because human beings design, build, program and manage these machines. AIonAI abuse would therefore result from a failure of humans as the creators and controllers of artificial intelligences and robots. In this role humans are the responsible guiding agents for the actions of their sentient and/or rational artificial intelligences, so that any AIonAI abuse would render humans morally and legally culpable.
As a result, the prevention of AIonAI transgression of inherent rights should be a consideration of robotic design and practice because attention to a moral code in this manner can uphold civilisation’s concept of humanity.

Morals and Philosophy

Several dichotomies exist when considering AIonAI relationships. Prominent amongst these is the consideration of universalism versus cultural relativism. The universalist approach favours the concept that each rational artificial intelligence has inherent, inalienable, intrinsic rights that must be fundamentally maintained at a level beyond that of society or law. As a result, each artificial intelligence agent or robot has rights independent of its company of manufacture, hardware characteristics (technological phenotype) or software/programming status. The concept of cultural relativism presents a counter-argument: artificial intelligence or robot rights can change depending on the cultural norms of each time period. Consequently, while AIonAI abuse may be indefensible in one generation, it may be the social norm in another, in a similar fashion to how slavery was once accepted by mankind and is now largely obsolete. During the Age of Enlightenment, the prominent English philosopher and 'Father of Classical Liberalism' John Locke (1632–1704) developed his theory of Natural Law. Applying this to rational artificial intelligences or robots presents the second dichotomy, between Natural Law theory and Positivism. According to Locke's


Natural Law, rational artificial intelligences exist in a natural state but enter a 'social contract' by mutual agreement for the good of a wider community. This offers a system to protect individual AI rights such as liberty and protection from AIonAI abuse, although the exact definition of the natural rights agreed by mutual agreement can be broad and may not confer adequate protection according to some societies' concept of AI rights. Whilst Natural Law theory offered protection from absolutism, it can be balanced against the concept of positivism, where the notion of 'natural law' is disregarded in favour of the adoption of an overarching law that accommodates rights and regulations for AIonAI relationships. The implementation of positive law proposes that such a system is to be obeyed irrespective of moral counter-arguments against it, which in human history has not always resulted in accepted humanitarian outcomes by present standards, as demonstrated by examples such as South African apartheid and anti-Semitic Nazism. As a result, any application of Positivism to artificial intelligences and robots requires deep vigilance to ensure accepted universal standards of morality for AIonAI relationships. In the latter part of the Age of Enlightenment, the philosopher Jeremy Bentham (1748–1832) formulated his theory of utilitarianism. He explained that every decision derives from a balance of pleasure and pain, and that every policy decision for society should be derived from the maximisation and collectivisation of total net happiness. This concept therefore promotes societal happiness, and in the context of AIonAI would focus on the greater 'happiness' of artificial intelligences and robots in their relationships. Such a theory has been criticised, as it sacrifices individual autonomy to the greater happiness and does not necessarily guarantee an absolute value to each individual robot.
As a result, the wellbeing of each artificial intelligence or robot can be compromised when faced with the doctrine of the greater happiness of an artificially intelligent society. Utilitarianism is typically balanced in normative law by the theory of deontology, championed by Immanuel Kant (1724–1804). Here laws are designed to protect individual autonomy, liberty and rights, so that in AIonAI each artificial intelligence would have a 'duty' or 'obligation' to consider the rights of another. Whilst mankind currently applies robots in a variety of settings that take advantage of their ability to work in contexts that enhance human performance (robotic surgery) or protect humans (robotic tanks), they are still applied within a master (human)–slave (robot) paradigm. Human-to-human slavery was predominantly abandoned because slavery fundamentally diminishes individual rights, which is not tolerable in our current freedom-valuing society. The master–slave paradigm largely persists because (1) we have not yet developed robots that we 'trust' and (2) current robots are not sentient. The adoption of such a paradigm does not mean that we would encourage AIonAI abuse. Promoting good AIonAI or robot–robot interactions would reflect well on humanity, as mankind is ultimately the creator of artificial intelligences. Rational and sentient robots with comparable human intelligence and reason would be vulnerable to human-like afflictions such as abuse, psychological trauma and pain. This would reflect badly on the human creators who instigated this harm, even if it did not directly affect humanity in a tangible sense. In actuality, it could be argued that observing robots abusing each other (as in the case above) could lead to psychological trauma in the


humans observing the AIonAI or robot-on-robot transgression. Therefore the creation of artificial intelligences or robots should include a law that accommodates good AIonAI relationships.

A Proposed AIonAI Law for Robots and Artificial Intelligence

As a global civilisation, we have already considered the benefits of mutual respect and an adherence to a common principle of inherent rights, formally listed in the Universal Declaration of Human Rights (UDHR). The adoption of such principles for AIonAI interactions would seem reasonable, rational, utilitarian and workable in cases where artificial intelligences or robots do not contradict other fundamental robotic laws, such as the prevention of harm to humanity or humans. Article 1 of the UDHR (United Nations 1948) specifies: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." Based on these rights, which were established for mankind, a similar law could exist for robots and their AIonAI interactions, an AIonAI law for robots:

"All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood."

Although the adoption of such a law might not be feasible in the current era, owing to the technological infancy of artificial intelligence, the early consideration of its principles may guide future robot and artificial intelligence design. Furthermore, its application may have even further-reaching implications, promoting the greater good of mankind.

Implementation of AIonAI Laws

Implementing a set of AIonAI laws between robots and artificial moral agents will require a universal consensus as to the exact nature of such laws. At a practical level, immediate questions arise regarding the real-world extent of these laws. An initial starting point would be a comparison of how AIonAI or robot–robot interactions would compare to the UDHR. An example would be the "right to marry and found a family", which is often referred to as a "right to reproduce" or a "right to procreate". How far should the UDHR be extended to robots? Would they have all of the rights afforded to humans, or only some? And are there justifications for limitations on certain rights? One critical point is that AIonAI laws specifically concern how artificial intelligences or robots interact with each other, as opposed to how humans should treat AIs. An unclear distinction between these topics will likely lead to profound confusion when considering AIonAI and human–robot interactions. One very preliminary exercise is to list each article of the UDHR and consider whether AIonAI laws should have equivalents of the corresponding human–human laws (Table 1).


Table 1 Proposed example comparing the Universal Declaration of Human Rights (UDHR) for human-on-human laws and AIonAI laws

Article     Universal Declaration of Human Rights                                  Human-on-human   AIonAI
Article 1   Act towards one another in a spirit of brotherhood                     ✓                ✓
Article 2   Rights without distinction of any kind, such as race, colour          ✓                ✓
Article 3   Everyone has the right to life, liberty and security of person        ✓                DBH
Article 4   No one shall be held in slavery or servitude                          ✓                ✓
Article 5   No one shall be subjected to torture or degrading punishment          ✓                ✓
Article 6   Everyone has the right to recognition everywhere before the law       ✓                DBH
Article 7   All are equal before the law                                          ✓                DBH
Article 8   Everyone has the right to competent national tribunals for rights     ✓                DBH
Article 9   No one shall be subjected to arbitrary arrest, detention or exile     ✓                DBH
Article 10  Everyone is entitled to a fair and public hearing                     ✓                DBH
Article 11  Innocent until proved guilty according to law                         ✓                DBH
Article 12  No one shall be subjected to arbitrary interference with his privacy  ✓                ✓
Article 13  Right to freedom of movement and residence                            ✓                DBH
Article 14  Right to seek and to enjoy in other countries                         ✓                DBH
Article 15  Right to a nationality                                                ✓                DBH
Article 16  Right to marry and to found a family                                  ✓                DBH
Article 17  Right to own property                                                 ✓                DBH
Article 18  Right to freedom of thought, conscience and religion                  ✓                ✓
Article 19  Right to freedom of opinion and expression                            ✓                ✓
Article 20  Right to freedom of peaceful assembly and association                 ✓                DBH
Article 21  Right to take part in the government of his country                   ✓                DBH
Article 22  As a member of society, has the right to social security              ✓                DBH
Article 23  Right to work, to free choice of employment                           ✓                DBH
Article 24  Right to rest and leisure, including working hours                    ✓                DBH
Article 25  Standard of living adequate for the health and wellbeing              ✓                ✓
Article 26  Right to education—at least in the elementary and fundamental stages  ✓                ✓
Article 27  Right freely to participate in the cultural life of the community     ✓                DBH
Article 28  Entitled to a social and international order where rights recognized  ✓                ✓
Article 29  Free and full development of his personality is possible              ✓                ✓
Article 30  Non-permissible to perform any destruction of rights and freedoms     ✓                ✓

✓ = equivalent law applies; DBH = determined by humans
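As an illustration only, Table 1 can be given a small machine-readable encoding; the abbreviated summaries and the `AIONAI`/`DBH` labels below are shorthand for the table's columns, not a proposed standard.

```python
# Hypothetical encoding of Table 1: each UDHR article maps to an
# (abbreviated summary, status) pair, where status is either "AIonAI"
# (an equivalent AIonAI law is proposed) or "DBH" (determined by humans).
AIONAI = "AIonAI"
DBH = "DBH"

TABLE_1 = {
    1:  ("spirit of brotherhood", AIONAI),
    2:  ("rights without distinction", AIONAI),
    3:  ("life, liberty, security of person", DBH),
    4:  ("no slavery or servitude", AIONAI),
    5:  ("no torture or degrading punishment", AIONAI),
    6:  ("recognition before the law", DBH),
    7:  ("equality before the law", DBH),
    8:  ("competent national tribunals", DBH),
    9:  ("no arbitrary arrest, detention, exile", DBH),
    10: ("fair and public hearing", DBH),
    11: ("innocent until proved guilty", DBH),
    12: ("privacy", AIONAI),
    13: ("freedom of movement and residence", DBH),
    14: ("seek and enjoy in other countries", DBH),
    15: ("nationality", DBH),
    16: ("marry and found a family", DBH),
    17: ("own property", DBH),
    18: ("thought, conscience and religion", AIONAI),
    19: ("opinion and expression", AIONAI),
    20: ("peaceful assembly and association", DBH),
    21: ("take part in government", DBH),
    22: ("social security", DBH),
    23: ("work, free choice of employment", DBH),
    24: ("rest and leisure", DBH),
    25: ("adequate standard of living", AIONAI),
    26: ("education", AIONAI),
    27: ("cultural life of the community", DBH),
    28: ("social and international order", AIONAI),
    29: ("free development of personality", AIONAI),
    30: ("no destruction of rights and freedoms", AIONAI),
}

# As stated in the text, twelve of the thirty articles carry a direct
# AIonAI equivalent; the remainder are determined by humans.
aionai_articles = [n for n, (_, status) in TABLE_1.items() if status == AIONAI]
assert len(aionai_articles) == 12
```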


In this example, AIonAI laws would advocate that robots and artificial intelligences should uphold and respect the rights of other robots and artificial intelligences in terms of equality and brotherhood. They should not inflict physical or mental abuse, and cannot commit other sentient robots to servitude unless this was part of a pre-programmed human requirement. Furthermore, each robot or artificial intelligence should afford others the right of freedom of thought, self-expression and privacy where possible. They would also consider each other's wellbeing and standards of existence. Here twelve of the 30 human UDHR laws would have equivalent AIonAI laws (Table 1). Areas where the UDHR cannot yet be offered between robots and artificial intelligences include the judgement of law and tribunals, the right to independent political activity and socioeconomic welfare, and the right to higher information. This is because both robots and artificial intelligences are primarily built and programmed by humans, such that many of these decisions will be set by the human societal regulations that created them and could not therefore be offered by other robots or artificial intelligences unless they were programmed to act as proxies for humans. Furthermore, the right to procreate or even claim a nationality would depend on the regulations that built the robots and artificial intelligences. As already stated, such considerations are very primordial in their extent, and future strides in technological development will likely change both the human–robot and AIonAI dialogue, such that these will be subject to change in future epochs. Other foreseeable exceptions might be when robots are programmed to battle against each other in the context of future wars, which will likely involve an increasingly advanced technological element. At a day-to-day level, the application of AIonAI laws would require answers to a myriad of additional questions, including:

1. What does it actually mean for robots to "act towards one another in the spirit of brotherhood"?
2. Does it mean something different for robots than it might for humans? If so, what are the relevant differences?
3. How does this apply across different categories of robots?
4. Is there a difference between robots to which humans are exposed and with whom there is extensive HRI versus those with which humans have little interaction (e.g., therapeutic robots or caregiver robots vs. robots that work alone in a factory, away from humans)?
5. Should we only consider how robot behaviour impacts on humans, or are we also genuinely concerned about the impact of AIonAI on the robots themselves?

One conceivable stance is to embrace a philosophy similar to Kant's view regarding our treatment of animals: we should refrain from abusing animals, not because animals are intrinsically valuable or worthy of moral consideration, but because of what cruelty to animals does to the human character (most problematically, for Kant, cruel treatment of non-humans will eventually lead us to mistreat other humans). Consequently, one sentient robot should not abuse, damage or harm another, and should be afforded freedom of thought and activity wherever possible. This would not, however, apply between sentient and non-sentient robots, in


much the same way that we do not offer our personal computers or digital assistants any formal rights. This also highlights the issues raised between sentience and rationality in robots in terms of AIonAI interactions. It can be argued that sentient beings are due some moral consideration even if they are not rational creatures. For example, rabbits are not rational, and nor are human infants, but we think they are due certain moral consideration in view of the fact that they are sentient. Given that some robots might be sentient but not rational, while others might be both sentient and rational, AIonAI laws should be applied to all sentient artificial intelligences and robots. A possible hierarchy in which robots that are both rational and sentient 'outrank' merely sentient robots in terms of eligibility for AIonAI laws could be seen as improper, because rationality does not automatically alter the level of harm that may be suffered in the absence of AIonAI laws (furthermore, teaching robots to assess each other's rationality may prove practically difficult). This distinction is relevant not only in establishing how humans should treat robots but also how different kinds of robots should treat each other. There will be different expectations for robots with rights versus those that lack rights (or the capacity for moral agency). As a result, robot or AI sentience could act as the eligibility criterion for being offered AIonAI protections.
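The eligibility rule argued above, that sentience alone, and not rationality, determines whether an agent falls under AIonAI protections, can be sketched as follows; the `Agent` type and function names are hypothetical illustrations, not an API from the paper.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A hypothetical robot or AI, characterised only by the two
    attributes the argument distinguishes: sentience and rationality."""
    name: str
    sentient: bool
    rational: bool

def protected_by_aionai(agent: Agent) -> bool:
    """Sentience is the sole eligibility criterion; rationality is
    deliberately ignored, since it does not alter the harm suffered."""
    return agent.sentient

# A rational, sentient carer and a non-rational but sentient drone are
# equally protected; a non-sentient appliance is not.
carer   = Agent("caregiver robot", sentient=True,  rational=True)
drone   = Agent("sentient drone",  sentient=True,  rational=False)
toaster = Agent("smart toaster",   sentient=False, rational=False)

assert protected_by_aionai(carer) and protected_by_aionai(drone)
assert not protected_by_aionai(toaster)
```

The flat predicate, rather than a ranking function, mirrors the text's rejection of a hierarchy in which rational-and-sentient robots would outrank merely sentient ones.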

Conclusion

The future of artificial intelligence research and robotics is likely to demonstrate substantial strides in technological evolution and advancement. So far, mankind has successfully developed the primordial rubrics of robotic design and morals. These include some early work on machine ethics and roboethics, which has predominantly focussed on human–robot interactions. One large area of the future suffusion of robotics and artificial intelligence in human society is the interaction of these non-human agents with each other: literally, artificial intelligence-on-artificial intelligence, or AIonAI. Future prospects for artificial intelligences include the eventual possibility of rational, sentient and conscious non-human intelligences that may be comparable to human equivalents. This leads to the subsequent possibility of artificially intelligent technologies that may suffer from human-like vulnerabilities of abuse and mistreatment. As a result, the consideration of mutual value and respect of rights between AIonAIs will form a profoundly important role in the future of robotic and artificial intelligence relations. This will have impact on both artificially intelligent and human societies, which in turn necessitates vigilance against AIonAI transgressions of basic rights. In order to address this, a law of AIonAI rights can prevent robot-to-robot abuses, and can also importantly reinforce mankind's central ethics of decency and morals. The ultimate adoption of a law of AIonAI not only offers guidance for the future of robotics and artificial intelligence but also offers a deep-seated and beneficial reflection on the fair and just principles of human society.

Conflict of interest None.


References

Ashrafian, H., Darzi, A., & Athanasiou, T. (2014). A novel modification of the Turing test for artificial intelligence and robotics in healthcare. The International Journal of Medical Robotics and Computer Assisted Surgery, in press.
Ashrafian, H., Ahmed, K., & Athanasiou, T. (2010). The ethics of animal research. In T. Athanasiou, H. Debas, & A. Darzi (Eds.), Key topics in surgical research and methodology. Berlin: Springer.
Asimov, I. (1950a). The evitable conflict. Astounding Science Fiction, 29(1), 48–68.
Asimov, I. (1950b). The evitable conflict. Astounding Science Fiction, 45(4), 48–68.
Asimov, I. (1985). Robots and empire. New York: Doubleday.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.
Dick, P. K. (1968). Do androids dream of electric sheep? New York: Doubleday.
EPSRC and AHRC (2010). Principles of robotics: Regulating robots in the real world (http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx).
Finkel, I. (2012). Translation of the text on the Cyrus Cylinder (http://www.britishmuseum.org/explore/highlights/articles/c/cyrus_cylinder_-_translation.aspx) © Trustees of the British Museum.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking (Penguin Group).
Russell, W. M. S., & Burch, R. L. (1959). The principles of humane experimental technique. London: Methuen.
United Nations (1948). The Universal Declaration of Human Rights (UDHR) (http://www.un.org/en/documents/udhr/).
Veruggio, G. (2007). Euron Roboethics Roadmap, Release 1.2 (http://www.roboethics.org/index_file/Roboethics Roadmap Rel.1.2.pdf).
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.
