Abstract. The central problem of roboethics can be stated as follows: on the one hand, robotics aims to construct entities that will transcend the faculties of human beings; on the other hand, certain unethical acts should be made impossible for such artificial beings to execute. This can be illustrated by the case of a full-fledged AI that is able to reprogram itself, or to program other AIs, but only in such a way that the result does not lead to the infraction of the moral imperatives held by its human designers. The programmer of such a system is thus caught between the Scylla of his “aim to conceive an artificial entity able to do almost everything, and more efficiently than a human being” and the Charybdis of “the principle of precaution commanding him to constrain the behaviour of such an entity so that it can never execute certain acts, such as murder”. The central problem can therefore also be perceived as the search for a trade-off between the degree of “autonomy” of an artificial agent and the extent to which “embedded ethical constraints” determine the agent’s behaviour.
Believing that such a trade-off can be found, we propose a four-fold hybrid “separation of powers” model in which the final solution to an ethical dilemma is the result of the mutual interaction of four independent components:
1) a “Moral core” containing hard-wired rules analogous to Asimov’s laws of robotics;
2) a “Meta-moral imperative” logically equivalent to Kant’s categorical imperative;
3) an “Ethico-legal codex” containing an extensible set of normative procedures representing the laws, moral norms and customs present in, or induced from, the agent’s surroundings;
4) a “Mytho-historical knowledge base” grounding the agent’s representation of “possible states of the world” in corpora of human-generated myths and stories.
Finally, we argue that this proposal of two induced and two embedded modules loosely corresponds to the human moral faculty, since it takes into account both its “innate” and its “acquired” components.

Published in the Proceedings of the 25th annual congress of the International Association for Computing and Philosophy, Aarhus, Denmark, 4 July 2011.
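The interaction of the four components could be sketched, very loosely, as a conjunction of independent “vetoes”: an action is executed only if every component permits it. The sketch below is a minimal illustration under that assumption; all class names, rules, and example actions are hypothetical placeholders, not part of the original proposal.

```python
# Hypothetical sketch of the four-component "separation of powers" model.
# Each component independently judges a proposed action; the final decision
# is the conjunction of their verdicts. All rules here are toy placeholders.

class MoralCore:
    """Hard-wired rules analogous to Asimov's laws of robotics."""
    FORBIDDEN = {"harm_human"}

    def permits(self, action: str) -> bool:
        return action not in self.FORBIDDEN


class MetaMoralImperative:
    """Stand-in for a universalizability test in the spirit of Kant's
    categorical imperative (here reduced to a trivial string check)."""
    def permits(self, action: str) -> bool:
        return not action.startswith("deceive")


class EthicoLegalCodex:
    """Extensible set of normative procedures (laws, norms, customs)."""
    def __init__(self):
        self.rules = [lambda a: a != "steal"]  # rules can be added at runtime

    def permits(self, action: str) -> bool:
        return all(rule(action) for rule in self.rules)


class MythoHistoricalKB:
    """Grounds 'possible states of the world' in narrative corpora.
    Placeholder: accepts everything, since corpus lookup is out of scope."""
    def plausible(self, action: str) -> bool:
        return True


def decide(action, core, imperative, codex, kb) -> bool:
    """Final output as the mutual interaction of the four components:
    every component must independently permit the action."""
    return (core.permits(action)
            and imperative.permits(action)
            and codex.permits(action)
            and kb.plausible(action))


components = (MoralCore(), MetaMoralImperative(),
              EthicoLegalCodex(), MythoHistoricalKB())
print(decide("fetch_coffee", *components))  # True
print(decide("harm_human", *components))    # False
```

The unanimity rule is only one of many conceivable arbitration schemes; the paper itself speaks of “mutual interaction”, which could equally be realised as weighted voting or a priority ordering among the components.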