AI systems are already augmenting human capabilities to shape the world. They may also bring about the replacement of human conscience in large areas of life. AI systems can be designed to leave moral control of the processes they support (intentionality, intelligence, freedom) in human hands, to obstruct or diminish that control, or even to prevent it altogether, replacing human morality with 'solutions' pre-packaged or developed by the 'intelligent' machine itself.
Nowadays, Artificial Intelligence (AI) systems are increasingly used in many applications and receive growing attention from public and private organizations. Practitioners must ensure that AI systems are governable, open, transparent and understandable, able to work effectively with people, and consistent with human values and aspirations. However, there is still little research on moral control in AI systems. The purpose of this article is to offer a comprehensive mapping of the logical technologies underlying AI systems, whether they assist, guide or directly adopt decisions, always considering the moral 'locus of control' in terms of intentionality, intelligence and free will.
Through a literature review and reflection process, we explore questions related to four aspects of AI technologies: (1) problem definition in the use of AI for complex issues; (2) the influence of data input and data quality; (3) the extent to which human intentionality, intelligence and free will are allowed or required to intervene in the various types of processes and models; (4) AI systems in decision support and decision autonomy.
This article contributes to the field of Ethics and Artificial Intelligence by providing a comprehensive discussion to help developers and researchers understand how and under what circumstances the 'human subject' may, totally or partially, lose moral control and ownership over AI technologies.
Experience level
Intermediate
Intended Audience
All