AI: Ontology, Ethics, Aesthetics
3rd Posthuman Studies Workshop
Organized by the History and Humanities Department of John Cabot University
Organizers: Brunella Antomarini, Francesco Lapenta, Stefan Lorenz Sorgner
SEPTEMBER 21, 10 AM - 6 PM
JOHN CABOT UNIVERSITY,
Via della Lungara 233
Towards General AI? The next challenge
In the humanistic tradition, it was widely held that only humans count as persons, and that personhood grounded a hierarchy and a unique moral status of humans over non-human species. Many believe that after Darwin this dichotomy is no longer plausible. All entities that possess self-consciousness, cognitive abilities, agency, and the capacity to suffer seem to deserve special moral consideration. Yet entities differ in their capacity for suffering, depending on the qualities they possess in terms of consciousness or sentience. The decisive questions are these: Is sentience necessary for personhood? There are humans who cannot feel physiological pain; should they not count as persons? Nor might cognition depend on consciousness, as there are indications of the possibility of non-conscious cognition. Or, vice versa, cognition can also give rise to a kind of cognitive suffering, which AIs equipped with sensors (embodied AIs) could also undergo.
These ethical considerations form the background against which a vibrant discussion is emerging about the moral status of non-organic entities such as AIs and robots, and about the moral and ethical responsibilities of humans in relation to their various possible developments. Contemporary and possible future evolutions in AI seem to challenge the organic mind/brain basis of cognition, self-consciousness, and suffering, as well as the definition of cognitive abilities such as intelligence and creativity.
One issue is whether there is some structure common to organic and artificial cognition, or whether AI 'creativity' should be envisaged as the next evolutionary step: an emergence that cannot be broken down into simpler elements or into anything comparable to organic creativity, or instead an exclusive function of organic chemistry (the flesh/metal or wet/dry opposition). After the failure of the first cybernetics to use AI as a model of the brain, and after the general systems theories that developed from second-order cybernetics, there may now be a new attempt to consider the character of the relationship between organic cognition (probable inference) and artificial cognition (big data).
Finally, whether one assumes that AIs or robots are meant to reproduce human abilities, creativity, and appearance or to evolve on their own; to develop human-like cognition, morals, and the ability to suffer or to redefine their biological and human definitions, a question persists about the moral agency and responsibility of humanity in the development of these technologies, and about their effects on the environment in which they will eventually co-exist. The moral dilemma of these possible futures turns on the human choices that will lead to a more or less desirable outcome: whether such a future can be defined at all, whether it is possible to ethically guide and control these developments, or whether, in a somewhat uncontrollable and competitive Darwinian evolution, like nature's own amoral evolution, they will follow their own unpredictable path and redefine consciousness, creativity, intelligence, and our hegemonic human moral ambitions.
10-11.30 am: Section I – The Systemic Relationship between Brain and AI
12-1 pm: Keynote speaker: Domenico Parisi (CNR, Rome)
1-2.30 pm: Lunch Break
2.30-4 pm: Section II – Privacy, Power, and Financial Potential; AI Governance; The Moral Status of Complex Algorithms; New Juridical Issues
4-4.30 pm: Coffee Break
4.30-6 pm: Section III – A New 'Techne' between Art and Science
For further information, or if you would like to participate in the workshop, please contact Prof. Dr. Stefan Lorenz Sorgner: firstname.lastname@example.org