Automation of every aspect of society is revolutionizing how humans interact with one another, with private organizations, and with public authorities. Assistive robots help people eat, dress, and walk; intelligent home appliances vacuum our floors, help us cook, and monitor our homes when we are on holiday; medical robots assist surgeons in operations and therapy; and sex robots may help disabled populations enjoy pleasures from which their condition has deprived them. These machines are just a few examples of devices that increasingly interact with humans in private, professional, and public settings. Algorithms also determine whether people get a loan, go to college, or are deemed a risk to society.
As one can imagine, introducing these technologies does not entail mere instrumental changes that improve resource efficiency, restrain expenditure, or make our lives easier. On the contrary, these changes are part of complex social processes that increasingly involve industry and inevitably raise fundamental legal questions relating to safety, autonomy, privacy, responsibility, discrimination, and dignity, to which the answers are yet to be formulated. The growing use of data processing to support subsequent decision-making, for instance, creates new vulnerabilities such as discrimination against certain groups. Moreover, our laws focus largely on physical safety, whereas we interact with machines in many other ways, and the continuous use of these machines may cause harm that humans cannot necessarily correct or oversee. These processes raise questions about legitimacy, fairness, accountability, transparency, and empowerment, affecting the balance of power between citizens, the government, and organizations. Failing to address these vulnerabilities may impede adequate legal protection and erode human dignity in an increasingly automated society.
To avoid the potential adverse consequences that AI, robots, and machines may have for society, we need to ensure that the future of AI is for, by, and of the people. This course will address and critically assess how the law regulates human-machine interactions, with a particular focus on the automation of society. We begin the course with concrete examples of this automation, including healthcare, farming, sex, and industrial robots. After setting the scene, we explore why societies around the globe view technology as a promise to increase productivity and resource efficiency and as an excellent tool to restrain expenditure, even though certain domains may not respond equally to the same parameters.
During the course, we will focus on the benefits as well as the particular challenges surrounding the deployment of AI, robots, and machines, including problems concerning transparency and explanation; potential discrimination scenarios and the exacerbation of existing biases; the construction of responsibility in highly automated environments; and the blurring of well-established concepts such as safety in the context of machines, AI, and interconnected products. We will explore solutions, learn how Europe aims to regulate these new interactions, and learn how to assess the risk posed by these technologies via impact assessments. We will close the course by reflecting on the long-term consequences of human-machine interactions, the added value AI and robots bring to society, and how the law should balance innovation and the protection of user rights in an increasingly automated society.
Overarching learning objectives
The course Law and Human-Machine Interaction has five main objectives:
1. Learn about emerging AI and robot technologies, applications, and benefits.
2. Understand the main challenges that result from human-machine interactions from legal and regulatory perspectives.
3. Map the regulatory initiatives revolving around AI in Europe and the world.
4. Learn methodologies to assess the legal and ethical risks posed by AI and machines.
5. Think critically about the deployment of AI in society.
Upon successful completion of this course, students will be able to weigh and evaluate the development of specific AI applications, identify where potential regulatory and ethical challenges might arise in their use or deployment, and apply methodologies to advise on designing AI technologies in a way that mitigates such issues. In particular, students will:
Understand the basic architecture and operation of AI and the directions in which it will develop in the near future.
Identify and recognize the potential benefits and drawbacks of the deployment of AI.
Learn and understand the fundamental regulatory issues that have emerged around the deployment of AI and the relevance of design choices in AI architecture.
Understand the complexity of the regulatory and policy landscape to address the legal and regulatory issues arising from the use of AI.
Learn practical methodologies to evaluate and mitigate the potential risks arising from the implementation of AI.
Academic skills and attitude
Students will further develop their writing, argumentation, and presentation skills by actively participating in classroom debates, systematically researching relevant legal questions, defending statements both orally and in writing, and presenting their findings.
Attendance of 80% of the scheduled course lectures is mandatory.
Group work (25%)
Final exam (65%)
Attendance and active participation in class (10%)
If students fail this course (weighted final grade < 5.5), the grade they received for the assignment will no longer count towards the retake. There will be no new assignment for the retake; the points obtained on the retake exam determine 100% of the final grade.
Ms Patricia Garcia Fernandez
Telephone number: 0031- 71 527 4228
“Disclaimer: This course has been updated to the best of our knowledge at the time of publishing. Due to the Covid-19 pandemic and the fluctuating changes in lockdown regulations, all information contained within this course description is subject to change up to 1 September 2021.
Due to the uncertainty surrounding the Covid-19 virus after 1 September 2021, changes to the course description may only be made in cases of strict necessity and only in circumstances where the interests of students are not adversely affected. Should any change be needed during the course, students will be informed in a timely manner and will not be prejudiced by it. Modifications after 1 September 2021 may only be made with the approval and consent of the Faculty Board.”