Research Papers

Document Type

Conference Paper

Abstract

Deepfakes - synthetic videos generated by machine learning models - are becoming increasingly sophisticated. While they have several positive use cases, their potential for harm is also high. Deepfake production involves input from multiple engineers, making it challenging to assign individual responsibility for their creation. The separation between engineers and consumers may also contribute to a lack of empathy on the part of the former towards the latter. At present, engineering ethics education appears inadequate to address these issues. Indeed, the ethics of artificial intelligence is often taught as a stand-alone course or as a separate module at the end of a course. This approach does not afford students time to engage critically with the technology and consider its possible harmful effects on users. Thus, this experimental study aims to investigate the effects of exposure to deepfakes on engineering students’ moral sensitivity and reasoning. First, students are instructed in how to evaluate the technical proficiency of deepfakes and in the ethical issues associated with them. Then, they watch three videos: an authentic video and two deepfake videos featuring the same person. While they watch these videos, data on their attentional (eye tracking) and emotional (self-reports, facial emotion recognition) engagement are collected. Finally, they are interviewed using a protocol modelled on Kohlberg’s ‘Moral Judgement Interview’. The findings can have significant implications for how technology-specific ethics can be taught to engineers, while providing them space to engage and empathise with potential stakeholders as part of their decision-making process.

DOI

https://doi.org/10.21427/EAJR-WE79

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
