Over the past decade, demand for trained data analysts and machine learning specialists has grown exponentially.
Because educating students who both excel at technical skills and hold strong moral values is core to Ignatian pedagogy, my mission in designing a data science course has been to apply cura personalis to today's ever-growing emphasis on automation. The industry's emphasis on automation through machine learning can lead to ethical fading, in which the users and writers of software tools do not feel personally responsible for bad decisions made by machine learning, such as the discrimination found in prediction tools even at large companies such as Facebook, Amazon, and Google. Algorithm-based discrimination can cause massive harm, and students need to learn to use the powerful machine learning tools they gain from courses like mine responsibly.
The need for data science skills has extended to business fields adjacent to computer science, such as marketing analytics, accounting, finance, and operations. The rush to train enough students to fill demand in the job market has produced many technical courses that cover principles of data analysis and prediction tools but neglect the risks of applying machine learning and the ethics of using tools that can inadvertently discriminate. Under a USAID-sponsored research grant at MIT, I worked with an interdisciplinary group from engineering, computer science, and medicine to study the effects of low diversity in training data when machine learning is applied in developing-country contexts. With USAID I conducted two field trips, one to Ghana and one to India, working with local partners to understand how machine learning tools are regulated outside the US when applied to outcomes of socioeconomic importance such as hiring, admissions, and housing, and how to ensure fair outcomes in a world where we celebrate diversity. Our research has been published by MIT (Awwad et al., 2020), MIS Quarterly (Teodorescu et al., 2021), and AIS (Tarafdar et al., 2020), with additional work in revise and resubmit (Morse et al., 2020), and highlights of it have been incorporated into my courses (Fletcher et al., 2020) together with other readings from management and computer science.
My teaching approach combines practical programming exercises that test for fairness (Cropanzano et al., 2003), case examples, and theoretical concepts grounded in cutting-edge management and computer science research. As more and more multinational firms employ analytics, it becomes critical to ensure that firms do not discriminate against their users, their employees, or any of their stakeholders.
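To give a flavor of these exercises, the sketch below is a minimal fairness check of the kind students write; the data and column names are invented for illustration rather than taken from the course materials. It computes a demographic parity gap, i.e., the difference in approval rates across groups, for a set of automated lending decisions.

    import pandas as pd

    # Hypothetical lending decisions; groups and outcomes are invented.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Demographic parity check: compare approval rates across groups.
    rates = decisions.groupby("group")["approved"].mean()
    gap = rates.max() - rates.min()
    print(rates)                      # A: 0.75, B: 0.25
    print(f"Parity gap: {gap:.2f}")   # 0.50, a gap this large warrants review

A large gap does not prove discrimination on its own, but it is exactly the kind of signal students learn to investigate before deploying a model.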
Algorithmic discrimination can be difficult to detect even for the best-trained programmers and managers, and guarding against it requires thoughtful organizational processes. The course therefore includes a framework for ethical machine learning and cases of applying machine learning in developing-country contexts.
Students face ethical dilemmas such as “Should I make my algorithm perform better overall, or should I make it fairer across all ethnic groups in my data?” or “Should I apply machine learning to this life-changing setting for my users?” (cases include medical decisions, hiring, admissions, lending, and housing).
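The first dilemma can be made concrete with a few lines of code. In the sketch below (again with invented labels and predictions), a classifier reaches a respectable overall accuracy while performing no better than a coin flip for one group, which is precisely the tension students must reason about.

    import pandas as pd

    # Invented ground truth and predictions for two groups.
    results = pd.DataFrame({
        "group":  ["A"] * 4 + ["B"] * 4,
        "y_true": [1, 1, 0, 0, 1, 1, 0, 0],
        "y_pred": [1, 1, 0, 0, 0, 0, 0, 0],
    })

    # Aggregate accuracy hides the disparity that per-group accuracy reveals.
    results["correct"] = results["y_true"] == results["y_pred"]
    print("Overall accuracy:", results["correct"].mean())  # 0.75
    print(results.groupby("group")["correct"].mean())      # A: 1.0, B: 0.5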
One core ethical issue revealed by the increasing delegation of responsibilities from human decision-makers to machines is ethical fading (Tenbrunsel & Messick, 2004). In an automated credit-lending platform that may have a hidden bug or other algorithmic flaw, ethical fading may occur when managers simply do not “see” ethics as part of the relevant decision criteria or as part of their responsibilities. For example, an individual user may be unfairly denied credit due to a flaw in the program, while the absence of a human supervisor leaves that user with no recourse. This creates a dangerous feedback loop: human supervisors of ML systems and high-level managers become less and less interested in verifying fair outcomes because there is no direct reward for such checks, while the machine learning tool becomes more and more powerful, arriving at extremes such as automated drones that kill with no oversight (UN Report, 2021) or facial recognition software that misidentifies someone, with no human element to prevent a computer error from sending a person to jail (Hill, 2020).
Ethicists have an opportunity to reverse this trend through research and through collaborations with computer scientists, lawyers, and regulators. As faculty, we must prepare the next generation of students to program with fairness and ethics in mind.
REFERENCES
Awwad, Y., Fletcher, R., Frey, D., Gandhi, A., Najafian, M., & Teodorescu, M. 2020. Exploring fairness in machine learning for international development. CITE MIT D-Lab, Massachusetts Institute of Technology.
Cropanzano, R., Goldman, B., & Folger, R. 2003. Deontic justice: The role of moral principles in workplace fairness. Journal of Organizational Behavior: 1019-1024.
Fletcher, R., Frey, D., Teodorescu, M., Gandhi, A., & Nakeshimana, A. 2020. RES.EC-001 Exploring Fairness in Machine Learning for International Development, Spring 2020. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
Hill, K. 2020. Another arrest, and jail time, due to a bad facial recognition match. New York Times, December 29, 2020. https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html
Morse, L., Teodorescu, M., Awwad, Y., & Kane, G. 2020. A framework for fairer machine learning in organizations. Academy of Management Proceedings, 2020(1): 16889.
Tarafdar, M., Teodorescu, M., Tanriverdi, H., Robert, L. P., & Morse, L. 2020. Seeking ethical use of AI algorithms: Challenges and mitigations. Proceedings of the International Conference on Information Systems (ICIS), December 2020.
Tenbrunsel, A. E., & Messick, D. M. 2004. Ethical fading: The role of self-deception in unethical behavior. Social Justice Research, 17(2): 223-236.
Teodorescu, M., Morse, L., Awwad, Y., & Kane, G. 2021. Failures of fairness in automation require a deeper understanding of human–ML augmentation. MIS Quarterly, September 2021.
United Nations Security Council. 2021. Report S/2021/229. https://undocs.org/S/2021/229
Experience level
Intermediate
Intended Audience
Faculty