LSU to Embed Ethics in the Development of New Technologies, Including AI
April 28, 2022
Deborah Goldgaber, director of the LSU Ethics Institute and associate professor in the Department of Philosophy & Religious Studies, has received a $103,900 departmental enhancement grant from the Louisiana Board of Regents to begin to reshape LSU's science, technology, engineering, and math (STEM) curriculum around ethics and human values.

Deborah Goldgaber is the director of the LSU Ethics Institute and will lead the work to embed ethics in LSU STEM education and technology development.
Baton Rouge – While many STEM majors are required to take at least one standalone ethics course in addition to their "regular" science and engineering classes, LSU students in multiple disciplines will soon see ethics as more integral to STEM. By partnering with 10 faculty across LSU's main campus, from library science to engineering, Goldgaber will build open-source teaching modules to increase moral literacy ("knowing how to talk about values") and encourage the development of more human-centered technology and design.
The LSU program will be modeled on Harvard's EthiCS program, where the last two letters stand for computer science. Embedded ethics education runs counter to the prevalent tendency to treat research objectives and ethics objectives as distinct or opposed, with ethics becoming "someone else's" responsibility.
"If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them," Goldgaber said. "Leaders don't just do what they're told; they make decisions with vision."
The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls "ethics emergency services" when emerging capabilities have unintended consequences for specific groups of people.
"We can no longer rely on the traditional division of labor between STEM and the humanities, where it's up to philosophers to worry about ethics," Goldgaber said. "Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it's not always right to 'stay in your lane' or 'just do your job.'"
"Leaders don't just do what they're told; they make decisions with vision."
Deborah Goldgaber
Artificial intelligence, or AI, is increasingly helping humans make decisions, find answers, and discover solutions faster and better than ever before. But because AI becomes intelligent by learning from established patterns (seen or unseen) in available data, it can also inherit existing prejudices and biases. Some data, such as a person's ZIP code, can inadvertently become a proxy for other data, such as a person's race.
"I think there's a fear that technology could get beyond our control, which is alienating," Goldgaber said. "Especially in such a massive terrain as AI, which already is supplementing, complementing, and taking over areas of human decision-making."
"For us to benefit as much as possible from emerging technologies, rights and human values have to shape technology development from the beginning," Goldgaber continued. "They cannot be an afterthought."
Goldgaber gives the example of a mortgage application to illustrate the so-called "black box problem" in AI and how technologies developed to increase efficiency can inadvertently undermine equality and fairness, including our ability to judge what's just or unjust for ourselves. Risk assessment based on large amounts of data has become one of the core applications of AI.
"You get a report that you're not accepted and don't get the loan, but you don't know why, or the basis on which the decision was made," Goldgaber said. "If you can't know and evaluate for yourself the kinds of reasons or logic used, it undercuts your rights and your autonomy. This goes back to the foundation of our normative culture, where we believe we have a right to know the reasons behind the decisions that affect our lives."
Western philosophy is based on the Enlightenment idea that humans have worth because they can reason and make decisions for themselves, and because they have these capacities, they also should have the right to choose and act autonomously. AI, meanwhile, often provides answers without any explanation of the reasoning behind the output.
"LSU students must know that what they do matters to the world we live in."
Deborah Goldgaber
Her collaborative curriculum development at LSU will span four areas: AI and data science; research ethics and integrity; bioethics; and human-centered design. Over the last year, she has also been collaborating with Hartmut Kaiser, senior research scientist at LSU's Center for Computation & Technology, on an AI-focused speaker series where ethics is one of the recurring themes.
"Our goal is to place LSU at the forefront of emerging efforts of the world's greatest universities and research institutions to embed ethics in all facets of knowledge production," Goldgaber said. "LSU students must know that what they do matters to the world we live in."
The LSU Ethics Institute was founded in 2018 with generous seed funding from James Maurin. It is the center for research, teaching, and training in the domain of ethics at LSU.