“Whose Ethics?”

Reid Blackman, Ph.D.

Product development teams, especially those working on machine learning models, have a new set of responsibilities they may not have bargained for: thinking about the ethical impacts of their creations. The task is not easy, and one question that many people ask is: “Whose ethics?”

The rationale that motivates the question is entirely reasonable. It’s also misguided. Let’s see why.

When people ask “whose ethics?” they’re recognizing that different people and different groups of people (e.g., ethnicities, cultures) have different ethical beliefs. Some people think abortion is morally permissible; others think it impermissible. Some think immigration and diversity are good for ethical and economic reasons, while others think they are disastrous on both counts. Some think statistical parity in the distribution of goods and services across various subpopulations is sufficient for securing the fair treatment of those groups, and some vigorously deny this.

There is also a touch of humility behind the question. “We, the product team, have our ethical values, but is it right to impose our values on others? Shouldn’t we defer to the values of the people who will use our product? Isn’t it their values that matter and not ours?”

In one way, this line of thought is right on track. In another, it misses a larger point.

What it gets right is that product users and those who will be impacted by the product (note: these are distinct groups, even if their Venn diagram overlaps) should, ideally, be consulted about how the product may or may not comport with their values. Stakeholder feedback is important for your ethical ambitions. But learning about the values of your stakeholders is far from sufficient for knowing what to do, and it doesn’t settle the question, “whose ethics?”

First, your stakeholders are not a monolithic group from which you can programmatically derive an ethical conclusion. Your various stakeholders have varying ethical beliefs, and your product team will have to think through those ethical conflicts just as it does when it considers stakeholder input on more “functional” features of the product. How to weigh the conflicting values and interests of your stakeholders in your design is an ethical decision you can’t help but make.

Second, and relatedly, when you ask “whose ethics?” and decide to defer to the ethical values of your stakeholders, you’ve made an ethical decision that it’s ethically acceptable to defer in this case. Suppose, for instance, your stakeholders are mostly white supremacists. If they commission the product and you say, “Well, we need to defer to the ethical values of those for whom we’re creating,” then you’ve decided it’s ethically permissible to defer to the values of white supremacists. That, however, is clearly a highly ethically contentious decision.

Ethical decisions are not something your product team can ignore or foist upon someone else. Being a product designer, especially in the age of machine learning, means, in part, being someone whose ethical values will necessarily play a role in how the product is designed. I’ve discussed elsewhere how to start engaging in responsible ethical reflection, and there is more to be said. But here as elsewhere, practice, practice, practice.
