Businesses have a moral duty to explain how algorithms make decisions that affect people


Credit: Unsplash / CC0 Public Domain

Increasingly, businesses rely on algorithms that use data provided by users to make decisions that affect people. For example, Amazon, Google and Facebook use algorithms to tailor what users see, and Uber and Lyft use them to match passengers with drivers and to set prices. Do users, customers, employees and others have the right to know how companies that use algorithms reach their decisions? In a new analysis, researchers explore the moral and ethical foundations of such a right. They conclude that an explanation of this kind is a moral right, and they outline how companies can provide one.

The analysis, by Carnegie Mellon University researchers, appears in Business Ethics Quarterly, a publication of the Society for Business Ethics.

“In most cases, companies offer no explanation of how they gain access to users’ profiles, where they collect the data, and with whom they trade it,” explains Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon University’s Tepper School of Business, who co-authored the analysis. “It is not only fairness that is at stake; it is also trust.”

In response to the rise of autonomous decision-making algorithms and their reliance on data provided by users, a growing number of computer scientists and government bodies have called for transparency under the broader concept of algorithmic accountability. For example, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR) in 2016, part of which governs the use of automated algorithmic decision systems. The GDPR, which took effect in 2018, affects businesses that process the personally identifiable information of EU residents.

But the GDPR is unclear about whether it includes a right to an explanation of how automated algorithmic decision systems reach their conclusions. In this analysis, the authors develop a moral argument that can serve as the foundation for a legally recognized version of this right.

In the digital age, the authors write, some argue that informed consent — obtaining prior permission to use information with full knowledge of the possible consequences — is no longer possible, because many digital transactions are ongoing. Instead, the authors reconceptualize informed consent as an assurance of trust for incompletely specified algorithmic processes.

Obtaining informed consent, particularly when companies collect and process personal data, is ethically required unless overridden for specific, acceptable reasons, the authors argue. Furthermore, informed consent in the context of algorithmic decision-making, especially for uses of data that are unrelated to or unanticipated by users, is incomplete without an assurance of trust.

In this context, the authors conclude, companies have an ethical duty to provide an explanation not only before, but also after, automated decision-making — an explanation that can address both how the system functions and the justification for a specific decision.

The authors also explain how companies whose businesses are based on algorithms can explain their use of them in a way that engages customers while protecting trade secrets. This is an important question for many modern start-ups, involving decisions such as how much code should be open source and how broad and open the application programming interface should be.

Many companies are already facing these challenges, the authors note. Some may choose to employ “data interpreters,” who bridge the work of companies’ data scientists and the people affected by the companies’ decisions.

“Will requiring algorithms to be interpretable or explainable hinder businesses’ performance, or lead to better results?” asks Bryan R. Routledge, Associate Professor of Finance at Carnegie Mellon’s Tepper School of Business, who co-authored the analysis. “That is something we will see play out in the near future, much like the transparency struggle between Apple and Facebook. But more important, the right to an explanation is a moral obligation apart from its bottom-line effects.”


More information:
Tae Wan Kim et al, Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach, Business Ethics Quarterly (2021). DOI: 10.1017/beq.2021.3

Provided by Carnegie Mellon University

Citation: Businesses have a moral duty to explain how algorithms make decisions that affect people (2021, May 14), retrieved 14 May 2021 from -duty-algorithms-decisions.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
