
The Chronicles of AI Ethics: The Man, The Machine, And The Black Box


Today, machine learning and artificial intelligence systems, trained on data, have become so effective that many of the largest and most well-respected companies in the world use them almost exclusively to make mission-critical business decisions. The outcome of a loan, insurance or job application, or the detection of fraudulent activity, is now determined by processes with no human involvement whatsoever.

In a past life, I worked on machine learning infrastructure at Uber. From estimating ETAs to dynamic pricing and even matching riders with drivers, Uber relies on machine learning and artificial intelligence to enhance customer happiness and increase driver satisfaction. Frankly, without machine learning, I question whether Uber would exist as we know it today.

For data-driven businesses, there is no doubt that machine learning and artificial intelligence are enduring technologies that are now table stakes in business operations, not differentiating factors. 

While machine learning models aim to mirror and predict real life as closely as possible, they are not without their challenges. Household-name brands like Amazon, Apple, Facebook and Google have been accused of algorithmic bias, with effects on society at large.

For instance, Apple famously ran into an AI bias storm when it introduced the Apple Card and users noticed that it was offering smaller lines of credit to women than to men. 

In more extreme and troubling cases, judicial systems in the U.S. are using AI systems to inform prison sentencing and parole terms despite the fact that these systems are built on historically biased crime data. This amplifies and perpetuates embedded systemic biases and calls into question algorithmic fairness in the criminal justice system.

In the wake of the Apple Card controversy, Apple’s issuing partner, Goldman Sachs, defended its credit limit decisions by noting that its algorithm had been vetted by a third party and that gender was not used as an input or determining factor.

While applicants were not asked for gender when applying for the Apple Card, women were nonetheless receiving smaller credit limits, underscoring a troubling truth: machine learning systems can develop biases even when a protected class variable is absent.

Data science and AI/ML teams today often avoid matching protected class information back to model data, preserving plausible deniability: if I didn’t use the data, the machine can’t be making decisions based on it, right? In reality, many variables can be correlated with gender, race or other aspects of identity and, in turn, lead to decision-making that does not offer equal opportunity to all people.
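To make the proxy problem concrete, here is a minimal sketch on synthetic data. Every name in it (shopping_category, income, the biased labels) is hypothetical; the point is only that a logistic regression that never receives a gender column can still produce gender-skewed approvals, because a correlated feature stands in for the excluded attribute.

```python
# Minimal sketch: a model with no access to gender still discriminates by
# gender, because a correlated proxy feature reconstructs it. All names and
# data here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)                        # protected attribute, held out of the model
shopping_category = gender + rng.normal(0.0, 0.3, n)  # strong proxy for gender
income = rng.normal(50.0, 10.0, n)                    # legitimate signal

# Historical approvals were themselves biased against group 1.
approved = ((income > 50) & (gender == 0)).astype(int)

X = np.column_stack([shopping_category, income])      # note: no gender column
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
print("approval rate, group 0:", preds[gender == 0].mean())
print("approval rate, group 1:", preds[gender == 1].mean())
# The gap persists even though gender was never an input.
```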

The Imbalance of Responsibility

We are living in an era where major technological advances are imperfectly regulated and effectively shielded from social responsibility, while their users face major repercussions. 

We come face to face with what M.C. Elish coined the “moral crumple zone.” This zone represents the diffusion of responsibility onto the user instead of the system as a whole. Just as a car’s crumple zone absorbs the brunt of the impact in a head-on collision to protect the occupants, the user of the technology absorbs the blame for the mistakes of the ML system. For example: as it stands, if a car with self-driving capabilities fails to recognize a stop sign, the driver is responsible for any resulting mistakes and damages, not those who trained the models and produced the car.

To make matters worse, the users of most technology very rarely have a full understanding of how the technology works and its broader impact on society. It is unfair to expect users to make the right risk management decisions with minimal understanding of how these systems even work.

These effects are magnified when talking about users in underrepresented and disadvantaged communities. People from these groups have a much harder time managing unforeseen risk and defending themselves from potentially damaging outcomes. This is especially damaging if an AI system makes decisions with limited data on these populations - which is why topics like facial recognition technology for law enforcement are particularly contentious. Turning a blind eye is no longer an option given the social stakes.

Those who build these complex models must consider their ethical responsibilities in doing so, because their decisions have lasting structural consequences that do not resolve themselves.

Rise Up Or Shut Up: Taking Accountability

We live in a society that manages its own risks by establishing ethical frameworks, creating acceptable codes of conduct and, in the end, codifying these beliefs in legislation. When it comes to ML systems, we are far behind. We are only beginning to talk about the ethical foundations of ML, and as a result our society will pay the price for our slow action.

We must work harder to understand how machine learning models are making their decisions and how we can improve this decision making to avoid societal catastrophe.

So, what steps do we need to take now to start tackling the problem? 

STEP 1: Admit that proper ethical validation is mission-critical to the success of our rapidly growing technology. 

The first step in exposing and improving how AI/ML affects us as a society is to better understand complex models and validate ethical practices. It is no longer okay to avoid the problem and claim ignorance.

STEP 2: Make protected class data available to modelers

Contrary to current practices, which exclude protected class data from models to allow for plausible deniability in the case of biased outcomes, protected class data should in fact be available to modelers and included in the data sets that inform ML/AI models. The ability to test against this data puts the onus on modelers to make certain their outputs aren’t biased.
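As one illustration of what that testing could look like, the sketch below joins model outputs with protected class data after prediction and computes the “four-fifths” disparate impact ratio, a common screening heuristic rather than a legal standard. The column names and data are hypothetical.

```python
# Sketch of auditing model outputs against protected class data. The
# protected attribute is used only for testing, never as a model input.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's.
    Values below ~0.8 are a common red flag (the four-fifths rule)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical predictions joined with protected attributes for auditing.
audit = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   1,   1,   0,   1,   1,   1],
})

ratio = disparate_impact(audit, "gender", "approved")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 here -> worth investigating
```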

STEP 3: Break down barriers between model builders and the protected class data 

Bias problems and analysis are not only the purview of model validation teams. Putting a wall between teams and data only diffuses responsibility. The teams building the models need this responsibility, and they need the data to make those decisions.

STEP 4: Employ emerging technologies such as ML observability that enable accountability

You can’t change what you don’t measure. Businesses and organizations need to proactively seek tools and solutions that help them better monitor, troubleshoot, and explain what their technology is doing, and subsequently uncover ways to improve the systems they’ve built.
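One concrete observability primitive is drift detection. The sketch below computes the population stability index (PSI), a standard metric for how far a live score distribution has shifted from its training-time baseline; the bin count and alert threshold used here are common conventions, not fixed rules.

```python
# Sketch: population stability index (PSI) for detecting distribution drift
# between training-time scores and live production scores.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample (higher = more drift)."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # catch out-of-range live values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid division by / log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_scores = rng.normal(0.5, 0.1, 5_000)  # model scores at training time
live_scores = rng.normal(0.6, 0.1, 5_000)   # live traffic has quietly shifted

print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.25 is a common alarm threshold
```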

Ultimately, the problem of the black box is growing as AI/ML technologies are becoming more advanced, yet we have little idea of how most of these systems truly work. As we give our technology more and more responsibility, the importance of making ethically charged decisions in our model building is amplified exponentially. It all boils down to really understanding our creations. If we don’t know what is happening in the black box, we can’t fix its mistakes to make a better model and a better world.
