Why artificial intelligence needs ethics lessons: avoiding bias with tech governance

talle. · Sep 6, 2021 · 5 min read

It is no news that the world is going through tremendous transformations: technological development keeps accelerating and reaches more people every day. The global pandemic also contributed significantly to this process by increasing digital consumption across numerous areas.

This raises some questions. Is the way emerging technologies are being implemented fair and functional?

Are the main results being positive for society?

What can we learn from what has been done in this sense recently?

Let’s do a simple experiment. If you search for “biometrics” on Google, some of the results will probably relate to data privacy and bias, indicating the need for improvement in this sector, and this is not by chance.

Witnessing AI Bias

In recent years, several biased AI systems have come to light. Two reports, one from the National Institute of Standards and Technology (NIST) [Ref.] and another from researchers at MIT and Microsoft [Ref.], confirmed patterns of gender and ethnic bias in several face recognition systems.

Gender and ethnic classification algorithms showed error rates of just 1% for white men but almost 35% for dark-skinned women. Demographic groups such as women, Asians, Native Americans, and people of color are routinely misclassified at far higher rates, a form of extreme discrimination.

NIST found that Asians, African Americans, and American Indians generally had higher false-positive error rates than white individuals. Women had higher false-positive rates than men, and children and the elderly had higher false-positive rates than middle-aged adults.
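Reports like these quantify bias by disaggregating error metrics per demographic group rather than reporting a single aggregate accuracy number. As a minimal, hypothetical sketch (the group names and records below are invented for illustration and are not NIST’s data or methodology), here is how per-group false-positive rates can be computed from audit records:

```python
from collections import defaultdict

# Hypothetical audit records: (group, ground_truth_match, predicted_match).
# In a real audit these would come from a labeled benchmark; the values
# below are invented purely for illustration.
records = [
    ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_a", True,  True),  ("group_b", True,  True),
]

false_positives = defaultdict(int)  # predicted a match for a true non-match
negatives = defaultdict(int)        # all ground-truth non-match records

for group, is_match, predicted in records:
    if not is_match:
        negatives[group] += 1
        if predicted:
            false_positives[group] += 1

# False-positive rate per group: FP / (FP + TN), i.e. FP / negatives.
for group in sorted(negatives):
    print(f"{group}: FPR = {false_positives[group] / negatives[group]:.2%}")
```

Comparing these per-group rates, instead of one overall score, is what reveals the kind of disparities NIST describes.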

Apple was also accused of sexism when the company’s credit card appeared to offer men more credit than women, even when the women had better credit scores.
Steve Wozniak, Apple’s co-founder, tweeted a complaint about the Apple Card in November 2019:

“The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for correction, though. It’s big tech in 2019.” [Link to the screenshot]

It’s important to note that there are benefits and risks associated with any new technology. Outlining what those risks are allows them to be assessed and mitigated. As artificial intelligence continues to spread its influence globally, social change leaders and machine learning developers must know if and when the technology is biased.

Where is the real issue?

Tech experts point out where AI gender and ethnic biases come from: AI is a human creation, and humans make mistakes. Surprised?

Humans generate, collect, and label the data that goes into datasets. Humans determine what variables, datasets, and rules the algorithms learn from to make predictions. Both of these stages can introduce biases that become embedded in AI systems.

Data points are like “snapshots” of the world we live in, which is still neither equitable nor just. In terms of wealth and income, women are underrepresented in high-level, highly paid positions and overrepresented in low-paying jobs; in short, women earn less than men, on average, in all industries (Ref.). Debt also significantly impacts wealth: women comprise 56% of college students but hold nearly two-thirds of student loan debt (Ref.). Similarly, people of color, on average, have less economic security, and racial wealth gaps coincide with the extreme concentration of global wealth (Ref.).

Consequently, AI may reproduce the same inequitable access to credit along racial and gender lines. If those issues aren’t taken into account when building AI algorithms, the software will probably maintain those socioeconomic gaps and preserve the status quo.
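A simple way to check whether a model reproduces such gaps is to compare outcomes across groups, often called a demographic parity check. The sketch below is hypothetical (the groups, counts, and threshold for concern are all invented) and illustrates only the measurement, not a complete fairness audit:

```python
# Hypothetical approval decisions from a credit model, broken down by a
# protected attribute. All numbers are invented for illustration.
approvals = {
    "group_a": {"approved": 80, "total": 100},
    "group_b": {"approved": 55, "total": 100},
}

rates = {g: d["approved"] / d["total"] for g, d in approvals.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")

# A large gap between groups is a signal to investigate the model and
# its training data; the 10% threshold here is an arbitrary example.
if gap > 0.10:
    print(f"warning: demographic parity gap of {gap:.0%}")
```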

Can we fix it?

The good news is that this can change. When algorithms commit injustices, especially when software is used for decisions that can negatively impact citizens’ lives, new legislation can (and must) be developed and enforced to protect people.

We can also improve the technology itself to avoid social bias in algorithms. One of the first steps in developing an algorithm is selecting the training datasets, and we can curate them to foster inclusion and prevent stereotypes and prejudice. The data used to train the algorithms should reflect the society we aim to build, not society as it is today.
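One common, concrete technique for that kind of curation is rebalancing the training set so underrepresented groups are not drowned out, for example by oversampling them. Here is a minimal sketch, assuming hypothetical group labels and invented counts; real pipelines would also weigh options such as collecting more data or reweighting samples:

```python
import random

random.seed(0)  # make the illustration reproducible

# Hypothetical training rows tagged with a demographic group; the counts
# are invented to show a strong imbalance toward group_a.
dataset = (
    [{"group": "group_a"} for _ in range(900)]
    + [{"group": "group_b"} for _ in range(100)]
)

# Bucket rows by demographic attribute.
by_group = {}
for row in dataset:
    by_group.setdefault(row["group"], []).append(row)

# Oversample each group up to the size of the largest one, so the model
# sees every group equally often during training.
target = max(len(rows) for rows in by_group.values())
balanced = []
for rows in by_group.values():
    balanced.extend(rows)
    balanced.extend(random.choices(rows, k=target - len(rows)))

print({g: sum(r["group"] == g for r in balanced) for g in by_group})
# -> {'group_a': 900, 'group_b': 900}
```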

Fixing the gaps in tech governance

For that to start happening, several initiatives can be put in play. For instance, advocating for AI literacy training among gender and race experts can help integrate their expertise into AI systems, so developers better understand the issues and the solutions needed to mitigate bias against different demographic groups.

Managers can also consider increasing the representation of diverse groups on the teams developing AI systems. Including minorities and experts with different skill sets, both technical and non-technical, can help us avoid further gaps in technology governance. For example, Women’s World Banking and Mujer Financiera use machine learning to support financial inclusion for women (Ref.).

Another perspective comes from Catherine D’Ignazio and Lauren Klein in the book Data Feminism, which presents a new way of thinking about data science and data ethics, offering “strategies for data scientists seeking to learn how feminism can help them work toward justice, and for feminists who want to focus their efforts on the growing field of data science.” (Ref.)

While big tech corporations are the main ones being unmasked and criticized as bias in their AI code is revealed, this represents a good market opportunity for smaller organizations that are genuinely concerned with practical ethics, improving technology to make the world a better place and taking a more human-centered approach when developing their solutions. Let’s hope for (and work toward) a future in which the fantastic innovations we are witnessing today improve dignity, justice, and equity in our global society.

By Ariel Kozlowski on Algorithms & Ethics for Talle’s Articles.

