AI Ethics: On the invisibility and reproduction of bias

The AI Now Institute recently published an ethics report looking at some of the ethical problems that still exist with AI, problems that tech companies are not addressing and that government agencies are not regulating.

The paper divides the ethical issues caused by the proliferation of AI into four broad categories: Labor and Automation, Bias and Inclusion, Rights and Liberties, and Ethics and Governance.

Here, I’m going to talk about the second of these issues, Bias and Inclusion, and why, even if you consider yourself part of a group that is normally included, this is still a problem for you.

What is bias?

There are two types of bias: implicit, and explicit. Explicit bias is when your turd of a condo neighbor tries to enlist your help in getting rid of “those damn negros” on the other side. Implicit bias is a lot more insidious.

Implicit bias is when someone holds explicitly egalitarian beliefs but is nonetheless influenced by unconscious attitudes about groups, which may cause them to act with bias despite all their intentions to do otherwise. For example, a recent study surveyed academic hiring departments where women were largely considered the best candidates for the job, but hiring committees held the implicit bias that women would not move for the job if they had a (male) partner, and so declined to offer them the position to avoid the risk that they would not accept. In some cases, committees even Facebook-stalked candidates to try to discern their relationship status if it was not disclosed.

Both explicit and implicit biases become an issue when they are embedded in the data sets that AI systems use and learn from. One of the key concerns raised in this area by the report is the use of AI in contexts such as healthcare. We know that inadequate research has been done on how diseases express themselves differently in marginalized groups such as women and racial minorities. The lack of data in these areas means that AI systems are being educated in an incomplete fashion, leading them to reproduce and proliferate that incompleteness. Given our proclivity to treat computers and AI as “more intelligent” or more authoritative than us sad little biased meat sacks of feelings, it is extremely problematic that bias is in fact seeping in. This is happening because, like us, AI systems are being educated in a context that is biased. In the same way that children internalize biased rhetoric about things such as gender, AIs are internalizing biased data and reproducing those biases going forward.
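
To make this concrete, here’s a minimal sketch of the mechanism, with entirely made-up data and numbers (my illustration, not anything from the report): a single model is trained on a dataset that under-represents one group whose condition also expresses differently, and it quietly learns only the well-represented group.

```python
# Hypothetical sketch: a model trained on data that under-represents one
# group ends up performing poorly for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Simulate one group whose symptom-to-diagnosis relationship is
    defined by `weights` (i.e. the disease "expresses" differently)."""
    X = rng.normal(size=(n, 2))                            # two symptom measurements
    y = (X @ weights + rng.normal(scale=0.5, size=n)) > 0  # true diagnosis
    return X, y.astype(int)

# The majority group dominates the training data (95% vs 5%), and the
# minority group's disease expression follows a different pattern.
X_maj, y_maj = make_group(9500, np.array([1.0, 1.0]))
X_min, y_min = make_group(500, np.array([1.0, -1.0]))

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

print("accuracy on majority group:", model.score(X_maj, y_maj))  # high
print("accuracy on minority group:", model.score(X_min, y_min))  # near chance
```

Nothing in that pipeline is malicious; the skew in the training data alone is enough to produce the skew in outcomes.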

Context, Context, Context

You probably want me to shut up about context at this point, because isn’t the point of AI that it can consider things in all contexts? Sure, maybe, but you’d also need to a) know what all those possible contexts are, and b) program them in. Like us, AI would need to be told that it is learning in a biased environment and then be given tools to combat that bias based on where it is being employed. Given that we don’t even know how to combat things like implicit bias well ourselves, this seems like a computing problem for many years down the line.

Regardless, there is still the issue of AIs that consider things entirely decontextualized. One of the examples given in the paper is an AI you are all probably familiar with: GPS software. The paper posits that a GPS AI might work like this: it gives you a good route, and to ensure that routes remain good, the software routes different individuals along different routes to prevent traffic congestion. However, what the AI does not know is that you are the partner of a pregnant person, that person is going into labor, and you need to get to a hospital right now. You plug the coordinates into the GPS and it gives you a good route, but not necessarily the fastest route, because it does not understand your context. It knows you need to get to the hospital, but it does not know why you need to get to the hospital.

An easy fix might be simply to add an option in your GPS to select “fastest route”, but then we run into the problem the AI is trying to solve in the first place: everyone will select “fastest route”, increasing congestion and making the fastest route slower than it would otherwise be, possibly even slower than a non-congested slower route. We are simply incapable of coordinating the best outcomes for all, or the best outcomes for those most in need, because we are obsessed with creating the best outcome for ourselves.
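
Here’s a toy sketch of that congestion logic (the routes, capacities, and numbers are all invented for illustration): when every driver grabs the nominally fastest route, that route degrades for everyone, while a router that deliberately spreads traffic keeps the average trip shorter.

```python
# Toy congestion model (hypothetical numbers): each route slows down
# as more cars pile onto it.
def travel_time(base_minutes, cars, capacity):
    return base_minutes * (1 + cars / capacity)

CARS = 1000

# Case 1: everyone selects "fastest route", so all cars take route A.
selfish = travel_time(base_minutes=20, cars=CARS, capacity=500)

# Case 2: a coordinating router splits traffic between route A and the
# nominally slower (but roomier) route B.
time_a = travel_time(base_minutes=20, cars=500, capacity=500)
time_b = travel_time(base_minutes=25, cars=500, capacity=1000)
coordinated = (500 * time_a + 500 * time_b) / CARS

print(f"everyone on the 'fastest' route: {selfish:.0f} min each")  # ~60 min
print(f"coordinated split, on average:   {coordinated:.0f} min")   # ~39 min
```

The “good but not fastest” route the GPS hands you is exactly what keeps that average down, which is cold comfort when your passenger is in labor.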

It doesn’t matter if you’re a minority or not; everyone finds themselves in situations where context matters. Though I think it is necessary to point out that the lack of context is often far more detrimental to minority groups. In essence, we are all gonna get fucked by AIs that reproduce bias and ignore context, but some of us are gonna get fucked a whole lot harder.

Moving Forward

Given that bias is a problem, what can we do about it? The AI Now Institute recommends the following:

  1. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.

  2. After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized.

  3. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion.

  4. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work.

  5. Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain, such as education, healthcare, or criminal justice, legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines.

Are these things going to happen? Maybe not. It is notoriously hard to craft enforceable global regulations. In fact, many UN regulations have no teeth and rely on a system of ‘blame and shame’ to encourage countries to do better on national soil or risk losing prestige on the world stage. At the national level, there is often a bureaucratic inability to keep up with tech as it continues to evolve. So in the meantime, I advocate for the UN example: call out bad tech when you see it, identify why it is bad, push for better datasets, push for fairness, and try to recognize your own bias where it exists, if you can.
