Impact Lab


September 28th, 2018 at 11:47 am

Artificial intelligence hates the poor and disenfranchised


The biggest actual threat faced by humans, when it comes to AI, has nothing to do with robots. It’s biased algorithms. And, like almost everything bad, it disproportionately affects the poor and marginalized.

Machine learning algorithms, whether in the form of "AI" or simple shortcuts for sifting through data, are incapable of making rational decisions because they don't reason; they find patterns. That government agencies across the US put them in charge of decisions that profoundly affect people's lives seems incomprehensibly unethical.
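To see what "finding patterns" means in practice, here's an illustrative sketch (not any deployed system, and all data here is made up): a trivial pattern-matcher trained on historically biased labels reproduces that bias, because it has no concept of *why* the labels look the way they do.

```python
# Illustrative sketch with synthetic data: a pattern-matcher trained on
# biased historical labels simply learns the bias.
import random

random.seed(0)

# Hypothetical training data: "neighborhood" (0 or 1) is merely a proxy
# for poverty, and the past-outcome label comes from a biased process
# that flagged neighborhood 1 far more often, regardless of behavior.
data = []
for _ in range(1000):
    neighborhood = random.randint(0, 1)
    flagged = random.random() < (0.6 if neighborhood == 1 else 0.2)
    data.append((neighborhood, flagged))

def flag_rate(n):
    """Fraction of people in neighborhood n who were historically flagged."""
    rows = [f for nb, f in data if nb == n]
    return sum(rows) / len(rows)

# "Training" a one-rule model: predict the majority historical label
# for each neighborhood. No reasoning, just pattern matching.
model = {n: flag_rate(n) >= 0.5 for n in (0, 1)}
print(model)  # the model flags everyone from neighborhood 1
```

The model never saw poverty or race as an input; it only saw a proxy feature and biased outcomes, and that was enough to automate the bias.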



When an algorithm manages inventory for a grocery store, for example, machine learning helps humans do things that would otherwise be harder. The manager probably can't keep track of millions of items in his head; the algorithm can. But when it's used to take away someone's freedom or children, we've given it too much power.

Two years ago, the bias debate broke wide open when ProPublica published a damning article exposing apparent bias in the COMPAS algorithm, a risk-assessment system used to inform sentencing decisions based on dozens of factors. Basically, the report clearly showed several cases where it was obvious that the big fancy algorithm effectively predicts recidivism rates based on skin tone.

In an age where algorithms are "helping" government employees do their jobs, if you're not straight, not white, or not living above the poverty line, you're at greater risk of unfair bias.

That’s not to say straight, white, rich people can’t suffer at the hands of bias, but they’re far less likely to lose their freedom, children, or livelihood. The point here is that we’re being told the algorithms are helping. They’re actually making things worse.

Writer Elizabeth Rico believes unfair predictive-analysis software may have influenced a social services investigator to take away her children. She wrote about her experience in an article where she describes how social services, whether intentionally or not, prey upon those who can't afford to avoid the algorithm's gaze. Her research revealed a system that equates being poor with being bad.

In the article, published in Undark, she says:

… the 131 indicators that feed into the algorithm include records for enrollment in Medicaid and other federal assistance programs, as well as public health records regarding mental-health and substance-use treatments. Putnam-Hornstein stresses that engaging with these services is not an automatic recipe for a high score. But more information exists on those who use the services than on those who don’t. Families who don’t have enough information in the system are excluded from being scored.

If you're accused of being an abusive or neglectful parent, and you've had the means to treat any addictions or mental-health problems in a private facility, the algorithm may just skip you. But if you use government assistance or have a state- or county-issued medical card, you're in the crosshairs.
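The exclusion rule described above can be sketched in a few lines. This is illustrative only, with hypothetical names and an assumed threshold (the real system's 131 indicators and scoring rules are not public in this form): a "not enough records to score" rule means surveillance follows public-assistance use.

```python
# Illustrative only: a minimum-records rule makes the scoring system see
# families who rely on public services, and skip those who don't.
MIN_INDICATORS = 5  # assumed threshold, not the real system's

families = [
    {"name": "A", "records": 12},  # long Medicaid/public-health history
    {"name": "B", "records": 2},   # private care: little state-held data
]

# Families below the threshold are excluded from being scored at all.
scored = [f["name"] for f in families if f["records"] >= MIN_INDICATORS]
print(scored)  # only the family visible to public systems gets scored
```

Both families might behave identically; only one generates enough state-held records to be scored, which is exactly the asymmetry Putnam-Hornstein describes in the quoted passage.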

And that's the problem in a nutshell. The best intentions of researchers and scientists are no match for capitalism and partisan politics. Take, for example, the Stanford researchers' algorithm purported to predict gayness: it doesn't, but that won't stop people from thinking it does.

It isn't dangerous in the Stanford machine learning lab, but the GOP-helmed federal government is increasingly anti-LGBTQ+. What happens when it decides that applicants have to pass a "gaydar" test before entering military service?

Matters of sexuality and race may not be intrinsically related to poverty or disenfranchisement, but the marginalization of minorities is. LGBTQ+ individuals and black men, for example, already face unfair legislation and systemic injustice. Using algorithms to perpetuate that is nothing more than automating cruelty.

We cannot fix social problems by reinforcing them with black-box AI or biased algorithms: it's like trying to fight fire with fire. Until we develop 100 percent bias-proof AI, using it to take away a person's freedom, children, or future is just wrong.

Via The Next Web


