
Human Bias in Artificial Intelligence

Image by Robin Zebrowski

Artificial intelligence (AI) has the potential to revolutionize our world, but it is not without its flaws. Despite its advanced technology, AI still reflects human bias, which can have harmful consequences for society and individuals. The reason for this is simple: AI is being trained by humans.

The process of developing AI involves feeding algorithms with vast amounts of data to teach them to recognize patterns and make decisions. But this data is often biased, reflecting societal and cultural norms that perpetuate inequalities and discrimination. As a result, AI can reflect the same biases as its human creators.
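
To make that concrete, here is a toy sketch in Python (the data, the group names, and the deliberately naive "model" are all invented for illustration). A system that learns by tallying outcomes in historical records will faithfully reproduce whatever skew those records contain:

```python
# A toy sketch: a naive "classifier" trained on hypothetical historical
# hiring records simply learns whatever skew the records contain and
# reproduces it in its predictions. All numbers are made up.
from collections import Counter

# Hypothetical historical data: (group, hired) pairs reflecting past bias.
history = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def train(records):
    """'Train' by recording the hire rate observed for each group."""
    hires, totals = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)

# The model's "decision" is just the historical pattern, bias included.
for group, rate in model.items():
    print(f"{group}: predicted hire probability {rate:.0%}")
# group_a: predicted hire probability 80%
# group_b: predicted hire probability 30%
```

Real models are far more sophisticated than a frequency table, but the principle is the same: if the history is skewed, the patterns learned from it are skewed too.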

The root of the problem is the training data itself, which carries the assumptions and blind spots of the people who produced it. AI programmers and engineers play a role too, since they decide which data gets selected and how it is processed during development.

This doesn’t necessarily reflect malice. We all have biases, because our lives are “me-centric.” It’s a survival mechanism. As a result, AI bias can simply be a matter of subjective measurements, like “big,” “pretty,” “comfortable,” and “difficult.” We often perceive things in relation to ourselves. I’m 5’9″ tall, so a “big” chair for me might be very different from a “big” chair for my mother-in-law, who is barely over 5′ tall. Most people think learning about cybersecurity is “difficult,” but for me it isn’t. If I tell an AI system that something is “small,” but someone much smaller than I am tells that same system that the same item is “not small,” the system receives conflicting information about the same item. The system can become confused, needing many more examples before it can discern whether or not the item is “small.”
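
Here is a small Python sketch of that conflict (the chair sizes, labeler heights, and the “big means more than 30% of your height” rule of thumb are all made up for illustration). Two labelers of different heights judge the same chairs, and the mid-sized ones come back with contradictory labels:

```python
# Two labelers judge the same chairs, but "big" is relative to each
# labeler's own height, so the system receives contradictory labels
# for the same items. All numbers are invented.
chair_heights_cm = [40, 45, 48, 50, 55, 60]

def labels_big(chair_cm, labeler_height_cm):
    # Hypothetical rule of thumb: a chair feels "big" once it passes
    # roughly 30% of the labeler's own height.
    return chair_cm > 0.30 * labeler_height_cm

tall_labeler = 175   # about 5'9"
short_labeler = 153  # barely over 5'0"

for chair in chair_heights_cm:
    a = labels_big(chair, tall_labeler)   # the taller labeler's verdict
    b = labels_big(chair, short_labeler)  # the shorter labeler's verdict
    flag = "CONFLICT" if a != b else "agree"
    print(f"{chair} cm chair -> tall says big={a}, short says big={b} ({flag})")
```

The smallest and largest chairs get consistent labels, but the mid-sized ones come back flagged as conflicts. Every conflicting pair is noise the system has to average out, which is why it takes many more examples before the label stabilizes.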

Examples of biased AI can be found in many areas of our lives, from facial recognition software that has been shown to be less accurate for people of color, to predictive policing algorithms that disproportionately target communities of color. These biased algorithms can perpetuate systemic injustices, making it even more difficult for marginalized communities to access resources and opportunities.
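
How do researchers spot disparities like these? One common approach is to break a model’s accuracy out by group rather than reporting a single overall number. Here is a minimal sketch of that kind of disaggregated evaluation (the records and group names are invented):

```python
# A minimal sketch of a disaggregated evaluation: computing accuracy
# separately per group, which is how gaps like the facial-recognition
# disparities mentioned above get measured. The records are made up.
from collections import defaultdict

# Hypothetical (group, predicted_label, true_label) records.
results = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "no_match"),
    ("group_b", "match", "no_match"), ("group_b", "no_match", "match"),
    ("group_b", "match", "match"), ("group_b", "no_match", "no_match"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += (predicted == actual)

# A single overall accuracy number can hide a large per-group gap.
for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# group_a: accuracy 75%
# group_b: accuracy 50%
```

In this toy example, the overall accuracy of about 63% would hide the fact that the system works noticeably better for one group than the other.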

Efforts to address bias in AI development are underway, but they are still in their infancy. It is crucial that diverse teams and perspectives are involved in AI development, to ensure that AI reflects the diversity of our society. Transparency and accountability are also necessary to ensure that AI is used in an ethical and just manner. You should still get familiar with AI tools, though, and pay attention to the potential biases you may find in their results. It may be a long time before we can say these systems are completely neutral.