Have you ever wondered why artificial intelligence-powered digital assistants like Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana are called women?
Artificial intelligence (AI) and machine learning (ML) have entered almost every aspect of our lives like an 'invisible hand'. They process massive amounts of data and make key decisions in real time.
For example, these systems screen job applicants, decide who receives credit, predict crime and recidivism, recommend what we listen to and watch, and determine who sees which ad in the digital world…
Today, it is very difficult to find a field in which artificial intelligence is not involved. Some researchers suggest that it is a new type of infrastructure: one that is neither physical nor visible, yet sits at the center of the decision processes of all social relations, organizational practices, and actions…
Now back to the first question! Why are these digital assistants gendered with female names? The answer is simple and dramatic: gender stereotypes that cast women in service and assistant roles, carried over into the digital world!
Artificial intelligence and ‘algorithmic bias’!
Technology makes our lives easier and provides efficiency at many points. Artificial intelligence applications are no exception; they offer numerous benefits.
There is another side of the issue that needs to be discussed: these technological developments grow by feeding on society's relationships, value judgments, and patterns. In other words, humanity's deeply rooted prejudices, discrimination, and inequality can gain a solid place in the heart of these emerging technologies as 'algorithmic bias'.
Therefore, not everyone has the opportunity to benefit equally from the advantages offered by technology. The algorithmic bias that arises when algorithms produce discriminatory results against certain categories of individuals (usually minorities and women) can further fuel existing social inequalities, especially when it comes to race and gender.
There are many examples of this being discussed around the world. Amazon discontinued its AI recruiting tool after it was found to discriminate against women, and Goldman Sachs was investigated by regulators over the Apple Card algorithm, which allegedly discriminated against women by offering men higher credit limits.
The issue is critical not only for individuals but also for every company that embeds artificial intelligence into its business model, especially in terms of reputational risk. Companies that entrust their decision processes to artificial intelligence must be prepared for risks they have never managed before.
Code of ethics in AI
Let's continue with new questions. Who decides how this set of values and prejudices is mirrored in AI? How is it controlled? Who is held accountable? Where do we stand on accountability, transparency, and fair auditing?
When people do something wrong, there are consequences before authority and society; at the very simplest, shame and guilt arise. (Or at least, we morally expect them to.) Although we treat justice and equality as concrete, universal concepts, it may not be possible to make decisions based on them with algorithms. Or, at the other extreme, algorithms could become a far more autocratic compass with a mission of over-control. So, the subject is not so easy on the axis of values!
Artificial intelligence works by 'learning' from datasets: algorithms are created to mine data, analyze it, identify patterns, and act on them. Datasets can come from any number of sources: photographs, health data, government records, or social media profiles.
Social prejudices and inequality are often embedded in such data, and artificial intelligence will not uphold social values such as justice unless they are programmed directly into it. So, if an AI recruitment system relies on previous hiring data in which very few women were hired, the algorithm will continue that pattern.
On the other hand, data can also be biased through omission. Datasets can bypass entire populations who have no internet history, social media presence, credit card history, or electronic health records, leading to skewed or biased results.
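The first mechanism, learning from skewed historical decisions, can be sketched in a few lines of Python. The data below is entirely hypothetical and deliberately tiny; the point is only to show that a naive "model" that learns hire rates from past decisions reproduces the same skew when scoring new candidates:

```python
from collections import Counter

# Hypothetical past hiring records: (group, was_hired).
# In this toy history, 3 of 4 men were hired but only 1 of 4 women.
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def learn_hire_rates(records):
    """Estimate P(hired | group) from historical decisions."""
    hired, total = Counter(), Counter()
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# The "model" scores each new candidate by their group's past hire rate,
# so the historical imbalance carries straight into future rankings.
rates = learn_hire_rates(history)
print(rates)  # {'male': 0.75, 'female': 0.25}
```

Nothing in this sketch "knows" anything about gender or merit; it simply mirrors the pattern in its training data, which is exactly how historical discrimination becomes algorithmic bias.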
This debate is not new, of course. In 2016, the World Economic Forum listed "humanity" and "equality" among the ethical issues of artificial intelligence, and UNESCO has published a code of ethics for the digital world.
Today, reviews of these ethical frameworks list the general principles to be emphasized for artificial intelligence as transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
Artificial intelligence and sustainable healthy future!
Technologies such as artificial intelligence will offer solutions at many points, especially on climate and environmental issues. However, a sustainable and healthy future requires that humanity's rooted issues of justice, equality, and freedom be handled with the same care and sensitivity.
As artificial intelligence gradually pervades our lives, it will be extremely important to confront these problems and steer artificial intelligence toward solving these fundamental issues rather than fueling them.
Article: Can AI Mitigate Racial Inequity? by Nitin Mehta