What is algorithm bias? (Response Optional but encouraged)

 Hey everyone, 

I wanted to talk a little bit about a problem that cuts across both of our topics: algorithm bias. Algorithm bias occurs when a machine learning system produces systematically skewed results, often because of flawed assumptions or unrepresentative training data.

The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine learning algorithms to automate processes. This has prompted an ongoing conversation about why algorithms meant to replicate success, such as those that screen job candidates, have been returning biased outputs. Companies have consistently relied on historically biased data for their AI training sets. Inferences from that historical data can replicate past instances of 'success', but unless the historical biases are examined, the algorithm will carry past injustices forward and amplify them.

To balance the innovation of AI and machine learning, data has to be vetted thoroughly before market implementation. By distinguishing between algorithmic literacy (understanding the minutiae of an algorithm's code) and algorithmic harm (the negative outputs that yield biased results), people can hold companies accountable for sexist or racist results. For instance, companies have always hired people they considered successful, but the criteria for success have shifted over time. Women and minorities were previously excluded from the hiring process, so any algorithm trained on that data will replicate similar outputs.
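To make the idea of "vetting data before implementation" concrete, here is a minimal sketch, with invented numbers, of the four-fifths (80%) rule, a common first-pass screen for disparate impact in hiring outcomes:

```python
# Hypothetical hiring outcomes, illustrating the four-fifths (80%) rule
# used as a rough screen for disparate impact. All numbers are invented.

def selection_rate(outcomes):
    """Fraction of applicants in a group who were selected."""
    return sum(outcomes) / len(outcomes)

# 1 = advanced to interview, 0 = filtered out by the screening algorithm
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 selected

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Disparate impact ratio: lower group's rate over the higher group's rate.
# A value below 0.8 is a conventional red flag worth investigating.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.0%} vs {rate_b:.0%}, ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: audit the training data and features.")
```

A screen like this only flags a disparity; it says nothing about the cause, which is why the historical data itself still has to be examined.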

What can potentially be done about algorithm bias?

  • Anti-bias experimentation should occur before an algorithm is implemented, preferably in a regulatory setting.
  • Create safe harbors for using sensitive information to detect and mitigate bias. From those safe harbors, recognize past biases in data.
  • Update nondiscrimination and civil rights laws to apply to digital practices.
  • As a company, develop a bias impact statement that includes design principles and cross-functional work teams.
^ Those suggestions are just some I've come up with based on my experience. I would love to hear what you think! Here are some questions to consider:

  1. What does algorithm bias mean for governments? Should they focus more on this subject?
  2. Are there ways to prevent bias in algorithms, or will they always be biased?
  3. Do you imagine a future where algorithms will fully replace human decision making? What is your stance, or your country's stance?





Comments

  1. With the increased use of facial recognition and machine-learning technologies, especially in politics and legal systems, algorithmic bias has become a more prominent and complex issue for governments. In particular, algorithm bias on social media platforms such as Facebook has been shown to promote pages and posts for one political candidate over another in the United States, and this has become a problem since posts of this kind have led to approximately 340,000 additional votes from users (Bond et al.). If this phenomenon continues, it could lead to a digital form of gerrymandering and significantly skew election results.

    For this reason, governments and companies alike should focus on researching methods of decreasing algorithm bias. Such methods include developing bias-detecting algorithms that can be built into AI systems, or using machine-learning auditors: software that scans an AI system and its data to detect racial or gender-based biases. However, the complexity and length of algorithms, along with a lack of transparency from social media companies and search engines, which frequently treat their algorithms as trade secrets, pose serious challenges to researching and preventing this phenomenon.

    While the pace of technology, especially since the start of the pandemic, is startlingly fast, a world where algorithms completely replace human decision-making still seems far from the present. Computers will likely take over almost every mundane decision we make, but personality choices, like what to wear and what to eat, and the highest-level ethical decisions seem likely to remain in human hands - and given the manifold issues of algorithm bias, rather rightfully so.
    Source Used: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3834737/

    Written By: Aashi Jhawer & Jeremy White
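    The "machine-learning auditor" idea in this comment could be sketched, in toy form, as a counterfactual test: flip a protected attribute in each record and see whether the model's decision changes. The model and applicant records below are hypothetical, with a bias deliberately injected so the audit has something to catch.

```python
# Toy counterfactual audit: a decision should not change when only a
# protected attribute changes. Model and records are invented.

def model(record):
    # Stand-in for a real trained model; it (wrongly) peeks at gender,
    # which is exactly what the audit below should detect.
    score = record["experience_years"] * 2
    if record["gender"] == "f":
        score -= 3  # the injected bias
    return score >= 10

def audit_counterfactual(records, attribute, values):
    """Return records whose decision flips when only `attribute` varies."""
    flagged = []
    for rec in records:
        outcomes = {model(dict(rec, **{attribute: v})) for v in values}
        if len(outcomes) > 1:  # decision depended on the attribute
            flagged.append(rec)
    return flagged

applicants = [
    {"gender": "f", "experience_years": 6},
    {"gender": "m", "experience_years": 6},
    {"gender": "f", "experience_years": 9},
]

flagged = audit_counterfactual(applicants, "gender", ["f", "m"])
print(f"{len(flagged)} of {len(applicants)} decisions depend on gender")
```

Real auditors are far more involved (they must handle correlated proxy features, not just the attribute itself), but the black-box principle is the same.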

    1. Love your use of the term "digital gerrymandering," because technology absolutely has the power to exacerbate inequities in our society. Excellent response and assertions.

  2. For governments, algorithm bias means the further perpetuation of existing human biases, and it raises many questions about how we can continue to integrate AI into our daily lives while eliminating AI bias, if that is possible at all. Algorithms were initially created by humans, and data is likewise collected and processed by humans, so intrinsic human bias is reflected in both. When biased data is then processed to create new algorithms, those algorithms will go on to perpetuate the bias represented in the original data.
    With our government, corporations, and greater society in general, there should indeed be a greater focus on this subject, particularly on the commonly overlooked scenarios affected by bias. There have been recorded instances of inarguable bias: in one case, identical descriptions were given of a female manager and a male manager, yet the same assertive traits led many to perceive the woman as aggressive and selfish, whereas the man was perceived far more positively. In another instance, job recruiters were handed entirely identical resumes, the only difference being the names of the applicants (Gupta and Krishnan). The resumes with more archetypically African-American-sounding names received 50% fewer interview calls, indicating a stark human bias that would be unjust if encoded into algorithms, and further detrimental in its ripple effects.
    Given how inequities are caused and even widened by a bias that extends its roots deeper and deeper into digital algorithms, there is no question that bias will exist for as long as we choose to create and run human-made algorithms. Even if AI development accelerates to the point where algorithms can write other algorithms with virtually no human interference, the original source will always trace back to human input, no matter how far back. As long as human input exists, bias will too, and the only way to correct the root of algorithm bias is to reform the thoughts and contributions of the humans behind it.
    Fifty years ago, or even 20 years ago, few of us envisioned a world as technologically advanced and dependent as ours today. Given the rate of technological innovation in recent history, a future where algorithms accurately simulate human thought and decision-making isn't so unthinkable. However, certain limitations seem unique to human thought, beyond what can be simulated. The primary one is human perception. Although it is certainly possible to program AI to produce an eerily human-like response to a given stimulus, as has been demonstrated on multiple occasions, it is our belief that AI will never fully simulate the countless intricacies and varieties of human thought. The human subconscious has an incredible impact on our everyday function, yet it is something we haven't truly come to understand ourselves, putting it far beyond the reach of machine simulation for the foreseeable future.

    Gupta, Damini, and T S Krishnan. “Algorithmic Bias: Why Bother?” California Management Review, 17 Nov. 2020, cmr.berkeley.edu/2020/11/algorithmic-bias/.

    -Delegation of Cameroon - personal opinion :)

  3. As technology advances and the world shifts into a digital age, algorithms are increasingly used for the online ads a user sees, job recruitment (as at Amazon), facial recognition, and more. Although these algorithms are becoming more common in day-to-day life, they are often biased, since they cannot replicate human consciousness and common sense. One of the most notable examples of algorithm bias is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used in US court systems to predict how likely a defendant is to reoffend. Because of the data fed into the algorithm, black defendants were roughly twice as likely as white defendants to be incorrectly flagged as high risk for recidivism. Statistics like these cannot effectively judge what will actually happen, given the many moving parts and ever-changing circumstances. These algorithms base their choices on past standards, when they should be based on what we expect in the future. Despite current measures to reduce inequalities, artificial intelligence systems and algorithms are contributing to them by using biased information from the past to make their selections. For example, Amazon's hiring algorithm was more prone to choose men over women because the majority of applicants over the previous ten years had been men. Yet Amazon has been promoting a variety of initiatives to empower future female innovators, such as the Amazon Women in Innovation Bursary, which funds the schooling of female students. Why would Amazon, which heavily stresses inspiring female innovators, be biased about which gender works for the company? Because its algorithm was using outdated statistics.
    As we try to reduce unjust, unequal decisions, it is baffling to me that we are feeding this unfair, inequality-laden information into the AI systems we want to determine our future.
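    The COMPAS criticism above boils down to comparing error rates across groups: even with similar overall accuracy, a model's false positive rate (people wrongly flagged as high risk) can differ sharply by group. A minimal sketch of that comparison, using invented records:

```python
# Compare false positive rates between two groups of defendants.
# Every record below is invented purely for illustration.

def false_positive_rate(records):
    """Among people who did NOT reoffend, fraction labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
    return len(flagged) / len(non_reoffenders)

group_a = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
]
group_b = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
]

fpr_a = false_positive_rate(group_a)  # 2 of 4 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 1 of 4 non-reoffenders flagged
print(f"false positive rates: {fpr_a:.0%} vs {fpr_b:.0%}")
```

    A gap like this means the cost of the model's mistakes falls more heavily on one group, which is exactly the kind of harm the comment describes.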

    There should be a heavier focus on algorithm bias by governments. Not only can these biases affect the equality of their citizens (which in turn affects their wealth and well-being), but they can also affect the economy. In countries with worsening inequality, the economy has been shown to suffer as the financial gap between rich and poor widens. These biases can also encourage corruption and can lead to unfair elections and the selection of government officials.

    With the rise of technology and the fast development of contemporary innovations, AI is becoming more common in everyday life. Algorithms are being used to save time and money. Despite this, I do not believe that algorithms will overtake human decision-making anytime soon. Humans have different perceptions of what should be considered fair or unfair, biased or unbiased, and right or wrong. There is an ongoing debate about the ethics of AI, and we are nowhere near agreeing on how a computer system should make decisions. Although AI can consider a wide variety of factors in its decision-making, it will never truly replicate human consciousness and will always base its decisions only on the information it has been given. Despite the rapid growth of artificial intelligence, we still have a long way to go before machine learning can process information to make unbiased choices.

    Written By: Katherine Verrando (The Delegation of Chile)

    https://www.aboutamazon.co.uk/diversity
    https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070

  4. A key dimension of algorithm bias is its political influence. Algorithms show users more of the content they already browse, leading to a very well documented pattern of users falling down a rabbit hole. It's easy to live in a political bubble, divorced from the other side of the aisle, to the point that an individual in such a bubble can become disconnected from reality.

    To use an instance users are likely familiar with, the January 6th Capitol riot in the United States is an example of these long-term 'rabbit hole' effects and the deep polarization they often produce. Swayed by misinformation and living inside algorithmically reinforced political bubbles, many radicalized individuals found ways to congregate online. In fact, it is widely affirmed that the Capitol riot was primarily planned online within these very bubbles.

    Though a much more daunting task in realpolitik terms, it is important to begin dismantling algorithm bias and the tracking of user data in general. Though this has downsides, such as reduced user convenience, it would prove a critical step toward ensuring similar events don't occur around the world.

    -Swiss Confederation (Alexander Guess)

