Why saying "AI Ethics" is too broad (Response Optional but Encouraged)
Hey Delegates,
We're less than a week away from conference! We have loved grading your papers thus far (as much as anyone can love grading papers), and I've noticed how many of you reference AI ethics and AI ethical standards as a broad, catch-all term.
I want to dive into why the term AI ethics is a little too broad and how you can narrow the scope of your solutions. Think about the general term of ethics to understand why this matters. We all know that it is important to be "ethical," and we can usually recognize when something is certifiably unethical (i.e., maybe I don't know the dictionary definition of ethics, but I do know it is unethical to take a bribe). When we dive into AI ethics, though, the lines are a little blurred, because the international community hasn't yet come to a consensus on where those lines are. Some even ask whether artificial intelligence can be ethical at any stage of implementation.
Assuming you can use AI ethically, what would ethical usage constitute? Let's take the terms Microsoft uses to explain "Responsible AI" as an example:
1. Fairness (Is the system operating fairly?)
2. Reliability and Safety (Does the system perform reliably and safely?)
3. Privacy and Security (Is data private and unbreachable?)
4. Inclusiveness (Can systems engage everyone equally?)
5. Transparency (Do we know how these systems work the way they do?)
6. Accountability (What happens if something goes wrong? Is there a mechanism for accountability?)
Let me know what you think in the comments below. Here are some prompting questions:
1. Can AI be ethical? If so, how?
2. Does the government have a responsibility to regulate AI? Should government use AI?
3. What solutions involve AI?
Along with the rapid expansion of technology on a global scale, it’s undeniable that AI has become a fundamental part of many businesses and industries. For example, according to the Harvard Gazette, total business spending on AI is predicted to exceed $110 billion annually by 2024, even accounting for the economic slump resulting from the Covid-19 pandemic. As small and large businesses grow more dependent on AI systems, it is increasingly important to ensure the ethical use of AI. On the question of how truly “ethical” AI can be, we believe that, although difficult, AI can be regulated to perform its job while upholding ethical principles in the following ways:
1) AI researchers and ethicists must formulate a universal document that explicitly states what “ethical” behavior is.
2) Because AI systems are often biased in favor of the humans who created the algorithm, standards on who is responsible during each stage of an AI system’s operation must be created to uphold accountability.
3) Minimizing consumer echo chambers, especially on social media platforms and in their algorithms, will curtail political division between parties and during elections.
4) Protections must be built against data anomalies that could lead algorithms astray.
5) In machine learning systems, inherently biased empirical data must be balanced with carefully crafted synthetic data to produce accurate predictions.
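To illustrate point 5 above, here is a minimal, hypothetical sketch of balancing a skewed data set by generating jittered "synthetic" copies of under-represented examples. This is a toy stand-in for real techniques such as SMOTE; the function name, jitter size, and data are our own illustrative assumptions, not any standard implementation:

```python
import random

def balance_with_synthetic(samples, labels, seed=0):
    """Oversample minority classes by adding jittered copies of their
    examples until every class is as large as the biggest one."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_label.values())
    out_x, out_y = [], []
    for y, xs in by_label.items():
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
        # generate synthetic points until this class reaches the target size
        for _ in range(target - len(xs)):
            base = rng.choice(xs)
            synth = [v + rng.uniform(-0.05, 0.05) for v in base]
            out_x.append(synth)
            out_y.append(y)
    return out_x, out_y

# Toy data: six majority-class points, two minority-class points
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6], [5.0], [5.1]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
Xb, yb = balance_with_synthetic(X, y)
print(yb.count(0), yb.count(1))  # both classes now equally represented: 6 6
```

The synthetic points here are crude random perturbations; in practice the "carefully crafted" part of the delegation's proposal is exactly what makes this hard, since badly generated synthetic data can introduce new biases of its own.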
But this leads to the question: what powers does the government have to enforce these ethical standards? Governments should bear a large responsibility for regulating the use of AI, and should become familiar with using it themselves for its many benefits. Accountability matters especially when we see cases where even large companies like Amazon have created gender-biased hiring algorithms, or, on the political scene, where media sites like Twitter and Facebook have been accused of being inherently biased against politicians from conservative parties. Overall, governments must focus on increasing transparency between AI systems and the public, which will in turn build trust between the two.
Last but not least, the issue of AI ethics is so important because of the many solutions it would impact. Today, AI systems are used in banking to approve loans, in hospital emergency rooms to support diagnostic decisions, and in billing and paperwork processing across industries from health care to manufacturing. As our globalized world increases its use of AI systems, how we define and ensure the ethical use of AI is evidently important.
Sources:
https://www.mckinsey.com/business-functions/risk/our-insights/controlling-machine-learning-algorithms-and-their-biases#
https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
https://towardsdatascience.com/why-ai-must-be-ethical-and-how-we-make-it-so-b52cdb1dd15f
https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3
https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
https://www.bbc.com/news/technology-54552101
-Written by the Republic of Cameroon!
As individuals, we agree wholeheartedly with the authors of Cameroon's comments. In particular, we think that the very idea of fairness merits further exploration, especially in regard to the line between operative fairness and fairness in a more general sense. Operative fairness is often, but not always, achievable, and usually testable, with clever computer scientists and some helpful mathematicians. But for AI to operate fairly, as in impartially in the broader sense of the word, is far more difficult. It is simply impossible to please every party to an issue, and it is far too easy for inflamed individuals to capitalize on injustice, whether perceived or real.

As such, an agreeable definition of fairness itself seems out of reach: is fairness equal time for both sides of an issue, equal promotion of content from all perspectives? Or is fairness giving more air time, more promotion on social media, to perspectives and ideas that represent a greater portion of the people? Is it fair to promote popular videos on video-sharing websites, since they may consist of higher-quality content... or is it more fair to promote the less popular ones, whose creators likely worked just as hard? And what even is high-quality content in the first place? Perhaps we should rethink our idea of fairness: to focus not on perceived equality, in a field where equality among 7.8 billion people, many of whom lack access to this technology at all, is impossible, but to define fairness in a strictly technical sense: fair, as in probabilistically and statistically valid. Because as soon as AI begins to make decisions in which one group is favored over another, there is bound to be an argument - and usually a strong one - that the algorithm operates unfairly.
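The "probabilistically and statistically valid" sense of fairness described above can be made concrete with a simple demographic-parity check: compare how often each group receives a favorable decision. Everything here (function name, toy data, group labels) is an illustrative assumption, not a standard library implementation:

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rates across groups; 0.0 means every group is favored equally."""
    counts = {}
    for d, g in zip(decisions, groups):
        n_pos, n_tot = counts.get(g, (0, 0))
        counts[g] = (n_pos + (1 if d else 0), n_tot + 1)
    rates = {g: pos / tot for g, (pos, tot) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan approvals for applicants from two groups:
# group A is approved 3 times out of 4, group B only once out of 4
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Even this "strictly technical" definition smuggles in a value judgment, of course: demographic parity assumes equal approval rates are the right target, which is itself one of the contested definitions of fairness discussed above.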
-Written by Aashi Jhawer & Jeremy White
I agree with the Delegation of Cameroon, and with Aashi and Jeremy. I especially liked the Delegation of Cameroon's idea of a universal document that states exactly what ethical AI is. Although I personally like the idea, when thinking of it from a country's or government's stance, how will we get all countries to agree on this single definition? Countries are currently operating on their own definitions; what is going to convince them to incorporate other countries' ideals that they might not necessarily agree with? Aashi and Jeremy, you propose some eye-opening questions that got me thinking about the fairness of AI and how certain social media companies promote specific posts. Your post ties back to Cameroon’s post, specifically the section on why governments should be involved and hold companies accountable for the biases that might take place.
AI ethics is an extremely broad topic, so it is difficult to discuss because we do not have one set definition. Artificial intelligence cannot engage in ethical reasoning on its own; it can only factor in the data it is given and use algorithms to reach its decisions. This lack of ethics in technology is why I previously posted that algorithms are not going to take over human professions, at least not in the near future. Countless companies are developing new technologies, and more people, now more than ever, are turning to technology to support them in their everyday lives. However, AI cannot “really” replace humans because of its lack of consciousness. Therefore, countries must set guidelines as artificial intelligence, and the debate regarding AI ethics, grows. There should be a few broad guidelines regarding accountability, fairness, transparency, etc. Companies are individually pursuing various sorts of technology, so while they share the same basic guidelines, they should also have additional, government-approved guidelines to ensure they have a clear way of operating ethically with AI. Although I believe we have a long way to go before reaching a consensus on the definition of AI ethics, once we can establish trust between governments, companies, individuals, and other countries, I believe we will be on the path to preventing technological biases and malfunctions that risk the safety of citizens.
Governments need to get more involved in this topic. They need to set and approve guidelines and ensure that technology is built to the best of each company’s ability to prevent harm. There should be extensive inspection and testing to prevent citizens from being injured. AI companies hold more and more power as the demand for technology rises, and governments need to establish ties with these companies so that they can monitor them, prevent existing biases from worsening, and guarantee companies are not negatively affecting the well-being of citizens.
AI ethics is so crucial in the modern world because technology is being used for everything from preventing accidents to fighting diseases, from saving lives to helping us in day-to-day life. As technology advances, it is essential to consider its morality and how governments, companies, and even we as individuals can define and secure the ethical application of technology.
Katherine Verrando (Chile)
As an aside, it is humorous to point out the irony of using Microsoft’s definition as the basis of ‘Responsible AI’, given Microsoft’s extensive and controversial use of AI to meticulously track and harvest user data without any input from the user. That said, Microsoft's loose outline is quite in touch with the actions and modifications that existing AI needs.
Aside from many ‘national security’-related programs and the military, governments are largely AI-illiterate (or, more broadly, digitally illiterate). Government websites often feel outdated and out of touch, always two steps behind the norm despite much-needed renovation. Yet AI can be used responsibly: say, for instance, a government needed to identify the leader of a terrorist cell by comparing collected pictures against a government database. Manually combing through the results would require an extensive time investment, and might yield false assessments at that; automating the comparison is an example of an AI-literate government organ benefiting society. If a machine can do a human task better than a human, in a way that benefits society, the simple question must be asked: why not?
-Swiss Confederation (Alexander Guess)