What is a Deepfake? (Response Optional but Encouraged)

Hi delegates, 

Here is our first substantive blog post, and I would love to see responses so we can start the conversation before committee begins. Our first topic of discussion is deepfakes. While you don't have to respond, I highly encourage engagement, whether formal or informal!

As technology advances, the capacity to manipulate imagery and spread it widely has fueled the rise of deepfakes. Deepfakes are defined as “synthetic media where a person in an image or video is swapped with another person's likeness” and can become incredibly realistic when combined with voice-editing technology and misleading text descriptions.

Governments have a vested interest in maintaining public trust in technology, since deepfakes can also sow distrust in the political process (Portnam 4). In May 2019, an altered video made Speaker of the House Nancy Pelosi appear inebriated and garnered over 2 million views and 45,000 shares on Facebook (Harwell 2). The video, which took an original speech by Speaker Pelosi and slowed it to 75% of its original speed, was not removed from Politics WatchDog’s conservative Facebook page because the company does not require individuals or groups to post “information that is true” (Harwell 3). While Facebook did label the clip as factually misleading, part of the onus of regulation falls on the consumer. The government could run a digital literacy campaign that helps individuals recognize when something they have seen requires further research. While this might require a congressional reallocation of resources, it is one of the most effective ways to promote digital literacy, as seen in the National Institute of Standards and Technology’s (NIST) campaign against email phishing (Boss 1). Deepfakes extend beyond the political process as well, leading to increasing cases of inaccurate representation and harmful outcomes in sociotechnical systems. Therefore, individuals outside the political process also have an interest in government regulatory tools.
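It is worth noting how little "technology" the Pelosi clip actually required: slowing playback stretches both video and audio, which is what makes normal speech sound slurred, with no synthetic content generated at all. A minimal sketch of the timing arithmetic (illustrative only; the function name is my own):

```python
def slowed_duration(original_seconds: float, speed_factor: float) -> float:
    """Duration of a clip after its playback speed is scaled.

    A speed_factor below 1.0 slows the clip down, stretching both
    the video and the audio (the stretched, pitch-lowered audio is
    what makes speech sound slurred if it is not corrected).
    """
    return original_seconds / speed_factor

# A 60-second speech excerpt played at 75% speed runs for 80 seconds.
print(slowed_duration(60, 0.75))  # → 80.0
```

Manipulations this cheap are sometimes called "cheapfakes," and they can spread just as effectively as true deepfakes.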

To truly prevent the spread of deepfakes, the general public has to view the phenomenon as a personal co-production problem rather than an overarching technology problem. While the government and industry can create standards, consumers have to self-monitor what they consume (Vaccari 29). Individuals need to become adept at recognizing which sources are reputable and transcend their personal biases to separate fact from fiction. Deepfakes have become ingrained in the social and cultural fabric of the internet. Since the 21st century has been defined by the ability to share and spread information quickly, individuals often fail to cross-check the content they consume. Now that consumers can edit media fairly easily, the ability to circulate realistic manipulated content has already expanded.

I would love to hear in the comments section what you think, either as an individual or as your country. Here are some prompting questions:

  1. Does increased technology automatically lead to increased misinformation? If so, how do we solve this?
  2. What has the coronavirus pandemic meant for the spread of information? Has it accelerated the deepfake or misinformation phenomena?
  3. Who should handle misinformation and technology? Is the responsibility on the company, the individual, the country, or a combination? Who should lead these efforts?

A quick aside! I LOVE this topic personally and professionally. The summer after my sophomore year of college, I was in Washington, D.C. attending congressional hearings about deepfakes, and I actually ended up in the background of a VICE News segment! Please ignore my awful haircut; I assure you I did not think I'd be on ANY camera LOL. The video won't load on Blogger, but enjoy the screenshot of sophomore Michelle! This was taken at a House of Representatives hearing in July 2019.

Comments

  1. Technology truly impacts all spheres of life in the modern world. Thus, placing the responsibility for data protection solely on consumers, who have no control over what content reaches platforms, solely on tech companies, who hold very little regulatory authority in most countries, or solely on governments, which often lack the technical expertise to make informed decisions, only limits society's ability to address deepfakes and misinformation. As the coronavirus pandemic continues to ravage much of the world, technology is an increasingly important component of many people's lives, including those who rarely used social media or video chat in the pre-pandemic era. As such, there are now many more unsuspecting users vulnerable to believing viral disinformation. Additionally, the virus has increased levels of fear across the world, and disinformation and deepfakes largely play on fear. Thus the consumption, and in turn production, of rampant disinformation and deepfakes has only increased during the pandemic. As increasingly sophisticated technology has led to the widespread creation of internet bots, a greater means of spreading fake news and propaganda, and even fabricated political campaigns through political astroturfing, it is inevitable that misinformation has increased (Menczer). Not only have trained hackers been able to successfully disinform ordinary users of the internet, but, through social media and messaging tools, ordinary users have also propagated misinformation by liking, commenting, and re-sharing posts, as we have especially seen amidst the COVID-19 pandemic. I think that the primary means of preventing the spread of misinformation should be education, particularly among heavy social media users such as teenagers and young adults. While this is already being done in much of the United States and some other countries, education curricula should regularly teach students how to identify common warning signs of misinformation at a more global level.
    Source (cited above): https://theconversation.com/misinformation-on-social-media-can-technology-save-us-69264

    Note: This post reflects only my personal views. By all means feel free to disagree.

    1. Hi Jeremy!! Thank you so much for giving your thoughts. I think you gave a great explanation for your viewpoint. -Head Chair Michelle

  2. In the same manner that there are always two sides to the same coin in any given situation, the growth of connectivity brings forth both opportunities and issues concerning how people communicate with each other and how public communication platforms can be used for less-than-virtuous purposes. Technology can truly be repurposed to do anything from sharing innocuous moments of your daily life to pushing forward a large-scale socio-political agenda. When technology as sophisticated as deepfakes comes into play, it’s easy to see how it naturally carries the potential to be a tool of malice and misinformation, regardless of whether the intention is to poke fun or to slander.
    The issue of misinformation isn’t one that can truly be “solved” per se, as it’s not something we can expect to leave us anytime soon, so the most sustainable and applicable approach would be a defensive one in which we foster digital literacy and build resistance against falling victim to misinformation. In creating a better-educated digital environment, we can create a community where people are more cognizant of potential threats and misinformation, such that the effects of misinformation are minimized, which serves as a natural discouragement to those attempting to spread it.
    The coronavirus pandemic, although not directly related to the issue of deepfakes, has seemingly contributed to a secondary plague, that of misinformation, by bringing forth those with extreme opinions and the means to push them forward. This is often seen in acts of fear-mongering and hate-mobbing, both of which thrive in an environment of ongoing panic and a lack of rational, thorough research.
    When it comes to determining accountability for such transgressions, it’s hard to pinpoint a specific demographic or body of power, given the difficulty of balancing users’ freedom of speech and expression against the preservation of truth and accuracy in media. Although intentionally spreading misinformation is inherently wrongdoing on the individual’s part, the enforcement of regulations against misinformation ultimately falls upon the company that owns the platform where the information is posted, which is then met by the issue of how to carry out targeted enforcement without infringing upon individuals’ rights or heavily restricting media that is potentially harmless. The fight against misinformation should be a joint effort, with input from both the lawmakers with legislative power and the corporate entities with the means of execution and enforcement.

    -Delegation of Cameroon - personal opinion :)

  3. The increase of technology inevitably leads to the heightened spread of misinformation. The high-speed transmission of messages, combined with platforms where they can easily be shared, such as social media, allows for the quick spread of ideas and information. While the spread of information can be good in some cases, because it can warn of natural disasters, current crises, the latest updates, etc., it also allows anyone to spread information and deepfakes throughout the internet even if it is not true. Therefore, it is difficult to tell fact from fiction. When modern technology was nonexistent, individuals could not spread information on a wide scale, not only because communications took months to travel, but because there was no uniform place where information could be shared with people across large cities, countries, and the world.

    Before cell phones were invented and technology was uncommon, information spread more slowly, which also lessened misinformation. Take the War of 1812 between the United States and Britain as an example. Communication was difficult, as messengers had to travel on horseback and could sometimes take months due to war-torn regions and the high demand for the transfer of information. The fighting ended with the Battle of New Orleans, when the British attacked the Americans. However, this battle was unnecessary, because the Treaty of Ghent had been signed two weeks prior and had ended the war. Because of the lack of substantial communication, the generals did not know the war had ended and carried out the battle, which caused the deaths, injuries, and capture of thousands, both British and American.

    If online communication had existed as it does today, this battle would not have occurred, because the widespread sharing of online information is much more convenient and quicker. Unfortunately, that also makes it more convenient for people to spread deepfakes; with the advantages come downsides. Strict regulations on moderating misinformation, along with a consensus between social media companies and governments, will be the most effective way of preventing misinformation and deepfakes from spreading.

    The current pandemic has promoted the spread of misinformation and deepfakes through online platforms, partially because the world is switching over to a digital age where everything is online. Due to social distancing measures, people are becoming more connected to others through their cell phones than they are in person. This has allowed people to discover the vast world of social media and to be “posting and scrolling” more often, digesting misinformation along the way. Whereas before these deepfake posts were not seen by as many people, now many more people have increased their screen time. Many individuals have lost their jobs, have fewer responsibilities, and are staying home all the time, which leaves more time to scroll through the internet and social media, or even to create deepfakes themselves.

    It is the combined responsibility of governments, companies, and individuals to prevent the spread of misinformation. Individuals should not reshare, like, or otherwise engage with deepfakes, and should refrain from posting any deepfakes or misinformation themselves. The government should hold companies responsible and create the regulations needed to set guidelines for them. Companies should be in charge of ensuring that misinformation is removed from their platforms and that their technologies go through the necessary testing to prevent others from being harmed. To effectively eliminate misinformation, governments, companies, and individuals must work closely together.

    Katherine Verrando (Chile - personal opinion)

  4. The troubling potential of deepfakes highlights the frightening absence of tech-savvy governments. Citizens, for instance, are ill-equipped by many countries' educational systems to approach the digital age, as technological progress rapidly outpaces realistic regulation by governments. Moreover, misinformation in itself is quite easily spread; human beings are hardwired toward emotional responses rather than toward separating fact from opinion. Though critically beneficial in a caveman era, this fails to translate smoothly into our modern one. It is therefore not just the right but the responsibility of a government to actively pass laws to maintain security and relevance in an age where digital editing can drastically alter a political figure and social media enables falsified information to spread like wildfire.
    Additionally, these deepfakes have had an astonishing ability to bleed into the general stream of misinformation that circulates the internet. Time and time again, this digital misinformation has real-world consequences, impacting elections, politics, and laws.
    Interestingly, the topics of digital technology seem to feed into one another quite well. For instance, a key solution to the very crisis posed by deepfakes might be artificial intelligence that monitors for them. Yet ironically, artificial intelligence in itself poses a new set of problems: what if an AI is repurposed for ill intent, to inflict wounds instead of stitching them?

    -Swiss Confederation (Alexander Guess)

