
Disinformation: The Struggle for Regulation and The Role of AI
Rose Broccolo
​
Abstract
Disinformation has become a powerful force in today’s world. With the rise of artificial intelligence (AI), spreading fake content has never been easier, while regulating it has never been more difficult. In this literature review I will discuss the obstacles faced and the attempts made to regulate the spread of disinformation. I will also discuss the central role AI plays in creating and disseminating disinformation. With the advancement of chat and social bots, deepfakes, and language models, recognizing and controlling disinformation has become virtually impossible.
Introduction
Defined by Don Fallis (2015) as “inaccurate or misleading information,” disinformation has been around since the beginning of time. Using rumors and smear campaigns for political or personal gain existed long before the internet. Even the Allies in World War II created fake radio transmissions to throw off the Germans prior to the D-Day invasion (Fallis, 2015). Since the creation of the internet and social media, however, disinformation has become more powerful and dangerous than ever, encroaching on democracies and leading to deadly riots (Bontridder & Poullet, 2021).
With the rapid advancement of artificial intelligence in the last decade, and particularly in the last year, disinformation is nearly impossible to regulate. It has become incredibly simple not only to create fake news but also to disseminate it (Bontridder & Poullet, 2021). Lawmakers around the globe are scrambling to create policies to address the issue, but there is little evidence that their actions have been effective. Some countries, like the U.S., are putting their faith in private corporations to regulate disinformation across their platforms. This, too, has been an ineffective course of action.
Disinformation has also strained human-computer interaction. The more people come across fake news, whether an article, an image, or a video, the more their uncertainty about the internet, news, and media grows (Vaccari & Chadwick, 2020). Once a user’s trust in a source is broken, it is hard for that source to win it back. If users do believe the fake news they are being fed, the repercussions can be much worse: conspiracy theories and deepfakes spread anger and can turn into riots, like the January 6th Capitol insurrection (Bontridder & Poullet, 2021).
Attempts have been made to use artificial intelligence itself for regulation, but they have been futile. Keeping up with the advancements of deepfake and chatbot technologies is a daunting task, and fighting fire with fire has not proven useful (Bontridder & Poullet, 2021).
Literature Review
The Uphill Battle for Regulation
The spread of disinformation has become increasingly difficult to regulate for several reasons. The first challenge is defining what exactly disinformation is (Fallis, 2015). Second, some types of disinformation, such as deepfakes, can be difficult to recognize, making detection another barrier. Third, and possibly most important, is the spread: once disinformation exists, it can move through social media like an epidemic (Rubin, 2019).
​
Pielemeier (2020) discusses three challenges in defining disinformation. The first he refers to as the “Definition Challenge”: it can be very difficult to distinguish disinformation or hate speech from, for example, satire. The second is the “Intent Challenge,” which is related to the first but goes deeper. If someone creates disinformation, it is not necessarily with malicious intent; it may, for example, be a white lie to spare a friend’s feelings or protect their privacy (Pielemeier, 2020). This is where knowing the source of the disinformation becomes important, and privacy issues come into play. The third challenge is the “Harm Challenge,” which captures the difficulty of measuring the harm disinformation can do to the public. As opposed to direct attacks like hate speech or terrorist campaigns, the spectrum of disinformation is broader, and often the repercussions are not recognized until the content has spread beyond control (Pielemeier, 2020).
​
Attempts to address the issue have been made around the world with little success (Pielemeier, 2020). Several countries, like the U.S., have experienced the spread of disinformation about their politicians, affecting the outcomes of elections. Some have adopted laws to combat fake news specifically in the context of politics and elections.
​
The U.S. faces the biggest obstacles to regulation because of the First Amendment. Given its protection of free speech, the country has relied almost solely on private corporations, like Facebook and Twitter, to come up with their own policies (Hong, 2022). Some sources argue this lack of accountability has divided the country with conspiracy theories and baseless movements, including the January 6th insurrection (DiMaggio, 2022). As for the private corporations, despite years of evidence that the free-market speech model has the potential for extreme harm, Facebook has failed to implement any effective regulations to protect its users (Hemphill & Banerjee, 2021). Zuckerberg continues to emphasize the importance of protecting freedom of expression while failing to recognize the gravity of his inaction (Hemphill & Banerjee, 2021).
While dealing with similar issues around the protection of free speech, the European Union has taken more aggressive measures to regulate disinformation. It too, however, has fallen short and received criticism for unsatisfactory results (Pielemeier, 2020). The European Commission created a code of practice in 2018 that has since been signed by Google, Facebook, Mozilla, Twitter, TikTok, Microsoft, and several online advertisers. The code has five main commitments, which focus on transparency in advertising and on prioritizing the verification of real users over bots (Harrison, 2021).
​
In 2017, the President of France, Emmanuel Macron, introduced a policy to attack disinformation within the context of elections, known as France’s Law on the Manipulation of Information (Pielemeier, 2020). The law allows public figures to request expedited legal review of specific content. They must seek review during the three months prior to an election, and if the judge finds that the content meets the defined criteria of “manipulation of information,” steps can be taken to stop its distribution. The law also imposes a “duty of cooperation” on online platforms, requiring them to provide options for users to flag disinformation and to meet transparency and media literacy commitments (Pielemeier, 2020).
​
In Singapore, the Protection from Online Falsehoods and Manipulation Act (POFMA), implemented in 2019, criminalized fake news and disinformation (Pielemeier, 2020). The law came into existence after a series of articles spread lies about elected officials. POFMA allows those in power to order either a “correction direction” or a “stop communication direction” to the author, as well as to an internet intermediary, for any false statement of fact if they are “of the opinion that it is in the public interest” (Pielemeier, 2020, p. 934).
In an effort to get ahead of the problem, the Malaysian government took a different approach. Before its 2018 election, it enacted an “Anti-Fake News” law that criminalizes creating or spreading disinformation, with penalties of fines and up to six years in prison (Pielemeier, 2020). Since its enactment, there have been efforts to repeal the law because of its vagueness and severity.
​
Perhaps most aggressive is Germany, whose constitution requires political figures to remain neutral when speaking in an official capacity. In addition, statements that are proven false are not protected under its free speech article. Media companies (specifically broadcasters) must follow the same requirements when allocating campaign advertising spots: instead of the unbalanced news we often get in the U.S., they are required to present more than one viewpoint (Hong, 2022).
In Brazil’s disastrous 2018 presidential election, groups used WhatsApp to spread lies about political candidates, leading to forty-seven lawsuits against social media companies. Almost no action was taken, as the Brazilian government claimed it was too difficult to regulate an app built on direct messaging (Santos, 2020).
​
Campaigns for media literacy and information literacy have emerged as another step toward tackling disinformation. Training users to recognize what is fake may be critical to stopping the spread. This is an uphill battle, however, because of third-person perception (TPP) (Jang & Kim, 2018). TPP refers to the tendency to believe that others (particularly those of the opposing political party) are more susceptible to fake news than we are (Jang & Kim, 2018).
​
The Role of Artificial Intelligence
Artificial intelligence (AI) is one of the most exciting, yet terrifying, topics of conversation today. With the rise of machine learning (ML) and tools such as ChatGPT, it has catastrophic potential. In the context of disinformation, the advancement of AI has not only made fake content more difficult to recognize but has also made its dissemination nearly impossible to control (Bontridder & Poullet, 2021).
​
Somewhat newer to the scene of disinformation are deepfakes: videos or images that have been manipulated using machine learning and AI (Kietzmann et al., 2020). The term was coined by a Reddit user who combined “deep learning” with “fakes” to create the username “deepfakes.” The user placed celebrities’ faces into explicit videos without their permission, setting off a trend of fake photos and videos. Since then, photo and video editing technology has improved, and the ability to synthesize sound along with the video makes it essentially impossible to tell whether a clip is fake (Kietzmann et al., 2020).
​
A significant amount of research has shown that people are more easily deceived by fake images than by text, and deepfake videos are an even more powerful form of disinformation than images (Vaccari & Chadwick, 2020). In 2018, a video was released showing President Obama in the Oval Office swearing into the camera. Toward the end of the video, it is revealed that the speaker is not Obama but comedian Jordan Peele, imitating his voice, with AI technology used to synthesize Obama’s face to match (Vaccari & Chadwick, 2020). Researchers used the video to test the believability of deepfakes. They edited out the reveal that it was Peele and studied different versions of the video to determine how susceptible participants were to believing it was real and how much uncertainty it caused (Vaccari & Chadwick, 2020). They concluded that, while not everyone falls for the deepfake trick, it can cause a tremendous amount of uncertainty and distrust in the media (Vaccari & Chadwick, 2020).
​
Because software now exists that makes deepfakes easy to create, they are being used for emotional, psychological, physical, and political attacks (Mirsky & Lee, 2021). In 2018, a video purporting to show children being kidnapped went viral on WhatsApp in India. It caused further rumors about child kidnappings to spread and ultimately led to retaliation and violence; the attacks killed at least nine people (Vaccari & Chadwick, 2020). It is predicted that phishing and blackmail will increase as real-time deepfake technology continues to advance (Mirsky & Lee, 2021).
​
Bots have been around somewhat longer than deepfakes, but recent systems such as ChatGPT have become, for lack of a better word, more intelligent. Referred to as language models (LMs), ChatGPT and similar services have already shown the ability to write anything from a screenplay, to a homework assignment on Shakespeare, to Senate testimony (Bommasani et al., 2023). Researchers theorize that they will also become the most formidable tools for spreading disinformation across the internet (Hsu & Thompson, 2023).
​
ChatGPT, created by OpenAI, has proven to be dangerously unreliable in the information it provides. In May 2023, a story broke that a lawyer had used ChatGPT to prepare a filing in a lawsuit against an airline; the AI service provided several fake court decisions to support the case. The defense lawyers told the judge they were unable to find the cases cited, and the lawyer confessed he had used ChatGPT (Cerullo, 2023). In another recent example, a computer science professor at Princeton asked ChatGPT to answer some of the questions on his exam. The tool replied with incorrect answers that nonetheless sounded plausible and could easily have been mistaken for correct answers in a different context (Hsu & Thompson, 2023). Companies such as NewsGuard and Check Point Research have been studying the abilities of chatbots and LMs since their release. They have asked ChatGPT to write claims about certain conspiracy theories from the perspective of Russian or Chinese news outlets, along with other disinformation campaigns (Hsu & Thompson, 2023). ChatGPT did not always fall for the trap, but, according to Hsu and Thompson (2023), it offered disinformation in response to about 33% of the requests.
​
There are two items worth noting regarding ChatGPT. First, when opening the tool, a disclaimer appears warning that the user may be given false or biased information; I know this from experience, having opened ChatGPT for research purposes for this paper. Second, ChatGPT’s training data only extends through 2021 (Hsu & Thompson, 2023), which means some of the content it provides will be severely outdated.
​
In response to these harrowing advancements, AI is also being developed to combat fake content online (Bontridder & Poullet, 2021), but so far without success. Detecting deepfakes is a difficult task. One suggestion is to create tools that trace the source of the content. A second option is to develop deepfake-detecting technology that looks for abnormalities; this is flawed, however, because it would likely be unable to keep up with advances in deepfake creation technology (Bontridder & Poullet, 2021). An “authenticated alibi service” is an extreme third option, and hardly an option at all: it would essentially monitor every individual’s location, speech, and movements to prove what they are doing at all times, in case a deepfake is made with their likeness. This would be a terrible infringement of privacy (Bontridder & Poullet, 2021).
​
Detecting content created by LMs may become more and more difficult as they learn to sound like humans. As with deepfakes, detection technology that looks for certain abnormalities in the text may lose effectiveness as LMs develop. Another approach that has been tested is to use machine learning to train a system to differentiate between a factual article and a fake one, as sketched below. However, this is also flawed: it would be very difficult to obtain the amount of data needed for it to work, and even then the training data could bias the system (Bontridder & Poullet, 2021). Overall, the use of AI to fight AI has been ineffective.
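To make the idea concrete, the sketch below shows the kind of supervised text classifier this line of work describes: label a corpus of articles as factual or fabricated, extract word-frequency features, and fit a model that scores new articles. This is a minimal illustration under assumed conditions; the tiny hand-written examples, the scikit-learn pipeline, and the choice of TF-IDF features with logistic regression are my own illustrative assumptions, not the method of any study cited above.

# A minimal sketch of the "train a system to tell factual from fabricated
# articles" idea described above, assuming a labeled corpus is available.
# The toy examples, the TF-IDF features, and the logistic regression model
# are illustrative assumptions, not the setup of any cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = fabricated claim, 0 = factual reporting.
articles = [
    "Secret cure for all diseases is being suppressed by world governments.",
    "The city council approved the new transit budget on Tuesday.",
    "Celebrity admits the moon landing footage was filmed in a basement.",
    "The central bank raised interest rates by a quarter of a point.",
]
labels = [1, 0, 1, 0]

# Word-frequency (TF-IDF) features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score unseen text; a real system would need far larger, balanced corpora,
# which is exactly the data problem noted by Bontridder and Poullet (2021).
new_articles = [
    "Leaked memo proves the election results were fabricated by insiders.",
    "Local library extends weekend opening hours starting next month.",
]
print(model.predict(new_articles))        # predicted labels (1 = likely fake)
print(model.predict_proba(new_articles))  # confidence scores for each article

Even this toy pipeline illustrates the weakness noted in the literature: the classifier only knows the patterns in whatever labeled data it was given, so a skewed or too-small corpus produces a skewed detector.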
​
Discussion
It is intriguing to think about the inception of these technologies, not only AI but the internet in general. When the World Wide Web was first conceived, its developers were not thinking about the spread of disinformation. These entrepreneurs did not foresee that their life’s work would be used for malicious ends. Did young Mark Zuckerberg think Facebook would be used to sway elections when he was sitting in his Harvard dorm? Doubtful. It goes to show that, no matter the intentions, in the end it is up to the consumer to determine how something is used. And if they had had the foresight, would it have stopped them?
When thinking about this in the context of AI, the potential seems endless. Deepfakes and LMs are just the beginning. Countless theories have arisen about the possibilities of AI, from taking our jobs (already happening), to starting wars, to humans becoming completely dependent on it. Sci-fi movies like I, Robot and WALL-E, which depict robot autonomy, have also spread a certain level of fear about the future of AI. Maybe they aren’t quite as fictional as we think.
​
Conclusion
The regulation of disinformation in today’s world is ultimately failing. Finding solutions in the middle of what is being called the “AI boom” is like fighting the hydra: cut off one head and two more grow back. ChatGPT has surpassed Google, Facebook, and TikTok as the fastest-growing application in history (Bommasani et al., 2023). Meanwhile, Nvidia, whose chips power much of the AI industry, just hit a $1 trillion market value (Saul, 2023).
​
Lawmakers, researchers, and developers are fighting to keep up with the advancement of AI, but by the time regulation comes into existence, it already seems to be too late. Disinformation is being created and disseminated before it can even be recognized as fake, and the terrifying truth is that there is currently no clear solution. As individuals, the only thing we can do is continue to educate ourselves with reliable news sources and practice information and media literacy in the hope that when we see false content, we may recognize it.
​
References
Bommasani, R., Liang, P., & Lee, T. (2023). Holistic evaluation of language models. Annals of the New York Academy of Sciences. https://doi.org/10.1111/nyas.15007
Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3. https://doi.org/10.1017/dap.2021.20
Cerullo, M. (2023, May 30). A lawyer used ChatGPT to prepare a court filing. It went horribly awry. CBS News. https://www.cbsnews.com/news/lawyer-chatgpt-court-filing-avianca/
DiMaggio, A. (2022). Conspiracy theories and the manufacture of dissent: QAnon, the “Big Lie”, Covid-19, and the rise of rightwing propaganda. Critical Sociology, 48(6), 1025–1048. https://doi.org/10.1177/08969205211073669
Fallis, D. (2015). What is disinformation? Library Trends, 63(3), 401–426. https://doi.org/10.1353/lib.2015.0014
Harrison. (2021). Tackling disinformation in times of crisis: The European Commission’s response to the Covid-19 infodemic and the feasibility of a consumer-centric solution. Utrecht Law Review, 17(3), 18–33. https://doi.org/10.36633/ulr.675
Hemphill, & Banerjee, S. (2021). Facebook and self-regulation: Efficacious proposals – or “smoke-and-mirrors”? Technology in Society, 67, 101797. https://doi.org/10.1016/j.techsoc.2021.101797
Hong. (2022). Regulating hate speech and disinformation online while protecting freedom of speech as an equal and positive right – comparing Germany, Europe and the United States. The Journal of Media Law, 14(1), 76–96. https://doi.org/10.1080/17577632.2022.2083679
Hsu, T., & Thompson, S. A. (2023). Disinformation researchers raise alarms about A.I. chatbots. The New York Times (Online).
Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295–302. https://doi.org/10.1016/j.chb.2017.11.034
Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. https://doi.org/10.1016/j.bushor.2019.11.006
Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41. https://doi.org/10.1145/3425780
Pielemeier, J. (2020). Disentangling disinformation: What makes regulating disinformation so difficult? Utah Law Review, 2020(4), 917.
Rubin, V. L. (2019). Disinformation and misinformation triangle: A conceptual model for “fake news” epidemic, causal factors and interventions. Journal of Documentation, 75(5), 1013–1034. https://doi.org/10.1108/JD-12-2018-0209
Santos. (2020). Social media, disinformation, and regulation of the electoral process: A study based on 2018 Brazilian election experience. Revista de Investigações Constitucionais, 7(2), 429–449. https://doi.org/10.5380/rinc.v7i2.71057
Saul, D. (2023, May 31). Nvidia hits $1 trillion market value. Forbes. https://www.forbes.com/sites/dereksaul/2023/05/30/nvidia-hits-1-trillion-market-value/?sh=3e390f5f3eab
Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408