Disinformation on social media has been a significant concern in recent years, exerting growing influence on public discourse. It is especially prevalent in political campaigns, where false or inaccurate information is often used to sway voters.
Disinformation has also affected public health, contributing to a “stigma” around COVID-19 vaccination. There has been an ongoing debate about how disinformation and misinformation should be regulated on social media platforms like Twitter, YouTube, and Facebook.
In the past several years, many tech companies have established internal regulatory policies to address fake news and conspiracy theories. However, these efforts, which include hiring policy experts and investing in technology to limit the spread of disinformation, have waned during the ongoing wave of layoffs in the tech industry. YouTube, for example, reduced its misinformation policy team to a single employee.
Another regulatory avenue is state policy. To date, the government has largely taken a “hands-off legal position” toward speech regulation on social media platforms, but recent Supreme Court activity may alter the landscape of future regulation. Previously, the Supreme Court denied certiorari in a case holding social media giants liable for taking down disinformation on their platforms. The Court may reconsider the rules governing online speech in upcoming hearings on Texas and Florida laws that bar social media platforms from removing certain political posts or banning political candidates.
While the future of disinformation regulation on social media remains uncertain, new concerns have emerged around AI tools like ChatGPT, a chatbot trained on a vast body of data from which it produces its answers. Its responses are limited to the information contained in that training data. Yet ChatGPT may generate false answers that sound “authoritative,” making the correct answer difficult to identify unless the user already knows it. When asked to disclose its sources, ChatGPT often cites seemingly plausible but fabricated articles and scientific studies.
The power of ChatGPT, its potential to amplify disinformation, and its implications for national security and education raise questions about how such tools should be regulated. So far, Congress has been “slow to react when it comes to technological issues,” including AI regulation. However, as the “fastest-growing consumer application in history,” ChatGPT has recently drawn lawmakers’ attention. Representative Ted Lieu of California, for example, urged Congress to establish a nonpartisan commission to recommend regulations for AI like ChatGPT. He warned that the risk of “unchecked, unregulated AI” is pushing us “toward a more dystopian future” and emphasized the urgent need for a dedicated agency to regulate AI.
Surprisingly, ChatGPT itself has suggested several strategies for regulating disinformation. First, it recommends regulating data providers to ensure that the data used to train chatbots is diverse and representative, and therefore less likely to produce inaccuracies. Second, it suggests establishing algorithms to help identify potential sources of error and allow for corrections. Finally, it proposes creating regulatory agencies and organizations to provide guidance and oversight of ChatGPT’s use and practices.
Though major regulation is unlikely to pass immediately, lawmakers agree that ChatGPT should be regulated at least at some level, whether internally, by the state, or both.