Human Rights Research Center

Artificial Intelligence and Human Rights: Part Two

July 30, 2024


Artificial Intelligence and Freedom of Expression



In the first part of this article series covering the impact of AI on human rights, we explored the definition and historical evolution of AI along with some of the global regulatory mechanisms surrounding AI and human rights. This part aims to address how evolving AI technologies are anticipated to impact freedom of expression.


AI, defined by the OECD as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments,” has become a powerful tool shaping many facets of society and human rights, including freedom of expression.


As AI systems become more integrated into digital platforms, they increasingly impact how individuals can express themselves online. While AI can promote freedom of expression by enabling the dissemination of diverse viewpoints and automating content translation, it also poses significant challenges. Ensuring that AI is aligned with human rights principles, including freedom of expression, requires vigilant regulation and ethical considerations.


AI’s impact on freedom of expression can be seen in several ways, and one major concern is its role in content moderation. Social media platforms such as Facebook, X (formerly Twitter), and YouTube use AI algorithms to detect and remove harmful content such as hate speech and misinformation. However, these systems can be overly aggressive, sometimes removing legitimate content and suppressing free speech. According to an article published by the Electronic Frontier Foundation, automated content moderation poses a major risk to freedom of expression online. In several instances, automated systems have erroneously flagged posts as hate speech, leading to their removal and a subsequent chilling effect on freedom of expression.


Another concern is the use of AI-driven surveillance technologies, which can have a chilling effect on freedom of expression. When individuals know they are being monitored, they may be less likely to express dissenting or controversial opinions. Today, some authoritarian governments use AI to track online activities, identify dissenters, and suppress political opposition. For instance, the Chinese government’s restrictive online regime relies on a combination of legal regulations and technical controls, including AI tools and the proactive manipulation of online debates. Such practices create a climate of fear, discouraging people from freely expressing their thoughts on the Internet.


Moreover, AI algorithms can reinforce existing biases and create echo chambers, limiting exposure to diverse perspectives. These algorithms often prioritize content that aligns with users’ existing beliefs, leading to polarization and reducing opportunities for meaningful dialogue. An article published by the Pew Research Center highlights that algorithmic categorizations tend to amplify partisan content and polarization, contributing to the echo chamber effect.


Another aspect of AI's impact on freedom of expression is the proliferation of deepfakes and misinformation, which is particularly evident in political processes. AI-generated deepfakes and misinformation can distort public discourse: the spread of false information can manipulate public opinion and shape the outcome of elections worldwide. Furthermore, AI tools can influence the availability and accessibility of information. AI-driven search engines can prioritize certain content over other content, shaping what information users are exposed to.


In conclusion, while AI has the potential to enhance freedom of expression by making communication more accessible and efficient, it also poses significant risks. Ensuring that AI technologies are designed and implemented in ways that respect and promote freedom of expression is crucial. This requires transparent content moderation practices, robust privacy protections, and efforts to mitigate algorithmic biases.



Glossary


  • Algorithm: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

  • Algorithmic Bias: systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another.

  • Content Moderation: the process of monitoring, reviewing, and moderating user-generated content on websites and social media platforms.

  • Deepfake: synthetic media (images, videos, or audio) generated by artificial intelligence that portrays something that does not exist in reality or events that never occurred.

  • Echo chamber: an environment where a person only encounters information or opinions that reflect and reinforce their own.

  • Misinformation: incorrect or misleading information. Misinformation differs from disinformation, which is false information deliberately intended to mislead—an intentional misstatement of the facts.
