
Artificial Intelligence and Human Rights - Part Four

September 25, 2024



AI and Online Surveillance



The advent of artificial intelligence (AI) has ushered in a new era of technological advancement. However, the development and deployment of AI have also raised profound concerns about its impact on privacy and human rights, particularly in the realm of online surveillance.  


The first and second parts of this article series on the impact of AI on human rights traced the evolution of AI and the international regulatory mechanisms governing its use, along with AI’s impact on freedom of expression. The third part addressed how AI affects data privacy; this final part sheds light on AI’s impact on human rights in the realm of online surveillance.


AI, with its ability to process vast amounts of data at unprecedented speeds, has become an indispensable tool for online surveillance. From facial recognition systems to predictive policing algorithms, AI-powered technologies are increasingly used to monitor and analyze users’ digital footprints. Cybercriminals are also known to use AI tools to breach organizations’ systems and steal consumer identity credentials.
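As a rough, hypothetical illustration of how such matching works in principle, the short Python sketch below compares a “probe” face embedding against a toy watchlist using cosine similarity. The embeddings, names, and threshold are invented for demonstration only and do not reflect any real system.

import numpy as np

# Hypothetical sketch: a facial recognition system typically reduces each face
# image to a numeric "embedding" and compares it against a database of stored
# embeddings. All vectors and the threshold below are made-up example values.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy "watchlist" of stored embeddings (in practice produced by a neural network).
watchlist = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}

probe = np.array([0.88, 0.12, 0.31])  # embedding of a face captured by a camera
THRESHOLD = 0.95                      # arbitrary decision threshold

for name, stored in watchlist.items():
    score = cosine_similarity(probe, stored)
    if score >= THRESHOLD:
        print(f"Possible match: {name} (score={score:.3f})")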


While proponents argue that online surveillance can enhance security and efficiency, critics contend that it poses a significant threat to fundamental freedoms. One of the most pressing concerns is the erosion of data privacy. AI-driven surveillance systems can collect and analyze an extensive range of personal data, including online browsing history, social media interactions, and even biometric information. It is also argued that increased adoption of biometric surveillance systems, such as AI-powered facial recognition in public places, might spur biometric identity theft and anti-surveillance innovations. This level of data collection creates a detailed profile of users, allowing governments, corporations, and other entities to track their movements, predict their behaviors, and influence their decisions. For instance, AI-based surveillance technology can be used for marketing purposes and targeted advertising.


AI allows for more detailed profiling than was ever possible before, and AI-powered surveillance can be used to discriminate against marginalized groups. Facial recognition systems have been shown to be less accurate at identifying people of color, leading to wrongful arrests and convictions. Similarly, predictive policing algorithms, which are trained on historical crime data, may perpetuate biases against certain communities.
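To make the concern about uneven error rates concrete, the hypothetical Python sketch below uses invented records to show how a system’s false-match rate can be negligible for one group yet substantial for another. The groups, outcomes, and counts are illustrative assumptions, not real data.

# Hypothetical sketch of why per-group error rates matter: the records below are
# invented, but the calculation shows how an error rate that looks acceptable
# overall can fall much more heavily on one group.

records = [
    # (group, was_flagged_by_system, was_actually_a_match)
    ("group_1", True,  True),  ("group_1", False, False),
    ("group_1", False, False), ("group_1", False, False),
    ("group_2", True,  False), ("group_2", True,  True),
    ("group_2", True,  False), ("group_2", False, False),
]

def false_match_rate(group: str) -> float:
    """Share of non-matching people in a group who were wrongly flagged."""
    non_matches = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_matches if r[1]]
    return len(wrongly_flagged) / len(non_matches)

for g in ("group_1", "group_2"):
    print(f"{g}: false match rate = {false_match_rate(g):.0%}")
# Prints 0% for group_1 and 67% for group_2: the same system, very different error burdens.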


Furthermore, the use of AI for surveillance raises questions about government overreach and censorship. Authoritarian regimes, such as China’s, can employ AI to suppress dissent, monitor political activity, and control the flow of information. Even in democratic societies, mass surveillance can chill free speech and create a climate of fear.


To address these challenges, it is imperative to develop robust legal and ethical frameworks for the use of AI in surveillance. This includes enacting comprehensive data protection laws, establishing transparency and accountability mechanisms, and ensuring that AI systems are designed and used in a fair and unbiased manner. Furthermore, it is essential to promote digital literacy and critical thinking among the public. Individuals should be aware of the risks of online surveillance and take steps to protect their privacy. This includes using strong passwords, being mindful of the information they share online, and supporting organizations that advocate for digital rights.

 

In conclusion, the intersection of AI and online surveillance presents a complex and multifaceted challenge to human rights. By developing responsible AI policies and promoting digital literacy, it is possible to harness the power of AI technology while safeguarding fundamental freedoms.


 

Glossary


  • Authoritarian: favoring complete obedience or subjection to authority as opposed to individual freedom.

  • Biased: showing an unreasonable like or dislike for someone or something based on personal opinions.

  • Biometric Information: data on unique biological characteristics used to identify and authenticate individuals in a reliable and fast way.

  • Democratic: based on the principles of democracy as a system of government in which state power is vested in the people or the general population of a state.

  • Digital Footprint: refers to one’s unique set of traceable digital activities, actions, contributions, and communications manifested on the Internet or digital devices.

  • Digital Literacy: having the knowledge, skills and confidence to keep up with changes in technology.

  • Dissent: to publicly disagree with an official opinion or decision.

  • Facial Recognition: refers to technology potentially capable of matching a human face from a digital image or a video frame against a database of faces.

  • Marginalized Groups: groups of people who, within a given culture, context, and history, are at risk of being subjected to multiple forms of discrimination.

  • Mass Surveillance: the monitoring of, or collection of information about, individuals or an entire population, particularly by electronic means.

  • Multifaceted: having many different aspects or features.
