Human Rights Research Center

Artificial Intelligence and Human Rights: Part One

July 2, 2024



Introduction


Artificial Intelligence (AI) has become a transformative force in various sectors, raising significant concerns about its impact on human rights. The integration of AI technologies by governments and private entities has led to debates on the need for ethical frameworks to mitigate potential negative consequences. Ensuring that AI is aligned with human rights principles requires vigilant regulation and ethical considerations.


This article explores the relationship between AI and human rights, drawing on insights from research papers and reports published by international organizations.


Artificial Intelligence: What is it and How Did it Evolve?


Artificial intelligence (AI) can be defined as a branch of computer science that aims to create systems capable of performing tasks that typically require human intelligence. Generative AI (GAI) is defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)” (Lim et al., 2023, p.2). One recent example of GAI is ChatGPT, developed by OpenAI: a conversational model that can answer simple questions, craft essays, and hold conversations with the user.
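To make the definition concrete, here is a minimal sketch of prompt-driven text generation using the open-source Hugging Face transformers library and the small GPT-2 model (chosen purely for illustration; it is far less capable than systems like ChatGPT):

```python
# Minimal sketch of generative AI: a deep learning language model
# produces human-like text in response to a prompt.
# Assumes the Hugging Face "transformers" library is installed
# (pip install transformers torch); GPT-2 is a small, freely
# available stand-in for larger conversational models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence affects human rights because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with statistically likely text,
# learned from patterns in its training data.
print(result[0]["generated_text"])
```

The essential point for this article is that such systems produce fluent output by reproducing patterns in their training data, which is also why the quality and biases of that data matter so much.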


The formal inception of AI as a scientific discipline can be attributed to the mid-20th century. British mathematician Alan Turing, often regarded as the father of AI, proposed the idea of a “universal machine” capable of performing any computation. His seminal paper, “Computing Machinery and Intelligence” (1950), introduced the Turing Test to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human being.


The term “artificial intelligence” was coined by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in connection with the 1956 Dartmouth Conference. This event is widely seen as the birth of AI as a distinct discipline (McCarthy et al., 1956). Early AI research focused on symbolic AI and heuristic problem-solving (Newell & Simon, 1956).


AI saw a resurgence with the development of expert systems in the 1970s and 1980s. In the late 1990s and early 2000s, the focus of AI research shifted towards machine learning, particularly the development of algorithms that enable systems to learn from data.


The first decade of the 2000s saw the advent of deep learning, powered by neural networks with many layers, which has revolutionized AI. Today, AI is deeply integrated into many aspects of everyday life, from virtual assistants like Alexa to autonomous vehicles and advanced medical diagnostics. Alongside its widespread adoption, ethical concerns regarding AI's impact on human rights have gained significant attention (Brynjolfsson & McAfee, 2014).
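As a rough, purely illustrative sketch of what “many layers” means, the toy code below (the layer sizes and random weights are arbitrary placeholders, not a trained model) passes an input through several stacked layers, each applying a linear transformation followed by a nonlinearity:

```python
# Toy illustration of a deep (multi-layer) neural network's forward pass.
# The weights here are random placeholders; in real deep learning they
# are adjusted ("learned") from large amounts of training data.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 16, 4]  # input -> three hidden layers -> output

# One weight matrix and bias vector per layer.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through every layer in turn."""
    for w, b in zip(weights, biases):
        x = np.maximum(0.0, x @ w + b)  # linear step + ReLU nonlinearity
    return x

print(forward(rng.standard_normal(8)))
```

Stacking many such layers is what lets deep networks learn the complex patterns behind applications like speech recognition and medical image analysis.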


AI and Human Rights


The impact of AI on human rights is multifaceted, encompassing both potential benefits and significant challenges. In terms of positive impacts, AI can enhance access to information and essential services, especially for marginalized communities. For example, AI-driven translation tools can break down language barriers. In addition, AI technologies such as facial recognition can contribute to public safety and security.


In terms of the negative impacts of AI on human rights, one of the primary areas of concern is data privacy. AI systems, especially those used for surveillance and data analysis, can infringe on individuals’ privacy, and mass surveillance programs using AI can lead to unwarranted monitoring of citizens. Moreover, AI algorithms can perpetuate and even amplify existing biases if they are trained on biased data, which can lead to discriminatory practices (a simplified sketch of this mechanism follows below). Furthermore, the development of AI for military applications raises serious ethical and human rights concerns: autonomous weapons systems that can make life-and-death decisions without human intervention pose significant risks to international humanitarian law and ethical standards.
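The bias mechanism can be shown with a deliberately simplified, entirely synthetic sketch: the “historical” records below are fabricated, and the naive model trained on them ends up reproducing the skew in past approvals rather than judging qualification:

```python
# Toy illustration of how a model trained on biased data reproduces
# that bias. The records below are fabricated: group A applicants
# were approved far more often than equally qualified group B ones.
from collections import defaultdict

historical = [
    # (group, qualified, approved) -- synthetic, deliberately skewed
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

# A naive model: approve if most past applicants from the same
# group were approved. It learns the skew, not the qualification.
approvals = defaultdict(list)
for group, _qualified, approved in historical:
    approvals[group].append(approved)

def predict(group):
    past = approvals[group]
    return sum(past) > len(past) / 2

print(predict("A"))  # True  -> qualified A applicants approved
print(predict("B"))  # False -> equally qualified B applicants rejected
```

Real systems are far more complex, but the underlying failure mode is the same: a model optimizes to match its training data, biases included.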


Regulation and Governance of AI for Good


AI systems often require vast amounts of data, leading to significant privacy concerns and potential violations of data protection rights (Donahoe & Metzger, 2019). As noted above, AI can perpetuate and even exacerbate existing biases, resulting in discriminatory outcomes (Thinyane & Sassetti, 2020). AI can undermine personal autonomy and democratic accountability, necessitating global governance frameworks to address these challenges. Thus, effective regulation is required to ensure that AI technologies are developed and used in ways that respect human rights and democracy.


Creating frameworks for data protection, ensuring transparency, and establishing accountability mechanisms are critical to that regulation. The legal regulations surrounding AI and human rights are evolving rapidly on a global scale. Some key frameworks and regulations are as follows:


United Nations (UN)

The UN has published several reports addressing the impact of AI on human rights, highlighting concerns such as surveillance, data privacy, and the right to non-discrimination. Furthermore, the UN hosts forums and panels to ensure AI advancements benefit all of humanity while safeguarding human rights. One such initiative is the High-Level Panel on Digital Cooperation, which brings together diverse stakeholders, including governments, the private sector, civil society, and academia, to promote global cooperation in the digital domain. The panel emphasizes the need for a human-centric approach to AI that aligns with human rights.


Council of Europe

On May 17, 2024, the Council of Europe adopted the first-ever international legally binding treaty aimed at ensuring respect for human rights, democracy, and the rule of law in the use of AI systems. The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation (Council of Europe Newsroom, 2024).


European Union (EU)

The “Ethics Guidelines for Trustworthy AI” were developed by the European Commission’s High-Level Expert Group on AI in 2019. These guidelines stress the importance of respect for human dignity, data privacy, and non-discrimination (European Commission Report, 2019).


Conclusion


AI has the potential to enhance human rights but also poses substantial risks. It raises issues such as algorithmic transparency, cybersecurity vulnerabilities, unfairness, and discrimination, all of which affect human rights principles. Addressing these issues requires robust ethical and legal frameworks and regulations that consider both the benefits and risks of AI.


According to Volker Türk, the UN High Commissioner for Human Rights, two schools of thought are shaping the current development of AI regulation. The first is risk-based only, focusing largely on self-regulation by AI developers. The other embeds human rights across AI’s entire lifecycle: from beginning to end, human rights principles inform the collection and selection of data, as well as the design and use of the models. Türk argues that “AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until such adequate safeguards are in place” (Office of the United Nations High Commissioner for Human Rights, 2023).


Countries and international organizations need to collaborate on setting standards and guidelines that protect human rights at a global level. One recent collaboration took place in South Korea on May 21-22, 2024: the AI summit in Seoul, co-hosted with Britain, discussed concerns such as job security, copyright, and inequality after 16 tech companies signed a voluntary agreement to develop AI safely (Lee, 2024). As AI continues its rapid growth, it is clear that the world needs more effort and time to ensure that AI technologies are developed and used in ways that advance human rights globally.


 

Glossary


  • Algorithmic Transparency: The principle that the factors influencing the decisions made by algorithms should be transparent to the people who use, regulate, and are affected by systems that employ those algorithms.

  • Autonomy: The quality or state of being self-governing; especially, the right of self-government.

  • Autonomous Weapon System: An autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions.

  • Deep Learning: Deep learning is a subset of machine learning that uses multi-layered neural networks, called deep neural networks, to simulate the decision-making power of the human brain.

  • Heuristic Problem-solving: An informal, intuitive, speculative procedure that leads to a solution in some cases but not in others.

  • Machine Learning: A branch of AI and computer science that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy.

  • Neural Network: A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain.

  • Turing Test: A test, introduced by Alan Turing in 1950, of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.


 

Sources

