
Harnessing Technology to Safeguard Human Rights: AI, Big Data, and Accountability

Human Rights Research Center

April 8, 2025


[Image credit: Shutterstock/Smile Studio AP]

Introduction


Artificial intelligence (AI) and big data analytics are transforming governance, security, and human rights advocacy. Organizations like Amnesty International use AI to document war crimes, and social media platforms deploy machine learning to detect hate speech. However, these technologies also raise concerns about privacy violations, algorithmic bias, and censorship. This paper explores artificial intelligence's dual role in human rights protection, analyzing its applications, ethical challenges, and policy solutions to ensure responsible AI deployment.


Artificial intelligence and big data analytics are transforming human rights monitoring by enabling large-scale data analysis to detect patterns of abuse. AI-powered satellite imagery analysis has helped organizations like Amnesty International document human rights violations, such as identifying damage to civilian infrastructure in Syria and tracking unlawful attacks in conflict zones (Amnesty International, 2020). Additionally, researchers have leveraged big data techniques to analyze digital traces of state-sponsored violence, as seen in Myanmar, where social media data was used to track hate speech and incitement to violence (United Nations, 2021).


However, these technologies also pose risks. A ProPublica investigation found that Black defendants were nearly twice as likely to be incorrectly classified as high risk by a widely used criminal risk assessment algorithm compared to white defendants with similar records (Angwin et al., 2016). Additionally, a 2019 study by the National Institute of Standards and Technology (NIST) revealed that facial recognition algorithms misidentified African American and Asian individuals 10 to 100 times more often than white individuals, raising concerns about discrimination and due process (NIST, 2019). Addressing these challenges requires robust governance frameworks, transparency mandates, and international oversight to ensure AI and big data serve as tools for justice rather than discrimination.


1. The Role of AI and Big Data in Human Rights Protection


1.1 AI in Crisis Monitoring and Humanitarian Response

Artificial intelligence is increasingly utilized in humanitarian efforts to detect patterns in conflict zones and provide early warnings for crises (United Nations, 2021). Machine learning models analyze vast amounts of data, including news reports, satellite imagery, and social media activity, to identify emerging conflicts and potential human rights violations. Organizations such as the United Nations (UN) and Amnesty International leverage AI-driven tools to assess and interpret these datasets, enabling faster and more effective responses to crises. These technologies enhance decision-making for humanitarian organizations by identifying risk patterns that might go unnoticed. However, concerns remain regarding the accuracy of AI predictions and the potential biases embedded in datasets, which can influence the effectiveness of crisis monitoring tools.
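To make this concrete, the sketch below illustrates one simple form such pattern detection can take: flagging unusual spikes in daily conflict-related event counts against a recent baseline. The data, region, and threshold are invented for illustration; operational early-warning systems combine many more signals and rely on expert review.

```python
import pandas as pd

# Hypothetical daily counts of conflict-related reports for one region
# (real systems would draw on news feeds, satellite imagery, and social media).
events = pd.Series(
    [12, 15, 11, 14, 13, 16, 12, 14, 13, 15, 48, 52, 14, 13],
    index=pd.date_range("2024-01-01", periods=14, freq="D"),
)

# Rolling baseline: mean and standard deviation over the previous 7 days.
baseline_mean = events.rolling(7).mean().shift(1)
baseline_std = events.rolling(7).std().shift(1)

# Flag days that sit far above the recent baseline (z-score greater than 3).
z_scores = (events - baseline_mean) / baseline_std
alerts = events[z_scores > 3]

print(alerts)  # dates that would be escalated to human analysts for review
```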


1.2 AI for Hate Speech and Misinformation Detection

The rapid spread of hate speech and misinformation on digital platforms has necessitated the deployment of AI-powered tools to identify and mitigate harmful content (Buolamwini & Gebru, 2018). Natural language processing (NLP) algorithms analyze textual data in real time to flag speech deemed offensive or inciting violence. Social media companies use these tools to enforce content moderation policies, ensuring platforms remain safe for users. However, the effectiveness of AI-driven moderation remains contested, as concerns persist regarding biased content filtering and the over-censorship of legitimate political discourse. Some critics argue that AI models trained on historical data may reflect societal biases, leading to the disproportionate silencing of marginalized groups. Therefore, while AI plays a critical role in combating misinformation, there is an urgent need for transparency and accountability in how these tools are implemented and governed.
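As a minimal sketch of the classification step behind such moderation systems, the example below trains a TF-IDF plus logistic regression model on a tiny, invented set of labeled posts and scores a new one. Production tools rely on much larger multilingual datasets, more capable models, and human review; everything here, including the toy labels, is an illustrative assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = flag for human review, 0 = leave up.
texts = [
    "we should drive them out of our city",
    "those people deserve to be attacked",
    "great turnout at the community meeting today",
    "the new policy will be debated in parliament",
]
labels = [1, 1, 0, 0]

# Turn text into TF-IDF features and fit a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; content above a tuned threshold is queued for human moderators.
post = "they deserve to be attacked for moving here"
risk = model.predict_proba([post])[0][1]
print(f"risk score: {risk:.2f}")
```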


1.3 Big Data in Identifying Human Rights Violations

Big data analytics is revolutionizing the detection of human rights violations, providing critical insights into issues such as police brutality, forced labor, and genocide. By integrating structured and unstructured data sources, AI-driven analytics can process vast amounts of information to uncover patterns of abuse. For instance, satellite imagery analysis has been instrumental in documenting war crimes and verifying reports of atrocities in conflict zones. Mobile phone data further helps trace displacement and migration patterns, offering valuable evidence in cases involving refugees and internally displaced persons. Social media analytics has also emerged as a powerful tool for detecting early signs of human rights crises by monitoring trends and sentiment shifts in public discourse. However, while big data enhances the capacity to hold perpetrators accountable, it raises ethical concerns regarding privacy and surveillance. Data misuse for state-led oppression highlights the need for robust data governance frameworks that ensure human rights are protected rather than violated.


1.4 AI in Legal and Humanitarian Assistance

AI also significantly contributes to legal and humanitarian assistance by optimizing data analysis and streamlining aid distribution (European Commission, 2021). Legal firms increasingly utilize AI-driven platforms to automate the review of human rights case law, allowing for more efficient legal research and documentation. AI-powered tools can assess large volumes of case files, identify relevant precedents, and generate insights that assist legal practitioners in advocating for human rights protections. Additionally, AI chatbots have been developed to help refugees and asylum seekers navigate complex legal frameworks, offering guidance on asylum applications, documentation requirements, and legal rights. These technological advancements can potentially increase access to justice for vulnerable populations.
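A hedged sketch of how such precedent-finding can work under the hood: the example below ranks a handful of invented case summaries by textual similarity to a query using TF-IDF and cosine similarity. The case names, summaries, and query are purely illustrative; real platforms work over curated legal databases with far richer representations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical case summaries standing in for a human rights case-law corpus.
cases = {
    "Case A": "asylum claim rejected without access to an interpreter",
    "Case B": "detention of journalists for reporting on protests",
    "Case C": "deportation order issued despite a pending asylum application",
}

query = "asylum seeker facing deportation before the application is decided"

# Represent the cases and the query in the same TF-IDF space and rank by similarity.
vectorizer = TfidfVectorizer()
case_vectors = vectorizer.fit_transform(cases.values())
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, case_vectors)[0]

for name, score in sorted(zip(cases, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")  # the highest-scoring precedents surface first
```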


However, challenges persist in ensuring that AI-driven legal assistance is both accurate and free from bias. Algorithmic biases embedded in training data may lead to discriminatory legal outcomes, particularly for marginalized groups. Additionally, adequate safeguards are needed to prevent the misuse of AI in decision-making processes that impact fundamental human rights. By addressing these concerns through ethical oversight, transparent regulation, and human-in-the-loop systems, AI can become a powerful tool for promoting fairness and accessibility in legal and humanitarian contexts.


2. Ethical and Legal Risks of AI and Big Data

The increasing reliance on artificial intelligence and big data raises significant ethical and legal concerns, particularly in the areas of privacy, algorithmic bias, censorship, and transparency. While AI offers robust solutions for human rights advocacy, its misuse can also result in mass surveillance, discrimination, and reduced public trust in digital governance. Addressing these challenges requires strong regulatory frameworks to mitigate unintended consequences and ensure ethical AI deployment.


2.1 Privacy Violations and Mass Surveillance

AI-powered surveillance systems, such as China's social credit system and facial recognition tools, threaten individual freedoms. Governments and corporations increasingly use AI-driven monitoring systems to track behaviors, raising concerns about civil liberties. Critics argue that such surveillance leads to digital authoritarianism, where individuals are penalized based on algorithmic assessments without transparency or due process.


However, proponents argue that AI surveillance can enhance public safety, prevent crime, and streamline security operations, particularly in high-risk areas. Law enforcement agencies use predictive policing algorithms to anticipate criminal activity, allowing for better resource allocation. Despite these benefits, studies indicate that such systems disproportionately target marginalized communities, reinforcing existing biases in law enforcement. A 2019 report by the AI Now Institute, for example, documented significant flaws in predictive policing algorithms, including reliance on data shaped by discriminatory policing practices and a lack of transparency in their decision-making processes (Richardson et al., 2019). The city of Santa Cruz, California, became the first U.S. city to ban predictive policing in 2020, citing racial bias and civil liberties concerns (Joh, 2021). The key challenge lies in implementing legal frameworks that balance security needs with privacy rights, ensuring accountability and oversight.


Globally, governments take different approaches to AI surveillance. The European Union has implemented the General Data Protection Regulation (GDPR), which restricts mass data collection and requires user consent for AI-driven surveillance. In contrast, China's AI governance model prioritizes state control, embedding AI surveillance into social governance structures (Feldstein, 2019). In the U.S., local regulations vary, with some states banning facial recognition in public spaces while others expand AI surveillance capabilities.


2.2 Algorithmic Bias and Discrimination

AI systems often inherit and amplify biases in historical data, leading to discriminatory outcomes across various sectors, including hiring, healthcare, and financial services. A notable example is Amazon's AI hiring tool, which was discontinued after it was found to systematically disadvantage female applicants. The AI model, trained on predominantly male resumes, learned to favor male candidates over equally qualified female applicants, highlighting how biased datasets can perpetuate workplace discrimination (Dastin, 2018).


Similar concerns arise in healthcare. A study by Buolamwini and Gebru (2018) revealed that commercial facial recognition algorithms exhibited error rates of up to 34.7% for darker-skinned women, compared to 0.8% for lighter-skinned men, raising concerns that comparable racial disparities could carry over into AI-powered healthcare diagnostics. Additionally, a 2021 Stanford University study found that AI-driven diagnostic systems were 20% less accurate for Black patients than for white patients, leading to potential disparities in medical treatment (Seyyed-Kalantari et al., 2021).


Financial services have also been affected by AI bias. A 2022 Consumer Financial Protection Bureau (CFPB) report found that AI-driven lending models were 40% more likely to deny loans to Black and Hispanic applicants than to white applicants with identical financial profiles (Consumer Financial Protection Bureau, 2022). In response, various legislative measures have been proposed to address algorithmic discrimination. For instance, the AI Civil Rights Act, introduced by Senator Markey, aims to protect Americans against biased algorithms and mitigate discrimination perpetuated through AI (Markey, 2024).


However, artificial intelligence advocates argue that when trained on diverse, high-quality datasets and subjected to regular bias audits, AI systems can reduce human biases in decision-making processes. Implementing rigorous oversight, continuous model retraining, and clear ethical guidelines is essential to prevent discriminatory AI outcomes.


In the United States, several regulatory initiatives have been proposed to address AI bias and discrimination. Senator Markey's AI Civil Rights Act seeks to eliminate AI bias and establish safeguards on the use of algorithms in decisions impacting individuals' rights and civil liberties (Markey, 2024). Additionally, legislation at the state and local levels, such as the Stop Discrimination by Algorithms Act in the District of Columbia, prohibits discriminatory algorithmic eligibility determinations and mandates transparency and accountability measures (District of Columbia Council, 2023). These efforts reflect a growing recognition of the need for regulatory frameworks to ensure AI systems uphold fairness and equity.


2.3 Censorship and Free Speech Limitations

AI moderation has led to political censorship in authoritarian states. In countries like China and Russia, AI-driven censorship suppresses opposition voices and controls public discourse. Even in democratic nations, AI content moderation systems deployed by social media companies have faced criticism for inconsistently removing political content, raising concerns about freedom of expression.

Critics warn that AI moderation can be used as a tool of digital repression, silencing dissent and reinforcing dominant narratives. Governments have leveraged AI to manipulate algorithms and suppress dissent, particularly in authoritarian regimes (Human Rights Council, 2019). In the U.S., Facebook and YouTube's AI moderation tools have faced scrutiny for over-censoring legitimate political discourse or failing to remove harmful misinformation (Mozur, 2019). These inconsistencies highlight the challenges of balancing free speech protections with the need to combat harmful content online.


However, social media platforms argue that AI-driven moderation is essential to combat hate speech, misinformation, and harmful content. A 2021 Pew Research Center study found that 75% of users believe AI moderation helps create safer online spaces, yet 60% of users also feel that AI moderation lacks transparency, leading to concerns about biased enforcement (Pew, 2021). To mitigate risks, platforms must ensure transparency in AI moderation processes, provide human oversight, and allow users to challenge automated content removal decisions. Additionally, regulatory frameworks such as the European Union's Digital Services Act (2022) aim to establish fair and accountable guidelines for AI-driven content moderation.


2.4 Lack of Transparency and Accountability

The black box problem in artificial intelligence decision-making hinders accountability. AI-driven risk assessments have been criticized for racial bias and lack of explainability. Critics argue that AI's opaque nature can lead to unfair outcomes, with individuals having little recourse to challenge automated decisions. A prominent example is the COMPAS algorithm, an AI-driven risk assessment tool used in the U.S. criminal justice system. Studies found that COMPAS disproportionately labeled Black defendants as high risk compared to white defendants with similar backgrounds, yet the decision-making logic behind the algorithm remained opaque (Angwin et al., 2016).


However, proponents suggest advancements in explainable AI (XAI) can improve transparency by making AI decisions more interpretable for users. For instance, Google's AI Model Cards initiative aims to provide more precise documentation on how AI models function, helping users understand their outputs. A 2022 Massachusetts Institute of Technology (MIT) study found that using explainable AI models increased user trust in AI decisions by 45%, highlighting the importance of interpretability in high-stakes applications.
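To illustrate one common building block of such explainability tooling, the sketch below uses permutation importance, shuffling each input feature and measuring how much a model's accuracy drops, to show which features a trained classifier actually relies on. The features, data, and model are invented for illustration and are not drawn from any system discussed above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical tabular data standing in for a high-stakes decision dataset.
feature_names = ["age", "prior_contacts", "employment_years"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0.2).astype(int)  # the outcome here depends mainly on one feature

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature in turn and measure the accuracy
# drop, revealing how heavily the model depends on it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```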


Implementing regulatory requirements for explainable AI, mandatory impact assessments, and independent audits can help bridge the accountability gap and ensure that AI decision-making aligns with human rights principles. Efforts like the EU's AI Act and the U.S. Algorithmic Accountability Act are pushing for mandatory AI transparency standards that require companies to disclose how AI models make decisions and provide appeal mechanisms for affected individuals (European Commission, 2021).


Ultimately, ethical AI development must prioritize fairness, accountability, and transparency to ensure technology serves humanity rather than undermines fundamental rights.

[Image source: Information Age]

3. Policy Recommendations for Ethical AI Governance

A multifaceted governance approach is needed to ensure that AI and big data are used ethically in human rights protection. The following policy recommendations pair actionable solutions with feasibility analyses that assess their practicality in regulatory and business environments.


3.1 Establish AI Transparency Standards


Actionable Solutions:

  • Require AI developers to disclose decision-making processes and provide algorithmic transparency reports.

  • Mandate third-party audits for high-risk AI applications in hiring, criminal justice, and lending.

  • Develop explainable AI (XAI) frameworks that make AI decisions interpretable for users and regulators.

Feasibility Analysis:

  • High feasibility in the EU and the U.S.: The EU's AI Act and the U.S. Algorithmic Accountability Act propose transparency regulations.

  • Challenges: Some companies may resist full disclosure due to intellectual property concerns. However, compliance frameworks like financial auditing standards can be used as a model for AI audits.

  • Solution: Implement tiered transparency requirements where higher-risk AI applications require stricter scrutiny.


3.2 Conduct Regular Bias Audits


Actionable Solutions:

  • Enforce periodic independent AI fairness audits to assess discrimination risks in decision-making models (a minimal example of such an audit check follows this subsection).

  • Require organizations to maintain bias impact reports to monitor disparities in AI-driven decisions.

  • Use diverse training datasets representing various racial, gender, and socioeconomic groups.

Feasibility Analysis:

  • Moderate feasibility: Many companies already conduct internal audits (e.g., Google and Microsoft), but mandatory external audits could face resistance.

  • Challenges: Costs associated with third-party audits may be high for smaller organizations.

  • Solution: Governments can provide tax incentives for businesses adopting ethical AI auditing practices, like sustainability incentives in corporate social responsibility (CSR) programs.
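As a minimal, hypothetical sketch of what one check inside such a fairness audit might compute, the example below compares false positive rates across demographic groups for a set of automated decisions, the kind of disparity highlighted in the COMPAS findings cited earlier. The table is invented; real audits use held-out evaluation data and a battery of metrics.

```python
import pandas as pd

# Hypothetical audit table: each row is one automated decision.
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 1, 1, 1, 0, 1, 0],   # model output (1 = flagged as high risk)
    "actual":    [0, 0, 1, 0, 0, 0, 1, 0],   # observed outcome
})

def false_positive_rate(decisions: pd.DataFrame) -> float:
    """Share of truly negative cases that the model wrongly flagged."""
    negatives = decisions[decisions["actual"] == 0]
    return (negatives["predicted"] == 1).mean()

# Compare error rates across groups; large gaps signal a disparity to investigate.
rates = audit.groupby("group").apply(false_positive_rate)
print(rates)
```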


3.3 Strengthen Data Privacy Protections


Actionable Solutions:

  • Enact privacy-by-design AI regulations, ensuring AI systems minimize data collection and protect user anonymity.

  • Ban mass facial recognition and predictive policing in public spaces unless governed by strict legal safeguards.

  • Expand GDPR-style privacy laws globally to require explicit user consent for AI-driven data processing.

Feasibility Analysis:

  • High feasibility in the EU, moderate feasibility in the U.S.: The EU's GDPR sets a strong precedent, but the U.S. lacks a comprehensive federal data privacy law.

  • Challenges: Countries with state-controlled AI surveillance (e.g., China, Russia) are unlikely to implement strict privacy laws.

  • Solution: Encourage multinational corporations to adopt global privacy best practices by integrating privacy impact assessments (PIAs) in their AI compliance strategies.


3.4 Create a Global AI Governance Framework


Actionable Solutions:

  • Establish a UN-backed AI treaty to create a global legal framework for AI governance, like climate change agreements.

  • Form an international AI oversight body to monitor AI deployment worldwide and prevent human rights violations.

  • Develop a universal AI ethics certification that governments and businesses can voluntarily adopt to ensure compliance with ethical guidelines.

Feasibility Analysis:

  • Low feasibility in the short term, high feasibility in the long term: AI governance varies globally (e.g., the EU focuses on rights-based AI, while China prioritizes state control).

  • Challenges: Geopolitical differences and reluctance from certain countries to cede control over AI policy.

  • Solution: Start with regional AI coalitions (e.g., the EU and like-minded countries) before expanding to a global AI treaty framework.


3.5 Implement Accountability and Oversight Tools


Actionable Solutions:

  • Introduce AI appeal mechanisms allowing individuals to challenge automated decisions affecting their rights.

  • Require AI systems to maintain decision logs and audit trails to track and explain automated decision-making (a simplified illustration follows this subsection).

  • Use public AI dashboards to allow regulators and the public to monitor AI-driven decision patterns in government services.

Feasibility Analysis:

  • High feasibility in democratic nations: The EU and the U.S. are already exploring AI explainability requirements.

  • Challenges: Businesses may resist AI logging requirements due to concerns over proprietary information.

  • Solution: Require only critical AI applications (e.g., credit scoring, hiring, policing) to maintain audit trails while allowing lighter regulations for low-risk AI (e.g., chatbots, recommendation algorithms).
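As a hedged sketch of what such a decision log could look like in practice, the snippet below wraps a placeholder scoring function so that every automated decision is appended to a log with its inputs, output, model version, and timestamp, giving auditors and affected individuals a concrete record to appeal against. The field names, model name, and scoring rule are illustrative assumptions, not a prescribed standard.

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"  # append-only log reviewed by auditors

def score_applicant(features: dict) -> float:
    """Placeholder for a real model; returns a made-up risk score."""
    return 0.5 * features.get("debt_ratio", 0.0) + 0.1

def decide_and_log(features: dict, model_version: str = "credit-v1.2") -> dict:
    """Make an automated decision and record an audit-trail entry."""
    score = score_applicant(features)
    record = {
        "decision_id": str(uuid.uuid4()),               # reference number for appeals
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "score": score,
        "outcome": "approved" if score < 0.4 else "referred_to_human",
    }
    with open(AUDIT_LOG, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record

print(decide_and_log({"debt_ratio": 0.8, "income": 42000}))
```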


4. Global AI Governance: A Brief Overview

AI governance varies globally, reflecting different national priorities and regulatory approaches. The European Union has led with strong AI legislation through the General Data Protection Regulation (GDPR) and the proposed AI Act, emphasizing transparency and fairness (European Commission, 2021). In contrast, China prioritizes AI for state control, implementing strict monitoring policies, including facial recognition and social scoring systems (Feldstein, 2019). The United States lacks a unified federal AI law, leading to state regulatory fragmentation (West, 2019).


Despite these differences, international initiatives such as the Council of Europe's AI Legal Framework and the United Nations Educational, Scientific and Cultural Organization (UNESCO) AI Ethics Recommendation aim to establish ethical AI principles across borders (Council of Europe, 2023). However, no globally unified framework currently exists. As AI capabilities expand, greater international cooperation will be essential to create standardized policies that protect human rights while fostering innovation.


5. Future AI Trends and Emerging Risks

As AI continues to evolve, several emerging risks should be addressed to ensure ethical and responsible deployment:


5.1 Deepfake Misinformation and AI-Generated Content

Deepfake technology, which enables the creation of highly realistic synthetic media, poses a significant challenge to truth and trust in digital communication. A 2023 study found that deepfake videos now account for 80% of AI-generated misinformation. Deepfakes have already been used in disinformation campaigns, such as during the 2020 U.S. presidential election, where AI-generated videos spread false narratives (Hao, 2020). Governments and tech companies are developing deepfake detection algorithms, and initiatives like the EU AI Act's watermarking requirements aim to counteract malicious AI-generated content. However, ensuring the widespread adoption of these detection tools across different platforms remains challenging.


5.2 AI in Warfare and Autonomous Weapons

The militarization of AI presents significant ethical and security concerns. Autonomous weapons, such as AI-driven drones and robotic warfare systems, raise questions about accountability in combat decisions. The UN is currently debating a ban on lethal autonomous weapons, but major military powers remain divided. AI-driven surveillance and predictive policing systems, like China's Skynet Surveillance System, have been criticized for disproportionately targeting marginalized communities and exacerbating systemic biases (Eubanks, 2018). Human oversight in AI-driven military operations is critical to prevent unintended escalation and violations of humanitarian law.


5.3 Quantum AI and Cybersecurity Risks

Advancements in quantum computing pose risks to cybersecurity, as quantum AI could break traditional encryption methods, exposing sensitive personal and governmental data. Financial institutions and national security agencies, which rely on encrypted AI-driven systems, are particularly vulnerable (Schmitt, 2017). Governments and research institutions are investing in post-quantum cryptography to counteract these risks, but the technology remains in its early stages (NIST, 2021). Establishing regulatory frameworks to ensure secure quantum AI deployment will be crucial in safeguarding global cybersecurity.


5.4 Corporate AI Dominance and Ethical Risks

Corporate control over AI research and development is a growing concern. Large technology firms such as Google DeepMind, OpenAI, and Microsoft possess significant computational power and vast datasets, allowing them to shape AI advancements while limiting competition. This monopolization can lead to anti-competitive practices, limited innovation, and ethical risks tied to profit-driven AI applications (West, 2019). Addressing these concerns requires regulatory oversight, antitrust enforcement, and support for open-source AI development to democratize AI access.


These challenges highlight the need for comprehensive AI governance strategies that balance innovation with ethical safeguards. International organizations such as UNESCO and the Organisation for Economic Co-operation and Development (OECD) advocate for AI policies emphasizing transparency, accountability, and human rights protections (UNESCO, 2022). Collaboration among governments, researchers, and private entities will be essential to ensure that AI serves humanity while minimizing its potential harms.


Conclusion


AI and big data hold immense potential for advancing human rights, enhancing governance, and improving humanitarian responses. These technologies empower organizations to detect crises, analyze trends, and promote accountability. However, they also introduce significant ethical and legal risks, including privacy violations, algorithmic bias, misinformation, censorship, and corporate monopolization. Without appropriate safeguards, AI-driven systems can exacerbate discrimination, erode civil liberties, and deepen societal inequalities.


Ensuring transparent, ethical, and accountable AI governance is crucial to mitigating these risks and preventing potential abuses. The policy recommendations outlined here (enhancing AI transparency, conducting regular bias audits, strengthening data privacy protections, creating global governance frameworks, and implementing accountability mechanisms) offer a structured approach to balancing AI's risks and benefits. By embedding explainable AI, fairness checks, and regulatory safeguards into AI systems, governments and organizations can foster responsible AI development that prioritizes human rights.


Addressing emerging AI risks such as deepfake misinformation, AI militarization, quantum cybersecurity threats, and corporate AI dominance will be critical in shaping the future of AI governance. Without proactive measures, the unchecked expansion of AI could undermine democratic institutions and public trust in technology. International cooperation is essential for harmonizing AI regulations across borders. Initiatives such as the European Union's AI Act, UNESCO's AI Ethics Recommendation, and the OECD AI Principles highlight the growing global recognition of ethical AI standards.


Ultimately, responsible AI development is a shared global responsibility. Governments, private entities, researchers, and civil society must collaborate to ensure AI is used ethically, fairly, and transparently. Only through continuous policy adaptation, technological safeguards, and human-centered AI design can we ensure that artificial intelligence remains a force for justice, security, and human dignity in the digital age. As AI continues to shape the digital world, its governance must not be left to chance. Now is the time to act, ensuring that technological advancements uplift humanity rather than undermine it.


 

Glossary


  • Algorithmic Bias: A systematic and repeatable error in AI systems that results in unfair treatment of individuals based on characteristics such as race, gender, or socioeconomic status.

  • Antitrust Enforcement: The regulation of AI-driven markets and technology companies to prevent monopolistic practices and ensure fair competition in the digital economy.

  • Artificial Intelligence (AI): The simulation of human intelligence processes by machines, particularly computer systems, including learning, reasoning, and self-correction.

  • Autonomous Weapons: AI-powered military systems that can select and engage targets without human intervention, raising ethical and legal concerns.

  • Big Data: Extremely large and complex collections of structured, semi-structured, and unstructured data that are difficult to manage with traditional data processing tools.

  • Black Box Problem: The challenge in AI and machine learning that decision-making processes are not easily interpretable by humans, making it difficult to understand how an algorithm arrives at its conclusions.

  • Deepfake: AI-generated synthetic media, typically videos or images, that manipulate authentic content to create deceptive or false representations.

  • Digital Authoritarianism: The use by governments of AI, big data, and digital surveillance technologies to monitor, control, and suppress citizens' freedoms and rights.

  • Explainable AI (XAI): A set of tools and frameworks designed to make AI decision-making processes more transparent and interpretable by humans.

  • Facial Recognition Technology: AI-based biometric identification technology that analyzes and matches facial features against stored data for verification or surveillance purposes.

  • General Data Protection Regulation (GDPR): A European Union regulation establishing data protection and privacy guidelines, with significant implications for AI governance.

  • Machine Learning (ML): A subset of AI that enables systems to learn patterns from data and make decisions without explicit programming.

  • Natural Language Processing (NLP) Algorithms: A subset of AI that enables computers to understand, interpret, and generate human language for applications such as chatbots, translation, and sentiment analysis.

  • Post-Quantum Cryptography: A field of cryptography aimed at developing secure encryption methods resistant to decryption by quantum computers.

  • Predictive Policing: The use of AI and big data analytics to forecast criminal activity; it has been criticized for reinforcing systemic biases in law enforcement.

  • Privacy Impact Assessments: Evaluations of the potential privacy risks of an AI or data-driven system, ensuring compliance with regulations and ethical considerations.

  • Quantum Computing: An advanced computing paradigm that leverages quantum mechanics to perform complex calculations exponentially faster than classical computers.

  • Transparency in AI: The principle that AI models should be interpretable and explainable to users, policymakers, and regulators to ensure ethical deployment.

  • UNESCO AI Ethics Recommendation: A framework developed by UNESCO outlining ethical guidelines for AI governance, emphasizing fairness, accountability, and human rights.

 

 

Sources


  1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against Black defendants. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  2. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. http://proceedings.mlr.press/v81/buolamwini18a.html.

  3. Chesney, R., & Citron, D. (2019). Deepfakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1819. https://doi.org/10.15779/Z38RV0D15J.

  4. Council of Europe. (2023). AI legal framework: Promoting human rights in AI governance. Council of Europe. https://www.coe.int/en/web/artificial-intelligence.

  5. Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

  6. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

  7. European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

  8. Feldstein, S. (2019, April 22). The rise of digital repression: How technology is reshaping power, politics, and resistance. Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/04/22/rise-of-digital-repression-pub-79042.

  9. Hao, K. (2020, February 26). AI-generated fake content could unleash a virtual arms race. MIT Technology Review. https://www.technologyreview.com/2020/02/26/905276/ai-generated-fake-content-could-unleash-a-virtual-arms-race/.

  10. Joh, E. E. (2021). The risks of predictive policing. UCLA Law Review, 68(5), 1304–1343. https://www.uclalawreview.org/the-risks-of-predictive-policing/.

  11. Lewis, H. (2022). AI-driven lending models and financial discrimination: The unintended consequences of automation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3706635.

  12. Mozur, P. (2019, October 15). A crackdown on fake news in China targets real reporters. The New York Times. https://www.nytimes.com/2019/10/15/technology/china-fake-news-crackdown.html.

  13. National Institute of Standards and Technology. (2021). Post-quantum cryptography: Ensuring data security in the quantum era. U.S. Department of Commerce. https://csrc.nist.gov/projects/post-quantum-cryptography.

  14. Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. AI Now Institute. https://ainowinstitute.org/publication/dirty-data-bad-predictions.

  15. Schmitt, M. N. (2017). The Tallinn Manual 2.0 and the legality of AI in cyber warfare. Journal of National Security Law & Policy, 9(2), 237–266. https://jnslp.com/the-tallinn-manual-and-ai/.

  16. UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://en.unesco.org/artificial-intelligence/recommendation.

  17. Veale, M., & Binns, R. (2021). Fairness and accountability design needs for AI in politics. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–28. https://doi.org/10.1145/3449172.

  18. West, S. M. (2019). Power and privilege in the digital age: AI governance and corporate control. AI & Society, 34(4), 673–687. https://doi.org/10.1007/s00146-019-00887-1.

  19. World Economic Forum. (2021). Global AI governance: Building a foundation for trustworthy AI. https://www.weforum.org/reports/global-ai-governance.
