
UN BHR Forum 2025: Safeguarding Human Rights in the Age of Artificial Intelligence

Event Details

This event is organized by the Investor Alliance for Human Rights together with other partners and investors, and forms part of the 14th United Nations Forum on Business and Human Rights in Geneva. The session will take place on November 24th at 4:40 pm CET / 10:40 am ET.

As artificial intelligence (AI) becomes increasingly embedded in public and private sector decision-making, its development, procurement, and deployment during times of crisis and transformation raise urgent human rights concerns. From biased algorithms in hiring and surveillance to opaque decision-making in public services, the impacts can be particularly acute for at-risk communities. These technologies often reflect and reinforce existing inequalities, especially when developed without adequate understanding of the social, cultural, and political contexts in which they are deployed.

Businesses developing, procuring or deploying AI have a responsibility to respect human rights, as outlined by the UN Guiding Principles on Business and Human Rights (UNGPs), including by implementing human rights due diligence which, in the AI context, must be early, ongoing, and context-specific. It must also involve meaningful stakeholder engagement, particularly with at-risk communities. States also have a duty to protect individuals from AI-related harms, requiring a “smart mix” of regulatory and policy measures to align corporate conduct with human rights. This includes embedding human rights due diligence into AI regulation, ensuring coherence across national and global levels, and anchoring regulatory approaches in human rights—not just safety or security. These measures should apply across sectors and emphasize transparency, accountability, and data protection throughout the AI lifecycle. 

In the public sector, procurement processes are a critical yet under-utilized safeguard to ensure AI systems respect human rights. Meanwhile, in the private sector, investor and civil society pressure is mounting to hold businesses accountable for digital rights harms. Gendered impacts of AI, including online violence, surveillance, and algorithmic bias, further underscore the need for intersectional approaches. This session will explore how development, procurement, regulation, and stakeholder engagement can be leveraged to identify, prevent and mitigate adverse human rights impacts of AI systems. 
   
Key objectives of the session: 

  • Examine how public procurement processes can serve as a frontline defense for human rights in AI deployment.  

  • Explore the intersection of AI, gender, and digital rights, and how the UNGPs can guide rights-respecting innovation.  

  • Highlight the role of investors and civil society in driving corporate respect for human rights and holding companies accountable for AI-related harms.

  • Identify practical tools, frameworks, and data sources that support rights-based AI governance. 

Speakers:

  • Emma Kallina, Research Consultant at AI & Equality, Women@theTable

  • Lyra Jakulevičienė, Member of the UN Working Group on Business and Human Rights

  • Anna Lupi, Legal and Policy Officer in the Responsible Business Conduct unit, European Commission DG Growth

  • Thobekile Matambe, Senior Manager, Partnerships and Engagements, Paradigm Initiative

  • Luda Svystunova, Head of Social Research, ESG Research, Engagement and Voting Team, Amundi

  • Jinhwa Ha, Senior Manager, AI Safety Team, Kakao Corp

  • Isabel Ebert, Human Rights Officer, B-Tech, OHCHR

Key discussion questions: 

  • How can procurement frameworks be designed to anticipate and mitigate adverse human rights impacts in public services, particularly amidst crises and transformation? 

  • What are the most pressing gendered risks of AI, and how can companies and states address them using the UNGPs, including amidst crises and transformation?