Technology and Human Rights

What Are the Risks?

The technology sector plays a crucial role in advancing human rights and realizing the vision outlined in the 2030 Agenda for Sustainable Development. This includes facilitating mobile banking, improving access to healthcare, and enhancing remote learning. Additionally, technology enables greater citizen participation, fosters freedom of expression, and facilitates the coordination of democratic movements through various social media platforms. 

Nevertheless, technology companies can cause, contribute to, or be directly linked to adverse impacts on workers, users, customers, and other individuals or communities through their business activities and relationships. Risks at the intersection of technology, including artificial intelligence, and human rights include privacy, freedom of expression, and non-discrimination online - collectively referred to as digital rights risks. They span a broad spectrum of issues, from political participation and democratic processes to security and conflict, child safety online, and supply chain impacts, including those affecting content moderation workers and gig workers. 

How Are Businesses Connected?

The technology sector includes platform companies, telecommunications firms, internet service providers, content providers, technology and artificial intelligence developers, hardware manufacturers, software developers, and more. Big Tech platform companies have come under scrutiny for their pivotal role and responsibility in the spread of misinformation and viral hate speech, as well as for enabling unlawful surveillance, attacks on democracy, censorship of dissident voices, and discrimination against marginalized communities, including racial and gender discrimination. The core issues lie in business models driven primarily by targeted advertising and in the lack of accountability for the development and use of artificial intelligence, including biased algorithms. 

Other sectors: The integration of artificial intelligence (AI) in industries such as healthcare, food and beverage, and automotive, while beneficial, also introduces significant human rights risks that must be carefully managed, such as: 

  • Privacy violations, discrimination, and infringement of individual autonomy resulting from the misuse or breach of sensitive patient data processed by AI in the healthcare sector;

  • Workforce displacement, exacerbation of inequalities, and weakened food safety and quality oversight in the food and beverage industry as a result of AI-driven automation; and

  • Gaps in accountability and moral decision-making, and discrimination arising from algorithmic bias, in the automotive industry's deployment of AI in autonomous vehicles. 

Therefore, while AI offers transformative possibilities, its deployment must be governed by robust ethical frameworks and regulatory measures that mitigate these human rights risks and ensure that technological progress aligns with the imperatives of human dignity and equality. 

How Can Investors Respond?

As an integral component of their human rights due diligence, investors should actively engage with their portfolio companies across both the technology sector and other industries to ensure the responsible development, deployment, and use of technology in their business operations. This involves adopting and implementing human rights due diligence policies and processes that uphold international human rights standards. Importantly, these measures should include mechanisms that provide victims of abuses with avenues for seeking remedy. 

Investors can also advocate with governments and standard-setting bodies to design and implement a smart mix of robust mandatory and voluntary measures and incentives that create enabling environments for responsible business conduct. These measures to address human rights risks include pressing companies to: 

  1. Identify human rights risks linked to business operations, including assessing the adequacy of training data and its potential for bias through multi-stakeholder engagement (a simple illustrative check appears after this list). 

  2. Be transparent about efforts to identify, prevent, and mitigate human rights risks. 

  3. Make visible the avenues and processes for redress available to those affected by adverse impacts, including discriminatory outputs. 
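
For illustration only, the sketch below shows one minimal way a company might begin the training-data check referenced in point 1: it compares how often each demographic group appears in a labelled dataset and how often each group receives a positive label, a rough proxy for representation gaps and demographic parity. The dataset, group names, and flagging threshold are hypothetical assumptions for this example, not a prescribed method or standard.

    from collections import Counter

    # Hypothetical labelled training records: (demographic_group, positive_label)
    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0),
        ("group_c", 0), ("group_c", 0),
    ]

    total = len(records)
    counts = Counter(group for group, _ in records)          # how often each group appears
    positives = Counter(group for group, label in records if label == 1)

    print("Representation and positive-label rates by group:")
    rates = {}
    for group, n in counts.items():
        rate = positives[group] / n
        rates[group] = rate
        print(f"  {group}: {n / total:.0%} of records, positive-label rate {rate:.0%}")

    # Demographic parity difference: gap between highest and lowest positive-label rates.
    gap = max(rates.values()) - min(rates.values())
    print(f"Demographic parity difference across groups: {gap:.0%}")
    if gap > 0.2:  # illustrative threshold, not a regulatory standard
        print("Large disparity found: flag the dataset for review with affected stakeholders.")

A check like this only surfaces a candidate problem; interpreting the disparity and deciding on mitigation would still require the multi-stakeholder engagement described above.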

Featured Resources

Technology & Human Rights: Salient Risk Briefings

Find briefings on privacy and data protection, freedom of opinion and expression, conflict and security, discrimination, political participation, and child rights.

Ranking Digital Rights 2022 Results

View rankings of 'Big Tech' and 'Telco Giant' companies.

How to Participate

The Investor Alliance facilitates strategic conversations and provides tools to support investor engagement as part of its commitment to embedding respect for human rights into corporate policy. To learn more and become involved in this engagement on information and communications technology (ICT), please contact Anita Dorett.