
Investors Say Tech Companies are Failing to Address Systemic Human Rights Risks Inherent in Business Models and Exacerbated by AI

A suite of 14 shareholder proposals for the 2024 proxies of Alphabet, Amazon and Meta call out the deleterious human rights impacts of the tech sector’s products and services as well as its governance structures and revenue models.

NEW YORK, NY – TUESDAY, FEBRUARY 13, 2024 – Investors who have for several years been engaging top technology companies Alphabet ($GOOG), Amazon ($AMZN), and Meta Platforms ($META) on their human rights risks announced a series of shareholder proposals they have filed for 2024 corporate proxies.

Technology companies undoubtedly play a vital role in realizing human rights in society, including through the facilitation of financial services, healthcare, education, civic participation, and freedom of expression. However, without adequate oversight, these same companies are known to contribute, either directly or indirectly, to human rights abuses including data privacy violations, censorship, hate speech and discrimination, and threats to democracy through misinformation campaigns both in the U.S. and overseas. Tech's impacts on elections will face particular scrutiny, with elections being held in more than 64 countries during 2024.

Tech companies' business models, built on targeted advertising, are reliant on inherently biased algorithms driven by artificial intelligence (AI). The lack of accountability for the development and use of AI is at the core of these harms. In their proposals, the shareholders cite this lack of oversight and accountability as a material risk to their investments and call for strengthened governance structures to mitigate these risks.

“Given its pervasive influence in virtually every aspect of society, it is not hyperbole to say that the tech sector is unique in its outsized human rights risks,” said Anita Dorett, Director of the Investor Alliance for Human Rights. “AI and the advent of generative AI, which to date has been largely unregulated, only compound these risks. Companies acknowledge the power of these technologies and the potential for their misuse. What they are not as readily acknowledging is their responsibility to put guardrails in place to prevent these harms from occurring.”

For example, Meta’s recent development of generative AI (gAI) products, including conversational assistants and advertising tools, puts the company at increased risk from misinformation and disinformation campaigns generated through its own products. Meta recognizes this risk, stating these tools “have the potential to generate fictional responses or exacerbate stereotypes it may learn from its training data.”

“Artificial Intelligence offers great promise — and we should be excited about that — but even leading tech researchers are concerned about the potential for its abuse,” said Michael Connor, Executive Director of Open MIC which co-filed a proposal at Meta requesting a report on the company’s role in facilitating misinformation and disinformation disseminated or generated via generative Artificial Intelligence. “In a world where democratic institutions are already threatened by online mis- and disinformation, Alphabet and Meta need to assure billions of users and their shareholders that their management and boards are up to the task of responsibly managing the technology.”

All three companies received proposals related to the risk of AI requesting strengthened oversight measures.

A proposal filed at Meta requests a report on steps the company is taking to mitigate the risks of disinformation and hate speech inciting violence on its Facebook and Instagram platforms in non-U.S. markets.

Said lead filer Anna Kaagaard of AkademikerPension, “Meta has implemented content moderation guardrails in the U.S. to guard against the online hate-mongering and incitements to violence that could threaten election integrity and undermine democratic institutions. However, these measures have not been extended to non-Western, non-English speaking markets, leaving them vulnerable to these risks.”

One proposal filed at Alphabet calls on the company to publish an independent third-party Human Rights Impact Assessment examining the actual and potential human rights impacts of Google’s artificial intelligence-driven targeted advertising policies and practices.

"As artificial intelligence continues to expand into nearly every aspect of society, shareholders would be wise to ensure that the companies building this technology — including Google and its parent, Alphabet — do so in a way that respects fundamental human rights," said Sarah Couturier-Tanoh, Associate Director of Corporate Engagement and Advocacy at SHARE which filed the proposal. "Companies that neglect those rights open themselves up to massive legal, regulatory, financial and reputational risks — not to mention the wider systemic and societal risks presented by such disruptive tech."

Another proposal focuses on Meta’s capture of children’s and teens’ attention through its social media and messaging platforms Facebook, Instagram, Messenger, and WhatsApp, and its failure to adequately protect them online.

“Meta is the world’s largest social media company, used by billions of children and teenagers,” said Michael Passoff of Proxy Impact which filed a proposal calling for a report assessing whether Meta has improved its performance globally regarding child safety. “Meta has become a dangerous playground for children and teens. Instagram, Facebook and Messenger are linked to a youth mental health crisis and a vast pedophile network. Meta's recent announcement to encrypt its platforms will provide child predators cover that will exponentially expand their outreach and the number of child and teen victims.”

Another proposal filed for three straight years at Alphabet and Meta focuses on the lack of shareholder democracy at these companies given their dual-class share structure, which concentrates control among founders and a handful of executives. In the past, these proposals have been supported by over 90% of independent shareholders.

The full slate of proposals, available at this link, is scheduled to go to a vote at the companies’ annual meetings this spring/summer.

About the Investor Alliance for Human Rights

The Investor Alliance for Human Rights is a collective action platform for responsible investment that is grounded in respect for people’s fundamental rights. The Investor Alliance’s more than 200 members include asset management firms, public pension funds, trade union funds, faith-based institutions, family funds, and endowments. Collectively, they represent nearly US$12 trillion in assets under management across 19 countries. The Investor Alliance is an initiative of the Interfaith Center on Corporate Responsibility. Visit our website and follow us on Twitter: @InvestForRights


Susana McDermott

Director of Communications

Investor Alliance for Human Rights

201-417-9060 (mobile)