Investor Support of a Democratic and Human Rights-Centered Approach to Technology Development and Implementation

By Lauren Compere, Managing Director/Head of Stewardship & Engagement, Boston Common Asset Management

 

With the proliferation of deepfakes, widespread global surveillance, and generative AI (or general-purpose AI) with its associated emergent properties, how can we as investors truly understand the risks and opportunities posed by new technology, including AI and algorithms? The emergent properties of these technologies have been described as “AI systems… teaching themselves skills that they weren’t expected to have.” In a recent interview, Alphabet’s CEO Sundar Pichai highlighted that “...all of us in the field call it a ‘black box.’ You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong.” One fact is clear: governments, in conjunction with industry, should regulate how technology is developed and used to ensure that both development and use are ethical, responsible, and grounded in human rights standards and democratic principles. 

 

As a board member of the Global Network Initiative, I attended the second Summit for Democracy on March 30th in Washington, D.C. The Summit, co-hosted by the U.S. government together with the governments of Costa Rica, the Netherlands, the Republic of Korea, and the Republic of Zambia, provided an opportunity to learn more about how governments, companies, civil society, and other multi-stakeholder partners worldwide are working to support democratic renewal around the Summit’s three themes of (1) strengthening democracy and defending against authoritarianism; (2) fighting corruption; and (3) promoting respect for human rights. 

 

During the Summit, I learned more about how the Biden-Harris Administration is supporting the core themes that underpin the U.S. government’s actions, including: 

 

  • Advancing democracy and internet freedom in the digital age.
  • Countering the misuse of technology and the rise of digital authoritarianism.
  • Shaping emerging technologies to ensure respect for human rights and democratic principles.

 

The biggest policy development coming out of the Biden Administration was the Executive Order on the Prohibition on Use by the United States Government of Commercial Spyware that Poses Risks to National Security, announced in the days leading up to the Summit. Although it details expectations around the responsible use of spyware, the Order does not ban specific programs like NSO’s Pegasus, which was purchased by the FBI for non-operational use in 2022. The announcement was complemented by a set of Cybersecurity Tech Accord principles focused on minimizing the risks associated with commercial spyware. Led by Microsoft, Meta, Cisco, and Trend Micro, the principles were signed by over 150 companies and endorsed by Apple and Google. 

 

Proliferation of Principles-Based Commitments 

 

A particular focus of the Biden Administration has been protecting the American public in this age of AI. The White House Office of Science and Technology Policy launched the Blueprint for an AI Bill of Rights, a principles-based approach to guide the design, use, and deployment of AI. The Blueprint’s key principles include safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback.

 

Some of the other principles-based commitments coming out of this second Summit include the Declaration of the Summit for Democracy, signed by 73 of the 120 participating countries, which outlines 17 principles for protecting and strengthening democracy that signatory governments are “jointly dedicated to.” In particular, Principle 11 focuses on new and emerging technologies, including AI, biotechnologies, and quantum technologies, and calls for them to be shaped by democratic principles of equality, inclusion, sustainability, transparency, accountability, diversity, and respect for human rights, including privacy. 

 

Civil society organizations, including Freedom House, the McCain Institute, and the George W. Bush Institute, issued their own call for democratic governments to live up to a set of 14 standards in the Declaration of Democratic Principles. In particular, Principle 6 states that “States should uphold fundamental human rights on digital platforms by strengthening legal protections for free expression online, addressing digital threats to human security, and enacting data protection and privacy laws that regulate access to and use of personal data.”

 

While we should take a moment to celebrate such globally focused and concerted action to uphold democratic and human rights normative frameworks in the development and use of new and emerging technology, principles-based approaches depend on robust regulatory oversight as well as accountability and transparency from companies. 

 

Investor Action 

 

As investors, we must use the full breadth of our stewardship leverage and influence (corporate engagement, public policy engagement, proxy voting, and investor collaboration) to ensure that industry is held accountable to the highest ethical and regulatory standards. Some examples of investor leverage include:

 

Corporate Engagement 

 

  • Co-leading with Fidelity International, Boston Common is driving the World Benchmarking Alliance’s (WBA) Digital Inclusion Collective Impact Coalition (CIC). The multi-stakeholder coalition, launched in September 2022, has 29 supporting investor members representing a total of $6.3 trillion in assets under advisory. The group is engaging 40 companies globally to adopt public ethical AI commitments this year; as of March 2023, only 44 of the 200 companies assessed have public ethical AI policies. 

 

  • Using the Ranking Digital Rights benchmark, the Investor Alliance for Human Rights’ Digital Rights Engagement serves as a platform for investors to engage with 26 of the world’s most powerful digital platform and telecommunications companies to ensure that they are transparent and accountable to users, with strong privacy policies and actions that support freedom of expression and allow users to control how their data is collected, used, and shared.

 

  • Supporting robust and ongoing human rights due diligence processes, including human rights impact assessments (HRIAs), across the full value chain of technology development and rollout, to assess and mitigate potential and actual harms to those most vulnerable, including girls, women, human rights defenders, and users and societies in conflict and high-risk settings.

 

Policy Engagement

 

  • Supporting legislation such as the proposed European Union AI Act. In a February 2023 investor statement coordinated by the Investor Alliance for Human Rights, investors called on the European Parliament, the European Commission, and the Council of the European Union to ensure that the Act adopts meaningful HRIA requirements for developing and deploying AI systems, expands the publicly viewable database requirements to AI users to ensure meaningful transparency, and mandates stakeholder and rightsholder participation.

 

Proxy Voting

 

  • Voting in favor of 2023 shareholder proposals focused on human rights at technology companies such as Alphabet, Amazon, and Meta.

 

Technology is neither inherently good nor bad, but when used without proper governance, ethics, and human rights safeguards aligned with the UN Guiding Principles on Business and Human Rights, it can cause real harm. As investors, let’s support strong global regulatory systems as well as accountability and transparency from the companies in which we invest, and back our convictions with action.