Artificial Intelligence

Artificial Intelligence (AI) refers to the development of computer systems that can act without explicit human instruction and adapt their behavior as needed. Companies around the world, from online platforms to health and financial institutions, are investing millions of dollars in new AI-driven products that aim to enhance performance, whether by improving the accuracy of medical diagnoses, increasing productivity, or reducing risks in the workplace.

AI also poses human rights risks, including threats to consumer privacy driven by the significant expansion of corporate data collection for marketing purposes. New systems can enhance efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Automation also poses significant risks to the future of work by replacing human labor, resulting in mass job losses and widening income inequality. Algorithm-based decision-making can also perpetuate and amplify human bias, resulting in discriminatory outcomes in areas such as hiring and medical diagnosis.

To address these risks, investors should press companies to (1) identify human rights risks linked to their business operations, including assessing the adequacy of training data and its potential for bias through multi-stakeholder engagement; (2) be transparent about efforts to identify, prevent, and mitigate human rights risks; and (3) make visible the avenues and processes for redress available to those affected by adverse impacts, including any discriminatory outputs.
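As a purely illustrative sketch of what assessing outputs for bias can involve, the short Python example below computes selection rates for two demographic groups from a set of hiring decisions and reports a disparate impact ratio. The data, group labels, and the 0.8 threshold (the "four-fifths rule" used in US employment guidance) are assumptions for demonstration only, not a prescribed audit method.

```python
# Illustrative sketch: checking a hiring model's outputs for disparate impact.
# All data below is hypothetical and for demonstration only.
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive outcomes (e.g., hires) per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        positives[group] += int(hired)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are commonly treated as a warning sign
    (the 'four-fifths rule' used in US employment contexts)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions: (group label, hired?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print("Selection rates:", rates)                                  # {'A': 0.75, 'B': 0.25}
print("Disparate impact ratio:", disparate_impact_ratio(rates))   # 0.33 -> flag for review
```

A check like this only surfaces one narrow statistical signal; meaningful due diligence would pair it with review of the training data, the decision context, and input from affected stakeholders.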

Featured Resources: