Artificial Intelligence (AI) refers to the development of computer systems that can act without explicit human instruction and self-modify as necessary. Companies around the world, from online platforms to health and financial institutions, are investing millions of dollars in new AI-powered products to enhance performance: improving the accuracy of medical diagnoses, increasing productivity, and reducing risks in the workplace.
AI also poses human rights risks, including threats to consumer privacy due to the significant expansion of corporate data collection for marketing purposes. New systems can enhance efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Automation also poses significant risks to the future of work by replacing human labor, resulting in mass job losses and increasing income inequality. Algorithm-based decision-making can also perpetuate and amplify human bias and result in discriminatory outcomes, such as in hiring and health diagnoses.
To address these risks, investors should press companies to:
- identify human rights risks linked to business operations, including assessing the adequacy of training data and its potential for bias through multi-stakeholder engagement;
- be transparent about efforts to identify, prevent, and mitigate human rights risks; and
- make visible the avenues and processes for redress available to those affected by adverse impacts, including any discriminatory outputs.
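One basic way to probe outcomes for group-level bias, of the kind the assessment above calls for, is a disparate-impact check: compare the rate of positive outcomes across demographic groups. The sketch below is a minimal, hypothetical illustration; the group names, data, and the 0.8 "four-fifths" threshold convention are assumptions for the example, not a prescribed methodology.

```python
# Hypothetical illustration: a simple disparate-impact check on
# model or hiring outcomes. Group labels and data are invented.

def selection_rates(outcomes):
    """Compute the positive-outcome rate per group.

    outcomes: list of (group, selected) pairs, selected in {0, 1}.
    """
    totals, positives = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + selected
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 (e.g. under the commonly cited 0.8
    "four-fifths" threshold) flag a potentially discriminatory
    pattern that warrants closer review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Invented example: hiring decisions for two applicant groups.
decisions = ([("group_a", 1)] * 6 + [("group_a", 0)] * 4
             + [("group_b", 1)] * 3 + [("group_b", 0)] * 7)
print(disparate_impact_ratio(decisions))  # 0.3 / 0.6 = 0.5
```

A ratio of 0.5, as in this invented data, would fall well below the 0.8 threshold and prompt a deeper audit of the underlying data and decision process.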
- AI Now Institute, New York University.
- Algorithmic Accountability: A Primer, Data & Society, 2018.
- Formulas for Trouble: Why Smart Companies Must Tread Carefully With Algorithms, Open Mic, 2018.
- How to Prevent Discriminatory Outcomes in Machine Learning - White Paper, Global Future Council on Human Rights 2016-2018, World Economic Forum, 2018.