Artificial Intelligence and the Challenges of Workplace Discrimination and Privacy
AI systems can generate insights that are inaccessible to ordinary human observation, and the more complex systems may produce results that are not fully explainable or understandable, even by their human creators.4 Early efforts at artificial intelligence endeavored to make machines the equivalent of humans, able to exercise judgment in a variety of contexts.5 These efforts to create a “general” AI have largely failed.6 There have, however, been great successes in narrow AI, namely, the application of artificial intelligence to a particular problem or context.7 Familiar examples of AI breakthroughs include programs that play games such as chess and Go; speech-recognition programs that translate speech to text; and spam filters for email accounts.8 Increasingly, AI systems are being used in social domains as well, for example, to make decisions regarding policing, bail, credit, and employment.9 As these AI tools are deployed in arenas with significant human and societal impacts, concerns have been raised about the fairness, accountability, and transparency of these systems.10 Fairness centers on the risk of “discriminatory or unjust impacts when comparing across different demographics or affected communities and individuals.

[…] the introduction of AI may bring employees into a vortex of massive information collection, data vulnerability, and seemingly whimsical decision-making. Employees report a feeling of powerlessness when AI is given significant power over their jobs, as they lose the ability to interact with their “supervisor” in a meaningful way.22 The voracious maw of data collection, paired with the inexplicability of the decisions made, can create the feeling that the employee is trapped in a matrix of computer-controlled reality from which there is no escape.23 In the next two sections we explain these concerns and examine the extent to which existing law addresses them.
[…] certain patterns of consumption could be correlated with health conditions, causing an algorithm to implicitly discriminate against individuals with disabilities, even if the employer neither knows nor intends to screen on that basis.30 AI can also produce biased results if it is trained on biased data.31 An algorithm trained using the subjective evaluations of a biased supervisor will make systematically biased predictions of future job