By Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband and Lucinda Schettler
Digital innovations and advances in AI have produced a range of novel talent identification and assessment tools. Many of these technologies promise to help organizations improve their ability to find the right person for the right job, and screen out the wrong people for the wrong jobs, faster and cheaper than ever before.
These tools put unprecedented power in the hands of organizations to pursue data-based human capital decisions. They also have the potential to democratize feedback, giving millions of job candidates data-driven insights on their strengths, development needs, and potential career and organizational fit. In particular, we have seen the rapid growth (and corresponding venture capital investment) in game-based assessments, bots for scraping social media postings, linguistic analysis of candidates’ writing samples, and video-based interviews that utilize algorithms to analyze speech content, tone of voice, emotional states, nonverbal behaviors, and temperamental clues.
While these novel tools are disrupting the recruitment and assessment space, they leave many as-yet-unanswered questions about their accuracy and about the ethical, legal, and privacy implications they introduce. This is especially true when compared with longstanding psychometric assessments such as the NEO-PI-R, the Wonderlic Test, Raven's Progressive Matrices, or the Hogan Personality Inventory, which have been scientifically derived and carefully validated against relevant jobs, identifying reliable associations between applicants' scores and their subsequent job performance (with the evidence published in independent, trustworthy, scholarly journals). Recently, there has even been interest and concern in the U.S. Senate about whether new technologies (specifically, facial analysis technologies) might have negative implications for equal opportunity among job candidates.
In this article, we focus on the potential repercussions of new technologies on the privacy of job candidates, as well as the implications for candidates’ protections under the Americans with Disabilities Act and other federal and state employment laws. Employers recognize that they can’t or shouldn’t ask candidates about their family status or political orientation, or whether they are pregnant, straight, gay, sad, lonely, depressed, physically or mentally ill, drinking too much, abusing drugs, or sleeping too little. However, new technologies may already be able to discern many of these factors indirectly and without proper (or even any) consent.
Before delving into the current ambiguities of the brave new world of job candidate assessment and evaluation, it's helpful to take a look at the past. Psychometric assessments have been in use for well over 100 years and became more widely utilized as a result of the U.S. Army's Alpha test, which placed World War I recruits into categories and gauged their likelihood of success in various roles. Traditionally, psychometrics fell into three broad categories: cognitive ability or intelligence, personality or temperament, and mental health or clinical diagnosis.
Since the adoption of the Americans with Disabilities Act (ADA) in 1990, employers are generally forbidden from inquiring about and/or using physical disability, mental health, or clinical diagnosis as a factor in pre-employment candidate assessments, and companies that have done so have been sued and censured. In essence, disabilities — whether physical or mental — have been determined to be “private” information that employers cannot inquire about at the pre-employment stage, just as employers shouldn’t ask applicants intrusive questions about their private lives, and cannot take private demographic information into account in hiring decisions.
Cognitive ability and intelligence testing has been found to be a reliable and valid predictor of job success across a wide variety of occupations. However, these kinds of assessments can be discriminatory if they adversely impact protected groups, such as those defined by gender, race, age, or national origin. If an employer uses an assessment found to have such an adverse impact, which is determined by comparing the relative outcomes (such as selection rates) of different protected groups, the employer has to prove that the assessment methodology is job-related and predictive of success in the specific jobs in question.
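To make the idea of adverse impact concrete, here is a minimal sketch in Python, with entirely hypothetical group labels and applicant counts, of the kind of selection-rate comparison practitioners often run, using the EEOC's "four-fifths rule" as a rough benchmark. It illustrates the arithmetic only; it is not a validation procedure or legal advice.

```python
# A minimal sketch of an adverse impact screen using the EEOC's
# "four-fifths rule" as a rough benchmark: a group's selection rate
# below 80% of the highest group's rate may signal adverse impact.
# The group labels and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who passed the assessment."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(sel, total) for g, (sel, total) in groups.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical outcomes: (number selected, number of applicants) per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "possible adverse impact" if ratio < 0.8 else "within 4/5 benchmark"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Falling below the four-fifths benchmark does not by itself establish discrimination, but it is the kind of signal that triggers the validation burden described above.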
Personality assessments are less likely to expose employers to possible liability for discrimination, since there is little to no correlation between personality characteristics and protected demographic variables or disabilities. It should also be noted that the relationship between personality and job performance depends on the context (e.g., type of role or job).
Unfortunately, there is far less information about the new generation of talent tools increasingly used in pre-hire assessment. Many of these tools emerged as technological innovations rather than from scientifically derived methods or research programs. As a result, it is not always clear what they assess, whether their underlying hypotheses are valid, or why they should be expected to predict job candidates’ performance. For example, physical properties of speech and the human voice — which have long been associated with elements of personality — have been linked to individual differences in job performance. If a tool favors speech patterns such as a consistent vocal cadence or pitch, or a “friendly” tone of voice, and those preferences do not adversely impact job candidates in a legally protected group, there is no legal issue. But if such a tool has not been scientifically validated, it is not controlling for potential discriminatory adverse impact, and an employer that relies on it blindly may incur liability. In addition, there are as yet no convincing hypotheses or defensible conclusions about whether it is ethical to screen out people based on their voices, which are physiologically determined, largely unchangeable personal attributes.
Likewise, social media activity — e.g., Facebook or Twitter usage — has been found to reflect people’s intelligence and personality, including their dark side traits. But is it ethical to mine this data for hiring purposes when users generally post for entirely different reasons and may never have consented to having private conclusions drawn from their public postings?
When used in the hiring context, new technologies raise a number of new ethical and legal questions around privacy, which we think ought to be publicly discussed and debated, namely:
1) What temptations will companies face in terms of candidate privacy relating to personal attributes?
As technology advances, big data and AI will continue to determine “proxy” variables for private, personal attributes with increasing accuracy. Today, for example, Facebook “likes” can be used to infer sexual orientation and race with considerable accuracy; political affiliation and religious beliefs are just as easily identifiable. Might companies be tempted to use tools like these to screen candidates, believing that because decisions aren’t based directly on protected characteristics, they aren’t legally actionable? While an employer may not violate any laws merely by discerning an applicant’s personal information, the company becomes vulnerable to legal exposure if it makes adverse employment decisions by relying on protected categories such as place of birth, race, or native language, or on private information it does not have the right to consider, such as a possible physical illness or mental ailment. How the courts will handle situations where employers have relied on tools using these proxy variables is unclear; but the fact remains that it is unlawful to take an adverse action based on certain protected or private characteristics, no matter how they were learned or inferred.
This might also apply to facial recognition software, as recent research predicts that face-reading AI may soon be able to discern candidates’ sexual and political orientation, as well as “internal states” like mood or emotion, with a high degree of accuracy. How might the application of the Americans with Disabilities Act change? Additionally, the Employee Polygraph Protection Act generally prohibits employers from using lie detector tests as a pre-employment screening tool, and the Genetic Information Nondiscrimination Act prohibits employers from using genetic information in employment decisions. But what if the same kind of information about truth, lies, or genetic attributes can be determined by the technological tools mentioned above?
2) What temptations will companies face in terms of candidate privacy relating to lifestyle and activities?
Employers can now access information such as one candidate’s online check-in to her church every Sunday morning, another candidate’s review of the dementia care facility into which he has checked his elderly parent, and a third’s divorce filing in civil court. All of these things, and many more, are easily discoverable in the digital era. Big data follows us everywhere we go online, collecting and assembling information that can be sliced and diced by tools we can’t yet imagine — tools that could inform future employers about our fitness (or lack thereof) for certain roles. And big data is only going to get bigger; according to experts, 90% of the data in the world was generated in the past two years alone. With the expansion of data comes an expanded potential for misuse and resulting discrimination, whether deliberate or unintentional.
Unlike the EU, which has harmonized its approach to privacy under the General Data Protection Regulation (GDPR), the U.S. relies on a patchwork of privacy protections driven largely by state law. With regard to social media specifically, states began introducing legislation back in 2012 to prevent employers from requesting passwords to personal internet accounts as a condition of employment, and more than twenty states have since enacted such laws. On general privacy in the use of new technologies in the workplace, however, there has been less specific guidance or action. One notable exception is California, where legislation has been passed that could constrain employers’ use of candidate or employee data. In general, state and federal courts have yet to adopt a unified framework for analyzing employee privacy as it relates to new technology. The takeaway is that, at least for now, employee privacy in the age of big data remains unsettled. This puts employers in a conflicted position that calls for caution: Cutting-edge technology is available that may be extremely useful, but it delivers information that has previously been considered private. Is it legal to use in a hiring context? And is it ethical to consider if the candidate didn’t consent?
3) What temptations will companies face in terms of candidate privacy relating to disabilities?
The Americans with Disabilities Act puts mental disabilities squarely within its purview, alongside physical disabilities, and defines an individual as disabled if an impairment substantially limits a major life activity, if the person has a record of such an impairment, or if the person is perceived to have such an impairment. About a decade ago, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance stating that the expanding list of personality disorders described in the psychiatric literature could qualify as mental impairments, and the ADA Amendments Act made it easier for an individual to establish that he or she has a disability within the meaning of the ADA. As a result, the category of people protected under the ADA may now include people who have significant problems communicating in social situations, difficulty concentrating, or difficulty interacting with others.
In addition to raising new questions about disabilities, technology also presents new dilemmas with respect to differences, whether demographic or otherwise. There have already been high-profile, real-life situations in which algorithmic hiring systems revealed learned biases, especially relating to race and gender. Amazon, for example, developed an automated talent search program to review resumes, which it abandoned once the company realized that the program was not rating candidates in a gender-neutral way. To reduce such biases, developers are balancing the data used to train AI models so that all groups are appropriately represented. The more information the technology has to account for and learn from, the better it can control for potential bias.
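As one concrete illustration of the data-balancing approach just mentioned, here is a minimal sketch in Python, using a hypothetical dataset and a simple oversampling strategy. It is only one of several debiasing techniques developers use, and balanced inputs alone do not guarantee unbiased outputs.

```python
# A minimal sketch of balancing training data by oversampling
# underrepresented groups before a model is fit. The dataset and
# group labels below are hypothetical.
import random

def balance_by_group(records: list[dict], group_key: str, seed: int = 0) -> list[dict]:
    """Oversample smaller groups so every group appears equally often."""
    random.seed(seed)
    by_group: dict[str, list[dict]] = {}
    for record in records:
        by_group.setdefault(record[group_key], []).append(record)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    random.shuffle(balanced)
    return balanced

# Hypothetical, imbalanced training records.
training = [{"group": "a", "score": 0.9}] * 80 + [{"group": "b", "score": 0.7}] * 20
balanced = balance_by_group(training, group_key="group")
print({g: sum(1 for r in balanced if r["group"] == g) for g in ("a", "b")})
```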
In conclusion, new technologies can already cross the lines between public and private attributes, and between “traits” and “states,” in novel ways, and there is every reason to believe they will become increasingly able to do so. Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe they are all worthy of public discussion and debate.