AI and Recruitment, Part 2 of 3: Ethics, Please! Do Robots See Candidates as Humans?

This post continues the artificial intelligence (AI) and recruitment series written by one of our headhunters. As described in the previous article, the robots use information available online to estimate a candidate's qualities as accurately as possible. However, whether the algorithms behind this are fair, transparent, and precise has not been investigated sufficiently, which brings ethical and moral considerations into the equation. Is it justifiable to measure candidates the way AI robots measure them? Candidates end up being treated like items on a grocery shelf, whose qualities can supposedly be described precisely and factually based on parameters we do not necessarily master ourselves. Even though AI robots can find personality patterns through algorithms, the general impression of a candidate and their emotional intelligence cannot be assessed the way they can when two people sit in the same room.

For example, by processing data on many candidates, AI robots can identify what distinguishes "high performers". The focus here is on tone of voice, choice of words, word groupings, and facial features. The robot can use these data to determine how a skilled candidate should look and speak. This means there is less focus on the actual characteristics and competences of the individual, and more on whether the candidate resembles a prefabricated "template" that hints at future performance. In the future, candidates might even be created and replaced by robots that possess exactly those characteristics; with simple AI methods we can already create robots that replicate the appearance, and to some extent the characteristics, of an existing human being.

This way of classifying and constructing humans shows why ethics must be part of our reflection when it comes to how people are handled. Otherwise we risk that these algorithms make candidates appear as robots, which is a deeply problematic development, because it goes against the entire value foundation of HR. We therefore need a code of ethics to navigate by as AI makes its entrance.
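To make that mechanism concrete, here is a minimal, purely illustrative sketch (in Python with scikit-learn, not any vendor's actual system) of the kind of model described above: a classifier trained on surface signals such as tone and word choice to predict a "high performer" label. All feature names and data below are invented for illustration.

```python
# Purely illustrative sketch: a classifier trained on superficial interview
# signals (tone, word choice, facial metrics) to predict a "high performer"
# label. The features and data are invented; the point is that such a model
# learns a surface-level "template", not the candidate's actual competences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per candidate: [speech pace, pitch variation,
# share of "confident" words, smile frequency] -- all surface signals.
X = rng.normal(size=(500, 4))

# Hypothetical label: whether past hires were later rated "high performers".
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
# The learned weights describe the "template" the article warns about:
# which surface signals the model rewards, regardless of real competence.
print("feature weights:", model.coef_)
```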

 

The limits of technology: chemistry and emotional intelligence

Imagine being called in for an interview and being met by a robot: "Hello, my name is Job Robot. I have selected you for an interview based on the information I have been able to find about you in online and offline media. Shall we get started?" This could be the future if we imagine a scenario in which AI conducts the job interview itself. AI can be a great help in selecting which candidates get invited to an interview, but we should never forget that human contact cannot be replaced by AI.

In an interview, the recruiter will most likely uncover sides of you that you have not already described in your application. Here it becomes relevant to ask whether AI could find your hidden skills and abilities to the same extent as a human being can. As AI becomes more advanced, the human psyche may to some degree be replicated and inserted into a robot; at the same time, online personality tests are becoming more sophisticated, and the internet will increasingly be able to map people's psychological make-up in a work-related context. But until AI is advanced enough to understand the deeper psychological mechanisms, we risk that recruitment becomes less about one's qualities in relation to the job and more about the ability to get past the AI robots, a development we follow with great skepticism.

 
