The use of artificial intelligence is still in its early stages and gradually spreading to diverse fields of application, including personnel recruitment. Professor David Scheffer of Nordakademie says: “We’re still at the beginning. We are currently seeing the first attempts to use artificial intelligence in recruitment. But if they are successful, this could revolutionize personnel management entirely.” Scheffer heads the Computer Aided Psychometric Text Analysis (CAPTA) Institute at Nordakademie, where he and his Ph.D. students are also researching the possibilities of machine learning and artificial intelligence for optimized personnel selection.
“The online tests currently used in the application process are useful, but their significance is limited,” said Scheffer. This is mainly because people are by no means always right in their self-assessment and, for various reasons, may choose a profession that does not suit them at all. In many cases, it only becomes clear to them in the course of their professional lives that something is wrong. Some people reorient themselves; others never fully develop their potential. “However, if we are able to address the unconscious level in applicants using an AI tool, we gain far more precise results – with all the opportunities and risks involved,” he added.
Analysis of the unconscious
Total transparency
All conscious statements rest on an unconscious level, which manifests itself in syntax, language style, speaking speed and pauses, Scheffer explained. “What we say and how we say it says a lot about what we actually end up doing. And AI analyses not only what we say, but how we say it.” However, this makes AI a double-edged sword: the analysis not only measures an applicant’s skills, but the data also reveal aspects that simply do not concern the employer – up to and including a possible tendency to depression. “Therefore, we certainly need regulation based on ethical and legal principles,” Scheffer pointed out.
AI becomes a gatekeeper
On the other hand, AI analysis allows for far more precise matching of applicant and profession. “Finding exactly the right position leads to noticeably happier and more motivated co-workers – not to mention entrepreneurial success,” said Scheffer. The prerequisite for accurate AI analysis, however, is a careful and representative data basis. Otherwise, AI can become an unwanted gatekeeper, as Kenza Ait Si Abbou Lyadini pointed out at this year’s TEDxHamburg. Despite her first-class education, she was categorised as a woman of colour and thus “not promising” – but not for racist reasons. The algorithm had simply been trained mostly on examples of white males and was thus inadequate. “If the data are good, however, a very accurate assessment of the applicant can be made,” said Scheffer. As Germany has already enacted basic data protection laws, the professor sees good chances of limiting the risks associated with AI.
ys/pb
Sources and further information
Read the other parts of our AI series as well:
Part 1: Artificial intelligence – a tool not a mind
Part 2: jung diagnostics: Algorithms for MRI image analysis
Part 3: My colleague, the robot – a popular member of staff?
Part 4: Artificial intelligence – catalyst of a positive future
Part 5: Humanoid robotics: Step by step towards normality
Part 6: Robotic cars to take to Hamburg’s roads?
Part 7: No need to fear artificial intelligence
Part 8: Hamburg to get Health AI Hub – start-ups joining forces
Part 9: AI ‘Made in Germany’ conquering outer space
Part 10: Five promising fields for AI in medicine