AI alarmists warn that machine learning will eventually wipe out humanity, or at least make us unnecessary. But what if the real worry is something more mundane: that the AI tools are simply doing a bad job?
That is what reporter and New York University professor Hilke Schellmann found after spending five years researching the tools now widely used by employers to hire, fire, and manage. Bots increasingly determine the job ads we see online, the resumes recruiters read, which applicants make it to final interviews, and which employees receive promotions, bonuses, or termination letters. Their influence is everywhere. In this world, algorithms "define who we are, where we excel, and where we struggle". "What if the algorithm is wrong?" Schellmann asks. Her book, The Algorithm, is an account of what she found.
There are many reasons why recruiters and managers rely on AI: to sift through an overwhelming volume of resumes and fill positions faster; to find talented people, including those with unconventional backgrounds; to remove human bias and make fairer decisions; or to track performance and identify problem staff.
But in Schellmann's experience, many of the systems on the market can do more harm than good. Testing video interview software, for example, she found she could be rated a good fit for a role even after replacing her original, plausible answers with the parroted phrase "I love teamwork", or answering entirely in German.
She spoke with experts who have audited resume-screening tools for potential bias and found they tended to exclude candidates from certain zip codes, a proxy for racial discrimination; to give preferential treatment to certain nationalities; or to treat liking a male-dominated hobby such as baseball as an indicator of success. Meanwhile, talented people have been made redundant or automatically screened out of jobs for which they are qualified simply because they performed poorly on seemingly unrelated online games used to score candidates.
After playing some of them herself, Schellmann is skeptical that rapid pattern-matching games and personality tests can help recruiters identify who is most likely to excel in a role and who is most likely to fail. The games are also harder for people who are distracted by children or who have disabilities the software does not recognize.
But many of the problems Schellmann discovered are not inherent to the use of AI. If recruiters don't understand why some hires do better than others, developers cannot design good recruiting tests. If a system is designed primarily to fill vacancies quickly, it will not select the best candidates.
Schellmann discovered that, unless developers intervene, recruitment platforms learn from how recruiters respond and end up serving more ads for senior roles to certain candidates, often men, regardless of experience. Problems also arise when managers rely blindly on tools that are meant only to inform human judgment, in the mistaken belief that doing so will protect them from legal challenge.
Machine learning can amplify existing biases in ways that are difficult to detect, even when developers are vigilant. Algorithms identify patterns in people who have performed well or poorly in the past, but they lack the ability to understand whether the characteristics they find are important. And when algorithms go wrong, sometimes at scale, it can be very difficult for individuals to determine the cause, seek redress, or even find someone to talk to.
Perhaps the most useful sections of Schellmann's book are its appendices: tips for job seekers (use bullet points and avoid ampersands to make your resume machine readable) and tips for people monitored by their employers (keep your emails upbeat). But she also has suggestions for regulators on how to ensure AI tools are tested before they come to market.
At the very least, she argues, lawmakers could mandate transparency: technical reports on the data used to train AI models and on how effective they are. Ideally, government agencies would themselves scrutinize the tools used in sensitive areas such as policing, credit ratings, and workplace surveillance.
In the absence of such reforms, Schellmann's book is a wake-up call for anyone who thought AI would remove human bias from hiring, and an essential handbook for job seekers.
The Algorithm: How AI Can Hijack Your Career and Steal Your Future by Hilke Schellmann, Hurst £22/Hachette Books $30, 336 pages