How Applied’s recruitment tool can help HE tackle gender bias


Wendy Stone, CEO of Global Academy Jobs.com, talks to Kate Glazebrook, CEO and co-founder of Applied, about the ground-breaking work Applied are doing to tackle unconscious recruitment bias, and about the implications for Higher Education recruitment processes.

A very senior British scientist told me last year about the frustrating shortage of women at the higher levels of research work. The problem must begin with university recruitment and promotion: while women are well represented in the HE workforce as a whole, we now know that pay disparity in the UK Higher Education sector is very real.

We work with university HR professionals around the world, so we know that universities have made commendable efforts to lead the way in open, transparent and fair recruitment processes. At the same time there are unavoidable complexities in academic and research recruitment processes. Even the most determined institutions struggle to overcome the biases that creep into their traditional, slow ‘CV and Interview’ recruitment and selection systems. This is why I was so intrigued when I first learned about the Applied recruitment tools and the elegant and timely solutions they offer to these challenges.

Co-founder Kate Glazebrook and her team are changing the recruitment landscape by providing tools that university selection committees can use to counter the unintentional bias that stifles academic recruitment and, consequently, harms both candidates and institutions. By closely analysing a massive pool of statistical data, Kate has drawn on her deep understanding of the behavioural sciences to create a platform that is already delivering excellent outcomes for ambitious recruiters.

I am constantly looking for new, innovative tools that can help universities recruit successfully from a wider talent pool, so Applied caught my attention. The Applied tools have grown from academic research and rest on a substantive evidence base. They have the potential to save our university clients time and money and to improve their recruitment results. Naturally I wanted to know more.

WS: In a traditional model, university selection and promotion panels are made up of intelligent and well-informed individuals. How does bias sneak in?

KG: Largely because no matter how well-intentioned or well-informed, we’re all human, and that means we’re all subject to the unconscious biases that influence how we see the world and therefore how we understand other people.

Some biases are easy to identify, like the overreliance on certain markers, or proxies, of success: what someone studied and where, or how well someone performs in graduate entrance exams. In fact, studies have shown that the very exams – like the GRE – which schools use as a measure of aptitude do very poorly as predictors of performance even in graduate school.

Other biases are more insidious, like our innate tendency to want to hire people who look or sound like us – often referred to as affinity bias. Or the fact that our brains succumb to stereotype biases that make it more cognitively taxing to imagine a female computer scientist or a male nurse. These biases risk influencing our assessment of talent whenever gender is made salient in job applications – which it often inadvertently is when we hire using documents where a person’s name is the first (and often bolded!) thing we read.

But a whole other category of biases is even harder to protect against: the host of contextual biases that influence how we score and assess candidates. Even seemingly small details about the decision-making environment – like the order in which candidates appear, the time of day, or who in the hiring committee speaks up first – can affect who gets the job.

Kate Glazebrook is CEO and co-founder of Applied

WS: What is the key research that underpins the Applied tools and what are the key outcomes?

KG: Applied is the first technology spin-out of the Behavioural Insights Team (often referred to as the ‘nudge’ unit). That means we’re steeped in the science of how the brain works, and in a passion for testing and experimenting. We’ve developed the science behind our platform in close concert with Harvard Professor Iris Bohnet – a leading expert in behavioural economics and equality – as well as other academics like Professor Adam Grant. Together we’ve developed our own debiased methodology for sifting candidates, in which we reshape candidate information to ensure you focus on what counts and not on what doesn’t. It:

  • anonymises candidate applications to avoid stereotype and affinity biases,
  • chunks applications up so you make like-for-like comparisons of candidates on individual aspects and avoid ‘halo’ effects,
  • randomises their order to account for fatigue and ordering effects, and
  • harnesses the wisdom of the crowd to generate more balanced outcomes.
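
To make the mechanics concrete, here is a minimal sketch, in Python, of what such a sifting pipeline could look like. It is purely illustrative: the field names, data shapes and scoring scheme are hypothetical, not Applied’s actual implementation.

    import random
    import statistics

    def anonymise(application):
        """Drop identifying fields so reviewers see the work, not the person."""
        identifying = {"name", "email", "photo", "university"}
        return {k: v for k, v in application.items() if k not in identifying}

    def chunk_by_question(applications, questions):
        """Regroup answers by question so reviewers compare candidates
        like for like on one aspect at a time, avoiding halo effects."""
        return {q: [(i, app["answers"][q]) for i, app in enumerate(applications)]
                for q in questions}

    def randomise(answers):
        """Shuffle reading order so fatigue and ordering effects don't
        systematically favour the same candidates."""
        shuffled = list(answers)
        random.shuffle(shuffled)
        return shuffled

    def crowd_score(reviewer_scores):
        """Average several reviewers' independent scores to harness
        the wisdom of the crowd."""
        return statistics.mean(reviewer_scores)

    # Example: three reviewers independently scored one chunked answer.
    print(crowd_score([4, 3, 5]))  # -> 4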

We’ve tested and published on each of these features (you can read more in our blogs here!), as well as on other aspects of the platform – from how best to deliver feedback to unsuccessful candidates so they can learn about their strengths and weaknesses, to how to improve gendered language in job descriptions. In all of our research we aim to test how we can make recruitment smarter, fairer, and easier.

 

WS: Will the Applied team be using the data from the tool(s) for further research?

KG: Absolutely – we already run data science projects and partner with academics to conduct research on the various questions we have, from how best to create a growth mindset in the midst of rejection, to which job boards deliver the best, most diverse candidates. Social impact is quite literally written into the articles of the company, and publishing research is a key part of that mission.

  • Planning interview questions in advance
  • Working from an existing question bank
  • Collecting and storing notes from questions properly
  • Randomising the order of questions

All of these make a difference, but probably the most fundamental starting point is collecting data – it’s impossible to know what is and isn’t working if you can’t diagnose with data. That’s why we’ve always been a tool focused on people analytics: which candidates apply to which jobs, where they come from, which stage(s) in the process result in disproportionate drop-off of candidates from particular groups, which questions work best and for which roles, and so forth. We’d have no chance of knowing the answers to these questions, or ways of improving on them, if we didn’t collect, analyse and present data in ways that help us all to learn.
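
As a toy illustration of that kind of funnel analysis, the sketch below computes stage-by-stage drop-off rates by group. The stage names, the furthest_stage field and the group labels are invented for the example; Applied’s real analytics are, of course, richer.

    from collections import Counter

    STAGES = ["applied", "sift", "interview", "offer"]  # hypothetical funnel

    def drop_off_rates(candidates):
        """For each group, compute the share of candidates lost at each
        stage transition, to spot disproportionate drop-off."""
        rates = {}
        for group in {c["group"] for c in candidates}:
            reached = Counter()
            for cand in candidates:
                if cand["group"] != group:
                    continue
                # A candidate 'reaches' every stage up to their furthest one.
                for stage in STAGES[: STAGES.index(cand["furthest_stage"]) + 1]:
                    reached[stage] += 1
            rates[group] = {
                f"{a}->{b}": 1 - reached[b] / reached[a]
                for a, b in zip(STAGES, STAGES[1:])
                if reached[a]
            }
        return rates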

WS: Applied goes beyond standard CVs and interviews, and therefore gives a broader assessment of candidates than traditional recruitment processes offer. What are those tools, and how do they fit with academic recruitment and selection processes?

KG: You’re right, we’re a platform designed to help organisations benefit from more predictive (and less biased) methods of testing candidates, and that means jettisoning the traditional reliance on CVs as the measure of quality. All of the available evidence points to the fact that very little of what we pay attention to on a CV predicts performance on the job. What does predict it is testing people on the practical things they’ll do in the day job (or ‘work sample’ assessments).

For that reason, we’re a product that’s more interested in what candidates can do than in what they have done. So we have a library of questions that employers draw on and contribute to, which test candidates on practical, day-in-the-life tasks that better reflect what they actually need to know how to do in the real world. This method has a couple of benefits. First and foremost, it tends to uncover who can actually do the job, so it tends to be most correlated with performance. Second, it’s more inclusive, because candidates can demonstrate abilities irrespective of where they learnt the skill: it’s background-agnostic and skill-focused. Finally, the feedback we get from candidates is that it’s a far more valuable way to apply for a job, as it gives them a better window into the things the employer really cares about. And if they don’t get the job, it’s clear how that related to something relevant to the actual job, not to something in their background.

WS: When recruiting for professional roles in a university, the opportunity to ask carefully designed questions during the application process can help to show more clearly how a candidate will meet the person specification. How do recruiters typically use your ‘questions’ tool?

KG: We all get that blank-sheet-of-paper moment when we’re asked to come up with something new, so some of our users who are trialling work sample assessments for the first time use our library of questions to draft their application processes. Hundreds of questions have been asked of candidates, and these are all linked to detailed stats on how well they performed as predictors of performance, how subjective they seemed to be, and even which skills they related to. All of this data then feeds back into the library to help generate valuable insights into the best ways of testing particular things, allowing teams not only to take from the best, but to test and experiment.

WS: We know that feedback and progress reports are essential for retaining candidate engagement throughout the selection process. This can be very time consuming and create a heavy administrative load for a university department. What does Applied offer to help with that?

KG: You’re absolutely right. We’ve all been on the side of the candidate, nervously waiting by our email inboxes, only to receive a generic ‘Thanks but no thanks’ email with absolutely nothing to go on to help us learn what we did well or could have improved on. We felt like we had a moral obligation to use the data we were helping organisations to collect to also support them in offering more meaningful, personalised feedback to candidates even if they weren’t successful in that campaign. It turns out to be an incredible win-win for everyone, since as you say, almost all organisations would love to do more on this but are often held back by the sheer time cost of collating disparate pieces of feedback and providing it back to candidates on an individual basis.

We were so confident that it was going to be a win-win that it was the first area of the platform where we sought candidates’ feedback (at the point of being told they didn’t get the job) and automatically sent it back to the hiring organisation. The average scores given for the feedback are over 7 out of 10, which shows that giving something back, even in the context of a negative outcome, is appreciated and reciprocated – and it pays off in terms of employer brand.

WS: Do you have an interview scoring tool?

KG: Yes, we do. We developed a structured interviewing module after recognising that a number of our clients were doing a great job of bringing a more diverse set of candidates into the interview, only to find that in the interview setting there was a reversion to type. It became apparent that there was a lot of low-hanging fruit: we could support organisations simply by making it easy for them to apply the principles of good structured interviews – deciding up-front what matters to you (before seeing candidates), asking all candidates the same questions in the same order, scoring each section of the interview separately and simply adding up the component parts, and, crucially, registering everyone’s scores before discussion takes place. These simple rules turn out to do a remarkable amount of good, improving not only the predictive validity of the interview but also the removal of bias.
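
The scoring discipline Kate describes is simple enough to sketch in code. The class below is a hypothetical illustration of those rules – pre-agreed sections, independent per-section scores locked in before discussion, and a total that is just the sum of the parts – not Applied’s actual module.

    from dataclasses import dataclass, field

    @dataclass
    class StructuredInterview:
        sections: list              # decided up front, same for every candidate
        scores: dict = field(default_factory=dict)  # (interviewer, section) -> score
        locked: bool = False        # True once discussion has begun

        def record(self, interviewer, section, score):
            """Register an independent score; only pre-agreed sections count."""
            if self.locked:
                raise ValueError("Scores are locked once discussion has started.")
            if section not in self.sections:
                raise ValueError(f"'{section}' was not agreed up front.")
            self.scores[(interviewer, section)] = score

        def start_discussion(self):
            """Lock in everyone's scores before any discussion takes place."""
            self.locked = True

        def total(self):
            """The result is simply the sum of the component parts."""
            return sum(self.scores.values())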

WS: It occurs to me that Applied will be particularly useful to growing universities, and even more so to those institutions looking to move on from outdated applicant tracking systems, which do not help them improve transparency or diversity. What is Applied doing to disrupt habitually poor recruitment results in cases such as these?

KG: We absolutely see a great opportunity for universities to lead the way in terms of transparent and fair recruitment. Hence we are developing a set of resources specific to Higher Education, to further explain the striking benefits of de-biased recruitment as well as to provide support for the mindset and behavioural changes required to truly make a difference.

We’ve also made our gendered language tool, which improves the wording and accessibility of job ads, into a separate product, so organisations that are just starting on their journey to more diverse and inclusive workforces can implement this first step easily and see the benefits rapidly.
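
For a flavour of how a gendered-language check can work, here is a toy version that counts masculine- and feminine-coded words, in the spirit of published research on gender-coded wording in job adverts (e.g. Gaucher, Friesen and Kay, 2011). The word lists are tiny samples chosen for illustration; Applied’s actual tool and lexicons are its own.

    import re

    # Tiny illustrative samples of gender-coded words, not Applied's lexicons.
    MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "driven"}
    FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

    def gender_coding(ad_text):
        """Return the coded words found and the overall leaning of the ad."""
        words = re.findall(r"[a-z]+", ad_text.lower())
        masculine = [w for w in words if w in MASCULINE_CODED]
        feminine = [w for w in words if w in FEMININE_CODED]
        leaning = ("masculine" if len(masculine) > len(feminine)
                   else "feminine" if len(feminine) > len(masculine)
                   else "neutral")
        return {"masculine": masculine, "feminine": feminine, "leaning": leaning}

    print(gender_coding("We want a driven, competitive self-starter.")["leaning"])
    # -> masculine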

We also love to share our research and knowledge in this area so we’ll be coming to Wonkfest to speak about de-biased recruitment – hope to see you all there!

