


Are You Practicing Bad Data Science With Your Pre-Hire Talent Assessments?


Greta Roberts, CEO, Talent Analytics, Corp.

Talent Analytics uses data gathered from our own proprietary talent assessments as an input variable to predict hiring success – pre-hire.  We treat this dataset just like any other dataset in our predictive work.  We are careful to analyze it for a strong (or weak) correlation to actual job performance. Our theory?  If there is no correlation between the data gathered this way and actual job performance, our clients should stop using it.  Continuing without proof of success would be a little like a doctor who "knows" a certain medication doesn't work but keeps encouraging patients to use it.  Malpractice at the very least.

Like all great predictive solutions, we use the most current predictive analytics methods a top data scientist would apply to any dataset, to find whether there are strong patterns in human attributes that predict either lasting in a role or achieving a performance KPI: sales numbers, calls per hour, balanced cash drawers, customer satisfaction scores, error rates and the like.

We use methodologies that include training datasets, validation datasets and extensive cross-validation, all of which lead to the highest level of rigor: Criterion Validation of our talent assessment.  Criterion validation demonstrates the correlation between specific assessment characteristics and specific performance in the role.
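As a rough illustration of what this kind of check involves, the sketch below cross-validates a simple model that tries to predict a post-hire KPI from assessment scores. The file name, column names and choice of model are placeholders for illustration only, not our production methodology.

```python
# Minimal sketch: cross-validated check of whether assessment scores predict
# a post-hire KPI. The file, column names and model choice are placeholders.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

df = pd.read_csv("hires_with_assessments.csv")   # hypothetical export

X = df[["assessment_trait_1", "assessment_trait_2", "assessment_trait_3"]]
y = df["customer_satisfaction_score"]            # the KPI being "predicted"

# 5-fold cross-validation: every hire is scored by a model that never saw
# them during training, which is the heart of out-of-sample validation.
scores = cross_val_score(Ridge(alpha=1.0), X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="r2")

print(f"Out-of-sample R^2: {scores.mean():.3f} (+/- {scores.std():.3f})")
# An R^2 near zero (or negative) means the assessment adds little or no
# predictive value for this KPI.
```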

If your business can show that your talent assessments measurably increase your hiring success, then it clearly makes sense to continue using them.  If you can't, what's the point?

I am stunned by how few businesses (or assessment vendors) take the time to analyze their talent assessment dataset to see if it provides any positive or negative value.

We recently evaluated another vendor's solution to see if it accurately predicted customer service scores (pre-hire) for bank tellers. It was predictive, but negatively so: when their assessment said someone would deliver great customer service and flagged them as a "hire", those new hires actually ended up with low customer service scores.  We analyzed the vendor's predictions, the individual assessment scores and the actual customer service scores the new hires received after they were hired.
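For readers who want to run this kind of check themselves, here is one way to test whether a vendor's "hire" flag lines up positively or negatively with a post-hire score. The data file and column names are hypothetical, not the actual dataset from this engagement.

```python
# Minimal sketch: compare a vendor's pre-hire "hire" flag with actual
# post-hire customer service scores. Column names are hypothetical.
import pandas as pd
from scipy.stats import pointbiserialr

df = pd.read_csv("teller_outcomes.csv")          # hypothetical export

flagged_hire = df["vendor_recommended_hire"]     # 1 = vendor said "hire", 0 = not
actual_score = df["post_hire_service_score"]     # measured after the hire

r, p_value = pointbiserialr(flagged_hire, actual_score)
print(f"Point-biserial correlation: {r:+.3f} (p = {p_value:.3f})")
# A clearly negative r is the situation described above: candidates the
# assessment flagged as strong actually scored lower once on the job.
```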

How do you know if you're practicing good data science with your talent assessments? Start with these five steps:

  1. Ask your talent assessment vendor for access to the raw assessment scores so you can analyze how those scores compare to the actual performance they are "predicting".
  2. Ask your talent assessment vendor whether their assessments are Criterion Validated. If so, how often? If not, ask them how they know the assessments work.
  3. Once you have the raw talent assessment scores, ask your workforce analytics team to check whether there is a correlation between any of the scores and length of time in a role or performance KPIs (a minimal sketch of this check appears after this list).
  4. If you can show that nothing positive is being predicted, stop using the assessments immediately.
  5. If you'd like some assistance with pre-hire testing, look for a solution that is Criterion Validated and uses modern data science to prove its usefulness.
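Here is a minimal sketch of the correlation check described in step 3; the file name, score columns and outcome columns are placeholders your analytics team would swap for your own data.

```python
# Minimal sketch for step 3: correlate each raw assessment score with tenure
# and a performance KPI. File and column names are placeholders.
import pandas as pd

df = pd.read_csv("raw_assessment_scores.csv")    # hypothetical vendor export

score_cols = [c for c in df.columns if c.startswith("scale_")]
outcomes = ["months_in_role", "sales_per_quarter"]

for outcome in outcomes:
    print(f"\nCorrelation with {outcome}:")
    for col in score_cols:
        r = df[col].corr(df[outcome])            # Pearson correlation
        print(f"  {col}: {r:+.2f}")
# Correlations near zero across the board suggest the scores are not
# predicting anything useful; consistently negative ones are worse.
```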

Pre-hire assessments can be a powerful dataset for learning more about your job candidates.  Used as part of a responsible data science initiative, they can often predict the probability of someone lasting in a role or hitting very specific KPIs.  Used irresponsibly, they introduce bias, waste time and, worst of all, carry a significant cost to your organization, both in the fees you pay to use them and in the bottom-performing employees they help you hire.

To learn more about successful predictive pre-hire projects, visit Talent Analytics at www.talentanalytics.com or call +1-617-864-7474.




