Can We Trust It? The Importance of Reliability
Hi, Friends!
It's July! We are officially more than 50% into the year and have seen a solid 15% increase in the value of the S&P 500, including dividends.
Whoops, wrong newsletter!
July is my favourite month of the year.
It's my birthday month, a time when people take time off to be with family and connect with friends. Reflecting on the past year, I realize that at the end of this month I will officially mark my 1st anniversary as an entrepreneur. 🚀
But this newsletter is not about my love for July.
Today, I want to talk about data quality. More specifically, one aspect of data quality: reliability.
As you know, I am a sucker for definitions. What does it even mean to be reliable?
According to the ​Cambridge Dictionary​:
Reliability is...
the quality of being able to be trusted or believed because of working or behaving well
So, reliability in People Analytics is a property of our data and measurements. It answers simple questions, such as:
Can I trust the observations I have?
Do my methods guard against biases?
Will I get a consistent result?
Are my items aligned with what they are measuring?
There are 4 main types of reliability, and today we will go through each of them:
Test-retest
Inter-rater
Parallel forms
Internal consistency
Are you ready?
Test-Retest Reliability
Definition: This type of reliability checks if you get the same results when you repeat the same test with the same people at different times.
Example: Imagine you assess someone's personality in January and then again in June. High test-retest reliability means that the results should be consistent over time, indicating that the person's personality did not change in those 6 months. And it shouldn't have, as personality is a relatively stable characteristic of an individual's disposition.
How to Test:
Conduct the initial survey.
Conduct the same survey with the same respondents after some time.
Use the Pearson correlation coefficient to measure the consistency of the results over time.
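If you want to try this yourself, here is a minimal sketch in Python using scipy's pearsonr. The scores are made-up ratings for the same five respondents, six months apart:

```python
# Test-retest reliability: correlate the same respondents' scores
# from two administrations of the same survey.
from scipy.stats import pearsonr

# Hypothetical scores (1-5 scale) for the same five respondents.
scores_january = [3.8, 4.2, 2.9, 4.5, 3.1]
scores_june = [3.9, 4.0, 3.1, 4.4, 3.3]

r, p_value = pearsonr(scores_january, scores_june)
print(f"Test-retest reliability: r = {r:.2f} (p = {p_value:.3f})")
# As a rough rule of thumb, r of 0.7 or above is often treated as acceptable.
```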
Relevance: You want to make sure you can trust your results. Consistent results over time validate the survey's effectiveness and the reliability of your data.
Fun Fact: Did you know that the MBTI personality test is notorious for giving different results at different times? What does that mean? The test has low test-retest reliability.
Inter-Rater Reliability
Definition: Measures how much different people agree when they observe or rate the same thing.
Example: In hiring, different managers rate the same candidate. High inter-rater reliability means that all managers give similar ratings, suggesting that the selection criteria are clear and consistently applied.
How to Test:
Have multiple raters evaluate the same subjects.
Use methods like Cohen’s kappa or the intraclass correlation coefficient (ICC) to assess the level of agreement among raters.
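Here is a minimal sketch in Python, assuming two raters scoring the same six candidates on a 5-point scale; the ratings are made up for illustration, and Cohen's kappa comes from scikit-learn:

```python
# Inter-rater reliability: how much do two raters agree beyond chance?
from sklearn.metrics import cohen_kappa_score

# Hypothetical interview ratings (1-5) for the same six candidates.
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 3, 4, 2, 5, 3]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
# kappa = 1 means perfect agreement; 0 means agreement no better than chance.
```

For more than two raters, or for continuous ratings, the ICC is the better fit (the pingouin package offers intraclass_corr, for example).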
Relevance: This reliability type ensures fairness and consistency in subjective assessments like interviews or performance evaluations. It allows you to minimize bias or, at the very least, quantify it so you can control for it.
Parallel-Forms Reliability
Definition: Compares the results of two similar tests given to the same group to see if they are consistent.
Example: Suppose you create two different versions of a skills assessment test and administer both to a group of employees. High parallel-forms reliability means that both tests yield similar results, confirming that either version accurately measures the same skills. You can then use either version to assess candidates.
How to Test:
Create two equivalent forms of the test.
Give both forms to the same group of respondents.
Use correlation coefficients to compare the scores from both tests.
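As a minimal sketch in Python, with made-up total scores for the same eight employees on the two forms:

```python
# Parallel-forms reliability: correlate scores on two equivalent test versions.
from scipy.stats import pearsonr

# Hypothetical total scores for the same eight employees on each form.
form_a = [72, 85, 64, 90, 78, 55, 81, 69]
form_b = [70, 88, 66, 87, 80, 58, 79, 72]

r, _ = pearsonr(form_a, form_b)
print(f"Parallel-forms reliability: r = {r:.2f}")
# A high correlation suggests the two forms can be used interchangeably.
```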
Relevance: This is useful whenever you need multiple forms of the same assessment, as in training programs, pre-hiring assessments, or certification exams. It helps ensure that different test versions are equally reliable and valid.
Internal Consistency Reliability
Definition: Checks if the questions or items within a single test give consistent results.
Example: In an employee satisfaction survey, if questions about workplace culture, management support, and career development yield similar results, the survey has high internal consistency reliability.
How to Test:
Conduct the survey or test with respondents.
Use methods like Cronbach’s alpha to assess the consistency of responses across items within the test.
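Cronbach's alpha is simple enough to compute directly from its formula. Here is a minimal sketch in Python with NumPy, using a made-up response matrix (rows are respondents, columns are items meant to measure the same construct):

```python
# Internal consistency: Cronbach's alpha from item and total-score variances.
import numpy as np

# Hypothetical 1-5 responses: 5 respondents x 4 items on the same construct.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
# Alpha of 0.7 or above is the conventional threshold for acceptable consistency.
```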
Relevance: This is critical for surveys and assessments where multiple items are intended to measure the same construct. High internal consistency indicates that all items reliably assess the intended characteristic, enhancing the credibility of the survey results.
Why Reliability Matters in HR
Accuracy: Reliable measures ensure your data reflects true employee attitudes and behaviours.
Consistency: Reliable assessments lead to consistent and fair decision-making processes.
Credibility: Reliable data builds trust in your analytics and decision-making processes among employees and stakeholders.
In other words, you have robust people analytics if you have strong reliability.
Chat next week!
K
Whenever you’re ready, there are 2 ways I can help you:
#1
If you’re still looking to get started in People Analytics, I recommend starting with my affordable course:
​Practical People Analytics​: Build data-driven HR programs to 10x your professional effectiveness, business impact, and career. This comprehensive course will teach you everything from building an HR dashboard for business results to driving growth through more advanced analytics (e.g., regression). ​Join your peers today​!
#2
If you are looking for support in your human capital programs, such as engagement, retention, and compensation & benefits, and want to take a more data-driven approach, contact me at ​Tskhay & Associates​ for consulting services. Or simply reply to this email!