
The how and why of performance review calibration

October 18, 2018

1. What is review calibration?

2. Setting the stage for review calibration before the performance review cycle

3. How to calibrate reviews after a review cycle

4. How to do it in Lattice

What is review calibration?

Maybe this has happened at your company. It's definitely happened at ours.

During a recent company review cycle, you notice that most feedback for each person is pretty uniform -- and, based on their self-reviews, pretty self-aware. But then you notice that the actual scores recorded for each review vary based on who gave them.

You realize that the scores differ because individual reviewers interpret the scoring system differently:

  • For Manager A, a 5 out of 5 on a rating meant that the reviewee was delivering on that attribute.
  • For Manager B, a 3 out of 5 on that same rating meant the same thing -- the lower score reflected Manager B's belief that the reviewee had room to grow beyond delivering on that attribute.

In fact, Manager B gave lower scores to their direct report not as an act of harshness, but out of the well-intentioned sense that everyone -- not to mention the whole company -- had room to grow.

So how do you iron out these differences in interpretation when managers assign review scores?

The answer is review calibration. Review calibration is when the people team trains managers to apply a company-wide standard when reviewing their direct reports’ performance. Calibration sessions are the meetings held after a review cycle to put that standard into practice.

If scores are part of your performance reviews, you might want to set your company up for review calibration -- both in the product you’re using and in how you train managers. Here's how to do it.

Setting the stage for review calibration before the performance review cycle

People teams should give managers a framework for scoring their direct reports. That means three things: picking a scale, defining what each number means, and setting expectations for how scores will be distributed.

First, pick a scale: 1 to 5, 1 to 7, 1 to 10 -- whatever works for your company.

Second, define what each point means. The definitions will likely be variations on low, average, and high performance, but it’s important that everyone in the company share the same ones. All managers should know what a 5 means on the scale you provide -- whether it’s midlevel performance, solid with potential for growth, exceeds expectations, or very high performance.

A note on defining terms: for midlevel performers, be careful with words like “average” or “medium,” as these can influence managers’ scoring styles. If you call the middle number “average,” it can read as a harsher judgment than it is. Terms like “solid,” “standard,” “potential reached,” or “meets expectations” work well instead.

Lastly, ask managers and leadership what distribution of scores they expect -- how many high performers and how many low performers? When managers score consistently against the same standard, the typical distribution is mostly midlevel performers, some high performers, and a few low performers.
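One way to keep these definitions unambiguous is to write the framework down as shared data rather than prose. Here is a minimal sketch in Python; the labels and percentages are illustrative assumptions for the example, not recommendations.

```python
# A shared scoring framework written down as data rather than prose.
# Labels and percentages are illustrative assumptions, not a recommendation.

RATING_SCALE = {
    1: "Below expectations",
    2: "Approaching expectations",
    3: "Meets expectations",  # the midlevel label -- note we avoid "average"
    4: "Exceeds expectations",
    5: "Far exceeds expectations",
}

# Rough share of employees leadership expects at each level:
# mostly midlevel, some high performers, a few low performers.
EXPECTED_DISTRIBUTION = {1: 0.05, 2: 0.10, 3: 0.50, 4: 0.25, 5: 0.10}

# Sanity check: the expected shares should account for everyone.
assert abs(sum(EXPECTED_DISTRIBUTION.values()) - 1.0) < 1e-9
```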

To sum up, here’s what to review when training managers on giving scores:

  • What do the numbers on the scale mean?
  • What is the distribution of scores that you expect?
  • What does it mean to be a low performer? A midlevel performer? A high performer?

Make sure to emphasize that managers will have to justify their scores in calibration sessions after the review.

How to calibrate reviews after a review cycle

There are two main ways to calibrate scores after the fact, and the right one depends on the size of your organization.

1. One-on-one conversations with all managers

In smaller organizations, the people team can meet with every manager to understand the reasoning behind their scores. Hearing a wide range of rationales across departments makes it possible to tell whether a manager is lenient, strict, or right on target when scoring their direct reports. You also want scores adjusted by the people with the most context on the company as a whole, since they have the strongest sense of its overall performance distribution.

People teams can also get more information on managers’ scoring styles by discussing these scores with managers’ managers, who have a clearer perspective on how that manager approaches scoring.
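Before these meetings, it can help to quantify each manager's tendency by comparing their average score against the company-wide average. The sketch below assumes review scores have already been exported as (manager, score) pairs; the 0.5-point threshold is an arbitrary illustration, and a real analysis should also allow for genuine differences in team performance.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical input: (manager, score) pairs exported from your review tool.
reviews = [
    ("manager_a", 5), ("manager_a", 4), ("manager_a", 5),
    ("manager_b", 3), ("manager_b", 2), ("manager_b", 3),
    ("manager_c", 4), ("manager_c", 3), ("manager_c", 4),
]

# Group scores by the manager who gave them.
by_manager = defaultdict(list)
for manager, score in reviews:
    by_manager[manager].append(score)

# Compare each manager's average against the company-wide average.
company_mean = mean(score for _, score in reviews)

for manager, scores in sorted(by_manager.items()):
    gap = mean(scores) - company_mean
    if gap > 0.5:
        style = "more lenient than peers"
    elif gap < -0.5:
        style = "stricter than peers"
    else:
        style = "in line with peers"
    print(f"{manager}: mean {mean(scores):.2f} ({style}, {gap:+.2f} vs. company)")
```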

Whether this approach works for you comes down to a simple question: how many meetings can your people team realistically handle?

2. Calibration committees

Of course, the more data you can gather on managers’ scoring styles, the better -- but there is a point at which it no longer makes sense for the people team to gather it all firsthand. Instead, form calibration committees composed of leadership and managers’ managers, who know each manager’s style well enough to judge how their scores should be adjusted.

Research has also shown that calibration committees have a lasting effect on how managers provide ratings: having seen how their scores were adjusted, managers are more careful and thoughtful in the next review cycle.

Based on these discussions, and armed with the knowledge that each score has been vetted, the people team can adjust scores accordingly.

How to do it in Lattice

The final step of review calibration is using the information gathered both before and after the review to adjust scores accordingly. With an employee performance management system like Lattice, that can be as simple as downloading the scores, tweaking them, and uploading the calibrated versions back into the system.
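The exact export format will depend on your system, so purely as an illustration: assuming a hypothetical CSV export with manager and score columns, the download-tweak-upload step might look like the following sketch.

```python
import csv

# Per-manager adjustments agreed on in calibration sessions (hypothetical
# values): Manager B's scores are raised because their scoring ran strict.
OFFSETS = {"manager_b": 1}
SCALE_MIN, SCALE_MAX = 1, 5  # clamp adjusted scores to the rating scale

# Assumed export format: one row per review, with "manager" and "score" columns.
with open("review_scores.csv", newline="") as src, \
        open("calibrated_scores.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        adjusted = int(row["score"]) + OFFSETS.get(row["manager"], 0)
        row["score"] = max(SCALE_MIN, min(SCALE_MAX, adjusted))
        writer.writerow(row)
```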

Review calibration is an essential step in any performance review cycle that involves scoring. It makes managers’ jobs easier, makes employees’ performance evaluations more honest, and fulfills the people team’s mandate to ensure that every review score is given against the same standard of performance.
