This study examines a performance evaluation process that integrates multiple raters and calibration committees, practices increasingly adopted by organizations yet underexplored in academic research. Using proprietary data, we investigate how supervisors aggregate multi-rater assessments into initial performance ratings and how calibration committees adjust these aggregation decisions. Our findings indicate that both supervisors and calibration committees generally follow the principles of information economics. When aggregating multi-rater assessments, supervisors evaluate the informativeness of each assessment and weight more informative assessments more heavily. However, our results also reveal limitations in information processing. Under conditions of high information load, supervisors appear to rely on cognitive shortcuts, engaging less in detailed weighting decisions based on the informativeness of individual multi-rater assessments. Regarding calibration, we find that committees are less likely to adjust supervisors’ aggregation decisions when those decisions are consistent with more informative multi-rater assessments and when supervisors provide well-substantiated justifications, highlighting the complementary roles of multi-rater systems and calibration. Additionally, calibration committees appear to focus strategically on cases where supervisors faced a high information load and on information sources that may not have been fully incorporated into the initial performance ratings.