Thursday, August 17, 2023

Normalization in Computer-Based Examinations

The table below outlines the major criticisms and concerns raised about normalization in computer-based exams, along with potential mitigations that could form part of a comprehensive framework:

| Criticism/Concern | Mitigation |
|-|-|
| Perceived as unfairly adjusting individuals' scores | - Clearly communicate that normalization aims to account for differences in exam conditions/difficulty, not to arbitrarily change scores <br>- Provide examples showing how unnormalized scores across conditions can be misleading |
| Lack of transparency and understanding | - Provide detailed information on normalization methods, rationale, and procedures <br>- Share examples using simulated or past data to demonstrate impact |
| Hard to explain why difficulty varies across sessions | - Leverage psychometric analysis and data to show variations in conditions and difficulty <br>- Communicate that measures are in place to minimize variability between sessions |
| Raw scores are high but final scores are lower than expected | - Show examples of how raw scores can be misleading if not properly normalized <br>- Communicate that normalization maps scores onto a standardized scale (see the sketch after this table) |
| Legal challenges alleging that merit is disadvantaged | - Underscore that normalization is intended to uphold merit-based selection <br>- Share research/evidence on how normalization enhances fairness |
| Undermines confidence in the selection process | - Promote continuous improvement and welcome feedback <br>- Enhance understanding that normalization works in candidates' interest overall |
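
To make the "standardized scale" point concrete, here is a minimal sketch of percentile-equivalence normalization in Python. The session data, the function name `normalize_to_base_session`, and the linear-interpolation approach are illustrative assumptions for this post, not the exact method used by any particular examination body, which may apply more sophisticated equating models.

```python
import numpy as np

def normalize_to_base_session(raw_scores, base_session_scores):
    """Map raw scores from one session onto the scale of a base session
    by matching percentiles (equipercentile idea, linear interpolation).
    Illustrative sketch only, not an official normalization formula."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    base = np.sort(np.asarray(base_session_scores, dtype=float))

    # Percentile rank of each candidate within their own session (0-100).
    order = raw_scores.argsort().argsort()           # rank of each score
    pct = 100.0 * (order + 0.5) / len(raw_scores)    # mid-rank percentile

    # Find the base-session score sitting at the same percentile.
    base_pct = 100.0 * (np.arange(len(base)) + 0.5) / len(base)
    return np.interp(pct, base_pct, base)

# Two hypothetical sessions: session B's paper was harder, so raw scores run lower.
session_a = [62, 70, 75, 81, 88, 90, 95]   # easier paper (base session)
session_b = [48, 55, 61, 66, 72, 78, 85]   # harder paper

normalized_b = normalize_to_base_session(session_b, session_a)
for raw, norm in zip(session_b, normalized_b):
    print(f"raw {raw:>3} -> normalized {norm:.1f}")
```

Under these assumed numbers, a candidate who scored 66 on the harder paper is placed at 81 on the base session's scale, because both scores sit at the same percentile within their respective sessions. This is exactly why a raw score can look lower (or higher) than the final reported score without anyone's performance being arbitrarily altered.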