Senior Education Officer
April 28, 2014
By Dan Li
On the first two pages of an IDEA Diagnostic Form Report for Student Ratings of Instruction, you see three tables presenting your converted averages when compared to other groups, be it the IDEA database, your discipline, or your institution. Instead of the 5-point Likert scale students use to indicate their progress on learning objectives, the tables list whole numbers on a scale of 1 to 100. You already have access to raw and adjusted scores, so why should you use converted scores?
The converted scores we use are T scores, which are standardized scores with a mean of 50 and a standard deviation of 10. The main reason we suggest using converted scores for comparative purposes is that standard scores allow direct comparisons between items with different means and standard deviations. The relative rank of a raw score can vary significantly when it is compared to groups with different distribution characteristics. For example, a raw score of 4.2 is considered excellent when compared to a group with an average of 2.7 and a standard deviation of 1. However, the same score becomes average when compared to a group with a mean of 4.1 and a standard deviation of 2. If we use converted scores in both cases, we can easily see a difference of 14 points (65 vs. 51)! Since our research has shown that student-reported progress on learning objectives varies to some extent, converted scores take this variance into account and enable us to make fair comparisons.
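The arithmetic behind the example above is straightforward: a T score is just 50 plus 10 times the raw score's distance from the group mean, measured in standard deviations. Here is a minimal sketch in Python (the function name `t_score` is ours, not part of any IDEA tool) that reproduces the two comparisons:

```python
def t_score(raw, group_mean, group_sd):
    """Convert a raw score to a T score: mean 50, standard deviation 10."""
    return 50 + 10 * (raw - group_mean) / group_sd

# The same raw score of 4.2 compared against two different groups:
t1 = t_score(4.2, group_mean=2.7, group_sd=1.0)  # 65.0 -- excellent
t2 = t_score(4.2, group_mean=4.1, group_sd=2.0)  # 50.5 -- average (reported as 51)
```

Note that the raw score is identical in both calls; only the comparison group's mean and spread change, which is exactly why the converted scores diverge by about 14 points.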