IOE Hosts Fifth International Summer School on Applied Psychometrics

Early this August, the picturesque historic setting of HSE’s ‘Kochubey Mansion’ Training Center in Saint Petersburg, Russia, welcomed testing professionals from over a dozen countries for IOE’s Fifth International Summer School ‘Applied Psychometrics in Education and Psychology.’ Instruction at the School was given by world-renowned experts in measurement and test development: Dr. Linda Cook (The National Center for the Improvement of Educational Assessment), Dr. Mary Pitoniak (Educational Testing Service), Dr. Carol Myford (University of Illinois at Chicago), and Dr. Lidia Dobria (Wilbur Wright College & University of Illinois at Chicago).

First held in 2014, the School is one of the best-established offerings on IOE’s international learning & networking agenda and has enjoyed ever-growing popularity among both accomplished psychometricians and early-career academics. Every year, the School offers extensive opportunities to gain in-depth insight into the most important conceptual domains of modern psychometrics, alongside unique hands-on experience that enables students to lead the way in delivering accurate and objective testing solutions.

This year, School enrollees could choose between two study tracks. Track One examined standard setting in assessment environments and the fairness of testing. Track Two was devoted to best practices in analyzing rating data.

Test Fairness Standards. Only a couple of decades ago, adequate psychometric design, general evidence of validity, and the absence of DIF (differential item functioning) were universally accepted as the three hallmarks of a good test. Things have changed significantly since then, and today these criteria are not always enough to design quality tests or plausibly judge the fairness of assessment materials. For instance, while a certain test may work well for the majority of social groups, individual characteristics such as visual or hearing impairments, motor disorders, or other disabilities may render that test irrelevant unless it is appropriately modified to ensure fair scores in these specific assessment contexts. Another example is dyslexia, where special research is required to establish whether a testing tool designed especially for people with this disorder in fact remains valid when applied to other cohorts of test-takers. In her School classes, Dr. Linda Cook, who is among the world’s most acclaimed experts in the fairness of testing, gave students a wealth of conceptual observations on the subject area while also offering unique hands-on tasks in analyzing and ensuring test fairness.
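To give a flavor of what a DIF check involves, here is a minimal sketch (not taken from the School materials) of the Mantel-Haenszel procedure run on synthetic data; the sample sizes, the simulated group effect, and all variable names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1,000 test-takers answer one binary item; two groups.
n = 1000
group = rng.integers(0, 2, n)                 # 0 = reference, 1 = focal
ability = rng.normal(size=n)
total = np.clip((ability * 3 + 10).round().astype(int), 0, 20)  # matching variable
# Studied item: build in a small artificial advantage for the reference group.
p_correct = 1 / (1 + np.exp(-(ability - 0.3 * group)))
item = (rng.random(n) < p_correct).astype(int)

# Mantel-Haenszel common odds ratio, stratified by matched total score.
num, den = 0.0, 0.0
for k in np.unique(total):
    m = total == k
    A = np.sum((group[m] == 0) & (item[m] == 1))   # reference, correct
    B = np.sum((group[m] == 0) & (item[m] == 0))   # reference, incorrect
    C = np.sum((group[m] == 1) & (item[m] == 1))   # focal, correct
    D = np.sum((group[m] == 1) & (item[m] == 0))   # focal, incorrect
    T = A + B + C + D
    if T > 0:
        num += A * D / T
        den += B * C / T
alpha_mh = num / den
# ETS delta scale: |delta| beyond ~1.5 flags sizable DIF.
delta_mh = -2.35 * np.log(alpha_mh)
print(f"MH odds ratio: {alpha_mh:.2f}, MH delta: {delta_mh:.2f}")
```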

Comparing and Adjusting Test Scores. Using multiple test forms is common practice in educational assessment, and in such cases a psychometrician is often confronted with the task of ascertaining to what extent the scores these versions deliver are equivalent. Where a given form has been determined to be easier or more difficult than the others, specific equating techniques are used to adjust and align the score scales. Ensuring that scores derived from different test forms have been appropriately reconciled is thus one of the central imperatives of the test fairness framework. Linda Cook’s classes on this subject aimed primarily to provide advanced insight into modern methods of test score comparison and equating. In addition, Linda shared unique experience from relevant research projects she and her colleagues have carried out at ETS.
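As a simple illustration of the basic idea (not a description of ETS’s actual methodology), the sketch below applies linear equating to synthetic scores from two hypothetical forms administered to equivalent groups; all numbers are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scores: Form Y turned out somewhat harder (lower mean) than Form X.
form_x = rng.normal(30, 6, 2000).round()
form_y = rng.normal(27, 5, 2000).round()

def linear_equate(y, x_scores, y_scores):
    """Map a Form Y score onto the Form X scale via linear equating."""
    mu_x, sd_x = x_scores.mean(), x_scores.std()
    mu_y, sd_y = y_scores.mean(), y_scores.std()
    return mu_x + (sd_x / sd_y) * (y - mu_y)

for y in (20, 27, 34):
    print(f"Form Y score {y} -> Form X equivalent {linear_equate(y, form_x, form_y):.1f}")
```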

Adjusting Cut Scores. Adjusting cut scores was among the key topics that Dr. Mary Pitoniak covered in her School course on standard setting in assessment environments, since establishing well-reasoned passing scores is central to any psychometric practice. Drawing on her extensive experience in setting cut scores across assessment contexts, from school tests to medical staff certifications, Dr. Pitoniak offered the Summer School students a number of practical assignments and comprehensively addressed the challenges that most commonly confront psychometricians in this aspect of test development.
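One widely used standard-setting procedure is the modified Angoff method; the hypothetical sketch below (not drawn from Dr. Pitoniak’s course materials) shows how a panel’s item-level probability judgments might be aggregated into a recommended cut score. The ratings are fabricated for illustration.

```python
import numpy as np

# Hypothetical modified-Angoff ratings: each of 5 panelists judges, per item,
# the probability that a minimally competent candidate answers correctly.
ratings = np.array([
    [0.6, 0.4, 0.8, 0.7, 0.5],   # panelist 1, items 1-5
    [0.5, 0.5, 0.7, 0.6, 0.4],
    [0.7, 0.3, 0.9, 0.8, 0.5],
    [0.6, 0.4, 0.8, 0.7, 0.6],
    [0.5, 0.4, 0.7, 0.6, 0.5],
])

# Each panelist's recommended cut is the sum of their item judgments;
# the panel cut score is the mean across panelists.
panelist_cuts = ratings.sum(axis=1)
cut = panelist_cuts.mean()
se = panelist_cuts.std(ddof=1) / np.sqrt(len(panelist_cuts))
print(f"Recommended cut: {cut:.2f} of {ratings.shape[1]} points (SE {se:.2f})")
```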

Approaches to Analyzing Rating Data. In assessment settings, the experts who evaluate student performance or products may introduce errors (so-called ‘rater effects’) into the assessment process, even though extensive guidelines are developed and special training is conducted to facilitate consistent and objective evaluation. The analytical approaches discussed in this course, delivered by Dr. Carol Myford and Dr. Lidia Dobria, help practitioners gain a deeper understanding of how the different ‘facets’ of an assessment system (e.g., students, raters, rating criteria) are performing. Such analysis is helpful in determining to what extent a particular assessment system is under statistical control and where meaningful changes can be introduced to improve it.
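For readers unfamiliar with rater effects, the sketch below simulates one common effect, differing rater severity, and checks for it with a crude mean-deviation diagnostic; the design, severities, and names are all assumptions, and a many-facet Rasch analysis of the kind taught in the course would estimate such effects on a common logit scale instead.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic rating design: 4 raters score the same 50 student essays on a
# 0-6 scale; built-in severities (negative = harsher) create rater effects.
true_quality = rng.normal(3.0, 1.0, 50)
severity = np.array([0.0, -0.8, 0.3, -0.2])          # hypothetical rater effects
scores = np.clip(true_quality[None, :] + severity[:, None]
                 + rng.normal(0, 0.4, (4, 50)), 0, 6)

# Crude severity check: each rater's mean deviation from the grand mean.
deviation = scores.mean(axis=1) - scores.mean()
for r, d in enumerate(deviation):
    print(f"Rater {r}: mean deviation {d:+.2f} points")
```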

“I think the 2018 Summer School was an undisputed success. Covering a range of theoretical and practical aspects crucial to building excellence in psychometrics, the School was entirely English-taught this year, attracting a host of the world’s most accomplished experts in the field as well as a multinational cohort of early-career professionals. As most of the faculty feedback suggests, the event proved a truly vibrant, rewarding venue filled with aspiration and learning drive. One of the lecturers even called the class she taught this year the brightest she has had a chance to work with so far,” School Director Elena Kardanova commented.


Visit the Summer School webpage to learn about the topics covered at past events and the upcoming agenda for 2019.