Institute of Education

Research & Expertise to Make a Difference in Education & Beyond

Advanced Training at CITO: Exploring Cutting-edge Concepts in Test Design


A team of psychometric experts who teach in IOE’s Measurements in Psychology and Education MA has recently traveled to the Netherlands for a professional development program with CITO, Europe’s largest center for frontline R&D in educational testing and monitoring.

Rapid societal and economic change is placing ever more complex demands on skill profiles and on the overall potential of 21st-century human capital.

To craft more robust strategies for closing gaps in learning achievement, employment and labor productivity, educators and labor markets need accurate, comprehensive evidence about where talent at different levels performs well and where it needs to improve. Devising state-of-the-art psychometric approaches and evaluation frameworks has therefore become more important to advancing the quality of education and the returns on talent than perhaps ever before.

Accordingly, to keep measurement systems relevant and representative, testing professionals must continuously update and expand their specialist knowledge and competencies so they can apply the latest professional standards and best practices across every aspect of an increasingly challenging R&D agenda.

A team of psychometric experts who teach in IOE’s Measurements in Psychology and Education MA has recently completed a professional development program with CITO, Europe’s largest center for frontline R&D in educational testing and monitoring. Funded by a World Bank award, this intensive one-week study trip offered a host of in-depth insights into what currently constitutes the world’s best practice in psychometric scholarship and test development. More specifically, the program centered on such cutting-edge concepts as Evidence-centered Design and Computerized Adaptive Testing.
 


IOE’s Measurements in Psychology and Education MA track is a one-of-a-kind offering in the Russian market that equips students with deep theoretical knowledge and advanced practical skills in today’s frontline concepts and tools for measurement in the social sciences. The program has been developed with the support of the Center for Educational Assessment at the University of Massachusetts, a global leader in the field of measurement training.


The experience gained during the advanced training at CITO will certainly help to further reinforce and expand IOE’s test development curriculum, the experts unanimously noted upon their return to Moscow.

  


Irina Brun

The track we took at CITO devotes particular attention to the concept of Evidence-centered Design (ECD). This is a systematic approach to test development whose core idea is to analyze observable behavioral patterns in order to measure latent constructs. What is really new about ECD is that the framework makes it possible to validate a testing tool at every individual stage of its development. Built on advanced computer modeling and Bayesian processing, ECD is particularly well suited to measuring highly complex constructs related to 21st-century skills, such as critical thinking, creativity and ICT literacy.
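By way of illustration only (this is not CITO’s or IOE’s actual implementation), the sketch below shows the kind of Bayesian evidence accumulation that ECD-style scoring relies on: belief about a single binary latent construct is updated from scored observable behaviors. All probabilities and the `update` helper are invented assumptions for the example.

```python
# Minimal sketch of Bayesian evidence accumulation in the spirit of ECD:
# a binary latent construct (e.g. "masters critical thinking" vs. not)
# is updated from scored observable behaviors. All probabilities below
# are illustrative assumptions, not parameters of any real assessment.

def update(prior_master, evidence):
    """Posterior P(master) after observing scored task behaviors.

    evidence : list of (p_shown_if_master, p_shown_if_nonmaster, observed)
               where observed is 1 (behavior shown) or 0 (not shown).
    """
    p_master, p_non = prior_master, 1.0 - prior_master
    for p_if_m, p_if_n, x in evidence:
        p_master *= p_if_m if x else (1.0 - p_if_m)
        p_non *= p_if_n if x else (1.0 - p_if_n)
    return p_master / (p_master + p_non)

# Three observable behaviors from one task, each more likely if the latent
# construct is present; this respondent showed two of the three.
obs = [(0.85, 0.30, 1), (0.75, 0.40, 1), (0.60, 0.20, 0)]
print(round(update(prior_master=0.5, evidence=obs), 3))  # ~0.727
```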
       


Denis Federiakin

A situation where a testing tool works differently across different samples has long been referred to in psychometrics as DIF (Differential Item Functioning). DIF has traditionally been seen as a major problem hampering measurement: once extraneous factors affect the evaluation procedure, it becomes impossible to reliably compare achievement, skill progression, character traits and so on across different groups of respondents.

The CITO approach sets out a completely new perspective in which DIF is no longer treated as a barrier but is instead interpreted as a source of additional data that helps derive more accurate and, at the same time, more multifaceted results. This methodology benefits the testing process by first enabling a ‘net’ measurement in which a particular skill or trait is stripped of biases arising from secondary dimensions. In turn, it allows a psychometrician to go further and benchmark the extent to which those secondary factors are manifested in each of the samples being evaluated. To sum up, the CITO approach suggests that DIF can serve as a vehicle for comparisons far more multidimensional and informative than a researcher may have originally planned to achieve.
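For readers unfamiliar with DIF analysis, and purely as background rather than a description of CITO’s own methodology, the sketch below shows one standard way an item is screened for DIF, the Mantel-Haenszel procedure, implemented in Python on fabricated data.

```python
import numpy as np

def mantel_haenszel_dif(item, responses, group):
    """Mantel-Haenszel DIF screen for a single item.

    responses : (n_persons, n_items) matrix of 0/1 item scores
    group     : length-n vector, 0 = reference group, 1 = focal group
    Returns the MH common odds ratio and the ETS delta-scale statistic.
    """
    item_scores = responses[:, item]
    # Matching criterion: total score on the remaining items.
    rest_score = responses.sum(axis=1) - item_scores
    num, den = 0.0, 0.0
    for k in np.unique(rest_score):
        in_stratum = rest_score == k
        ref, foc = in_stratum & (group == 0), in_stratum & (group == 1)
        a = np.sum(item_scores[ref] == 1)   # reference, correct
        b = np.sum(item_scores[ref] == 0)   # reference, incorrect
        c = np.sum(item_scores[foc] == 1)   # focal, correct
        d = np.sum(item_scores[foc] == 0)   # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    odds_ratio = num / den if den > 0 else np.nan
    mh_d_dif = -2.35 * np.log(odds_ratio)   # ETS delta scale
    return odds_ratio, mh_d_dif

# Toy usage with a fabricated 0/1 response matrix and group labels.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 10))
g = rng.integers(0, 2, size=200)
print(mantel_haenszel_dif(0, X, g))
```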

Another important framework to note is Computerized Adaptive Testing (CAT), which greatly facilitates the development of flexible testing tools that can be used reliably with different respondent groups. In a nutshell, CAT uses a respondent’s answers to earlier items to maintain a running estimate of his or her ability, and then fine-tunes the remainder of the testing procedure by selecting only those items that best match that respondent’s skill profile and other relevant attributes in each particular case.
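To make the adaptive mechanism more concrete, here is a minimal sketch of a CAT loop in Python under a standard two-parameter logistic (2PL) model: after each response, the ability estimate is updated and the next item is chosen as the one most informative at that estimate. The item bank and the simulated respondent are invented for the example; this is not any particular operational system.

```python
import numpy as np

def prob_correct(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = prob_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_estimate(responses, items, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate under a standard normal prior."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for (a, b), x in zip(items, responses):
        p = prob_correct(grid, a, b)
        like *= p ** x * (1.0 - p) ** (1 - x)
    post = prior * like
    post /= post.sum()
    return float((grid * post).sum())

def run_cat(item_bank, answer_fn, test_length=10):
    """Administer items one by one, always picking the unused item that is
    most informative at the current ability estimate."""
    theta, responses, used_items = 0.0, [], []
    remaining = list(item_bank)
    for _ in range(test_length):
        idx = max(range(len(remaining)),
                  key=lambda i: item_information(theta, *remaining[i]))
        item = remaining.pop(idx)
        x = answer_fn(item)                  # 1 = correct, 0 = incorrect
        used_items.append(item)
        responses.append(x)
        theta = eap_estimate(responses, used_items)
    return theta

# Toy usage: a simulated respondent with true ability 1.0.
rng = np.random.default_rng(0)
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2.5, 2.5)) for _ in range(50)]
true_theta = 1.0
simulee = lambda item: int(rng.random() < prob_correct(true_theta, *item))
print("estimated ability:", round(run_cat(bank, simulee), 2))
```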