R&D Outcomes


Shin, Hyo Jeong, Andersen, Nico, Horbach, Andrea, Kim, Euigyum, Baik, Jisoo, Zehner, Fabian
This report examines the feasibility of automatic scoring systems for text responses from the 2016 ePIRLS assessment.
Steinmann, Isa
Developing questionnaires for international large-scale assessments can be challenging. This white paper addresses questionnaire development for educational assessment projects and the language used in their items.
Download PDF (507.88 KB)
Wools, Saskia, Drijvers, Paul, Feskens, Remco, Molenaar, Dylan, van der Scheer, Emmelien
An outcome from the first R&D call, this report addresses how to evaluate the validity of results from international large-scale assessment programs (ILSAs) that incorporate technology-enhanced items, paying special attention to the comparability of results between countries.


Cortés, Diego, Dominitz, Jeff, Romero, Maximiliano, Meinck, Sabine
This report provides information and recommendations on technical standards and reporting in international large-scale assessments.
Download PDF (688.65 KB)
He, Qiwei, Gonzalez, Eugenio J.
Using data from ICILS 2018, this report focuses on the nine countries and regions that administered the computational thinking module to better understand missing responses.
Chen, Yunxiao, Oka, Motonori, von Davier, Matthias
The first outcome from call two of the IEA R&D fund, this report discusses the construction of scaling models for large-scale assessments in education, applying the methods to PIRLS 2016 data.
Download PDF (405.4 KB)
Tyack, Lillian, Khorramdel, Lale, von Davier, Matthias
The first output from IEA's R&D call one, this report assesses how to use AI to validate human rater scores for graphical responses.
Download PDF (558.68 KB)