Ed: This guest post is by Alan Ruby, senior scholar at the University of Pennsylvania’s Alliance for Higher Education and Democracy, who also serves as a consultant to the World Bank and an adviser to Nazarbayev University in Kazakhstan, The HEAD Foundation in Singapore, and the American Institutes for Research.
Nearly 50 years ago, 40 classmates and I spent the last two weeks of November taking our Higher School Certificate examinations. In a cavernous, hot, and poorly ventilated hall, we sat in widely spaced rows, writing essays, solving mathematics and science problems, and answering multiple-choice questions.
The tests were set and evaluated by teachers and academics employed by the state examinations board in Sydney some 300 miles away. The results were used for university admissions and the award of state and national scholarships.
Some employers also used them to shortlist job candidates for interviews. It was what we now call a “high-stakes exit exam”. With hindsight, what was most remarkable was that we had no doubts about the fairness of the tests or the integrity of the process. We trusted that everyone’s efforts would be judged by the same standards and that the results were good indicators of achievement.
This memory was stirred up as I reviewed the reports and publications from seven years of work funded by the Russian government through a World Bank-administered trust fund, Russian Education Aid for Development (READ).
The main objective of READ is to improve student learning outcomes by helping countries design and implement stronger and more credible student assessment systems. READ funded work in target countries chosen by the donor: Angola, Armenia, Ethiopia, the Kyrgyz Republic, Mozambique, Tajikistan, Vietnam and Zambia.
Working from national action plans, READ financed a lot of capacity building, innovation, and experimentation at the country level. This included training opportunities, ranging from Master’s degree programs for assessment experts from Mozambique to training programs in formative assessment for classroom teachers in Vietnam.
Ethiopia invested READ resources in developing a more coherent national system of school inspection and educational quality monitoring. Zambia focused on strengthening its national assessment agencies and the use of assessment results to improve teaching and learning. Tajikistan took on the reform of its university entrance examinations, putting in place a new test designed to address existing concerns about fairness, integrity, and access.
READ also financed the creation of a set of knowledge products that are a significant contribution to the science of educational assessment. These range from assessment policy and strategy case studies from countries like Chile and New Zealand to a five-volume toolkit (the National Assessments of Educational Achievement series).
By sharing these public goods with the OECD, the UNESCO Institute for Statistics, the Brookings Institution’s Center for Universal Education, and other similar agencies, READ has strengthened the global community of assessment practitioners.
It also tapped into and supported existing global partnerships and alliances that are improving education outcomes in developing economies, such as the Global Partnership for Education, the US Agency for International Development, and the UK Department for International Development.
One significant tool created through the READ program is a global, evidence-based template for diagnosing the strengths and weaknesses of national and provincial examination programs, much like the one I participated in at the end of high school.
This framework also provides a basis for reviewing classroom assessment practices and the national and international large-scale surveys that are part of many formal systems for monitoring and assessing student outcomes. The template was used to take a snapshot of the ‘health’ of the assessment systems in the eight countries benefiting directly from READ funding. This served as the baseline for monitoring developments over the life of the trust fund. The outcomes were surprisingly good, given the short life span of the program and the long lead times usually involved in changing educational assessment practice.
Six READ countries made significant progress on at least one major aspect of their assessment system. Angola, Ethiopia, and Zambia improved national survey systems. Vietnam and the Kyrgyz Republic spread the use of classroom assessment techniques. Mozambique strengthened the examinations system. The other two READ countries, Armenia and Tajikistan, also showed some progress and are well positioned to continue to develop.
All of these investments contribute directly or indirectly to better and more credible systems for assessing learning outcomes. Individual learners benefit because the systems are more likely to be fair and less prone to corruption. They produce results objectively and consistently. Teachers benefit by having better information to guide their classroom practice. Policy makers and educational leaders benefit by seeing where more resources are needed. All benefit because stronger assessment systems produce more credible and robust credentials that serve as bridges to work and further learning for all.
While the technology of assessment has changed a lot in the last 50 years, there is still a lot to be done to make national and provincial systems in many developing and emerging economies fair, reliable, and focused on the quality of learning.
Ensuring the integrity of assessments is important and valuable to all, but there is also a lot to be gained from improving the ways in which learning outcomes data is analyzed, reported to the public, and used to improve classroom practice and program design. Work in these areas will be necessary if the sustainable development goal of “inclusive and equitable quality education” is to be attained.
We sometimes talk about this in terms like “effective monitoring and accountability mechanisms,” “evidence-based policies,” and “quality frameworks.” While these are useful concepts, a simpler conception is the creation of an assessment system that earns the trust of students, teachers, parents, and the public by operating in a transparent, fair, and consistent manner.
The HEAD Foundation Commentary is a platform that provides timely and, where appropriate, policy-relevant commentary on topical issues and contemporary developments. The views expressed by the authors are solely their own and do not reflect the opinions of The HEAD Foundation.