A framework for using consequential validity evidence in evaluating large-scale writing assessments


Journal article

Contributors:

Publication status: Published (February 2014)

Journal name: Research in the Teaching of English

Volume: 48

Issue: 3

Page range: 276-302

URL: https://www.jstor.org/stable/24398680?seq=1#metadata_info_tab_contents

Abstract: The increasing diversity of students in contemporary classrooms and the concomitant increase in large-scale testing programs highlight the importance of developing writing assessment programs that are sensitive to the challenges of assessing diverse populations. To this end, this paper provides a framework for conducting consequential validity research on large-scale writing assessment programs. It illustrates this validity model through a series of instrumental case studies drawing on research conducted on writing assessment programs in Canada. We derived the cases from a systematic review of the literature published between January 2000 and December 2012 that directly examined the consequences of large-scale writing assessment on writing instruction in Canadian schools. We also conducted a systematic review of the publicly available documentation published on Canadian provincial and territorial government websites that discusses the purposes and uses of their large-scale writing assessment programs. We argue that this model for conducting consequential validity research provides researchers, test developers, and test users with a clearer, more systematic approach to examining the effects of assessment on diverse populations of students. We also argue that this model will enable the development of stronger, more integrated validity arguments.