International Conference on Language Testing and Assessment

Keynote Speech 1: Using an Assessment Use Argument to guide classroom-based assessments

Speaker: Lyle Bachman, University of California, Los Angeles

 

Lyle Bachman is Professor Emeritus of Applied Linguistics at the University of California, Los Angeles. He is a Past President of the American Association for Applied Linguistics and of the International Language Testing Association. He won the TESOL/Newbury House Award for Outstanding Research in 1998, twice won the Modern Language Association of America’s Kenneth Mildenberger Award for outstanding research publication (1991, 1997), and in 2012, together with Adrian Palmer, won the Sage/International Language Testing Association award for the best book published in language testing. In 1999 he was selected as one of 30 American "ESL Pioneers" by ESL Magazine; in 2004 he received a Lifetime Achievement Award from the International Language Testing Association; and in 2010 he received the Distinguished Scholarship and Service Award from the American Association for Applied Linguistics. He has published numerous articles and books in the area of language assessment.

 

Prof. Bachman teaches courses and conducts practitioner training workshops in language assessment and serves as a consultant in language assessment for universities and government agencies around the world. His current research interests include validation theory, classroom-based language assessment, standards and linking in assessment, and issues in assessing the academic English of English learners in schools.

 

Abstract:

If one considers the number of classrooms around the world, the number of students per class, the number of classroom-based assessments used in each class per year, and the number of decisions made on the basis of these assessments, it is obvious that, in terms of sheer numbers, more students are affected by classroom-based assessments each year than by large-scale assessments. Although most decisions made by classroom teachers are relatively low-stakes formative decisions, many medium- to high-stakes summative decisions, e.g., passing to the next grade, are also made on the basis of classroom-based assessments.
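A purely illustrative, back-of-the-envelope rendering of this scale argument (the figures below are hypothetical and are not drawn from the abstract) is simply a product of counts:

\[
\text{students per class} \times \text{assessments per class per year} \times \text{number of classes} = \text{assessment events per year},
\]
\[
\text{e.g.}\quad 25 \times 10 \times 1{,}000{,}000 = 2.5 \times 10^{8}\ \text{(hypothetical figures)}.
\]

Even with modest assumed inputs, the product runs into the hundreds of millions of assessment events per year, which is the sense in which classroom-based assessments touch far more students than large-scale assessments.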

Because many of these decisions are relatively high-stakes, it is essential that classroom teachers have the knowledge, skills, and tools to enable them to develop and use classroom-based assessments that they can justify to stakeholders, e.g., students, parents, and school authorities.

Many current approaches to evaluating the quality of an assessment are based on a conceptualization of validity instantiated in a set of qualities or standards (e.g., American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014; International Language Testing Association, 2000, 2005). Such standards are difficult to apply in practice because 1) the qualities or standards are not explicitly related to each other, 2) they assume that a particular set of qualities or standards will add up to some undefined global notion of assessment value, without explicitly stating if or how the different qualities or standards are prioritized, and 3) they provide little guidance for how to proceed with the evaluation of assessment quality. In developing an “argument-based approach” to evaluating the quality of an assessment, Kane (2006, 2012) shifted the focus from validity as a concept to validation as a process, in which the test developer and test user articulate the claims in an inferential argument and then, in a validation argument, provide evidence to support these claims. However, while argument-based approaches to validation deal explicitly with the links between test takers’ performance and score-based interpretations, they are unclear about how interpretations relate to assessment use (decisions and consequences). That is, these approaches do not specify how concerns about the validity of interpretations are related to the consequences of assessment use.

Bachman and Palmer (2010) have extended Kane’s argument-based approach, shifting the focus from the validation process to the purpose for which the process is intended: demonstrating to stakeholders that the uses of the assessment are justified. Their approach, assessment justification, comprises two activities: 1) articulating an Assessment Use Argument (AUA) that specifies the links from test takers’ performance to interpretations, to decisions, and to consequences, and 2) providing backing, or evidence, to support the AUA.

In this presentation I provide an overview of an approach to classroom-based assessment (Bachman & Damböck, 2018) that adapts Bachman and Palmer’s general approach to the development and use of classroom-based assessments.  First I describe the role of assessment in teaching and learning.  Next I describe how teachers can articulate an AUA for their classroom assessments.  I then describe the process of developing assessment task templates that are linked to the claims in the AUA.  Finally, I describe how these templates can be used to create multiple assessment tasks.  Throughout the presentation, I illustrate the concepts and procedures of the approach with a concrete example of a classroom-based assessment.

 

Keynote Speech 2: Issues with language test validation theory and practice

Speaker: Barry O’Sullivan, British Council

Professor Barry O’Sullivan is Head of Assessment Research & Development at the British Council. He has undertaken research across many areas of language testing and assessment, including its history, and has worked on the development and refinement of the socio-cognitive model of test development and validation since 2000. He is particularly interested in the communication of test validation and in test localisation. He has presented his work at many conferences around the world, and almost 100 of his publications have appeared in a range of international journals, books, and technical reports. He has worked on many test development and validation projects over the past 25 years and advises ministries and institutions on assessment policy and practice.

 

He is the founding president of the UK Association for Language Testing and Assessment (UKALTA) and holds honorary and visiting chairs at a number of universities globally. In 2016 he was awarded a fellowship of the Academy of Social Sciences in the UK, and in 2017 he was elected to fellowship of the Asian Association for Language Assessment.

 

Abstract:

In the UK and Europe, the approach to language test development has been strengthened by broad recognition of the importance of underlying measurement models, in addition to the traditional focus on construct. The socio-cognitive model of test development and validation has contributed significantly by recognising that the construct is operationalised through the candidate, test, and scoring systems that define the test. Because the candidate is the focus of the model, the approach is clearly both construct- and participant-driven.

 

However, there remain significant flaws in the measurement-driven approach to language test theory that continues to dominate among USA-based and USA-influenced scholars:

• the continued dependence on the long-discredited nomological network approach

• the reliance on complex and sometimes inappropriate statistical tools that are neither well understood nor well interpreted (Choi, 2018)

• the promotion of complex argument structures which deliberately shift the focus away from construct, while failing to acknowledge the implications for their arguments of taking a primarily measurement-driven approach

• the failure to address the audience issue (who are these arguments for, and how does this affect each argument’s content and construction?)

 

In this paper, I will argue that one of the unintended consequences of language testing theorists’ shying away from the term construct validity is the risk of adding to the proliferation of weak or non-existent construct models. This situation is likely to exacerbate the trend, seen in some commercial tests, towards deliberate construct under-representation, or even towards tests that are essentially construct-free.

 

Keynote Speech 3: Validity aspects of score reporting 

Speaker: Richard J. Tannenbaum, Educational Testing Service

Richard J. Tannenbaum is a General Manager in the Research and Development Division of Educational Testing Service (ETS). In this role, Richard has strategic oversight of multiple centers of research that include more than 100 directors, scientists, and research associates. These centers address foundational and applied research in the areas of academic-to-career readiness, English language learning and assessment (both domestic and international), K-12 student assessment, and teacher quality and diversity. Prior to this position, Richard was the Senior Research Director for the Center for Validity Research at ETS. Richard has also held the title of Director of Licensure and Certification Research at ETS, where his primary responsibilities included providing measurement expertise in support of educator credentialing and building standard-setting capabilities. Richard earned the ETS Presidential Award for Extraordinary Accomplishments for his innovative design and implementation of a multistate standard-setting process.

Richard holds a Ph.D. in Industrial/Organizational Psychology from Old Dominion University.  He has published numerous articles, book chapters, and technical papers. His areas of expertise include assessment development, licensure and certification, standard setting and alignment, and validation.

 

Abstract:

Scores and their intended meaning are communicated to stakeholders through some form of score report, whether that report is paper-based or digital, static or dynamic. Stakeholders rely on the information presented in the report to guide their subsequent policies, recommendations, plans, and actions. The likelihood that such score-based decisions are reasonable depends, in part, on the relevance and accuracy of the reported scores and supporting documentation, and on stakeholders’ ability to comprehend the provided information in the way intended. Building score reports that are meaningful and interpretable, and that support proper use, is directly related to the fundamental concept of validity. In this talk, I will discuss some of the key concepts and practices that are intended to support the validity of score reports.

 

Keynote Speech 4: A challenge for language testing: The assessment of English as a Lingua Franca 

Speaker: Tim McNamara, The University of Melbourne

Tim McNamara is Redmond Barry Distinguished Professor in the School of Languages and Linguistics at The University of Melbourne, where he was involved in the founding of the graduate program in applied linguistics and the Language Testing Research Centre. His language testing research has focused on performance assessment, theories of validity, the use of Rasch models, and the social and political meaning of language tests. He developed the Occupational English Test, a specific-purpose test for health professionals, and was part of the research teams involved in the development of both IELTS (British Council/University of Cambridge/IDP) and TOEFL-iBT (Educational Testing Service). He is the author of Measuring Second Language Performance (1996, Longman), Language Testing (2000, OUP), Language Testing: The Social Dimension (2006, Blackwell, with Carsten Roever), and Fairness and Justice in Language Assessment (2019, in press, OUP, with Ute Knoch and Jason Fan). In 2015 he was awarded the Distinguished Achievement Award of the International Language Testing Association. Tim has just completed a term as President of the American Association for Applied Linguistics (AAAL). He is a Fellow of the UK Academy of Social Sciences and a Fellow of the Australian Academy of the Humanities.

 

Abstract:

This paper argues that the primary challenge facing language testing at the moment is the need to confront the implications for assessment of the reality of English as a Lingua Franca. Given that much of the world’s business, its education (including national and international conferences), and its political interaction is conducted in English as a Lingua Franca, it is remarkable that few if any language tests exist that are specifically directed at measuring competence in English as a Lingua Franca communication. The reasons for this are complex, but they are clearly associated with the fact that, as Messick pointed out, values lie at the heart of the constructs in educational assessment. What values underlie the resistance of our field to the testing of English as a Lingua Franca? This paper tries to set out the radical implications for our field of embracing the construct of English as a Lingua Franca in the design of English language assessments, and the likely types of resistance that would result. The issue reveals the fundamentally value-driven and political character of language testing, a notion our field continues to experience as a major challenge.


Important Dates

  • Proposal Submission Available: June 13, 2018
  • Proposal Submission Deadline: August 31, 2018
  • Acceptance Notification: September 21, 2018
  • Online Registration Deadline: November 16, 2018