Plenary Speaker Profile (2005-6)
Richard J. Shavelson
Margaret Jacks Professor of Education, Professor of Psychology (by courtesy), and Senior Fellow at the Stanford Institute for the Environment
Stanford University
Assessing Science Learning and Achievement: From Conceptual Framework to Summative and Formative Assessment

Significant progress has been made in the assessment of student science learning and achievement, both conceptually, with a cognitive model underlying what we mean by achievement, and procedurally, with a technology for assessing different aspects of achievement. The conceptual model treats science achievement as the acquisition of declarative (knowing that), procedural (knowing how), schematic (knowing why), and strategic (knowing when and where knowledge applies) knowledge, together with the ability to reason along a learning trajectory. Combined with assessment technologies in a particular science-knowledge domain, this model provides a basis for developing alternative tests of science achievement and learning. While there is no one-to-one correspondence between test-item format and the knowledge tapped, declarative knowledge is, generally speaking, most readily assessed by multiple-choice and short-answer questions; the structure of this kind of knowledge can be assessed with concept maps. Procedural knowledge is especially well assessed with performance assessments. Schematic knowledge can be assessed by a variety of formats, including clusters of multiple-choice items, predict-observe-explain items, and open-ended items. Schematic knowledge comes into play especially when students are challenged with new tasks, but students also draw on it when being assessed on the other knowledge types. To be sure, item format alone does not ensure that items tap the knowledge intended. For this we turn to methods for validating interpretations of items and clusters of items, including cognitive analyses ("think-aloud" protocols) and statistical analyses (e.g., confirmatory factor analysis). In this presentation, we shall discuss how the science assessment framework is applied both in large-scale summative assessment, namely the new framework for the science assessment in the National Assessment of Educational Progress ("The Nation's Report Card"), and in formative assessment, namely a study of teachers' use of formative assessments embedded in a science curriculum.
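A brief sketch may help readers unfamiliar with the confirmatory-factor-analysis logic mentioned above (this is the standard CFA formulation, added here for illustration rather than taken from the talk). If x is the vector of item or item-cluster scores and xi the vector of intended knowledge factors (e.g., declarative, procedural, schematic), the measurement model is

    x = \Lambda \xi + \delta,

with implied covariance structure

    \Sigma(\theta) = \Lambda \Phi \Lambda^{\top} + \Theta_{\delta},

where \Lambda holds the factor loadings, \Phi the factor covariances, and \Theta_{\delta} the unique (error) variances. Constraining each item cluster to load only on its intended knowledge factor and testing the fit of \Sigma(\theta) against the sample covariance matrix provides statistical evidence for (or against) the claim that the item formats tap the knowledge they were designed to tap.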

Richard J. Shavelson is the Margaret Jacks Professor of Education, professor of psychology (by courtesy) (1995-present), and senior fellow at the Stanford Institute for the Environment (2005-2007), and a former dean of the School of Education at Stanford University (1995-2000). Before joining Stanford, he was dean of the Graduate School of Education and professor of statistics (by courtesy) at the University of California, Santa Barbara (1987-1994). Prior to joining the UCSB faculty, he was professor of education at UCLA (1973-1987) and director of the RAND Corporation's Education and Human Resources Program (1980-1985). He has also served as president of the American Educational Research Association; is a fellow of the American Association for the Advancement of Science, the American Psychological Association, and the American Psychological Society; and is a Humboldt Fellow.

His current research is in the areas of measurement, psychometrics, and related policy and practice issues, especially in science education. His measurement research involves working closely with teachers and scientists to develop performance and other assessments in science education and to evaluate them along psychometric, cost, classroom-use, and social-impact lines. Recently his research has focused on linking assessment methods with a working definition of achievement that includes declarative, procedural, strategic, and meta-cognitive knowledge. His policy work currently focuses on the assessment of learning in higher education and the quest for accountability. Shavelson co-authored the book "Generalizability Theory: A Primer" with Professor Noreen Webb; his other psychometric publications include research on the dependability of performance assessments used in work and education. His policy research includes a National Research Council publication edited with Lisa Towne, "Scientific Research in Education," and two monographs on alternative designs of indicator systems for monitoring the health of the nation's mathematics and science education systems.
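As an illustration of the generalizability-theory machinery behind that dependability work (the design below is a standard textbook case, offered as a sketch rather than drawn from this profile): in a persons-crossed-with-tasks (p x t) design, observed-score variance decomposes into independent components,

    \sigma^2(X_{pt}) = \sigma^2_p + \sigma^2_t + \sigma^2_{pt,e},

and the generalizability (reliability-like) coefficient for relative decisions about persons, based on scores averaged over n'_t tasks, is

    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pt,e} / n'_t}.

A decision study then asks how many tasks (or raters, occasions, and so on) a performance assessment would need before E\rho^2 reaches an acceptable level, which is how the dependability of performance assessments is typically quantified in this tradition.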