The two key questions at the heart of assessment design are also essential considerations when constructing a rubric:
- "What should students know and be able to do?"
- "How will I know when they know it and can do it well?"
Consideration of a third question, "How might students effectively show their learning?", can help confirm the choice of a particular assessment task as being the best way to collect evidence about whether and how well intended learning outcomes have been met.
The literature on the design and use of rubrics commonly identifies three core elements of a rubric:
- dimensions of quality/criteria - assessment can address a variety of intellectual, knowledge or practical competencies that relate to a specific academic discipline or cut across disciplinary boundaries.
- levels of mastery - achievement is described using terms such as excellent, good, needs improvement and not yet achieved.
- commentaries/descriptions - this element of the rubric provides a detailed description of the defining features that should be found in work at a particular level of mastery.
Two additional elements that appear in some discussions of rubrics are:
- organizational groupings - students are assessed on multidimensional skills, such as teamwork, that involve problem-solving techniques and various aspects of group dynamics. In some cases these are treated as additional to the core disciplinary criteria being assessed.
- descriptions of consequences - this rubric feature offers students insight into the real-life implications of their work (e.g. ‘this work demonstrates competencies at a level appropriate for a beginning practitioner to deal with simple client cases'; ‘this work contains calculation errors that are likely to have significant negative consequences in the workplace').
The five rubric elements offer rich categories for academics to develop their evaluation procedures to fit the learning needs of their student population.
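The five elements above can be sketched as a simple data structure. The following is a minimal illustration in Python; the criterion name, level labels, and descriptor wording are hypothetical examples, not taken from the source.

```python
from dataclasses import dataclass

# A sketch of the five rubric elements as a data structure.
# All names and descriptor text below are illustrative assumptions.

@dataclass
class Criterion:
    name: str                       # dimension of quality/criterion
    descriptors: dict               # level of mastery -> commentary/description
    grouping: str = "disciplinary"  # optional organizational grouping
    consequence: str = ""           # optional description of consequences

LEVELS = ["excellent", "good", "needs improvement", "not yet achieved"]

use_of_evidence = Criterion(
    name="use of evidence",
    descriptors={
        "excellent": "claims are consistently supported by cited evidence",
        "good": "most claims are supported by cited evidence",
        "needs improvement": "some claims lack supporting evidence",
        "not yet achieved": "claims are asserted without evidence",
    },
    consequence="unsupported claims would not withstand professional review",
)

# Every level of mastery should have a commentary for each criterion.
assert set(use_of_evidence.descriptors) == set(LEVELS)
```

Modelling the rubric this way makes the completeness requirement explicit: each criterion must carry a commentary for every level of mastery.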
Allen and Tanner (2006) suggest the act of developing a rubric, whether or not it is subsequently used, instigates a powerful consideration of one's values and expectations for student learning, and the extent to which these expectations are reflected in actual classroom practices.
Designing a rubric is not an easy task. The development of an analytical rubric forces a lecturer to define specifically the depth of knowledge and level of skills that students need to demonstrate competency. It is an attempt to make discrete a fuzzy collection of possible ways an individual student could respond to an assessment task. Consequently, reflection on student responses can often be an important part of designing and revising an analytical rubric, because student answers may hold conceptions and misconceptions that have not been anticipated by the designer (Allen and Tanner, 2006).
Steps in design
Develop a list of the qualities/criteria students are expected to display as a result of completing the assessment task. These should be explicit in the assessment task specification; if they are not, they need to be made transparent to students.
Decide on the number of levels of performance/proficiency to be identified, remembering that for many types of assessment task it is not possible for assessors to meaningfully distinguish more than four levels of ‘passing' performance. There is also a view that students struggle to make sense of, or use, more than three levels (achieved, mostly achieved, not yet achieved).
A number of descriptors can be used to denote the levels of performance (with or without accompanying symbols for letter grades) for example:
Scale 1: Exemplary, Proficient, Acceptable, Unacceptable
Scale 2: Substantially Developed, Mostly Developed, Developed, Underdeveloped
Scale 3: Distinguished, Proficient, Apprentice, Novice
Scale 4: Exemplary, Accomplished, Developing, Beginning
Develop commentaries to describe each level of proficiency for each criterion.
Expanding the criteria in this way is inherently difficult, because it requires thorough familiarity both with the elements comprising the highest standard of performance for the chosen task and with the range of capabilities of learners at a particular developmental level.
It is useful to begin by identifying each end of the scale and then filling in the middle. For example,
- identify the characteristics of the highest level of performance for each criterion
- determine the characteristics of unacceptable performance (that which has not achieved the minimum required standard)
- consider what levels of performance fit between these two extremes.
Common advice (Moskal, 2000) is to avoid words that connote value judgments in these commentaries, such as "creative" or "good" (as in "the use of scientific terminology is ‘good'"). Such terms are so general that they cannot guide a learner towards the specific standards for a task; although it is admittedly difficult, those standards need to be defined in the rubric.
Develop organizational groupings/holistic scales for broad, multi-dimensional skills.
If appropriate, develop descriptions of consequences that might follow in a real-life setting if the student attempted to function at their assessed level of learning/proficiency.
This description of the consequences could be included in a criterion called "professionalism." Following this practice in the construction of a rubric might help to develop students' understanding of the standards of the discipline or profession, and shift them away from the perception that a rubric is just a guide to giving a particular teacher what he or she wants.
Once the draft rubric has been developed, seek comments and feedback from colleagues and students to ensure that the descriptions are as clear as possible, contain no ambiguous statements that could be interpreted in unintended ways, and link to the intended learning outcomes for the assessment task and the topic.
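Some of the review advice above can even be checked mechanically. The sketch below is a hypothetical checker, assuming the draft rubric is stored as nested dictionaries ({criterion: {level: commentary}}); it flags missing level descriptors and the kind of value-laden words (e.g. "good", "creative") that the guidance warns against. The word list and example rubric are illustrative assumptions.

```python
# Illustrative list of value-laden words to flag (per Moskal, 2000-style advice).
VAGUE_WORDS = {"good", "creative", "excellent", "poor"}

def review_rubric(rubric, levels):
    """Return a list of problems found in a draft rubric.

    rubric: {criterion: {level: commentary}}; levels: expected level labels.
    """
    problems = []
    for criterion, descriptors in rubric.items():
        # Each criterion should describe every level of performance.
        for level in levels:
            if level not in descriptors:
                problems.append(f"{criterion}: no descriptor for level '{level}'")
        # Commentaries should avoid words that connote value judgments.
        for level, text in descriptors.items():
            for word in sorted(VAGUE_WORDS & set(text.lower().split())):
                problems.append(f"{criterion}/{level}: vague term '{word}'")
    return problems

draft = {
    "use of terminology": {
        "achieved": "terminology use is good",   # vague: "good"
        "mostly achieved": "minor terminology errors",
        # descriptor for "not yet achieved" is missing
    },
}

issues = review_rubric(draft, ["achieved", "mostly achieved", "not yet achieved"])
```

A check like this cannot judge whether a commentary is pedagogically sound, but it catches the structural gaps and vague wording that colleague and student feedback would otherwise have to find by hand.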