Critical Thinking Writing Rubric


The VALUE rubrics were developed by teams of faculty experts representing colleges and universities across the United States through a process that examined many existing campus rubrics and related documents for each learning outcome and incorporated additional feedback from faculty. The rubrics articulate fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment. The rubrics are intended for institutional-level use in evaluating and discussing student learning, not for grading. The core expectations articulated in all 16 of the VALUE rubrics can and should be translated into the language of individual campuses, disciplines, and even courses. The utility of the VALUE rubrics is to position learning at all undergraduate levels within a basic framework of expectations such that evidence of learning can be shared nationally through a common dialogue and understanding of student success.


Download the Critical Thinking VALUE Rubric at no cost via AAC&U's Shopping Cart.

Definition

Critical thinking is a habit of mind characterized by the comprehensive exploration of issues, ideas, artifacts, and events before accepting or formulating an opinion or conclusion.

Framing Language

This rubric is designed to be transdisciplinary, reflecting the recognition that success in all disciplines requires habits of inquiry and analysis that share common attributes. Further, research suggests that successful critical thinkers from all disciplines increasingly need to be able to apply those habits in various and changing situations encountered in all walks of life.

This rubric is designed for use with many different types of assignments and the suggestions here are not an exhaustive list of possibilities. Critical thinking can be demonstrated in assignments that require students to complete analyses of text, data, or issues. Assignments that cut across presentation mode might be especially useful in some fields. If insight into the process components of critical thinking (e.g., how information sources were evaluated regardless of whether they were included in the product) is important, assignments focused on student reflection might be especially illuminating.

Glossary

The definitions that follow were developed to clarify terms and concepts used in this rubric only.

  • Ambiguity: Information that may be interpreted in more than one way.
  • Assumptions: Ideas, conditions, or beliefs (often implicit or unstated) that are "taken for granted or accepted as true without proof." (quoted from www.dictionary.reference.com/browse/assumptions)
  • Context: The historical, ethical, political, cultural, environmental, or circumstantial settings or conditions that influence and complicate the consideration of any issues, ideas, artifacts, and events.
  • Literal meaning: Interpretation of information exactly as stated. For example, "she was green with envy" would be interpreted to mean that her skin was green.
  • Metaphor: Information that is (intended to be) interpreted in a non-literal way. For example, "she was green with envy" is intended to convey an intensity of emotion, not a skin color.

Acceptable Use and Reprint Permissions

For information on how to reference and cite the VALUE rubrics, visit: How to Cite the VALUE Rubrics.

Individuals are welcome to reproduce the VALUE rubrics for use in the classroom, on educational web sites, and in campus intra-institutional publications. A permission fee will be assessed for requests to reprint the rubrics in course packets or in other copyrighted print or electronic publications intended for sale. Please see AAC&U's permission policies for more details and information about how to request permission.

VALUE rubrics can also be used in commercial databases, software, or assessment products, but prior permission from AAC&U is required. For all uses of rubrics for commercial purposes, each rubric must be maintained in its entirety and without changes.

Can critical thinking be assessed with rubrics?

Rubrics are rating forms designed to capture evidence of a particular quality or construct. The quality of the measure obtained from a rubric depends on how well it is designed; if the rubric is poorly designed, the rating is confounded or inaccurate. The quality of the rating also depends on the skill of the rater using the rubric. When using a rubric, it is necessary to train and calibrate the raters to ensure that ratings are accurate and consistent across all raters. Ratings using rubrics cannot be benchmarked against national comparison groups or compared to ratings made by other rater groups.

Rubrics are a popular approach when the goal is largely developmental; they are good pedagogical tools. Issues arise, however, when rubrics are used for summative assessment.

  1. Rubrics are very imprecise measures. Typically a good rubric will have three to five categories; with more than that, applying the rubric in practice falls apart. In this way rubrics are analogous to grades. The range F-A is really more than most of us can handle, and we often report 95% of our grades in the C-A range. We sense that overly fine distinctions in grading might be illusory because of the multiplicity of factors that go into giving a fair grade. Rubrics address this by attempting to focus our attention on only a single dimension of what may go into grading (e.g., critical thinking). But even so, our minds are not nearly as capable of making the kinds of discriminations that a well-designed test can make. We need to keep it simple; three to five categories are plenty. With rubrics as with grades, there will be lots of "B" students.
  2. Construct validity is a concern with home-grown rubrics. Does the group writing the rubric have a good grasp of the target construct? Is that group independent, fair-minded, and strong enough not to be drawn into the "local meanings" pit? In other words, as related to critical thinking, does the rubric measure critical thinking as that concept is most widely understood, or does it only reinforce a local meaning that is too heavily weighted toward one discipline or another and does not connect well with what the larger world means by critical thinking?
  3. Reliability is a concern when untrained raters apply rubrics. Are those who will apply the rubric well trained in its use so that inter-rater reliability is achieved? Even if the rubric is good, the raters may apply it with such variability that the score an individual project receives can differ widely. National workshops demonstrate time and again that when a group of faculty rate the same student essay and all the ratings are pooled, the result is a bell-shaped curve, not the clear consensus that was expected (a simple simulation of this spread appears after this list). This demonstrates the importance of training raters with paradigmatic examples and practicing before doing the actual ratings. A variation of the reliability problem occurs when faculty rate the work of their own students, except here the strong tendency is to give higher ratings than people from other departments might assign.
  4. Confounding the target with other things is a concern when applying rubrics. The problem of reliability in the application of the rubric is matched by the problem of the validity of the application: there is a tendency with rubrics to forget what we are supposed to be evaluating. Instead of looking only at the critical thinking, for example, raters may mix in their impressions of the writing style or the content knowledge on display, or they may fail to give due critical thinking credit for things like irony or satire. So a Jon Stewart editorial might get a low score on a critical thinking rubric because the raters did not dig deep enough to see the arguments that undergird his satire.
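The bell-curve spread described in point 3 is easy to reproduce in simulation. The sketch below is a toy model with invented parameters, not real workshop data: each hypothetical rater carries a personal severity bias plus some per-essay noise around a "true" score of 3 on a 1-5 rubric. Tallying the results shows uncalibrated raters smearing a single essay's score across the scale, while calibrated raters converge.

    import random
    from collections import Counter

    random.seed(42)

    TRUE_SCORE = 3   # the score a well-calibrated panel would settle on
    N_RATERS = 100   # hypothetical faculty all rating the same essay

    def simulate(severity_sd, noise_sd):
        """Tally ratings when each rater adds a personal bias plus noise."""
        ratings = []
        for _ in range(N_RATERS):
            bias = random.gauss(0, severity_sd)    # lenient vs. harsh habits
            noise = random.gauss(0, noise_sd)      # inconsistency on this essay
            score = round(TRUE_SCORE + bias + noise)
            ratings.append(min(max(score, 1), 5))  # clamp to the 1-5 scale
        return sorted(Counter(ratings).items())

    print("untrained raters: ", simulate(severity_sd=1.0, noise_sd=0.8))
    print("calibrated raters:", simulate(severity_sd=0.3, noise_sd=0.3))

Shrinking the two standard deviations is the simulation's stand-in for training and calibration: the same essay, rated by the same number of raters, goes from a wide bell curve to near-consensus.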

The reliability of rubrics (rating forms) is judged by the kappa statistic, a measure of inter-rater agreement, and by that standard a rubric is a potentially weaker measure of critical thinking than the other validated standardized instruments available through Insight Assessment.
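For concreteness, here is a minimal sketch of Cohen's kappa, the most common such statistic for two raters; the scores are invented for illustration and the statistic is computed from scratch rather than with a statistics library. Kappa discounts the agreement two raters would reach by chance alone, so high raw agreement can still yield a modest kappa.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters scoring the same items.

        kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
        agreement rate and p_e is the agreement expected by chance
        from each rater's marginal distribution of scores.
        """
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)

        # Observed agreement: fraction of items scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement: product of the two raters' marginal
        # probabilities, summed over all rubric levels.
        marg_a = Counter(rater_a)
        marg_b = Counter(rater_b)
        p_e = sum((marg_a[k] / n) * (marg_b[k] / n) for k in marg_a)

        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores from two raters on a four-level rubric.
    rater_a = [4, 3, 3, 2, 4, 1, 2, 3, 3, 4]
    rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 2, 4]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")

With the data above, raw agreement is 0.70 while chance agreement is 0.28, giving kappa of about 0.58, which common benchmarks such as Landis and Koch's read as only moderate agreement.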

For information about the Holistic Critical Thinking Scoring Rubric (HCTSR) or the Professional Judgment Rating Form (PJRF), contact Insight Assessment.

For the INSIGHT Assessment Product Catalog or more information about Percentiles, Norms, and Comparison Groups, contact Insight Assessment.
