Moving Towards Standards Based Grading in History

*My featured image comes from an article by Ken O’Connor on the merits of Standards Based Grading. It’s a good read!

As teachers we are constantly striving to better serve our students. Sometimes this means trying new ideas and sometimes it means dusting off old ones. When you have been teaching for a while, it is difficult not to let the constant clamor for ‘new’ beget cynicism. After all, age is no guarantee of wisdom, but neither is youth a guarantee of innovation.

I have not met many teachers who are neutral on standards based grading. Generally, teachers are either excited and all-in or they hate it, depending on their past experiences. There are many variations of standards based grading, and doing it systemically can create a lot of logistical issues; that does not even include the shift in mindset it requires of parents, students, and teachers.

No system is perfect, and many people have a tendency to judge larger systems or models of education based on an individual experience with that model, be it good or bad. I have certainly done that before. There are so many variables in the art of teaching that it is easier to point a finger at an uncomfortable system than to work through difficult problems and find solutions. However, there is always some insight that can be gained to improve teaching practice, even if you decide that standards based grading is not for you.

I no longer teach at a school that uses standards based grading. However, I believe in the spirit of it: how it honors growth over time, includes clearer and more visible learning objectives, offers opportunities for better feedback, and generally supports learning over GPA outcomes. (That’s a tall order to sell, I know.) So, I have been reflecting on how to integrate the best parts of SBG into a traditional grading system.

Start with Single-Point Rubrics

Don’t want to jump all the way into standards based grading? Take a step towards it by using single-point rubrics to guide feedback and assessment. They are simpler for students to understand, easier for teachers to create, and make the success criteria more visible. They also have a lot of flexibility for including essential content along with the skill. Point values could be added to each criterion/component in order to make a single-point rubric work within a traditional grading system, as in the sketch below. This also makes it possible to weight certain criteria or components within a single summative task.
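To make the point-value idea concrete, here is a minimal sketch of how weighted criteria might be totaled into a traditional gradebook score. The criteria names, point values, and simple met/not-met scoring are my own hypothetical illustration, not taken from the rubrics linked below.

```python
# Hypothetical sketch: attaching point values to single-point rubric criteria
# so a proficiency judgment can land in a traditional points-based gradebook.
# Criteria names and weights are illustrative only.

RUBRIC = {
    "Identifies relevant causes": 10,
    "Distinguishes long- and short-term causes": 10,
    "Supports claims with specific evidence": 15,   # weighted more heavily
    "Explains historical significance": 15,
}

def score_rubric(met):
    """Return (points earned, points possible) for one student's rubric."""
    earned = sum(points for criterion, points in RUBRIC.items() if met.get(criterion))
    possible = sum(RUBRIC.values())
    return earned, possible

# Example: a student who met three of the four criteria
student = {
    "Identifies relevant causes": True,
    "Distinguishes long- and short-term causes": True,
    "Supports claims with specific evidence": True,
    "Explains historical significance": False,
}

earned, possible = score_rubric(student)
print(f"{earned}/{possible} = {earned / possible:.0%}")  # 35/50 = 70%
```

A real single-point rubric would likely allow partial credit on a criterion rather than a simple met/not-met check, but the totaling and weighting work the same way.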

Example 1: National Center for History in the Schools: Historical Thinking Standards

This example is focused on skill only; the criteria were developed by unpacking the specific tasks embedded within the overarching skill standard. It could be applied to any historical content being used to instruct and practice causation. Some may prefer this style to a proficiency scale. Shout-out to one of my science colleagues for the formatting of this rubric.

Single-Point Rubric Example

Example 2: AP World History: Modern

The first example is just for one of the AP historical thinking skills; the second is for a unit capstone project in my AP World History class. Single-point rubrics have a lot of flexibility whether they are written with skill, content, or processes in mind. They can also be used to offer formative feedback prior to a summative grade. On the second example, I added points to each criterion to make it work with a traditional grading system. I have seen really amazing examples of these from other teachers and other disciplines…don’t judge them solely based on mine.

Single-Point Rubric Examples

Build Proficiency Scales

I like the spirit of single point rubrics that focus on clear criteria for proficiency and help target feedback towards skills as opposed to tasks. However, historical thinking involves showing proficiency in many processes, some of them foundational, that are linked under the larger umbrella of a skill. For example, sourcing a document successfully involves recognizing the main idea of a primary source, identifying the most significant or relevant source information, making inferences from source information, making connections between the source information and the source itself, and using the insight gained within the context of the argument. Teaching the larger skills requires understanding how to break the skill into smaller pieces and use that structure to scaffold instruction. I think it helps students to understand that process and have a tool that allows them to gauge their own success in learning and applying the skill.

My example below follows the traditional Marzano format. I also have used a student-friendly version that I have found useful because of its simplified formatting. I also like that it more clearly explains possibilities for score 4.0 and moving beyond proficiency. The spirit of scaffolding a skill standard to better instruct and assess is the same in both examples. The design of the student-friendly rubric comes from a book by Danelle Elder that I found useful in my first year of using a standards based grading model.

The challenge here is how to translate scores of 0-4 to a traditional A-F scale for the purposes of semester grades and GPAs. In many of the systems I have seen, a proficiency score of 3.0 translates to somewhere between a B+ and an A-. There are a lot of challenges around this, but also a lot of potential for how we think and speak about learning and scoring. One possible conversion is sketched below.
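For illustration only, here is a minimal sketch of one such conversion. The cut points and letter grades are hypothetical assumptions, not a recommendation or a table I have used; the only anchor taken from above is that a 3.0 lands near the B+/A- boundary.

```python
# Hypothetical sketch: converting a 0-4 proficiency score into a
# percentage and letter grade for a traditional gradebook.
# The cut points below are illustrative assumptions, not a recommendation.

CONVERSION = [  # (minimum proficiency score, percentage, letter grade)
    (4.0, 100, "A"),
    (3.5, 94, "A"),
    (3.0, 90, "A-"),  # proficiency (3.0) lands near the B+/A- boundary
    (2.5, 84, "B"),
    (2.0, 77, "C+"),
    (1.5, 70, "C-"),
    (1.0, 63, "D"),
    (0.0, 50, "F"),
]

def to_traditional(score):
    """Map a 0-4 proficiency score to (percentage, letter grade)."""
    for minimum, percent, letter in CONVERSION:
        if score >= minimum:
            return percent, letter
    return 0, "F"

print(to_traditional(3.0))  # (90, 'A-')
print(to_traditional(2.0))  # (77, 'C+')
```

Whatever the exact cut points, writing the conversion down explicitly makes it easier to explain to students and parents what a proficiency score means for the semester grade.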

Proficiency Scale Examples
Student-Friendly Rubric Examples

Design Assessments around Standards

Assessments can do more than test students on content knowledge. Good proficiency scales and/or rubrics can support the creation of assessments that clearly test students’ historical thinking. Of course, knowing content is a very important part of this; it’s just that a significant amount of content is acquired at the score 2.0 level before students use it to show proficiency on some of the skills.

These assessments do not need to be massive block-long exams. Often, shorter summative assessments give a really good sense of a student’s skills.

The example below is from an assessment that asked students to analyze primary source documents using the HIPP strategy. These questions were part of a larger summative. Document sourcing is an easy skill to assess this way because it can be done without multiple choice questions that test historical content.

Skill-based assessments can still test content; it just generally goes in a Score 2.0 section if that content is going to be assessed with traditional MCQs, matching questions, etc. The backwards design process also helps prioritize the specific content so that I can more easily balance depth and breadth.

I have used multiple choice questions before, but MCQs present a number of difficulties. It can be challenging to write really good questions with good distractors that tease out a student’s proficiency based on what they select. MCQs can be written more simply and clearly to test finite knowledge, but guessing also presents a challenge to accurately gauging student knowledge and skill proficiency. I generally reserve such MCQs for formative assessments and practice. On summative exams I like to use different short-answer strategies that require students to select specific historical evidence and use it towards some purpose.

For example, in a World History class studying the World Wars, I would have various formative ways of assessing students on vocabulary, key concepts, historical evidence, etc., using traditional MCQs that ask for memorization of finite knowledge. However, a summative assessment would include content questions such as:

  • Identify and Explain ONE piece of evidence that supports the claim “The First and Second World Wars were the same war separated by a twenty year peace.”
  • Identify and Explain ONE long-term cause of the First World War.
  • Explain how the German-British naval arms race contributed to the context of World War One.
  • Identify and Explain ONE turning point during World War II.

These can be written to assess foundational skills within the context of the larger standard(s) being assessed. Score 3.0 questions might go into greater depth on a targeted skill, ask students to construct an argument, or capture student reasoning around a historical claim. These would also be short-answer style, though the questions/prompts might be different. There is a lot of flexibility in question creation, and written student answers allow for a better understanding of student proficiency. This is research-based too! At least according to “Assessment Strategies for a History Exam, or, Why Short-Answer Questions Are Better than In-Class Essays” by Alexander Maxwell, published in The History Teacher. It is available on JSTOR.

The book “A History Teaching Toolbox” by Russel Tarr was also a great resource that helped me rethink small ways of redesigning assessments for efficiency. Russel Tarr also has a website, Tarr’s Toolbox, that is filled with great teaching resources.

In Conclusion

Just thinking about how standards are used in creating rubrics, proficiency scales, and assessments can move a classroom towards the spirit of standards based grading without the logistical problems that often come with enacting it. Many of the ways we can rethink rubrics and outcomes, the role of skill standards, and feedback are just about trying to do better for our students. A good idea can be a good idea regardless of the system or pedagogical model it comes from.
