No matter the system, a teacher inevitably creates a rubric. Whether assessing an essay, a project, or a skill of some kind, rubrics provide the structure that helps students understand what success looks like. While different styles of rubrics exist, they all serve the fundamental purpose of making expectations transparent. A well-designed rubric bridges the gap between instruction and assessment, offering students a roadmap for growth while giving teachers a tool for seemingly fair and consistent evaluation.
Over the past few months, I have explored a wide range of rubric models. My goal was to develop something that threads the needle between standards-based proficiency scales and a competency-based rubric. I experimented with some complex designs, but ultimately landed on a deliberately simple structure. This post captures my thoughts on each of these rubric types as I prepare to integrate aspects of competency-based instruction and assessment into my classroom.
Analytical & Product-Based Rubrics
Analytical or product-based rubrics are the most common. I’ve been using them since I started teaching, and they were certainly not new then. While they are an improvement over simple point totals or letter grades, they have several drawbacks for me.
First, they reinforce a fragmented view of learning. Product-based rubrics are difficult to design in a way that doesn't feel to students like a glorified checklist, even though that is not the intent. Analytical rubrics for essays, in particular, tend to isolate and over-complicate criteria for what is ultimately an interconnected process. I feel my Harkness rubric (below) does exactly this. Sometimes a student who performs only a single role, but does so well, adds tremendous value to a Harkness discussion. This is not captured in the rubric. Likewise, I endlessly tweak the weighting of my rubric components, never fully satisfied with how the weighting reflects a strong Harkness performance.

The nature of these rubrics also tends to limit student creativity and risk-taking. In a standards-based or competency-based model, where the goal is to continually challenge and enrich students, a poorly designed rubric can have the opposite effect, inadvertently restricting growth.
Similarly, analytical rubrics can over-quantify complex assignments. If a rubric assigns 10 points to argument strength, what is the exact difference between a 9 and a 10, or between a 7 and an 8? Students may find this frustrating, and teachers often struggle to justify small distinctions in scoring. The way these criteria are framed is also open to significant interpretation and ultimately reflects the values of the rubric creator. Some rubrics prioritize creativity, others emphasize neatness, and some weigh argumentation more heavily than contextualization. At the end of the day, despite their structured appearance, analytical rubrics are just as subjective as other scoring formats.
In an age of AI, I also wonder how many analytical rubrics are simply generated by AI tools to save teachers time. While better prompting can produce better results, if teachers relinquish too much control over scoring guidelines, what does that mean for the integrity of assessment? Do teachers really need a rubric for every assignment throughout the year, or is there a way to create fewer, higher-quality rubrics that can be applied across multiple situations?
Standards-Based Rubrics & Proficiency Scales
I believe standards-based rubrics and proficiency scales are a step in the right direction. I have used them for years and have seen students thrive with them.
I appreciate how these rubrics and scales help reduce grade inflation by clearly defining skill levels. I also value their direct alignment with standards, ensuring that assessments focus on mastery rather than compliance. Personally, I find it frustrating whenever creativity, neatness, or similar subjective qualities are included in a rubric—these elements often feel misplaced in an objective assessment of learning.
From a professional growth perspective, using standards-based rubrics and proficiency scales has helped me better align my assessment and instructional strategies. My formative assessments are now more clearly connected to summative assessments, and my lessons are far more reflective of those connections. These rubrics promote a growth mindset among students and have pushed me to think more intentionally about the skills I teach and how I scaffold them. I can't escape the idea that they are, therefore, far more valuable to me than to the student, at least directly.

One of the most common critiques I hear—and have experienced myself—is that standards-based rubrics and proficiency scales can feel abstract or detached from specific assignments. To bridge that gap, I often create an assignment-specific checklist to help students see the direct connection between the task and the proficiency scale. However, this adds an extra step for the teacher.
If poorly designed, proficiency scales can be overly rigid, limiting how students demonstrate their understanding. A formulaic structure that constrains creativity and innovation risks failing for the same reason many analytical rubrics do—it prioritizes structure over meaningful demonstration of learning.
Even as I explore the next steps in my own practice, this should not be seen as an outright critique of standards-based rubrics or proficiency scales. Too often in education, something “new” is touted as a cure-all while the “old” is dismissed entirely. The reality is that many students can and do learn effectively with a variety of rubric styles. The key is thoughtful design and intentional implementation.
Competency-Based Rubrics
My understanding of competency-based rubrics is still largely theoretical. I have explored as many examples as I could find, though there have been fewer than I expected. In response, I have started creating my own, refining them as I consider how they would function in a classroom setting.
One of my biggest concerns is the logistical challenge of implementing competency-based rubrics at scale. Since students progress at different rates across a wide range of skills within each competency, tracking everything effectively seems daunting. Unfortunately, I have yet to find a tech tool that fully addresses these challenges.
I am less worried about the potential for arbitrary language around what it means to be “competent,” as my proficiency scales already define those expectations for each skill. Instead, my greater concern is how to consolidate multiple skills within a competency into a single rubric that maintains the holistic nature of the competency without fragmenting the learning process.
For example, the competency of “Developing Questions and Planning Inquiries” consists of five distinct components. To achieve mastery in the overall competency, does a student need to demonstrate proficiency in all five, or would four out of five be sufficient? When a new system is introduced, students and parents tend to ask a lot of “What if?” questions, and they deserve clear, thoughtful answers.
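To make that “What if?” concrete, here is a minimal sketch of what a threshold-based answer could look like if I ever automated it. To be clear, this is purely illustrative and not my actual system: the 1–4 proficiency levels, the cutoff of 3, the component skill names, and the competency_mastered helper are all hypothetical placeholders.

```python
# A minimal sketch of an "N of M skills" mastery rule for a competency.
# Assumes each skill is scored on a hypothetical 1-4 proficiency scale;
# all names and thresholds below are illustrative, not a prescribed system.

PROFICIENT = 3  # scale level treated as "proficient" in this sketch

def competency_mastered(skill_scores: dict[str, int], required: int) -> bool:
    """Return True when at least `required` skills meet the proficiency bar."""
    proficient = sum(1 for score in skill_scores.values() if score >= PROFICIENT)
    return proficient >= required

# Hypothetical scores for the five components of
# "Developing Questions and Planning Inquiries":
scores = {
    "compelling questions": 4,
    "supporting questions": 3,
    "identifying sources": 3,
    "planning the inquiry": 2,
    "evaluating questions": 3,
}

print(competency_mastered(scores, required=5))  # all five needed -> False
print(competency_mastered(scores, required=4))  # four of five needed -> True
```

Even this toy version shows why the question matters: the same student is either “competent” or not depending entirely on where the line is drawn, which is exactly what students and parents will want spelled out in advance.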
Most of my concerns are logistical rather than philosophical. I’m excited about how these new rubrics will allow me to focus more on feedback. Interestingly, a recent grading philosophy survey I conducted with students and parents revealed a notable contrast—while parents prioritized final letter grades, students overwhelmingly valued feedback. I’m also eager to see how these rubrics will emphasize application of learning, encourage interdisciplinary thinking, and (hopefully) empower students to take ownership of their learning. Lofty goals, I know, but ones worth striving for.
Here’s where I am at the moment: I’ve created a single-page rubric for each competency as an overview for students and parents. I’ll likely turn these into posters for easy reference. In class, I use more detailed proficiency scales for each skill within the competency, which I hope will provide a clearer structure for instruction and stronger scaffolding for learning.
This approach gives students and parents both a bird’s-eye view of the bigger picture and a worm’s-eye view of the specific skills being developed. Is “worm’s-eye view” a thing? I had to google it to make sure.
