In 2015-2016 we planned a new middle school with a minimalist standards-based grading system. That same year, a math teacher piloted standards-based grading in one of her grade 10 classes. The success of these two experiences led to whole-school work on assessment in 2017-2018, culminating in an in-service with Ken O’Connor, prompted by our interest in his book A Repair Kit for Grading: Fifteen Fixes for Broken Grades. A coordinator was hired to lead the process in 2018-2019, which happened to coincide with the arrival of a new PowerSchool administrator, and in Fall 2019 we launched standards-based grading across the entire school.
Not without some issues, of course.
But the issues haven’t given rise to any serious roadblocks, even if at times parents aren’t sure what grades mean and we teachers are left scratching our heads over how our inputs produced this or that PowerSchool output.
All in all, I think we can say the transition has gone well.
Faculty are indeed a bit weary of talking about assessment. It’s been our school focus for multiple years now, so the creeping fatigue is understandable. Departments aren’t all on the same page, and the standards, though written, aren’t necessarily part of the school’s DNA. But we’ve successfully abandoned A-F grades. We’ve successfully separated course content from behavior: no longer is a student’s grade lowered for a uniform violation or late work, things not directly tied to demonstrated knowledge in a particular academic discipline. We report less by type of assessment and more by what is learned. And we are slowly beginning to align school privileges and tutoring referrals with actual need. Plus, the stage is set for a culture in which students revise subpar work, or even good-but-not-great work, in continuing cycles of improvement. All good stuff.
But. You knew there was a but. And there is.
But can we say that instruction has actually changed for the better? To what degree have we, as teachers, assimilated the changes in a way that goes beyond reporting, interfacing with PowerSchool, toeing the new line, and so on? The big question:
Are students likely to learn more under our new assessment system? That, after all, was our hope.
So far it’s hard to say. And it’s hard to research. First, our scale has changed from A-F to 1-7, and the two are defined differently, so we can’t compare one to the other to see whether more learning is going on. Second, as our assessment leader pointed out, which group of teachers should we work with to see if we’re making a difference? The early adopters who have been operating quite close to our intended design? Or the early adopters who have been more liberal in their adoption? Or the late adopters who may be on the right track but don’t have much experience yet? Or the late not-quite adopters who need more time to get on board?
And there’s one other interesting question, at least from the point of view of us research-practitioners. If we study instructional change and student outcomes under the new assessment system too early, and find no change, do we damage the potential for the new assessment practices to bring about change? Is it actually wisest not to ask yet whether it’s “working”?
But. When will the right time be?