Yesterday, my advisees turned in their post-graduation plans, one piece of their senior portfolio. They’d worked on these essays with another teacher for the last week. They were focused and did excellent work.
After I received the essays, I counted them, hoping that this time, I’d get a 100 percent turn-in rate.
(I don’t remember the last time all my advisees turned in an assignment on time. Last year, the typical on-time turn-in rate hovered around 50 percent, which devastated me. I’m not exactly sure how to conduct a class that way.)
The count this time? I got 14 out of 18, or 78 percent. Not bad, but not good.
Then I decided to see if I could identify the four students who didn’t turn in their essays without looking at the stack. How predictable is this outcome? I wrote down four names, then checked, then didn’t know how to feel when I found out I was 100 percent correct.
Is it a good thing that I could make this prediction? Does it mean that I know which students are struggling? Or is the opposite true — that the achievement gap is so strong that struggling students have no chance? What’s my role as a teacher if I always get the same result?
As a next step, I asked our counselor whether she could also name the four students who hadn’t completed the essay. Even though she has limited contact with my advisees, she identified three of the four correctly.
What does this tell me? As a school, we know our students well and can identify which students are struggling. Unfortunately, if we can make these kinds of predictions, that means we’re not intervening effectively enough. We’re not addressing the achievement gap. We’re seeing our students, seeing the cracks, and letting them fall through.