Tagged: assessment


Measuring reading growth: Not just test scores

In our data-driven society, no matter where you go, everyone cares about the same thing: results. If you can’t quantify your gains, then too bad for you.

The same goes for reading instruction. It’s the end of the year, and I’ve been thinking a lot about results. Have my efforts made a difference? Are my students better readers than they were back in September? And if they are, how do we know?

Let’s get one thing out of the way: tests matter. So I’m relieved that students performed well on our online reading assessment, and I’m particularly pleased that Kindlers improved more than non-Kindlers.

But tests are just tests. They don’t paint the whole picture.

That’s why I believe that several measurements are necessary to assess student progress in reading. This year, we’ve tried these data points:

  • How many books / pages the students have read
  • Reading stamina
  • Reading fluency
  • Whether students say they enjoy reading
  • Whether students identify as readers
  • How well students can independently use the reading strategies we’ve taught

My goal next year is to streamline this list down to 2-4 key indicators. That way, all teachers and students can track their progress on common, agreed-on criteria.

Which data points do you think are the most important? Or, do you suggest others?


Assessment: Fast is better than deep

 I’ve always believed that assessment is at the heart of good teaching and learning. Students improve only by knowing what’s expected of them and by getting feedback about how to get there.

When I began teaching, my colleagues and I devoted huge amounts of time to devising the perfect assessment program. We joked about our three-dimensional rubrics and sometimes-indecipherable coding system. Some students and parents got what we were trying to do, but others replied, “Can’t you just give us a grade already?”

They were right. Our assessment system was great, but often it took way too much time for students to get their work back.

And the main point of assessment is not to evaluate students’ skills and knowledge but rather to help students grow.

That’s why, the last several years, I’ve focused on getting my students’ work back fast. Especially on practice assignments, extensive comments take too much time. Students like reading personal notes from their teachers, but they prefer knowing whether they’re on track.

If assessment takes too long, students forget about the assignment, start thinking the teacher is disorganized and incompetent, and begin caring less about the class.

This summer, I’ve been reminded of the fast-is-better-than-deep rule. I’m a student in a library and information science class, and my professor has taken several weeks to return assignments. At one point, we were on Major Assignment #3 when she hadn’t yet graded Major Assignment #1. This was inexcusable, and I quickly lost interest in the course.

My point is not to bash my professor but rather to emphasize the importance of assessing student work quickly. If there is a large gap between when students turn in assignments and when they get them back, then students don’t know how they’re doing and how to improve.

Perhaps even more important, the notion of time gets messed up. The best classes, I believe, have one narrative. A group of students and a teacher engage themselves in a story of growth. Although the story isn’t always linear (there are always flashbacks and subplots), the teacher must preserve the master narrative. You can’t be in more than one place at one time.

Of course, fast assessment is very difficult, especially if you have more than 120 students. But unless it’s a culminating project, I’d rather spend two minutes per student on two assignments than four minutes per student on one. Even though both scenarios take the same amount of time (eight hours), my students get an extra opportunity to practice and get better.


We have to make reading more public

 It’s typical for young people to love writing and hate reading. Writing is expression; writing is communication; writing is art. On the other hand, reading is boring; reading is private; reading is lonely.

This year, my students wrote 16 essays and read 12 books. They all said they became better writers. Only a few said the same thing about reading.

How is this possible?

It’s because writing is more public than reading. It’s more out there. You write something, and it’s on the computer screen or on a piece of paper. Even if you don’t want help on your writing, it’s in the world, all your thoughts and grammar mistakes right there, ready for a teacher or a peer or a writing mentor to critique, ready to talk about in a writing conference.

Because writing is more public, students feel they can improve their writing skills more quickly than their reading skills. Writing is a craft, while reading is just something you’ve done forever. It’s easier for students to have a growth mindset with writing than with reading.

That notion has to change. If we’re going to push our students to read challenging texts, we need to convince them that reading is a complex intellectual skill that involves much more than decoding and comprehension.

To do that, we need to make reading more public, more out there. We must challenge students to talk about their reading, both to us and to each other. We can’t be afraid to ask students to read aloud and process how they’re making meaning of a text. We have to build classrooms that celebrate reading “mistakes” as examples of growth.

Most of all, reading needs a product in schools that is equal to writing’s essay. Right now, there is no equivalent artifact. Sure, teachers have their reading questions and their Socratics and their book reports and other fancy projects. But very little exists that documents a student’s reading process and understanding of a text. Annotations come closest to achieving this purpose, but few teachers have taken them seriously (yet).

This summer, I plan on thinking about what can be done to make reading more like writing for my students. I want them to feel like they can track their growth as readers and to show evidence of their reading journey.

Please let me know if you have ideas. 


Assessment: Process as important as product

At the beginning of the semester, I considered myself fairly knowledgeable about assessment. After all, as a teacher, I had participated in overhauling my first school’s assessment system and in campaigns for standards-based grading. Nevertheless, this class — particularly my peers in discussions and class activities — pushed my thinking about assessment in three important ways.

First, I learned about the distinction between formative and summative assessment and the importance of both types. In our transformations, my partners and I struggled at first. We knew the difference between the two but had trouble lining up our goals and objectives with their corresponding assessments. After we took a look at Carnegie Mellon University’s Eberly Center for Teaching Excellence’s website (http://goo.gl/x6aEN), however, our understanding became much clearer. I appreciated the succinct definitions and the easy-to-understand examples.

Second, I learned that the assessment of process goals is as important as the assessment of content goals. This was a major shift for me. After all, as a teacher, I have been taught — especially over the past decade, since the advent of No Child Left Behind — that content is king. Process is secondary. What matters most is whether the student gets there, not how she got there. But Professor Loertscher and my peers challenged my thinking. Particularly with information literacy and 21st century skills, which emphasize critical thinking, the assessment of process is critical. The Big Think — in which students practice metacognition and consider their growth in product and process — is also an important step for teachers and teacher librarians.

Third, much of my learning came from reading the Common Core and the AASL standards closely and thoroughly. I’ve become obsessed with the Common Core’s ELA standards and the controversy surrounding David Coleman, who argues against pre-reading activities and reader response pedagogy. Despite the ongoing debate about the Common Core, I found the standards helpful for interdisciplinary projects and for learning that involves research and support from teacher librarians. Specifically, the focus on argument and evidence aligns well with several learning models, such as “Take a Stand.” I also appreciated perusing the AASL standards, especially their focus on problem solving, critical thinking, and the production and sharing of knowledge. If the library is to become a 21st-century learning commons, our curriculum and assessment must center on the exchange of ideas.

My reading this semester has deepened my understanding of curriculum and assessment. Shifting from being a classroom teacher to a teacher librarian has encouraged me to consider assessment differently. My journey has led me to believe in the importance of assessing not just the discrete content standards but also the more fundamental aspects of learning — the why and the how.


Students value screencasts as feedback

Last week, I did a little experiment: Instead of offering written feedback on my students’ essay drafts, I recorded screencasts.

It was fun and took about the same amount of time — five to eight minutes — as typing comments in Google Docs.

Here’s an example. Note: The screencast is five minutes long, but you’ll get the gist within the first minute. Don’t feel like you need to watch the whole thing. It’s not particularly scintillating! Also, the volume is a little low.

I made sure to ask my students what they thought of the screencast idea. All but two students preferred getting oral comments. They liked that I was thinking through their paper, trying to make sense of their ideas.

On the other hand, students noted that screencasts — especially short ones — cannot offer specific, targeted feedback. If my purpose is to give general comments about a paper’s focus and organization, then the screencast is perfect. If my goal is to talk grammar, it’s best to go the written route.

One of my students said, “Why don’t you do both?” Very funny. He doesn’t understand the English teacher’s paper load.

But it does get me thinking. Maybe it makes sense to read a student’s essay three times: once for content (screencast), once for grammar (written comments), and once to grade (highlighting a rubric). When combined with a student’s peer editor and online writing mentor, that’s sufficient support in a typical two-week essay window.

Plus, the screencasts are more human. They give students the feeling that a real person — not just an English teacher — is reading their work. I think it’s a great way to communicate care.


Responding to student work with screencasts

I just finished up an experiment. With the first drafts of my students’ essays, instead of writing comments in Google Docs, I recorded screencasts to offer spoken rather than written feedback.

This is not a new idea. I’ve done it before here and there, and so have others (Shelly Blake-Plock’s post is excellent), but I’ve never done screencasts with an entire class before.

Until today.

So here’s how it went: I used Jing, a free screencasting program that allows you to record your computer’s screen and your voice for up to five minutes. Once you’ve done that, you can save your screencast to your computer or post it (with a URL) to screencast.com. It’s pretty simple.

So instead of reading my students’ essays and then giving written comments in the margins, I talked through the essays like a live AP English reader would. In other words, I did a “think aloud.”

I’m not going to post any examples here yet — until I get my students’ permission and their feedback about this process — but my initial hope is that these screencasts will approximate a virtual (one-way) writing conference. I’m wondering if hearing a person’s voice (instead of reading a person’s comments) will spur more students to deeper revision.

Many time-crunched teachers will ask, “Doesn’t this take forever?” Actually, not really. You can record up to five minutes, but my screencasts averaged about three. Then it takes about a minute to upload to screencast.com, during which time I take a much-needed break to refresh my head and surf the web. Once the uploading is finished, I copy and paste the link to the student’s writing review template on the bottom of the essay. Overall, then, the process takes about five minutes per essay, which isn’t horrific.

No, you can’t offer line-by-line commentary. You can’t get into the nitty-gritty of word choice or syntax. These screencasts are good for the big stuff — overall focus, thesis, organization, quality of evidence. They’re great to give students a holistic assessment of their work in the formative stage.

Please let me know what you think of this experiment. It’s entirely possible that it’ll be a failure, but I’m hopeful that it will give my students more of a “live” version of how readers try to understand their writing.

I’ll be sure to post an update after I get my students’ reactions.


Group grades: Another way to increase homework

For a long time now, I’ve thought about ways to increase homework completion.

The Nightly Text was fairly successful, but still, homework completion dropped precipitously on weekends.

Last unit, without too much fanfare, I introduced a new idea to encourage students to read and annotate The Awakening.

I called it “Group Annotations.”

Up until this book, I regularly gauged my students’ reading by checking their annotations. It was simple: I’d go around, table by table, and do a spot check.

This time, I made a small change: Your annotation score was based on your overall team’s score.

That meant: If you did your annotations but your peers didn’t, you’d lose. And vice versa: If your peers did their annotations and you didn’t, you’d hurt them.

The results were excellent. Homework completion was more than 95 percent.

More than any other reading homework assignment I’ve done this year, Group Annotations encouraged students to do their reading nightly, to annotate closely, and to be prepared for classroom discussion.

My students didn’t want to be the one bringing down their team.

A bit of a warning: This idea likely would not work everywhere. After all, students have to care about each other and demonstrate social responsibility. In addition, the practice is ethically questionable; it’s an entirely individual assignment, with no group product, that is being assessed collectively.

But it worked, and that’s what counts the most.

It’s intriguing to me how much better my students did with group accountability. When they’re working for themselves, they sometimes get lazy. When they’re working for me, they sometimes do so begrudgingly. But when they’re working for each other, their drive kicks in.


The Nightly Text as formative assessment

My experiment with the Nightly Text, this unit’s ongoing homework assignment, has been a major success.

Reading’s up, homework’s up, and the quality of discussions is up, too.

One additional benefit of the Nightly Text is that it’s been great for formative assessment.

Too often as teachers, we wait too long to find out that our students are falling behind. We spend so much time developing engaging culminating projects and daily lessons that we don’t recognize how important it is to assess whether our students are making solid progress.

That’s where formative assessment comes in, and that’s how the Nightly Text is helpful.

When I receive a text, I get a quick snapshot of a student’s understanding. If a student is off point, I can intervene immediately instead of waiting until the next day.

Here’s an exchange I had tonight with a student (about The Awakening). Part of the homework was to write an analytical question for tomorrow’s Socratic seminar.

Student: Do you think women are still under men’s control?

Me: Maybe a good question for a social studies class, but there’s nothing in the book that will help you answer that. Text me back.

Student: Why do you think Kate Chopin decided to write against women’s gender roles in society?

Me: You’re getting closer, although this question relies heavily on speculation rather than textual analysis. Try to ask a question about the last 2 pages. Text me back!

Student: On the last page, Edna hears her father’s voice and her sister’s voice. Why do you think she hears her family’s voices and not Robert’s voice?

Me: OK, that’s good.

Student: Yes!

My student’s first question, although interesting, was not appropriate for a text-based discussion. His second attempt was closer — by centering on a major theme in The Awakening — but it was too broad and wouldn’t encourage his peers to delve into the text.

After a little direct prodding, however, my student was able to write a question that — although not perfect — will be a solid one for tomorrow’s discussion.

Sure, I could’ve checked his question tomorrow, but that would’ve been last minute. My student would’ve gone into the discussion without confidence.

Now, both my student and I can rest comfortably. He feels prepared to contribute, and I know that every student will have at least one solid question to ask.


An experiment on cheating

My students have good morals — most of the time. When it comes to cheating on quizzes, though, their standards are different from mine.

I give a weekly quiz that includes a little reading comprehension, a few tone words, and a couple literary terms. Although not a huge part of the curriculum, the quizzes are important in making sure that students have breadth of knowledge for the upcoming AP test.

At the beginning of the year, I told my students I trusted them not to cheat on the quizzes. In fact, we went over the answers immediately, and students corrected their own quizzes. This process worked until I noticed that the students’ scores seemed high — perhaps too high.

I am not the type of teacher who distrusts his students. After all, I believe in them. But to test my suspicion, the following week, I had students grade their peers’ quizzes. The first time, grades plummeted, but after that, the scores quickly rose again.

Then there was last week. Two students who sit next to each other had almost identical (wrong) answers. I approached them, and both students denied they’d cheated.

So today, I decided to do an experiment. I made four versions of the same quiz. (Teachers do this all the time. I’ve done it before, too.) In my opinion, it’s a complete waste of a teacher’s time, but in the school game — which includes cheating — it’s sometimes a necessary evil.

(One may make the rebuttal that the quiz itself — my decision to assess traditionally — is setting up the students to cheat. I don’t buy that argument.)

What were the results of my experiment? Well, quiz scores declined drastically, but not as much as I’d feared. My analysis is that many students have likely been cheating on 1-2 questions per week (and that some have been copying the entire quiz). My hunch is that students chose to cheat because it’s easier and takes less time than studying.

Now that my little experiment on cheating is over, the question is what I do next. Do I tell them what happened? Do we talk about cheating? Or do I continue making four versions?

What do you think?