As I write this, we are in the heart of football season, one of the best times of the year. Just a few weeks ago, as summer was drawing to a close, pundits and “experts” from all across the country began making their predictions about what the upcoming season had in store.
I live in the middle of the mitten, known to many others as mid-Michigan. Where I live, the majority of people have loyalties to three football teams: the Michigan State Spartans, the University of Michigan Wolverines, and the Detroit Lions. Back on July 15th, before any games had been played, the Spartans were left out of the “experts’” Top 25. The Wolverines were picked by “experts” as possible national title contenders, and the Detroit Lions were predicted, by computer simulations, to start their season 0-10.
Well, a month into football season, it is safe to say we now know why they play the games. The Wolverines are a mess, the Spartans are up and down every week, and the Lions are actually being considered one of the best teams in the NFL. I am so glad the experts and the computers don’t have the power to actually label and define the winners and losers before the season ever starts. I mean, that would be crazy, wouldn’t it? To simply look at the rosters of the teams, to enter that data into a computer, and then let the computer decide who the winners and losers are… enter assessments in most K-12 schools today.
How often have you heard people predict the future of a student because of his or her demographic information? How often have schools been designated winners or losers because of an algorithm? How often are teachers asked to give a student a test so that a computer can simply predict success on a future test? How many “experts” come to schools before the school year even starts to describe what they see as the needs?
In my job, I have the opportunity to work with teachers and administrators from all around the country. One of the most common complaints I hear from both groups is, “We test our kids too much!” That is an interesting statement. Now, I am going to say something that is extremely controversial. I completely DISAGREE. I actually believe that we do not test our students enough. What???? I know. It sounds crazy, but hear me out.
One of the reasons we have so many high-stakes tests in schools today is that people, often bureaucrats working in state capitol buildings, believe we need an honest and accurate assessment of what our students can and cannot do. This sounds reasonable enough. The issue is that how, when, and what we assess in classrooms often does not match up with these so-called “high-stakes tests” requested by “experts,” so we feel a disconnect and tension. As a result, those same bureaucrats come back into our schools, through legislative action, and ask us to administer more tests so that we can get earlier and more frequent progress updates, or in some places, predictors of future success. In other words, we ask students to take tests simply to predict their success on future tests. Some assessments on the market actually use their ability to correlate with and predict future performance as a selling point.
Let’s take this approach and connect it back to the football metaphor from the beginning of this post. Jim Harbaugh is the coach of the University of Michigan football team. If he had bought into the hype this summer regarding his team’s anticipated success (some may say he actually did) and simply leaned into that prediction, without making his own assessments of strengths and weaknesses, without creating weekly game plans, without structuring practices to shore up weaknesses and build on successes, would his team ever be able to live up to the prediction? No way. On the flip side, what if the Detroit Lions had bought into the belief that they would be winless for the first two months of the season? What would be the point of showing up to practice every day?
In schools everywhere, we have been conducting what I believe to be educational malpractice. We have decided to label kids, label schools, and label districts as a result of a computer algorithm. We have taken quantifiable data and entered it into a computer, a computer programmed by humans, so that we can predict future success or failure, and we don’t give it a second thought. We do not question the reliability of these predictions. We do not ask for evidence that these simulations will reflect actual long-term results. We take it at face value that a student identified as “not proficient” will be “not successful.”
In our classrooms, we have taken this same approach on a smaller level. We have begun classifying assessments as either formative or summative, often using formative assessments as predictors and summative assessments as the ones that “really matter.” We may use weekly quizzes, computer-adaptive assessments, or department-created assessments simply as a means to predict later success. In our grade books, we rationalize this by giving these “formative assessments” a smaller weight while the summative ones carry the larger burden. But again, this is far from pedagogically sound.
Formative assessment, by its very name, suggests that it should inform future action. A formative assessment is similar to a score at the end of the first quarter. It is an observation made by a coach at practice that his linemen are not getting low enough before the snap of the ball. It is an assessment that does not predict the future but informs the coach about what might be adjusted to help create future success. Final scores are not printed after the first quarter or after a practice in the middle of the week.
Teachers, when you label an assessment as formative or summative prior to using it, you lessen its value. Every assessment is both formative and summative. It is how you use it, not what you call it. A good coach makes adjustments after every practice, after every game, after every season. He uses what he knows to make corrections. This is using an assessment formatively. A good coach also understands that every game matters. Using this week’s game as a tune-up for a game in the future is a recipe for disaster. Good coaches expect the best from their players at every practice and at every game. It is through this mindset of all things being equal that a belief in the accuracy of results can come, which then allows for honest correction.
In your classroom, ask yourself: are you sending a message to students that some assessments matter more than others? If you are calling some assessments formative, prior to even seeing the results, have you already diminished their value and their ability to give you reliable data to make adjustments? If you are calling some assessments summative, are you already signaling that, no matter the results, you will not make adjustments or provide new opportunities for improvement? In your school, are you giving assessments to students three times a year just so that those who wear suits and work in cozy offices can use their crystal balls to predict future success, thereby completely diminishing your ability to use that data to change the future?
This week, work to assess kids regularly. Every question you ask and every assignment you give is an opportunity to assess students. These are all opportunities for students to demonstrate mastery and for you to make adjustments. It is up to us to prove that we know our kids and that we can fine-tune our instruction based on evidence. If we don’t, then we can’t be surprised when the suits show up with bubble sheets and number 2 pencils asking for more.
On the 15th of each month, I will send out my 2 Cents to The Lasting Learners e-mail group. Sign up today and get my latest thoughts on leadership and assessment… and honestly, it’s only ONE e-mail a month: http://eepurl.com/cQwHA1
But these are just my 2 cents!
Feel free to read more of my thoughts at https://schmittou.net
Or check out my books on Amazon:
Comment below, share with friends, continue the conversation…