On Data, Part Five: Out of Focus

Have you ever picked up a pair of binoculars in haste only to be frustrated by their lack of focus? The initial image may not resemble anything you recognize. If you know how to use binoculars, the first thing you do is adjust the focus mechanism until you see something you can make sense of.

Using inferior data and poorly considered assumptions to understand reality can be a lot like looking through a pair of poorly focused binoculars and being forced to draw conclusions based on what you see.

In a post entitled "Doing Program Evaluation Scientifically," Jason Radford, a PhD student at the University of Chicago, writes today:
"...the capacity of organizations to do their own high-standard evaluations represents probably the single biggest barrier to accountability. Anyone can do research, but to do good research by social scientific standards requires specific training in hypothesis testing, data collection design, and data analysis. If the accountability movement wants to succeed, it needs to develop the financial and technical resources necessary for organizations to develop this capacity."
From my perspective, the accountability movement in public education has put enormous emphasis on promoting the use of data in schools and almost entirely overlooked what it means to do meaningful research.

In essence, public schools have not only been told to use blurry binoculars with no focusing mechanism to make decisions, they've also been asked to spend an incredible amount of time producing them. That public school employees overwhelmingly lack both the time and the expertise necessary to create and analyze the kind of data that might actually tell us something about causal relationships is conveniently ignored.

As Peter Gwynn pointed out in "Dawn of the Dumbest Data," entering 2,500 data points based on questions pulled from websites and written on the train to work, all in order to claim that students did not understand a standard before it was taught, is nothing more than a monumental waste of time.

Even when student data is created responsibly, what of the opportunity cost involved in creating it? The Department of Education's response to the recent elimination of social studies, arts, sciences, and physical education in schools across the country has been to increase the number of tests students are required to take. In New York, high schools often spend inordinate amounts of time not only taking Regents Exams but also preparing for them. And the PARCC Consortium, tasked with assessing students' progress toward the Common Core State Standards, promises to soon replace the Regents with even more tests.

Could the enormous amount of time and money going into creating tests, preparing for them, analyzing them, and gaming them be better allocated to other endeavors?

Where should we draw the line between time spent on data creation and time spent engaging students?

Might well-trained educators working in districts with systems for internal accountability informed by test scores, but not driven by them, provide an alternative path toward school improvement?

How can we ensure the binoculars we're using have the focusing mechanisms necessary to find the knowledge beneath the data?

I hope I've illustrated over the past few days the ways in which public education's data use has systematically hindered the ability of "failing" schools to educate their students.

Tomorrow, in my last post in this series, I will look at the cheating scandals this failed use of data has produced and explain why we will, unfortunately, not suffer a global knowledge meltdown as a result. If you'd like to know more about the recent Atlanta cheating scandal, tonight's PBS NewsHour will feature it as one of its stories.

Comments

  1. And speaking of opportunity cost, if we are to use standardized testing as a means of evaluating our students' abilities and our education system, then it seems counterproductive to take away potential teaching and learning time (classroom hours) in order to administer these tests.

    Frankly, I'm not even sure it's within the parameters of what was intended when states started enforcing "seat time" laws. Standardized tests aren't really instructional in the same way that a unit exam or a pop quiz might be. I'd be much less resistant to the proliferation of standardized exams in our school systems if they were administered on non-school days, such as during the summer. The students won't like it, and probably the parents won't either, but it seems like a better option than continuously diminishing classroom time by adding more standardized tests to the mix.
