On Data, Part Four: Advocating for Data-Informed Decision Making

On Thursday I explained why I'm wary of data-driven instruction enthusiasts. On Friday I showed how dumb data is sometimes created in schools. Yesterday I drew a comparison between Wall Street's inappropriate use of data and the similar misuse of standardized tests in public schools. Today I'm going to explain why I think we should eschew data-driven decisions in favor of data-informed decisions.

In 2002, NCLB made standardized test scores the sole indicator of school success. Today, lawmakers across the country are slowly but surely voting for teacher evaluation systems that factor in how students perform on standardized tests. These laws indicate an increasing trust in standardized tests and in statistical models such as value-added models (VAM) to do the hard work of defining school and teacher quality.

On July 27 Jeff Henig, a professor of education and political science at Columbia University, published a guest post on Rick Hess's blog entitled "Policy by Algorithm." In it, Henig notes the utility of data and algorithms to help systems like public education make decisions, but he cautions:
"...the high promise of policy by algorithm mutates into cause for concern when data are thin, algorithms theory-bare and untested, and results tied to laws that enshrine automatic rewards and penalties. Current applications of value-added models for assessing teachers, for example, enshrine standardized tests in reading and math as the outcomes of import primarily because those are the indicators on hand. A signature element of many examples of contemporary policy by algorithm, moreover, is their relative indifference to specific processes that link interventions to outcomes; there is much we do not know about how and how much individual teachers contribute to their students' long-term development, but legislators convince themselves that ignorance does not matter as long as the algorithm spits out a standard that has a satisfying gleam of technological precision." 
What happens when we commit to algorithms built on data that do a poor job of measuring a system's objectives? If the goal of education is to teach students to answer standardized test questions, then standardized tests are an excellent metric to use, in the same way that profit is an excellent indicator of a business's success. If education's goal, however, is to prepare students to be engaged citizens and critical thinkers, then we must acknowledge that standardized tests are currently of extremely limited utility.

Cornered by a growing consensus among policymakers that test scores alone are appropriate indicators of student, teacher, and school success, public schools can be constricted by a requirement that decisions be data-driven rather than data-informed.

Given its obvious drawbacks, why is this misguided use of data forced on "failing" schools?

I wonder whether larger systems are more likely than smaller ones to embrace policy by algorithm. As systems grow bigger and bigger, must they rely on algorithms more and more? Would a system in which all participants were intimately involved in its processes be as likely to trust decisions to algorithms as one in which policymakers have almost no experience with its everyday processes? In other words, in the context of schools, would politicians put as much value on standardized test scores if they sat in the classroom every day?

It seems to me that an increasingly blinkered faith in the decision-making ability of algorithms might be indicative of a few things:

1) a system in which the policymakers are disconnected from the participants and may mistrust their judgment (e.g. politicians distrusting teachers, administrators, and/or district officials)
2) a system so large and cumbersome that the various facts on the ground cannot be conveyed to the public practically except in the form of numbers
3) an intellectual laziness that favors numbers over testimony
4) policymakers' respect for the political cover numbers can provide in the face of unpopular decisions

Comparing the state's use of algorithms to Google's, Henig continues:
"Google makes up for what it might lack in theory and process-knowledge by continually tweaking its formula. The company makes about 500 changes a year, partly in response to feedback from organizations complaining that they have been unjustly "demoted," but largely out of a continued need to stay ahead of others who keep trying to game the system in ways that will benefit their company or clients. State laws are unlikely to be so responsive and agile."
If standardized test scores are to inform our decision making, then we, as educators, must constantly reexamine our assumptions about their data as each new test is administered. We would analyze each test and discuss its capacity to measure what it purports to measure. In doing so, we would involve students and be privy to the test designers' motives and methods. Changes would be made to the way data is used and the importance we give to it, year after year. But when lawmakers, test designers, and schools do not share a common understanding and yet attempt to move together toward a common goal, one enshrined in law by legislators, facilitated by test designers, and enforced upon "failing" schools, all the while ignoring the nuances that must come with the use of data, the end result may not be ideal for students. It may, however, create useful numbers for lawmakers.

What if we didn't have a political system in love with high-stakes testing? What would a school that used data effectively in that environment look like? First and foremost, it would collaboratively create a clear mission statement appropriate for its student population. Second, it would decide on what indicators, when met, would authentically demonstrate progress toward that mission. Third, it would create a strategy for achieving those indicators. Lastly, and most importantly, it would constantly rethink the usefulness of its indicators and the data they create, its strategy for meeting those indicators, and, on occasion, the mission statement itself. Critical to this endeavor would be engaged professionals, parents, and community members. This is the kind of environment in which data-informed instruction could be incredibly useful.

Henig finishes his post:
"Both data and algorithms should be an important part of the process of making and implementing education policy, but they need to be employed as inputs into reasoned judgements that take other important factors into account. The last thing we need are accountability policies that undermine education as a profession or erode the elements of community and teamwork that mark and make good schools. But when law and policy outrun knowledge, the results are likely to be unanticipated, paradoxical, and occasionally perverse."
I couldn't have said it better myself.
