In the summer and fall of 2008, it became clear that global financial markets were collapsing. While the disaster caught many off guard (including Alan Greenspan), a small number of individuals had long predicted such a crisis, for exactly the reasons it occurred.
In retrospect, it seems that only three qualities would have been required to foresee the financial meltdown:
1) a rudimentary grasp of basic economic principles
2) a desire to understand the 2000s housing boom rather than merely profit off it
3) an understanding of the debts that underlay subprime mortgage bonds and collateralized debt obligations
It is safe to say that nearly everyone either investigating or investing in the housing boom had the first quality. At least a significant number of those involved surely possessed the second. It was, however, a mass misunderstanding of subprime mortgage bonds (SMBs - essentially the pooled debt of a bundle of mortgages) and collateralized debt obligations (CDOs - far more complicated instruments built from pieces of SMBs) that led the global economy off a cliff. This misunderstanding was caused largely by data misuse and corruption.
SMBs and CDOs consisted of the debt owed by thousands upon thousands of people on their mortgages, many of whom were in no position to pay their loans back. The shadow banking system originated the mortgages and then sold the debt to investors around the world in the form of SMBs. Wall Street would take the SMBs that appeared so toxic that nobody would buy them, divide them up, and repackage them into CDOs, manipulating ratings agencies like Moody's into assigning the new securities investment-grade credit ratings (some as high as "triple A"). This was done by exploiting the rating models' reliance on inappropriate data and on flawed assumptions about market conditions. That green light to invest in what were essentially worthless bets on the ability of day laborers to pay off $800,000 homes, as if they were US Treasury Bonds, was one of the chief causes of the crisis.
How did these CDOs receive "triple A" ratings?
Investment-grade ratings were assigned to CDOs largely because data was used inappropriately. Debtor FICO scores are one example. The ratings agencies requested the average FICO score of the pool of mortgages in a CDO as one input for determining its grade. FICO scores are relatively easy to manipulate and offer only a partial picture of an individual's ability to repay a loan. Moreover, the average FICO score of a group of mortgages is far less telling than the individual scores: an average can mask a pool that mixes a few strong borrowers with many very weak ones. Knowing the ratings agencies' models, and staffed with far better-paid talent, Wall Street bond trading desks chopped up SMBs and repackaged them into CDOs precisely to exploit the models' reliance on this kind of data.
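The averaging problem is easy to see with a toy calculation. The sketch below uses invented scores and an illustrative 620 "subprime" cutoff (not actual rating-agency inputs): two pools share the same average FICO score, yet one is uniformly marginal while the other is half deep-subprime.

```python
# Toy illustration of why a pool's average FICO score can mask
# individual borrower risk. Scores and the 620 cutoff are invented
# for illustration, not actual rating-agency parameters.

def average(scores):
    return sum(scores) / len(scores)

def share_subprime(scores, cutoff=620):
    """Fraction of borrowers whose score falls below a subprime cutoff."""
    return sum(s < cutoff for s in scores) / len(scores)

# Two pools with an identical average score of 615...
uniform_pool = [615, 615, 615, 615]   # everyone just below prime
barbell_pool = [500, 500, 730, 730]   # half deep subprime, half prime

assert average(uniform_pool) == average(barbell_pool) == 615

# ...but very different concentrations of high-risk borrowers.
print(share_subprime(uniform_pool))   # 1.0 (every borrower subprime)
print(share_subprime(barbell_pool))   # 0.5 (half the pool deep subprime)
```

A model keyed to the pool average treats these two pools identically, even though the barbell pool's losses in a downturn would be concentrated in borrowers far less able to pay.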
Other flaws in the ratings agencies' models included:
- the assumption that housing prices would continue rising (in such a world, debtors who had trouble paying off their mortgages could always refinance and obtain a new loan)
- ignorance of and apathy toward the fraud occurring in predatory lending
- lack of information regarding the markets in which the loans had been made
(And none of this is to say anything of the reality that the ratings agencies were often in the pockets of the banks.)
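The first flaw above, the assumption that rising prices would let troubled borrowers refinance their way out of default, can be sketched with a toy Monte Carlo simulation. All probabilities here are invented for illustration; the point is only the mechanism, not the magnitudes.

```python
import random

# Toy Monte Carlo sketch (all probabilities invented) of how the
# "housing prices keep rising" assumption hides pool losses. In a
# rising market, a borrower who can't make payments can refinance;
# when prices fall, that escape hatch closes.

random.seed(0)

def pool_loss_rate(n_loans, p_trouble, p_refinance, trials=10_000):
    """Average fraction of a loan pool that defaults.

    p_trouble:   chance a borrower cannot make payments
    p_refinance: chance a troubled borrower escapes via refinancing,
                 which depends on home prices still rising
    """
    total = 0.0
    for _ in range(trials):
        defaults = sum(
            1
            for _ in range(n_loans)
            if random.random() < p_trouble and random.random() >= p_refinance
        )
        total += defaults / n_loans
    return total / trials

# Model's world: prices rise, so 90% of troubled borrowers refinance.
print(pool_loss_rate(100, p_trouble=0.30, p_refinance=0.90))  # ~0.03

# Post-boom reality: refinancing dries up, and the same pool of
# borrowers produces nearly ten times the losses.
print(pool_loss_rate(100, p_trouble=0.30, p_refinance=0.10))  # ~0.27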
It's hard to imagine that anyone who purchased these SMBs and CDOs knew anything about what was in them. (Some of the people who had a clue bet against them and made fortunes.) It would have been nearly impossible to gather all the pertinent details without somehow finding out which slivers of which mortgages were in the portion of the CDO that you owned, traveling to meet and question the people who'd taken the loans, and investigating the conditions of the markets they'd borrowed in. Rather than do all of that, accepting a "triple A" rating from Moody's seemed vastly preferable.
In essence, the ratings agencies' misuse of data allowed their models to assign ratings that obscured the reality that lay beneath them.
So confusing were these securities (and so deluded were the banks by the credit ratings they'd helped to manufacture) that when the market started to unravel, nearly every major Wall Street investment bank was caught holding enormous amounts of risk, both in the securities themselves and in the insurance contracts that had been taken out on them known as credit default swaps. (Credit default swaps were essentially bets that the debtors would default, many of which were notoriously held by AIG FP. When the government bailed out AIG, a significant portion of those taxpayer dollars went to filling the coffers of banks like Goldman Sachs who had made such bets.)
Thanks in part to the ratings agencies' flawed models, the system had fooled not only investors, but also the banks. The notion that these models would guide investors to their desired exposure to risk cost the world nearly $12 trillion.
Anyone with the time and discipline to investigate the underlying conditions of the housing boom would quickly have concluded its unsustainability. That, in combination with a recognition of the pace and volume with which its risk was being sold around the world, would surely have sent any sane person screaming about impending doom.
The crisis demonstrated not only the myth of rational markets (markets are, after all, manmade) but that when an endeavor's decision-making relies heavily on inappropriate data rather than knowledge, and assumptions not regularly reexamined, it will ultimately fail.
When financial markets systematically make poor decisions over an extended period of time, liquidity disappears, credit dries up, and markets stops functioning.
Why is all of this on an education blog?
What happens when "failing" public schools systematically make decisions with inappropriate student data? Given that the same ideological push that has long perpetuated the notion of the rational market (seemingly without consideration for its lust for success or need for regulation) is also promoting its accountability methods in public schools with a disregard for similar pressures to succeed (e.g. high-stakes testing or the stake of political careers), hesitancy should strike one as prudent.
On Thursday I explained why I'm wary of the push toward data-driven instruction, especially in "failing schools." Yesterday I provided an example of how I've seen data created in a public school that impeded useful decision making. For one more example of how data has been used inappropriately in schools, scroll down to the comments section from my post yesterday.
In light of my posts over the past two days, I hope this post has drawn a clear connection between the systemic cracks in the financial system and the effects of federal policy on public schools. I further hope that connection makes clear the caution with which education policymakers should be proceeding in their effort to hold schools accountable.
Tomorrow I'll explore the relationship between a system's size and its reliance on data, as well as methods of using knowledge and quality data in an effort to make smarter decisions in our schools.