The 2012 results from the Program for International Student Assessment (PISA) are out. To celebrate the failure of public education, “PISA Day” has been organized by those now establishing private, centralized executive control over once-public institutions. PISA Day is the latest in a string of reality shows to litter our cultural space, the latest salvo of disinformation against the public governance and public purpose of schools and educational processes more generally.
There is plenty of useful material to read regarding what’s wrong with the PISA exam and how the results are being manipulated. But there’s a 600-pound gorilla in the room that has not been fully recognized (and it’s not poverty I’m referring to, which has, for example, received attention here). It’s the crisis of authority that is evidenced by the particular character of the present charade. That character is evident in the media reports about PISA, in how those reports were orchestrated, and in the anti-science that confounds ranking with measurement. Examining each helps us grasp the content and significance of the disinformation itself.
Media and the Crisis of Authority
As is now the norm, monopoly news media outlets have dutifully parroted the views of Arne Duncan and the US-dominated Organisation for Economic Co-operation and Development (OECD), which creates and administers the PISA assessment. These reports of PISA results confirm not a decline in the capacities of American youth but rather a decline in the quality of journalism. Editors seem to have abandoned the idea that one should distinguish fact from opinion, an editorial from a report.
Take for example this news item. It relies so heavily on ideologically driven language to report PISA results that one cannot help but for at least a moment accept its social Darwinian view!
- Duncan’s assessment that PISA reveals “educational stagnation” is parroted without even the slightest awareness that the claim is both false and designed to miscategorize education as an economic institution, subject to the logic of the “boom” and “bust” cycles of capitalism.
- We are told that student performance has “flat lined” — as if resuscitators should now be deployed in districts across the country (if only that healthcare plan actually worked).
- We are told others are “racing past” the US, as if human learning and the social responsibility to educate the young were best understood as an Olympic track meet.
- US students are “treading water” — yet Duncan and his clan offer not even a dinghy — students must swim harder, they say. No wonder they’ve flat lined!
Thus, assertions, buzz words, and euphemisms occupy the space of analysis that might contribute to informing the public about the subject of education.
The point is this: the very language used to discuss the PISA assessment reveals a complete breakdown in standards of authority and authoritative discourse. Known authorities on the subject of education are disregarded, disparaged, or otherwise marginalized and ignored outright. The views of the lackeys of Duncan and his clan are mandated. There is actually very little, if any, real information presented in major news items about PISA.
The original purpose of publicly available news was to form public views as a basis for establishing legitimate actions of public authority. As authority is generally understood as a legitimate form of power — whether in the sphere of science or governance — such moves evidence a crisis of authority. The anti-public education crusaders must attack legitimate authority, as it stands as a block to their quest for power. That is, the usurping power cannot be legitimated. In place of legitimation stands assertion, diversion, non-sequiturs and other means for blocking thoughtful discussion of what such assessments might tell us. The outlook presented in much of the media is both backward and known to be false.
Orchestrating Public Opinion and the Crisis of Authority
Instead of creating a space for serious discussion about what international comparisons can tell us about schools in the US, this year’s PISA results were carefully orchestrated such that only pro-failure-of-public-education views would dominate major media outlets. It is worth quoting this piece from the Economic Policy Institute at length:
It is usual practice for research organizations (and in some cases, the government) to provide advance copies of their reports to objective journalists. That way, journalists have an opportunity to review the data and can write about them in a more informed fashion. Sometimes, journalists are permitted to share this embargoed information with diverse experts who can help the journalists understand possibly alternative interpretations.
In this case, however, the OECD and ED have instead given their PISA report to selected advocacy groups that can be counted on, for the most part, to echo official interpretations and participate as a chorus in the official release. These are groups whose interpretation of the data has typically been aligned with that of the OECD and ED—that American schools are in decline and that international test scores portend an economic disaster for the United States, unless the school reform programs favored by the administration are followed.
The Department’s co-optation of these organizations in its official release is not an attempt to inform but rather to manipulate public opinion. Those with different interpretations of international test scores will see the reports only after the headlines have become history.
Such manipulation in the release of official government data would never be tolerated in fields where official data are taken seriously. Can you imagine the Census Bureau providing its poverty data in advance only to advocacy groups that supported the administration, and then releasing its report to the public at an event at which these advocacy groups were given slots on a program to praise the administration’s anti-poverty efforts? What if the Bureau of Labor Statistics gave its monthly unemployment report in advance to Democrats, but not to Republicans, and then invited Democratic congressional leaders to participate in the official release?
Again, the point is this: such orchestrations are frank admissions that what is being pursued cannot be legitimated; they are, in fact, illegitimate actions in the public eye — and this public eye must be turned away from careful and objective analysis and given, instead, a view that hides the real crisis of governance, an act which, ironically, fuels that very crisis.
The Ranking Theory of Measurement is Against Science
In all the discussions about PISA, a simple but fundamental flaw has gone unnoticed. This flaw appears in all discourse on assessment in the present “era of reform.”
If I take any five persons, I can line them up from the tallest to the shortest. Even if there is very little difference in height, it is very unlikely that any two individuals would be exactly the same height. With this practice, have I measured anything? Indeed not! The rank order presents no concrete information regarding the actual height of any of my participants, only their relative place in an ordering, the nature of which depends on the other participants. In fact, knowing who is tallest in this group tells me, by itself, very little about that person’s height.
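The point can be made concrete in a few lines of code. The names and heights below are invented purely for illustration: two hypothetical groups produce the identical rank order even though one group’s heights differ by millimeters and the other’s by half a meter.

```python
# Two hypothetical groups of five people; all heights (in cm) are invented.
group_a = {"Ana": 180.2, "Ben": 180.1, "Cy": 180.0, "Dee": 179.9, "Eve": 179.8}
group_b = {"Ana": 200.0, "Ben": 185.0, "Cy": 170.0, "Dee": 155.0, "Eve": 140.0}

def rank_order(heights):
    """Return names ordered tallest to shortest -- the rank alone, no magnitudes."""
    return [name for name, _ in sorted(heights.items(), key=lambda kv: kv[1], reverse=True)]

# Both groups yield the identical rank order...
print(rank_order(group_a))  # ['Ana', 'Ben', 'Cy', 'Dee', 'Eve']
print(rank_order(group_b))  # ['Ana', 'Ben', 'Cy', 'Dee', 'Eve']

# ...even though the actual spread of heights differs enormously.
spread_a = max(group_a.values()) - min(group_a.values())  # about 0.4 cm
spread_b = max(group_b.values()) - min(group_b.values())  # 60.0 cm
```

The ranking function discards every quantity it was given; nothing about the two identical orderings reveals that one group is nearly uniform in height and the other is not.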
Now take the example of ranking the aggregate performance of students from various countries on, say, a math test. Certainly, we can rank order these aggregate performances, whether we use raw scores, percentages, or so-called benchmarks.
Increasingly, these rankings are discussed as measures of the quality of a school system, with the system whose students ranked at the top of the distribution deemed “the best” and so on. Thus, we have a measure of school quality! To assert that the rank is, ipso facto, a measure of school quality is an arbitrary move.
Yet, the same problem holds as before. Just as height was never actually measured in the process of ranking by height, so too here we have no measure of school system quality, efficiency, or whatever. The tests are designed to assess students’ knowledge and abilities. To rank countries on the basis of their aggregate student performance on tests tells only – you guessed it – which countries outranked other countries. Even if the distance between means is quite small, as it often is, such rankings can still be produced.
But these rankings do not produce any measurement. They may prove useful in stimulating debate, or in prompting harmful government mandates. But in either case, they most certainly do not measure “school quality” or the “effectiveness of the system” (or whatever buzzword is used), for the simple reason that the act of ranking does not yield a measurement of quality — or anything else.
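The same mechanics apply to country-level league tables. The scores below are invented, not real PISA data, but they show how a complete, confident-looking ranking emerges from mean differences of a fraction of a point:

```python
# Hypothetical aggregate mean scores, invented for illustration only.
means = {"Country A": 500.6, "Country B": 500.4, "Country C": 500.1, "Country D": 499.8}

# A full league table is produced regardless of how tiny the gaps are.
league_table = sorted(means, key=means.get, reverse=True)
for place, country in enumerate(league_table, start=1):
    print(place, country)
# 1 Country A
# 2 Country B
# 3 Country C
# 4 Country D
```

The table itself carries no information about whether the top-to-bottom gap is 0.8 points or 80; that information is thrown away the moment the scores are converted into places.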
To measure school quality, one would need to define school quality, identify its elements, demonstrate that these elements vary quantitatively, and on that basis build a measuring device whose numerical results are isomorphic with objective changes in the school or school system. One would need a measuring tool that yields outcomes that are not merely a reflection of the tool itself (here I am referring to the problem of imposing the assumption of a normal distribution when no evidence for that assumption exists).
Of course, this is a fool’s errand, as “school quality” is, at the end of the day, a value judgment, quite unlike the common physical properties we are accustomed to measuring. Talk of measurement hides this fact of value judgment, and when one considers that education is a cultural endeavor, this realization has profound consequences.
Thus, the entire practice of publicly presenting international comparisons of test results as league tables, and in turn as measures of school system quality, is arbitrary, and thus properly understood as pseudo-science, ultimately set against authoritative knowledge. Arbitrary power is particularly enamored with the impostor of science: a few tables, a bar graph, and the headline that drives the data.
In the end, this method gives rise to a crisis of authority not only in the sphere of governance but in scientific fields as well. It is thus a problem that extends well beyond lies about the test scores themselves.