Sunday, November 4, 2012
Heads versus Chairs
Monday, November 8, 2010
Peer-reviewed
The reliance on strict rules to evaluate the intellectual quality of academic publications is by no means a uniquely American phenomenon. With the integration of university degrees across the European Union, professors and researchers are increasingly evaluated in terms of well-defined scales that rate the quality of anything put into print. In Belgium, for example, academic publications are grouped broadly into letter categories (A, B, C) with subdivisions within each.
This standardization serves two purposes: first, to rate professors within a university, and second, to compare regional and national systems to one another.
Thus a web site urging students to attend French-speaking universities in Belgium will compare the total number of peer-reviewed journal publications within the Walloon system to that of other European Union regions.
Here is an example of the kind of claim used to compare one European region to another:
“Various international surveys show that Belgium is one of the countries that publishes most and whose publications are among the most often cited, with regard to its number of inhabitants and to its gross domestic product. This international visibility is confirmed by numerous publications in renowned scientific journals. In 2003, the European Commission published its “Third Report on science and technology indicators 2003”. This report assesses the quality of publications in the major universities of the EU countries and rates those of Belgian researchers highly.”
http://www.studyinbelgium.be/start.php?lang=en&rub=3
The number of peer-reviewed publications is then compared to the per capita ratio of university-trained researchers within a regional economy. So if Belgium has a higher density of researchers within the general population, this is interpreted as an indication that the Belgian economy supports growth through its universities. The next statistic linked to peer-reviewed publications and researcher density is the number of new companies started in a region. The more spin-offs and start-ups, the better the integration between universities and the economy must be, for new technology firms are often derived from university research. Hence the famous research belts around universities that specialize in technological research.
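To make the logic of such composite indicators concrete, here is a minimal sketch of how publication density, researcher density, and publications per researcher might be computed for a region. The figures and the layout are purely hypothetical illustrations, not the European Commission's actual data or methodology.

# Illustrative sketch only: hypothetical figures, not actual EU data or methodology.
regions = {
    # region: (peer-reviewed publications, researchers, population in millions)
    "Region A": (12_000, 30_000, 11.0),
    "Region B": (18_000, 55_000, 17.0),
}

for name, (publications, researchers, population_m) in regions.items():
    pubs_per_million = publications / population_m        # publication density
    researchers_per_million = researchers / population_m  # researcher density
    pubs_per_researcher = publications / researchers      # output per researcher
    print(f"{name}: {pubs_per_million:.0f} publications and "
          f"{researchers_per_million:.0f} researchers per million inhabitants, "
          f"{pubs_per_researcher:.2f} publications per researcher")

The arithmetic is trivial; the point is that once such numbers exist as a ranking, they invite the reversal described in the next paragraph, in which the indicator itself becomes the target.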
The problem arises when these indicators are used in reverse, so that they become rules for hiring and firing faculty, for structuring universities, and for evaluating students. These indicators may show that a university is operating successfully, but they may not at all be the reason for its success. Requiring that researchers publish in peer-reviewed journals is, in a sense, pushing the indicator, i.e., trying to artificially increase numbers that were once a neutral sign of educational accomplishment. If researchers used to publish only half their articles in peer-reviewed journals, and the rest as book chapters, conference proceedings, and editorial-board journals, they will not necessarily have increased their intellectual productivity by now publishing 75% of their work in peer-reviewed journals. They may well be accomplishing as much as they did before; they have simply changed the media in which they publish.
Furthermore, there is no reason to believe that peer review actually produces innovative research. In fact, one could argue that it produces more mainstream conclusions that are less likely to disturb existing norms. The really radical approach to a research question may well appear in a small journal catering to a select group of readers, rather than in the official institutional journal.
Quality indicators run the risk of stifling exactly that which they are measuring when they become mandatory rules, for they tend to produce conformity.
So to return to the Belgian example above: Belgium has a high rate of highly rated, peer-reviewed publications, which is used to claim that Belgium has a better university system than other parts of Europe. However, the same statistic is also an indication that Belgium is much stricter in policing its academics and more aggressively enforces rules requiring faculty to publish in peer-reviewed journals.
While there is no question that Belgium has excellent universities, and we should all be so privileged as to teach there, the question remains whether the Belgian universities are truly better than those in other regions, where a faculty member’s curriculum vitae might not be so strictly evaluated. Is it possible that British or Dutch universities are also excellent and simply do not worry about indicators as much as the Belgians do?
At every level of the university system, from the classroom to the EU-wide comparison, a grading system has to distinguish between those students who follow instructions carefully and those who have genuinely smart ideas. Relying on indicators and then enforcing them is very much like having homework written out neatly and turned in on time; this is important, to be sure. Still, to the extent that indicators are mandatory, they are likely to become indicators of how well the administrative apparatus operates, rather than signs that the ideas on the page are clever.
Given that as teachers and administrators we are all interested in having students learn more than punctuality and proper form, we should be clear that measuring indicators does not foster creative intelligence; it might just do the opposite.
Monday, September 13, 2010
The continued importance of Content
There are, of course, many reasons why faculty and administration don't feel comfortable sitting next to each other in the same room. But we'll leave most of those aside to focus on a basic difference. No matter how theoretical and abstract a professor’s work may be, it always involves a distinct commitment to a specific content. There is a subject area, a set of texts or data, a problem with many thorny questions to solve: something tangible that motivates and inspires students in the class, researchers in the lab, writers at the keyboard.
What is troublesome about administrative operations, and indeed most management techniques generally, is their disengagement from the specific content of the work they are managing. Just as many successful store managers don’t really have to care about the product they are selling beyond the basic ability to interact with customers, so too administrators do not need to know the specifics of faculty research. They rely on general formulas to determine the success or failure of that research, but these formulas leave unaddressed the specific material questions that the research addresses. Whether you write on the history of medieval cities in Tuscany or on methane gas abatement in coal mining facilities, you are judged by general indicators, such as student enrollment, number of publications, and placement of students, that have no direct connection to the actual subject matter of your research.
From the administrative perspective, it is important to have evaluative criteria that reach across different departments and colleges so that the many apples and oranges within a university can be compared. From the faculty perspective, these general categories often have an implicit bias towards one type of research over another, even as they make no explicit attempt to judge the qualitative material of research.
Without directly addressing the long history of critiques of the rational organization of knowledge and culture, we can jump to one key early debate. On one side stands Kant’s architectonic organization of knowledge into a system in which the philosopher places the individual sciences in relation to each other, in order both to evaluate how complete their claims to knowledge are and to judge whether, taken together as a whole, these sciences serve the ethical needs of humanity. On the other side stands Hegel’s historical account of how the material substance of knowledge and art both enables and restricts the expression of ideas.
Hegel argued that Kant’s formal organization of the basic preconditions of knowledge or beauty failed to account for the physical, hands-on, material substance that underlies any expression of thought. This debate between Kant and Hegel has implications for every discipline in the university. So for example, you could ask: Does architecture consist in a schematic plan drawn on paper or a computer screen, or is it the space created out of light, air, and the stone, wood, glass or steel as it has been shaped into a unity?
In general, most academic knowledge is created from an engagement with materials: data, texts, objects. Kant, even at his most formal remove, knew that science requires empirical data in order to develop reliable results. He added, though, that the scientist sometimes did not have the full picture of what his research meant, and that it was the job of the architectonic philosopher to bring the many strands of knowledge together into a coherent whole.
I hope all the administrators out there reading this blog appreciate the comparison to a Kantian/Socratic philosopher.
Alas, as it turns out, the criteria in a university are more economic than Kantian, even if they share a similar formal apparatus.