Monday, November 8, 2010

Peer-reviewed

The reliance on strict rules to evaluate the intellectual quality of academic publications is by no means a uniquely American phenomenon. With the integration of university degrees across the European Union, professors and researchers are increasingly evaluated on well-defined scales that rate the quality of anything put into print. In Belgium, for example, academic publications are grouped broadly into letter categories (A, B, C), with subdivisions within each.

The first purpose of this standardization is to rank professors within a university; the second is to compare regional and national systems to one another.

Thus a website urging students to attend French-speaking universities in Belgium will compare the total number of publications in peer-reviewed journals within the Walloon system to those of other European Union regions.

Here is an example of the kind of claim used to compare one European region to another:

“Various international surveys show that Belgium is one of the countries that publishes most and whose publications are among the most often cited, with regard to its number of inhabitants and to its gross domestic product. This international visibility is confirmed by numerous publications in renowned scientific journals. In 2003, the European Commission published its “Third Report on science and technology indicators 2003”. This report assesses the quality of publications in the major universities of the EU countries and rates those of Belgian researchers highly.”

http://www.studyinbelgium.be/start.php?lang=en&rub=3

The number of peer-reviewed publications is then compared to the per capita ratio of university-trained researchers within a regional economy. So if Belgium has a higher density of researchers within the general population, this is interpreted as an indication that the Belgian economy supports growth through its universities. The next statistic linked to peer-reviewed publications and researcher density is the number of new companies started in a region. The more spin-offs and start-ups, the better the integration between universities and the economy must be, since new technology firms are often derived from university research. Hence the famous research belts around universities specializing in technological research.

The problem arises when these indicators are used in reverse, so that they become rules for hiring and firing faculty, for structuring universities, and for evaluating students. These indicators may show that a university is operating successfully, but they may not at all be the reason for its success. Requiring that researchers publish in peer-reviewed journals is, in a sense, pushing the indicator, i.e., trying to artificially inflate numbers that were once a neutral sign of educational accomplishment. If researchers used to publish only half their articles in peer-reviewed journals, and the rest as book chapters, conference proceedings, and editorial-board journals, they have not necessarily increased their intellectual productivity by now publishing 75% of their work in peer-reviewed journals. They may well be accomplishing just as much as before; they have simply changed the media in which they publish.

Furthermore, there is no reason to believe that peer review actually produces innovative research. In fact, one could argue that it produces more mainstream conclusions, which are less likely to disturb existing norms. The really radical approach to a research question may well appear in a small journal catering to a select group of readers rather than in the official institutional journal.

Quality indicators run the risk of stifling exactly that which they measure once they become mandatory rules, for they tend to produce conformity.

So to return to the Belgian example above: Belgium has a high rate of highly rated, peer-reviewed publications, which is used to claim that Belgium has a better university system than other parts of Europe. However, the same statistic is also an indication that Belgium polices its academics much more strictly and enforces the rules requiring faculty to publish in peer-reviewed journals more aggressively.

While there is no question that Belgium has excellent universities, and we should all be so privileged as to teach there, the question remains whether Belgian universities are truly better than those in other regions, where a faculty member’s curriculum vitae might not be so strictly evaluated. Is it possible that British or Dutch universities are also excellent, but simply do not worry about indicators as much as the Belgians do?

At every level of the university system, from the classroom to the EU-wide comparison, a grading system has to distinguish between students who follow instructions carefully and those who have really smart ideas. Relying on indicators and then enforcing them is very much like requiring homework to be written out neatly and turned in on time; this is important, to be sure. Still, to the extent that they are mandatory, the indicators are likely to become measures of how well the administrative apparatus operates rather than signs that the ideas on the page are clever.

Given that, as teachers and administrators, we are all interested in having students learn more than punctuality and proper form, we should be clear that measuring indicators does not foster creative intelligence; it might do just the opposite.
