Sunday, November 4, 2012
Heads versus Chairs
Saturday, November 3, 2012
All Stick, No Carrot--How to Judge Scholarship
The trouble with this approach is that an argument that seems radically new often has only a short lifespan; once a trendy line of reasoning has passed, little remains of the argument or of the scholar who made it. Thus the double insecurity of the Ivy League department: is this young scholar truly innovative, and will he/she continue to innovate in the long run?
Monday, November 8, 2010
Peer-reviewed
The reliance on strict rules to evaluate the intellectual quality of academic publications is by no means a uniquely American phenomenon. With the integration of university degrees across the European Union, professors and researchers are increasingly evaluated in terms of well-defined scales that rate the quality of anything put into print. In Belgium, for example, academic publications are grouped broadly into letter categories (A, B, C) with subdivisions within each.
The first purpose of this standardization is to rate professors within a university; the second is to compare regional and national systems to one another.
Thus a website urging students to attend French-speaking universities in Belgium will compare the total number of publications in peer-reviewed journals within the Walloon system to those of other European Union regions.
Here is an example of the kind of claim used to compare one European region to another:
“Various international surveys show that Belgium is one of the countries that publishes most and whose publications are among the most often cited, with regard to its number of inhabitants and to its gross domestic product. This international visibility is confirmed by numerous publications in renowned scientific journals. In 2003, the European Commission published its “Third Report on science and technology indicators 2003”. This report assesses the quality of publications in the major universities of the EU countries and rates those of Belgian researchers highly.”
http://www.studyinbelgium.be/start.php?lang=en&rub=3
The number of peer-reviewed publications is then compared to the per capita ratio of university-trained researchers within a regional economy. So if Belgium has a higher density of researchers within the general population, this is interpreted as an indication that the Belgian economy supports growth through universities. The next statistic linked to peer-reviewed publications and density of researchers is the number of new companies started in a region. The more spin-offs and start-ups, the better the integration between universities and the economy must be, for new technology firms are often derived from university research. Hence the famous research belts around universities that specialize in technological research.
The problem arises when these indicators are used in reverse, so that they become rules for hiring and firing faculty, for structuring universities, and for evaluating students. These indicators may show that a university is operating successfully, but they may not at all be the reason for its success. Requiring that researchers publish in peer-reviewed journals is in a sense pushing the indicator, i.e., trying to artificially increase numbers that were once a neutral sign of educational accomplishment. If researchers used to publish only half their articles in peer-reviewed journals, and the rest as book chapters, conference proceedings, and editorial-board journals, they have not necessarily increased their intellectual productivity by now publishing 75% of their work in peer-reviewed journals. They may well be accomplishing just as much as they did before; they have simply changed the venues in which they publish.
Furthermore, there is no reason to believe that peer review actually produces innovative research. In fact, one could argue that it produces more mainstream conclusions that are less likely to disturb existing norms. The really radical approach to a research question may well appear in a small journal catering to a select group of readers, rather than in the official institutional journal.
Quality indicators run the risk of stifling exactly that which they are measuring when they become mandatory rules, for they tend to produce conformity.
So to return to the Belgian example above, Belgium has a high rate of highly rated, peer-reviewed publications, which is used to claim that Belgium has a better university system than other parts of Europe. However, the same statistic is also an indication that Belgium is much stricter in policing its academics and that it more aggressively enforces rules requiring faculty to publish in peer-reviewed journals.
While there is no question that Belgium has excellent universities, and we should all be so privileged as to teach there, the question remains whether the Belgian universities are truly better than those in other regions, where a faculty member’s curriculum vitae might not be so strictly evaluated. Is it possible that British or Dutch universities are also excellent, and they just don’t worry about indicators as much as the Belgians do?
At every level of the university system, from the classroom to the EU-wide comparison, a grading system has to distinguish between those students who follow instructions carefully and those who have really smart ideas. Relying on indicators and then enforcing them is very much like having homework written out neatly and turned in on time, which is very important, to be sure. Still, to the extent that the indicators are mandatory, they are likely to become indicators of how well the administrative apparatus operates, rather than signs that the ideas on the page are clever.
Given that as teachers and administrators we are all interested in having students learn more than punctuality and proper form, we should be clear that measuring indicators does not foster creative intelligence; it might just do the opposite.
Sunday, November 7, 2010
Assisting Assistants
The job market has been such that universities have, for some time now, gotten a much higher-quality pool of incoming faculty than would have been the case twenty years ago. We have had time to watch some of these people go through the acclimatization process of leaving their high-grade graduate programs to settle into mainstream universities.
We all know that the decisions about who gets to teach at primo universities and who ends up somewhere else are not so finely tuned. There are a fair number of high-quality, brand-new scholars who land at universities that really are not used to having such hothouse flowers on their faculty. What becomes of these delicate researchers and writers in the tussle of tenure and administrative review? What becomes of their great promise? Why do some cruise on to publish lots of fine books and articles, while others stick to their one track?
University administrations would love to know how to separate the long-term producers from those who settle into a comfortable routine after tenure. I am definitely not here to conjure some answer to this perennial question. There are lots of people out there making such judgments. Universities have an enormous array of reviews and evaluations to separate the wheat from the chaff.
And while the pressure of a deadline has a wonderful effect in concentrating the mind on finishing a manuscript, more needs to be said about how the review process creates a conformity that undermines its own goal of fostering faculty productivity.
Review processes very often insist that faculty publish in one kind of journal rather than another. For example, there is the concept of a mainstream flagship journal, one that represents the best scholarship in a given field. For some universities, it is important that their faculty demonstrate their scholarly prowess by publishing in these journals. At other universities, publishing in mainstream journals is a sign of mediocrity, an indication that a scholar is not really cutting-edge.
But the rule varies from one discipline to another, from one scholar to another. The problem comes when university administrations make broad rules in favor of one over the other without considering the character of each contribution, i.e., when the quality of an article is judged by the journal in which it appears. For young faculty this problem is heightened because very often they went to a graduate school where one rule applied and then they end up teaching at a university where the opposite rule governs tenure decisions.
Add to this the general unwillingness of bureaucracies to allow for flexibility. Every educational institution I have ever attended has governed its internal decisions with the presumption that its rules are the only true and correct ones. There is a long list of German departments in this country that all believe they are the best. The University of Michigan has no trouble thumbing its nose at the University of Chicago. And while UC Riverside may understand that it is not in the same league as Princeton, it will insist that its junior faculty follow the California state conventions for demonstrating scholarly excellence, never mind what they told you back east.
I am pulling these examples out of thin air; there are no hidden stories behind them, and I am not thinking of anyone in particular as I write this summary of twenty years’ experience. I may be totally unfair to the individual institutions, but the tendency is common enough.
Still, I have heard department heads of big Midwestern universities declare that they would never let their best students apply to an Ivy League graduate program, because “they don’t have a comprehensive curriculum there.” Similarly, I have seen Ivy League professors quietly pass over state-university PhDs because they don’t come from “truly innovative programs.”
OK, so we all know academia is full of picky jealousy.
The trouble arises for junior faculty who have not yet mastered the different standards. And the real problem is that, in the long run, the pressure to switch from one standard of scholarship to another turns clever thinkers into conformists. If you were trained to find the hottest new trend in art coming out of Europe, you are going to have a hard time publishing in a flagship journal. Likewise, if you think about journals like a social scientist, your French colleagues may smile in disbelief.
While the Ivy League can readily afford to toss away excellent scholars, because there is always another wave of brilliance rolling in, other universities might pause to consider the varieties of scholarly accomplishment and bend a little more.
Sunday, September 19, 2010
Writing Excess
The curse of minimalism and the free market is that so very often students deliver their homework just in time with only the most basic answers. They often write just what an assignment requires, rather than going beyond the bare-bones expectations to show what additional knowledge they have. There are many reasons for this minimalist habit, and yes, of course, we can’t forget sloth and laziness. There are a great number of tasks I finish too late and just barely, but there is also a pervasive cultural sense nowadays that, when it comes to intellectual questions, too much is something to avoid. Write clearly about one idea: a simplicity that makes us simple. My professor in grad school, Sander Gilman, would often point out that if you set a minimum, it quickly becomes the maximum. If you lay out a basic administrative standard, most people will perform only up to that requirement, rather than exceeding it.
When writing, why give just one explanation when you can come up with eight?
There is a point where the drive for efficiency turns into laziness, where completing only what is required does not result in more high-quality work in other subjects, but instead in a great empty lull.
In a different cultural moment, in a different historical period, we would strive to overwhelm a question with answers. We would layer one possible explanation on top of another, give theories that blend into each other, cite book after book rather than just the one canonical work that everyone has read. The love of the esoteric, the curiosity to explore trivial and unknown subjects has been wiped out by the demand that intellectuals produce efficiently and often.
So this is the paradox: ordinary students can give back the answer on the test that comes from the textbook; extraordinary students write much more, but to do so they have to get lost in other books, i.e., waste time doing more than the class requires. Similarly, regular academics can crank out articles for the c.v., but let’s have more lunatics who waste their time reading irrelevant tomes.