Normalizing routine annual faculty performance evaluations across different academic fields can be challenging. The Leeds School of Business at the University of Colorado Boulder uses academic analytics to do this and, as a result, has built trust among faculty by creating a transparent methodology for assessing performance. A key component of this methodology is comparing faculty to others in their own field rather than to colleagues in the same business school but in different fields. This approach has encouraged buy-in and satisfaction with what is often a divisive process.
The root of the problem
Consider that it may be less common to get published in an “A” publication in a four-year window in Field A than in Field B. How should you rate a Field A faculty member in comparison to a Field B faculty member when each has one “A” publication in that window? If similar standards apply to both faculty members and they are given the same rating, it is arguably unfair to faculty in Field A. However, if the Field A faculty member arbitrarily receives a rating of 5 on a 5-point scale and the Field B faculty member receives a 4, the Field B faculty member may resent those in Field A and feel that they receive preferential treatment. It is a lose-lose situation.
A new methodology
The solution centers on comparing performance not to other units within a school but to a faculty member’s peer set in their own field. Knowing that fields differ in frequency of publication and citation, Leeds developed “field-specific rulers” that show where quantitative productivity markers place a faculty member in the distribution of their own field. The process first identifies the schools and their tenure-track scholars to use in comparisons and for identifying appropriate metrics. (Leeds uses two metrics. The first takes the average productivity within three publication “buckets”: “A” publications,1 Financial Times journals, and a custom list of additional A journals developed by each area. The second examines citations to papers published in the last five years in any outlet.) The next step determines what article (or citation) counts represent the 10th, 20th, … 90th percentiles in each field. Last, the rulers are applied in faculty members’ annual research evaluations.
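The ruler idea described above can be sketched in a few lines of code. This is a hypothetical illustration, not Leeds’ actual implementation: the peer-set data, function names, and decile mapping are all assumptions made for the example.

```python
# Hypothetical sketch of a "field-specific ruler": build decile cutoffs
# from a field's peer-set publication counts, then locate an individual's
# count within their own field's distribution. All names and numbers are
# illustrative assumptions, not Leeds' actual data or method.
import statistics

def build_ruler(peer_counts):
    """Return the 10th, 20th, ..., 90th percentile cutoffs for one field."""
    return statistics.quantiles(peer_counts, n=10, method="inclusive")

def percentile_band(ruler, count):
    """Return the highest decile band (0-90) whose cutoff the count meets."""
    return sum(10 for cutoff in ruler if count >= cutoff)

# Illustrative peer sets: "A" publications in a window, per faculty member.
field_a = [0, 0, 1, 1, 1, 2, 2, 3, 4, 5]   # publishing is rarer here
field_b = [1, 2, 2, 3, 3, 4, 4, 5, 6, 8]   # publishing is more frequent

# The same single publication lands in very different places by field:
print(percentile_band(build_ruler(field_a), 1))  # 40 (40th-percentile band)
print(percentile_band(build_ruler(field_b), 1))  # 0 (below the 10th percentile)
```

Because each faculty member is measured against their own field’s distribution, the same raw publication count can map to very different percentile bands, which is exactly the fairness problem the ruler is meant to address.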
Importantly, Leeds treats this as a starting point for faculty evaluations and empowers the academic units for each field to use their judgment when metrics over- or understate a person’s record. For example, an individual’s rating may exceed what the ruler suggests due to a prestigious award for a paper or a solo-authored publication. An individual’s rating can deviate from the ruler result based on qualitative considerations, but any academic unit should, in aggregate, be appropriately comparable to another based on the quantitative data. This means that, by and large, those in the 90th percentile in Field A receive the same overall research rating as those in the 90th percentile in Field B. The bottom line: by creating a starting place for research evaluations that is objectively based on an individual’s field peer set, each unit and each faculty member will clearly understand this component of their performance.
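One way to picture the “starting point plus judgment” step is a simple mapping from percentile band to a 1–5 rating, with a bounded qualitative adjustment on top. The band-to-rating mapping below is purely an assumption for illustration; the article does not specify how Leeds converts percentiles to ratings.

```python
# Hypothetical sketch: a decile band (0-90) becomes a 1-5 starting rating,
# and unit judgment can adjust it within the scale. The mapping is an
# assumption for illustration, not Leeds' actual conversion rule.
def starting_rating(band):
    """Map a decile band (0-90) to a 1-5 starting rating."""
    return min(5, band // 20 + 1)

def final_rating(band, adjustment=0):
    """Unit judgment can move the rating from the ruler's starting point."""
    return max(1, min(5, starting_rating(band) + adjustment))

print(starting_rating(90))   # 5: the 90th percentile starts at the top rating
print(final_rating(60, +1))  # 5: e.g., a prestigious paper award lifts a 4
```

Because the mapping depends only on the percentile band, a 90th-percentile scholar in Field A and one in Field B get the same starting rating, which is the aggregate comparability the methodology aims for.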
Has it worked?
Despite some initial concern about whether numbers would supplant judgment, the implementation of this methodology was a success. To support the effort toward transparency, Leeds produced an analysis showing which units gave the most 5s for research (and 4s, 3s, etc.) and shared it with the field units to demonstrate that they used similar standards, even if some units wound up with more 5s. As shown in the figure below, Fields C and E gave ratings of 5 to more than 70% of faculty, while Field B gave 5s to only 35%. The analysis showed that the fields giving out higher ratings had higher Median Composite scores than other fields (among faculty members who earned a 5 rating).
It is important to emphasize that the ruler provides evaluators with only a starting point for assessments. But that starting point is a fair, objective, and transparent tool. Under this system, both faculty members and field unit leaders clearly understand the performance evaluation process and can build a culture of trust based on increased transparency in their units.
1 For “A” publications, we use the list because it is widely available and broadly accepted by many as a good indicator of top “A” publications.
*This article is based on Leeds’ presentation at the 2018 Fall Forum in Miami, Florida.