The United States National Research Council has conducted a survey of United States research-doctorate programs, and compiled a report on them, approximately every 10 years, although the interval between rankings has sometimes been longer.
Data collection for the most recent report began in June 2006;[1] it was released on September 28, 2010. These rankings did not provide exact ranks for any university or doctoral program; rather, a statistical range was given. This was because "the committee felt strongly that assigning to each program a single number and ranking them accordingly would be misleading, since there are significant uncertainties and variability in any ranking process."[2]
Two series of rankings were offered:
The R-rankings were based on regression analysis. According to the NRC, this analysis was "based on an indirect approach to determining what faculty value in a program." A sample faculty group was first asked to rate a number of programs in their area; a statistical analysis was then used "to calculate how the 20 program characteristics would need to be weighted in order to reproduce most closely the sample ratings." In doing so, the rankings "attempted to understand how much importance faculty implicitly attached to various program characteristics when they rated the sample of programs." The weights assigned to the characteristics varied by field.[2]
The S-rankings were survey-based: Faculty were "asked about the importance of 20 characteristics ... in determining the quality" of a type of program. Weights were assigned to each characteristic according to the results, varying by discipline.[3]
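The difference between the two approaches can be illustrated with a small numerical sketch. This is a hypothetical toy example, not the NRC's actual data, characteristics, or code: the R-style weights are recovered by regressing sample ratings on program characteristics, while the S-style weights are taken directly from stated importance.

```python
import numpy as np

# Hypothetical illustration: 5 programs measured on 3 characteristics.
rng = np.random.default_rng(0)
X = rng.random((5, 3))               # rows = programs, cols = characteristics
true_w = np.array([0.5, 0.3, 0.2])   # weights faculty implicitly apply
ratings = X @ true_w                 # the sample faculty ratings

# R-ranking idea: infer the weights that best reproduce the sample
# ratings via least-squares regression on the characteristics.
r_weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# S-ranking idea: weights come directly from faculty's stated importance
# (e.g., normalized survey responses; values here are invented).
s_weights = np.array([0.4, 0.4, 0.2])

# Under either scheme, a program's score is a weighted sum of its
# characteristics; programs are then ranked by score.
r_scores = X @ r_weights
s_scores = X @ s_weights
```

In this idealized setup the regression recovers the implicit weights exactly; with real survey data the fit is noisy, which is one reason the NRC reported ranges rather than point ranks.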
The factors used in these computations included[4]
the number of publications per faculty member, citations per publication (except in computer science and the humanities), fraction of the faculty supported by grants and number of grants per faculty member, diversity of the faculty and students, student GRE scores, graduate student funding, number of Ph.D.s and completion percentage, time to degree, academic plans of graduating students, student work space, student health insurance, and student activities.
The rankings have been both praised and criticized by academics.
Physicist Peter Woit stated that historically the NRC rankings have been the "gold standard" for academic department ratings.[5] The rankings were also called "the gold standard" by biomedical engineer John M. Tarbell[6] and in news releases by Cornell University[7] and the University of California.[8] The Center for a Public Anthropology praised the National Research Council's 2010 rankings as "an impressive achievement" for their move away from reputational rankings and toward data-based rankings, but also noted that the lack of specific rankings reduced clarity even as it improved accuracy.[9] William Colglazier and Jeremiah P. Ostriker defended the rankings in the Chronicle of Higher Education,[10] responding to a critique by Stephen M. Stigler.[11]
Sociologist Jonathan R. Cole, one of the members of the NRC committee that produced the ranking, critiqued the final result. Cole objected to the committee's choice not to include any "measures of reputational standing or perceived quality" in the survey, which he called "the most significant misguided decision" in the recent study. Cole also critiqued the various statistical inputs and the weight assigned to each.[12] The Computing Research Association and various computer science departments also expressed "serious concerns" about vaguely defined reporting terms leading to inconsistent data, inaccuracies in the data, and the use of bibliometrics from the ISI Web of Knowledge despite its poor coverage of many computer science conferences.[13][14][15][16][17] Geographers A. Shortridge, K. Goldsberry, and K. Weessies found significant undercounts in the data and poor sensitivity to "noise" in the rankings, concluding that "We caution against using the 2010 NRC data or metrics for any assessment-oriented study of research productivity."[18] The rankings were also critiqued by sociologist Fabio Rojas.[19]
Bernat, Andrew; Grimson, Eric (December 2011), "Doctoral program rankings for U.S. computing programs: the national research council strikes out", Communications of the ACM, 54 (12): 41–43, doi:10.1145/2043174.2043203
Shortridge, Ashton; Goldsberry, Kirk; Weessies, Kathleen (2011), "Measuring Research Data Uncertainty in the 2010 NRC Assessment of Geography Graduate Education", Journal of Geography, 110 (6): 219–226, doi:10.1080/00221341.2011.607510