Assessment in higher education was a reform movement that emerged in the United States in the early 2000s to spur improved student learning through regular and systematic measurement. The campaign was a higher education corollary to the standardized testing required in K-12 schools by the No Child Left Behind Act. By the late 2010s the bureaucratic demands of assessment advocates were being reconsidered in higher education, even by some of those who had played a major part in promoting them.[1]
Advocates of systematic assessment in higher education promoted it as a process that would use empirical data to improve student learning.[2] They envisioned that colleges would identify clear, measurable descriptions of intended learning, gather evidence to determine whether students' actual learning matched those expectations, and use the collected information to improve teaching and student support.[3] Institutions of higher education built systems for creating, collecting, and reporting assessment data in response to increased demands from accrediting agencies, which had promoted the concept as necessary to satisfy political demands for accountability, including from the Spellings Commission launched in 2005.[1][4]
Advocates of assessment insisted that colleges should be able to distill their intended student learning outcomes into statements and related data at the level of each course, each program or major, and the institution overall.[2] In their view, the internal process of analyzing and discussing evidence about what students know and can do would transform teaching and learning for the better.[5]
The growth of demands for campus assessment data fueled an industry of software products marketed to colleges. In 2019, one professional association catalogued more than 60 assessment-related technology products offered by vendors to schools.[6]
In a 2018 New York Times opinion piece titled "The Misguided Drive to Measure Learning Outcomes," Molly Worthen criticized the assessment profession for creating an elaborate, expensive, "bureaucratic behemoth" lacking an empirical foundation.[7] Robert Shireman, an advocate for student access and success, has written that the system evolved in a way that "prevents rather than leads to the type of quality assurance that has student work at the center."[8] Erik Gilbert, a professor of history, argued that assessment in higher education has little effect on educational quality and that accrediting agencies require institutions to invest time and resources in collecting data that is not useful for improving student learning.[9]
Some leading assessment practitioners have been critical of common practices in the field.[10] David Eubanks, an assessment director, has observed that sample sizes in most course- and program-level assessments are too small to provide meaningful information.[11] In 2019, Natasha Jankowski, director of the National Institute for Learning Outcomes Assessment, described the current state of assessment as a "hot mess" and allowed that "[t]here are good reasons why faculty hate it. It's real and it's earned."[12] In January 2020, the professional association of campus assessment professionals adopted a "foundational statement" intended to clarify the profession's purpose.[13]
In July 2020, the National Advisory Committee on Institutional Quality and Integrity established a subcommittee, chaired by David Eubanks, to examine how accrediting agencies approach the assessment of student success.[14] The subcommittee found that the federal and accreditor standards it examined did not require expensive and bureaucratic monitoring approaches. Instead, the subcommittee pointed to peer reviewers with inflexible expectations as creating the impression that such approaches were required, which has sometimes steered colleges in unproductive directions.[15]