Michelle Longley, Memorial University of Newfoundland
The increased accountability and workload of teachers, and the consequent growing demand for reliability and validity in the allocation of grades and levels, place a heavy demand on the assessment process (Campbell, 2005). Bennett (2002) argued that paper-and-pencil tests were barely adequate to measure the minimum competencies required for low-level roles in industrial settings and fell woefully short of measuring the sophisticated knowledge and skills students need for 21st-century work and citizenship. He claimed that this fundamental model of assessment was developed a century ago and that current methods of testing are incapable of validly measuring sophisticated intellectual and psychosocial performances.
Because accountability systems are linked to students' performance on tests, teachers are using weak but rapid instructional methods, such as lecture and drill-and-practice, to race through the glut of recipes, facts, and test-taking skills they are expected to cover; at the same time, pressure for better measurement of stated learning outcomes has created demand for more frequent and formal assessment (Conole & Warburton, 2005). Meanwhile, students' capacities to use tools, applications, and media effectively, like other forms of mediated interaction, typically go unassessed (Bennett, 2002).
Additionally, the increasing number and diversity of students make it difficult for instructors to support student learning and assessment (Peat & Franklin, 2002). According to Cassady and Gridley (2005), students with high levels of cognitive test anxiety worry over potential failure, use ineffective study strategies, and tend to procrastinate, making valid assessment challenging.
Advances in technology are creating new possibilities for assessment, including environments capable of capturing observations and studying authentic behaviors in ways not possible in a conventional classroom setting (Bennett, 2002). Technology offers many new opportunities for innovation in educational assessment through rich new assessment tasks, reporting, and real-time feedback (Scalise & Gifford, 2006), as well as anytime, anyplace marking, synchronous and asynchronous moderation, and self- and peer assessment (Campbell, 2005).
Computer-assisted assessment (CAA) has considerable potential both to ease the assessment load and to provide advanced and powerful modes of assessment (Conole & Warburton, 2005). The use of technology in assessment allows for more complex item types than paper-based assessments, including the use of audiovisual materials and more multifaceted interactions between learner and computer (Conole & Warburton, 2005). Furthermore, the reporting mechanisms available within computer-assisted assessment systems provide richer data about students than were available from paper-based assessments, offering the opportunity to record student interactions and analyze them to build an understanding of learning (Conole & Warburton, 2005). Bennett (2002) noted that computer-assisted assessments using wikis, games, augmented realities, and asynchronous discussions allowed for similarly rich observations in performance assessment.
Peat and Franklin (2002) found that in larger classes, computer-assisted assessment gave staff more time to interact face-to-face with students and gave students more opportunities to receive immediate, quality feedback. E-assessment, or the use of ICTs in assessment, can encourage higher-order thinking skills and support the assessment of social skills and group work through digital portfolios (Buzzetto-More & Alade, 2006). E-portfolios have a number of advantages over paper-based portfolios: they support a greater variety of artifacts, are multimedia-driven, are accessible to a large audience, contain meta-documentation, are easy to store, and may serve to promote a student academically or professionally (Buzzetto-More & Alade, 2006).
According to Bennett (2002), technology is central to authentic assessment. Students taking tests online reported lower levels of perceived test threat (Cassady & Gridley, 2005). Furthermore, the use of computers in assessment has been shown to reduce the occurrence of cheating (Clariana & Wallace, 2002). Learners can take a test at home or in their own time if they are trusted to do so, and instant marking and feedback make this flexibility a virtue (Thelwall, 2000).
In summary, technology could offer ways of extracting information for assessment purposes that serve both classroom and external assessment needs, including providing customized feedback to students for reflection on their knowledge and skills, learning strategies, and habits (Pellegrino & Quellmalz, 2010).
A concern related to computer-assisted assessment is whether multiple-choice questions are really suitable for assessing higher-order thinking skills (Conole & Warburton, 2005). If sufficient care is taken in their construction, however, item-based testing can examine the full range of learning outcomes (Conole & Warburton, 2005). Thelwall (2000) insisted that multiple-choice tests and computer-based assessments could have a place at the highest levels of education, but should be used in conjunction with other assessments to cover areas beyond their scope. Furthermore, the inclusion of multimedia, such as sound and video clips or animated images, could improve the comprehensibility of a question (Valenti, Cucchiarelli, & Panti, 2002).
One of the barriers to CAA has been the cost of commercial software and the time it takes instructors to become familiar with the programs (Conole & Warburton, 2005). However, free, easy-to-use products for creating online assessments exist (Thelwall, 2000). As computer hardware becomes cheaper, connectivity easier, and software development more rapid, computerized learning and assessment simulations arguably will become ubiquitous (Vendlinski & Stevens, 2002).
Students with special needs or disabilities, such as dyslexia, may expend more cognitive resources in interpreting questions, but students' perceptions of CAA indicate that it provides a more level playing field on which to demonstrate knowledge (Peat & Franklin, 2002). Providing these students with specially adapted input software and hardware, such as touch screens or speech browsers, could reduce this expenditure of cognitive resources (Peat & Franklin, 2002).
Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of technology and assessment. Journal of Technology, Learning, and Assessment, 1(1).
Buzzetto-More, N., & Alade, A. (2006). Best practices in e-assessment. Journal of Information Technology Education: Research, 5(1), 251-269.
Campbell, A. (2005). Application of ICT and rubrics to the assessment process where professional judgment is involved: The features of an e-marking tool. Assessment & Evaluation in Higher Education, 30(5), 529-537.
Pain, D., & Le Heron, J. (2003). WebCT and online assessment: The best thing since SOAP? Educational Technology & Society, 6(2), 62-71.
Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative and summative assessment on test anxiety and performance. Journal of Technology, Learning, and Assessment, 4(1).
Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research on Technology in Education, 42(3), 309-328.
Conole, G., & Warburton, B. (2005). A review of computer-assisted assessment. Research in Learning Technology, 13(1), 17-31.
Coulby, C., Hennessey, S., Davies, N., & Fuller, R. (2011). The use of mobile technology for work-based assessment: The student experience. British Journal of Educational Technology, 42(2), 251-265.
Croft, A. C., Danson, M., Dawson, B. R., & Ward, J. P. (2001). Experiences of using computer assisted assessment in engineering mathematics. Computers & Education, 37(1), 53-66.
Hancock, T. M. (2010). Use of audience response systems for summative assessment in large classes. Australasian Journal of Educational Technology, 26(2), 226-237.
Marks, A. M., & Cronje, J. C. (2008). Randomized items in computer-based tests: Russian roulette in assessment? Educational Technology & Society, 11(4), 41-50.
McDonald, A. S. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers & Education, 39(3), 299-312.
Miller, T. (2009). Formative computer-based assessment in higher education: The effectiveness of feedback in supporting student learning. Assessment & Evaluation in Higher Education, 34(2), 181-192.
Peat, M., & Franklin, S. (2002). Supporting student learning: The use of computer-based formative assessment modules. British Journal of Educational Technology, 33(5), 515-523.
Pellegrino, J. W., & Quellmalz, E. S. (2010). Perspectives on the integration of technology and assessment. Journal of Research on Technology in Education, 43(2), 119-134.
Poggio, J., Glasnapp, D. R., Yang, X., & Poggio, A. J. (2005). A comparative evaluation of score results from computerized and paper & pencil mathematics testing in a large scale state assessment program. The Journal of Technology, Learning and Assessment, 3(6), 30-38.
Porter, A., Polikoff, M. S., Barghaus, K. M., & Yang, R. (2013). Constructing aligned assessments using automated test construction. Educational Researcher, 42(8), 415-423.
Scalise, K., & Gifford, B. (2006). Computer-based assessment in e-learning: A framework for constructing "intermediate constraint" questions and tasks for technology platforms. The Journal of Technology, Learning and Assessment, 4(6).
Sclater, N., & Howie, K. (2003). User requirements of the “ultimate” online assessment engine. Computers & Education, 40(3), 285-306.
Sim, G., Holifield, P., & Brown, M. (2004). Implementation of computer assisted assessment: Lessons from the literature. Research in Learning Technology, 12(3).
Thelwall, M. (2000). Computer-based assessment: a versatile educational tool. Computers & Education, 34(1), 37-49.
Valenti, S., Cucchiarelli, A. & Panti, M. (2002). Computer based assessment systems evaluation via the ISO9126 quality model. Journal of Information Technology Education, 1(3), 157-175.
Vendlinski, T., & Stevens, R. (2002). Assessing student problem-solving skills with complex computer-based tasks. The Journal of Technology, Learning and Assessment, 1(3).