Educational software evaluation


Draft

See also: instructional design method

There are several ways to evaluate software:

Feature evaluation

For example, Dalgarno (2004) proposes a classification scheme for learner-computer interaction with three broad categories:

  • Categories of cognitive task
  • Categories of input technique
  • Categories of system response
Cognitive task
  1. Attending to static information
  2. Controlling media
  3. Navigating the system
  4. Answering questions
  5. Attending to question feedback
  6. Exploring a world
  7. Measuring in a world
  8. Manipulating a world
  9. Constructing in a world
  10. Attending to world changes
  11. Articulating
  12. Processing data
  13. Attending to processed data
  14. Formatting output
Input technique
  1. Typing
  2. Valuators
  3. Key pressing
  4. Pull down menus
  5. Menu lists
  6. Buttons
  7. Icons
  8. Hot spots
  9. Hypertext
  10. Scroll bars
  11. Media controls
  12. Selecting
  13. Dragging
  14. Drawing
System response
  1. Displaying
  2. Presenting media
  3. Presenting cues
  4. Branching
  5. Assessing answers
  6. Generating feedback
  7. Updating world
  8. Generating world
  9. Processing data
  10. Searching
  11. Saving and loading
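Dalgarno's categories can serve as a feature checklist when comparing tools. Below is a minimal sketch in Python, assuming a simple set-based coverage score per category; the scoring itself is an illustrative assumption and is not part of Dalgarno's scheme:

```python
# Dalgarno's (2004) three broad categories, encoded as a feature
# checklist. The coverage scoring below is an illustrative assumption,
# not part of Dalgarno's classification.
DALGARNO_CATEGORIES = {
    "cognitive_task": [
        "attending to static information", "controlling media",
        "navigating the system", "answering questions",
        "attending to question feedback", "exploring a world",
        "measuring in a world", "manipulating a world",
        "constructing in a world", "attending to world changes",
        "articulating", "processing data",
        "attending to processed data", "formatting output",
    ],
    "input_technique": [
        "typing", "valuators", "key pressing", "pull down menus",
        "menu lists", "buttons", "icons", "hot spots", "hypertext",
        "scroll bars", "media controls", "selecting", "dragging",
        "drawing",
    ],
    "system_response": [
        "displaying", "presenting media", "presenting cues",
        "branching", "assessing answers", "generating feedback",
        "updating world", "generating world", "processing data",
        "searching", "saving and loading",
    ],
}

def coverage(supported_features):
    """Fraction of features in each category that a product supports."""
    supported = set(supported_features)
    return {
        category: sum(f in supported for f in features) / len(features)
        for category, features in DALGARNO_CATEGORIES.items()
    }

# Example: profile of a simple drill-and-practice program
report = coverage({
    "answering questions", "attending to question feedback",
    "typing", "buttons", "assessing answers", "generating feedback",
})
```

A profile like this does not rank products by itself; it shows which interaction types a tool covers, so you can match the profile against the interactions your learning activity requires.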

Conceptual evaluation

Geissinger (1997) starts with the question "Can this product actually teach what it is supposed to?" and uses Barker & King's (1993:309) four categories:

  • Quality of end-user interface design: Investigation shows that the designers of the most highly rated products follow well-established rules and guidelines. This aspect of design affects users' perception of the product, what they can do with it, and how completely it engages them.
  • Engagement: Appropriate use of audio and moving video segments can contribute greatly to users' motivation to work with the medium.
  • Interactivity: Users' involvement in participatory tasks helps make the product meaningful and provokes thought.
  • Tailorability: Products that allow users to configure and change them to meet particular individual needs contribute to the quality of the educational experience.
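Barker & King's four categories can be turned into a simple rating rubric. Here is a minimal sketch; the 1-5 scale and the equal weighting of categories are illustrative assumptions, not part of the original methodology:

```python
# Barker & King's (1993) four conceptual categories, used here as a
# rating rubric. The 1-5 scale and equal weighting are assumptions.
CATEGORIES = [
    "end-user interface design",
    "engagement",
    "interactivity",
    "tailorability",
]

def overall_rating(scores):
    """Average one 1-5 rating per category into an overall score."""
    missing = [c for c in CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    for category, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{category}: rating {score} out of range 1-5")
    return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)

rating = overall_rating({
    "end-user interface design": 4,
    "engagement": 5,
    "interactivity": 3,
    "tailorability": 2,
})  # (4 + 5 + 3 + 2) / 4 = 3.5
```

In practice you might weight the categories differently depending on the learning context, or keep the four scores separate rather than collapsing them into one number.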


Belfer, Nesbit, and Leacock (2002) proposed the Learning Object Review Instrument (LORI).

References

  • Albion, P. R. Heuristic evaluation of educational multimedia: From theory to practice. HTML
  • Barker, P. (1995). Evaluating a model of learning design. In H. Maurer (Ed.), Proceedings, World Conference on Educational Multimedia & Hypermedia. Graz, Austria: Association for the Advancement of Computing in Education.
  • Barker, P. & King, T. (1993). Evaluating interactive multimedia courseware: A methodology. Computers in Education, 21(4), 307-319.
  • Baumgartner, P. & Payr, S. (1996). Learning as action: A social science approach to the evaluation of interactive media. In P. Carlson & F. Makedon (Eds.), Proceedings, World Conference on Educational Multimedia & Hypermedia. Boston: Association for the Advancement of Computing in Education.
  • Belfer, K., Nesbit, J., & Leacock, T. (2002). Learning object review instrument (LORI), version 1.4.
  • Dalgarno, B. (2004). A classification scheme for learner-computer interaction. In R. Atkinson, C. McBeath, D. Jones-Dwyer & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st annual conference of the Australasian Society for Computers in Learning in Tertiary Education, Perth, Australia. Available: PDF. (This paper describes environments, but is useful for deciding on the criteria by which you select a tool.)
  • Geissinger, H. (1997). Educational software: Criteria for evaluation. Proceedings of ASCILITE '97. HTML
  • Reiser, R. A. & Kegelmann, H. W. (1994). Evaluating instructional software: A review and critique of current methods. Educational Technology Research & Development, 42(3), 63-69.

Licensed under CC BY-SA 3.0 | Source: https://edutechwiki.unige.ch/en/Educational_software_evaluation