We can't evaluate e-learning if we don't know what we mean by evaluating e-learning
Phillips, R. (2005) We can't evaluate e-learning if we don't know what we mean by evaluating e-learning. Interact: The Learning Technology Support Service Newsletter, 30.
This issue of Interact is about evaluating the effectiveness of e-learning. Critics of e-learning have regularly noted that there is little evidence of its ability to improve learning outcomes, despite substantial worldwide investment in its development and its wide uptake. Other articles in this issue address this criticism and provide evidence of e-learning ‘working’.
Even when research showing that e-learning is effective, or at least no less effective than other approaches, has been published, misgivings remain about the validity of that research. E-learning represents a convergence of several fields, including education, computer science, design and media studies. Its multidisciplinary nature and rapid evolution have led individual researchers to take different approaches to evaluation and research, derived from their individual contexts, with little reflection on the appropriateness of those approaches.
Research into e-learning is complex, and this has not been sufficiently recognised. Part of the complexity arises from a lack of clarity about the meaning of the terms ‘e-learning’ and ‘evaluation’, and the nature of research into e-learning.
I have recently written about how the term e-learning is used in a one-size-fits-all fashion which confuses discussion about it, and proposed that we should classify e-learning applications in terms of the interactions between: student and student; student and teacher; student and resources; and student and computer (Phillips, 2004). This classification may help ensure that people are clear about what they are discussing.
Publication Type: Journal Article
Murdoch Affiliation: Teaching and Learning Centre