Monday, October 26, 2009

12. To evaluate ed tech, set learning goals & assess student progress toward them (OK but what does this approach miss?)

It's Monday, so let's talk about another of those things I no longer (quite) believe about the evaluation of educational uses of technology. A definition: “evaluation,” for me, is the intentional, formal gathering of information about a program in order to make better decisions about that program.

In 1975, I was the institutional evaluator at The Evergreen State College in Olympia, Washington. I'd offer faculty help in answering their own questions about their own academic programs (a “program” is Evergreen's version of a course). Sometimes faculty would ask for help in framing a good evaluative question about their programs. I'd respond, “First, describe the skills, knowledge or other attributes that you want your students to gain from their experience in your program.”

“Define one or more Learning Objectives for your students” remains step 1 for most evaluations today, including (but not limited to) evaluations of the good news and bad news about technology use in academic programs. In sections A-E of this series, I've described five families of outcomes (goals) of technology use and briefly suggested how to assess each one.

However, outcomes assessment by itself provides little guidance for how to improve outcomes. So the next step is to identify the teaching/learning activities that should produce those desired outcomes. Then the evaluator gathers evidence about whether those activities have really happened, and, if not, why not. Evidence about activities can be extremely helpful in a) explaining outcomes, b) improving outcomes, and c) investigating the strengths, weaknesses, and value of technology (or of any resource or facility) for supporting those activities.

Let's illustrate this with an example.

Suppose that your institution has been experimenting with the use of online chats and email to help students learn conversational Spanish. As the evaluator, you'd need a procedure for assessing each student's competence in understanding and speaking Spanish. Then you'd use that procedure to assess all students at the end of the program, and perhaps earlier as well (so you could see what they need at the beginning, how they're doing in the middle, and what each of them has gained by the end).

You would also study how the students are using those online communications channels, what the strengths and weaknesses of each channel are for writing in Spanish, whether there is a relationship between each student's use of those channels and their progress in speaking Spanish, and so on.
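To make that last step concrete, here is a minimal sketch in Python (3.10+, for statistics.correlation) of how you might relate each student's use of the channels to his or her gain in assessed speaking proficiency. Every name and number in it is invented for illustration; a real evaluation would draw on your actual assessment records and usage logs.

```python
# A minimal, hypothetical sketch: relate each student's use of the online
# channels to their gain in assessed speaking proficiency. All names and
# numbers below are invented for illustration, not drawn from any program.
from statistics import correlation, mean

# Hypothetical per-student records: assessed speaking scores (0-100)
# before and after the program, plus counts of chat sessions and emails.
students = [
    {"name": "A", "pre": 42, "post": 61, "chats": 30, "emails": 12},
    {"name": "B", "pre": 55, "post": 70, "chats": 25, "emails": 20},
    {"name": "C", "pre": 38, "post": 44, "chats": 5,  "emails": 3},
    {"name": "D", "pre": 60, "post": 78, "chats": 40, "emails": 8},
    {"name": "E", "pre": 47, "post": 59, "chats": 18, "emails": 15},
]

gains = [s["post"] - s["pre"] for s in students]
chat_use = [s["chats"] for s in students]
email_use = [s["emails"] for s in students]

print(f"Mean gain: {mean(gains):.1f} points")

# Pearson correlation between use of each channel and gain. This is
# suggestive only: correlation is not causation, and a real evaluation
# would also ask why some students used the channels more than others.
print(f"Chat use vs. gain:  r = {correlation(chat_use, gains):.2f}")
print(f"Email use vs. gain: r = {correlation(email_use, gains):.2f}")
```

Even a strong correlation here would only signal a relationship worth investigating, not proof that the channels caused the gains; that's why the evidence about the activities themselves, and about why usage varied, matters so much.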

Your findings from these studies will signal whether online communications are helping students learn to speak Spanish, and how to make the program work better in the future.

Notice that what I've said so far about designing evaluation is entirely defined by program goals: the definition of goals sets the assessment agenda and also tells you which activities are most important to study. I've labeled this the Uniform Impact perspective because it assumes that the program's goals are what matter, and that those goals are the same for all students.

Does the Uniform Impact perspective describe the way assessment and evaluation are usually done? Do any assessments or evaluations that you know of go beyond the suggestions above? (Please add your observations below by using the “Comments” button.)

PS. “Ten Things” is gaining readers! The alias for the table of contents – http://bit.ly/ten_things_table – has been clicked over 3,200 times already. Thanks! If you agree these are important questions for faculty and administrators to consider, please add your own observations to any of these posts, old or new, and spread the word about this series.
