Wednesday, October 21, 2009

K. Evaluation should be mainly formative and should begin immediately.

Earlier, I described some old beliefs about program evaluation. I used to assume that evaluation of TLT had to be summative ("What did this program accomplish? Does that evidence indicate this program should be expanded and replicated? continued? or canceled?"). The most important data would measure program results (outcomes). You've got to wait years to achieve results (e.g., graduating students), and the first set of results may be distorted by whatever was going on as the program started up. Consequently, I assumed, evaluation should be scheduled as late as possible in the program.

Many people still believe these propositions and others I mentioned earlier this week. I still get requests: "The deadline for this grant program is in two days. Here's our draft proposal. Out of our budget of $500,000, we've saved $5,000 for evaluation. If you're able to help us, please send us a copy of your standard evaluation plan."

Yug!

Stakeholders need to see what's going on so they can make better, less risky decisions about what to do next. Getting that kind of useful information is called “formative evaluation.” (By the way, a stakeholder is someone who affects or who is affected by a program: its faculty, the staff who provide it with services, its students, and its benefactors, for example.)

In the realm of teaching and learning with technology (TLT), formative evaluation is even more important than in other realms of education. The program is likely to be novel and rapidly changing, as technologies and circumstances change. So the stakeholders are on unfamiliar ground. Their years of experience may not provide reliable guides for what to do next. Formative evaluation can reduce their risks, and help them notice and seize emerging opportunities.

Formative evaluation also can attract stakeholders into helping the evaluator gather evidence. Summative evaluation, in contrast, is often seen as a threat by stakeholders: “The summative evaluation will tell us we're doing well (and we already know that). Or perhaps the evaluator will misunderstand what we're doing, creating the risk that our program will be cut or canceled before we have a chance to show what this idea can really do. And no one reads those summative reports anyway unless they're looking for an axe to use on a program. So, no, I don't want to spend time on this and, if I'm forced to cooperate, I have no reason to be honest.” Formative evaluations, on the other hand, should be empowering: a good evaluation usually gives the various stakeholders information they need in order to get more value from their involvement with the program.

What many folks don't realize is that formative evaluation requires different kinds of data than summative evaluation does.

Summative evaluation usually must focus on data about results -- outcomes. But outcomes data by itself has little formative value. If you doubt that, consider that a faculty member has just discovered that the class average on the midterm exam was 57.432. Very precise outcomes data. But not very helpful guidance for figuring out how to teach better next week.

In contrast, a formative evaluation of an educational use of technology will often seek to discover a) what users are actually doing with the technology, and b) why they acted that way (which may have nothing to do with the technology itself). (For more guidance on designing such evaluations, see "The Flashlight Approach" and other chapters of the Flashlight Evaluation Handbook.)

Corollary #1: The right time to start evaluating is always "now!" Because the focus is likely to be on activities, not technology, evaluation of the activity can begin before new technologies or techniques go into use. Baseline data can be collected. And, even more importantly, the team can learn about factors that affect the activity (e.g., 'library research') long before new technology (e.g., new search tools) is acquired. This kind of evaluation can yield insights that help assure the new resources are used to best effect starting on day 1 of their availability.

Corollary #2: When creating an action plan or grant proposal, get an evaluator onto your planning team quickly. An experienced, skillful evaluator should be able to help you develop a more effective, safer action plan.

Corollary #3: When developing your budget, remember that the money and effort needed for evaluation (what the military might call 'intelligence gathering') may be substantial, especially if your program is breaking new ground.


What are the most helpful, influential evaluations you've seen in the TLT realm?
Did they look like this? What kind of information did they gather? Next week, I'll discuss how our current infatuation with learning objectives has overshadowed some very important kinds of evidence, and potentially discouraged us from grabbing some of the most important benefits of technology use in education, benefits that can't be measured by mass progress on outcomes.

1 comment:

  1. Cheering Steve on, I want to reinforce the necessity of having an evaluator involved in the program design, because the evaluation design and the program design are interrelated: as the program plan and the evaluation plan evolve, you will see areas where adjustments are essential. An evaluator sees things differently, and those insights will strengthen the program design and ensure that the evaluation design is 'do-able' and will yield the knowledge you are seeking.


What do you think?