Tuesday, October 27, 2009

Evaluation Methods: Each user is unique. Assess each one first, then look for patterns

On Monday, I talked about my belief, as a novice evaluator and educator, that evaluation (and teaching) should be organized around programmatic goals: describe what every student should learn, study each student's progress toward those goals, and study the program activities that are most crucial (and perhaps most risky) for producing those outcomes.

After some years of experience, however, first at Evergreen and then as a program officer with the Fund for the Improvement of Postsecondary Education (FIPSE), I realized that this Uniform Impact perspective was valuable but limited.

In fact, I now think there are two legitimate, valuable ways to think about, and evaluate, any educational program or service:
  • Uniform Impact: Pay attention to the same learning goal(s) for each and every student (or, if you're evaluating a faculty support program, pay attention to whether all the faculty are making progress in a direction chosen by the program leaders).
  • Unique Uses: Pay attention to the most important positive and negative outcomes for each user of the program, no matter what those outcomes are.
You can see both perspectives in action in many courses. For example, if an instructor gives three papers an “A,” and remarks, “These three papers had almost nothing in common except that, in different ways, they were each excellent,” the instructor is using a Unique Uses perspective to do assessment.

Each of these two perspectives focuses on things that the other perspective would miss. A Unique Uses perspective is especially important in liberal and professional education, both of which aim to educate students who can exercise judgment and make choices. If every student had the same experiences and outcomes, the experience would be training, not liberal or professional education.

Similarly, Unique Uses is important for transformative uses of technology in education, because many of those uses are intended to empower learners and their instructors. For example, when a faculty member assigns students to set their own topics and then use the library and the Internet to do their own research, some of the outcomes can only be assessed through a Unique Uses approach.

What are the basic steps for doing a Unique Uses evaluation?
  1. Pick a selection, probably a random selection, of users of the program (e.g., students).
  2. Use an outsider to ask them what the most important consequences have been from participating in the program, how they were achieved, and why the interviewee thinks their participation in the program helped cause those consequences (evidence).
  3. Use experts with experience in this type of program (Eliot Eisner has called such people "connoisseurs" because they have educated judgment honed by long experience) to analyze the interviews. For each user, the connoisseur would summarize the value of the outcome in the connoisseur's eyes, using one or more rating scales.
  4. The connoisseur would also comment on whether and how the program seems to have influenced the outcome for this individual, perhaps with suggestions for how the program could do better next time with this type of user.
  5. The connoisseur(s) then look for patterns in these evaluative narratives about individuals. For example, the connoisseur(s) might notice that many of the participants encountered problems when, in one way or another, their work carried them beyond the expertise of their instructors, and that instructors seemed to have no easy strategy for coping with that.
  6. Finally, the connoisseur(s) write a report to the program with a summary judgment, recommendations for improvement, or both, illustrated with data from relevant cases.
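For readers who track evaluation data in spreadsheets or scripts, the sampling and pattern-finding steps above (steps 1 and 5) could be sketched as a tiny data pipeline. This is purely illustrative: the `CaseReview` schema, field names, and sample data are my own invention, and the real work in steps 2–4 is human interviewing and expert judgment, not code.

```python
import random
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CaseReview:
    """One connoisseur's review of one program participant (hypothetical schema)."""
    user_id: str
    outcome_value: int          # connoisseur's rating of the outcome, e.g., 1 (low) to 5 (high)
    program_influence: str      # note on how the program seems to have shaped the outcome
    themes: list = field(default_factory=list)  # issues noticed in this particular case

def sample_users(all_users, k, seed=None):
    """Step 1: draw a random selection of program users to interview."""
    rng = random.Random(seed)
    return rng.sample(all_users, k)

def find_patterns(reviews, min_cases=2):
    """Step 5: surface themes that recur across the individual case reviews."""
    counts = Counter(theme for r in reviews for theme in r.themes)
    return {theme: n for theme, n in counts.items() if n >= min_cases}

# Invented data standing in for what interviews (steps 2-4) would produce.
reviews = [
    CaseReview("s01", 5, "research carried the student past instructor expertise",
               ["beyond instructor expertise"]),
    CaseReview("s02", 3, "choosing her own topic sustained motivation",
               ["topic ownership"]),
    CaseReview("s03", 4, "hit the same wall with instructor expertise",
               ["beyond instructor expertise"]),
]

interviewees = sample_users(["s01", "s02", "s03", "s04", "s05"], k=3, seed=1)
patterns = find_patterns(reviews)
print(patterns)  # themes seen in two or more cases
```

The point of the sketch is the shape of the evidence, not the code: each case keeps its own narrative, and patterns (like the "beyond instructor expertise" problem in the example above) only emerge after the individual cases are judged on their own terms.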
To repeat, a comprehensive evaluation of almost any academic program or service ought to have both Uniform Impact and Unique Uses components, because each type of study will pick up findings that the other will miss. Some programs (e.g., a faculty development program that works in an ad hoc manner with each faculty member requesting help) are best served if the evaluation is mostly about Unique Uses. A training program (e.g., most versions of Spanish 101) is probably best evaluated using mainly Uniform Impact methods. But most programs and services need some of each method.

There are subtle, important differences between these two perspectives. For example,
  • Defining “excellence”: In a Uniform Impact perspective, program excellence consists of producing great value-added (as measured along program goals) regardless of the characteristics or motivations of the incoming students. In contrast, program excellence in Unique Uses terms is measured in part by generativity: Shakespeare's plays are timeless classics in part because there are so many great, even surprising ways to enact them, even after 400 years. The producer, director and actors are unique users of the text.
  • Defining the "technology": From a Uniform Impact perspective, the technology will be the same for all users. From a Unique Uses perspective, one notices that different users make different choices of which technologies to use, how to use them, and how to use their products.
For more on our recommendations about how to design evaluations, especially studies of educational uses of technology, see the Flashlight Evaluation Handbook. The Flashlight Approach, a PDF in Section I, gives a summary of the key ideas.

Have any evaluations or assessments at your institution used Unique Uses methods? Should they in the future? Please click the comments button below and share your observations and reactions.

PS We're over 3,300 visits to http://bit.ly/ten_things_table. So far, however, most people seem to look at the summary and perhaps one essay. Come back, read more of these mini-essays, and share more of your own observations!

1 comment:

  1. Because so much of K12 education has become Uniform Impact based upon data from NCLB and its various state spin-offs, I feel it is more important than ever for higher education to focus its attention on the Unique Uses any student forges with a particular curriculum. More and more students are taught to regurgitate to the test as district pressures mount to meet ever-rising AYP standards. Choice-making, and the critical thinking that comes with it, are sacrificed on this altar. It has to become the focus again, then, in higher ed.

    Ideally, to me, Uniform Impact should be the scaffold from which Unique Uses evaluation then builds. Students or faculty in professional development demonstrate a "global movement" toward a defined goal (let's say improved test scores), but then we explore the individual methods/learning used to get there (what worked and what didn't). Multiple successful paths (Unique Uses) to the same goal (Uniform Impact) can then be shared and explored for future learning.

    It is the best of both worlds.


What do you think?