Major revision: December 6, 2009
Belief #2 (of my ten old beliefs) asserted that, to improve what students learn, we should use technology to change teaching/learning activities (e.g., active learning, faculty-student contact, and the rest of the seven principles of good practice). And, I believed, the only practical, acceptable way to measure such changes was through changes in 'test' scores (i.e., whatever means faculty had traditionally used to estimate student learning in that program).
Belief #2 isn't false, but I now think it falls well short of describing how to improve what students learn, and how to evaluate how valuable that investment may have been.
So here's Belief B (my new beliefs are lettered A-J): The value of using technology in academic programs usually stems from changing what students learn, not just changing the activities by which they learn the old material. Obviously, new, technology-dependent content has appeared in almost every field (e.g., the widespread use of geographic information systems and other computer databases as tools for scholarly research). Less obvious, perhaps, are changes in traditional goals of liberal learning such as 'writing' or 'research skill.' This chunk of our web site sketches ways in which almost all the basic goals of college education could be reexamined.
Among the important changes in "what" should be learned: technology use in the world widens what each person can do professionally, as a citizen, and personally. Each person's need for wise judgment is therefore greater. And, as people age and the world changes, they will need to learn new skills more frequently. So the academic program should invest more time in student research, creative work, fieldwork, etc. One consequence of this emphasis: more variety of learning, within courses, within majors, and across the institution.
Obviously, when students have more options, there's more chance that they'll flounder or misuse the freedom that they're gradually being given. The faculty response to floundering ought to be to help students gradually take responsibility for their own learning, individually and in teams. (Similarly, faculty and staff will need to collaborate in order for programs to succeed. More on how to achieve such collaboration below.)
How to evaluate possible gains from such changes in what students learn: if the goals and content have changed, you can't use scores on last year's test. It probably tested some things you no longer teach, and some of your exciting new material isn't on last year's test. What do you do?
- Each year, collect both the 'tests' (what students are asked to do in this degree program) and a random sample of their results (what students did on those 'tests') across the courses in the program. This collection of evidence would include exams, student projects, homework assignments, online discussions in courses, and student portfolios, for example. The evidence might come only from senior courses (to gauge program outcomes), or from courses across the curriculum (to help look at student learning as they progress through the program).
- Commission a set of stakeholders to serve as an evaluation team (e.g., representatives from faculty, benefactors, students, employers, graduate schools).
- Ask your team to critique your evidence, year by year.
- After examining how 'tests' and performance have changed, does your team favor the evolution in what students have been learning? What do they see as the strengths and weaknesses of your evolving program? What advice do they offer?
What do you think? Has your program ever looked at evidence about the results of teaching improvement? What do you think of my suggestion? What do you think accreditors would think of this approach? PS. If you'd like to see a table with my ten old beliefs on the left and my ten new ones on the right, see: http://bit.ly/ten_things_table Next week: old and new beliefs about distance learning.