Sunday, December 06, 2009

What I Once Believed - A Summary

Back in the 1960s, 70s, and 80s, I thought technology would provide powerful tools for active learning, faculty-student contact, student collaboration (and the rest of Chickering and Gamson's seven principles), leveraging a transformation in how students learned. The dimensions of revolution wouldn't stop at seven, of course. For example, words would be joined by images, numbers, and video as media by which students could acquire information. Students previously excluded by their location, schedule, disabilities, or other aspects of their situation would find new gateways to learning.


The driver of the revolution would be the power and excitement of the day's emerging technology, a technology that would enable faculty and students to do what they most wanted to do:
  • Self-paced tutorials on mainframe computers promised a world in which each student would be guided toward the fastest possible individual progress through materials, at least a third faster than conventional teaching/learning activities;
  • Microcomputers running word processors, spreadsheets, and BASIC would enable students to learn by designing, analyzing, composing, and solving. The work would be creative, a different vision of individualization;
  • Videodisc and the graphic user interface would power a shift toward the visual;
  • HyperCard would give everyone the ability to use hypertext to navigate interdisciplinary webs of knowledge and create their own maps of learning;
  • Students sitting side by side, collaborating on the same computer, would help one another to new skills and new understanding;
  • Then email would expand cooperation across barriers of time and space;
  • Boom! Gopher servers and then the Web put the whole planet onto our hard drives: an explosion of access to expertise;
  • If revolutions are marked by explosions, surely a revolution was coming: the decades since the 1970s have been marked by a series of blasts as promising new technologies rocked higher education, one after another...


Of course, words, textbooks, and lectures would not be eliminated by this blitz; sometimes a good clear explanation with Q&A is just the ticket.


But, I thought, let's prepare to discard the theory that huge lecture halls and wonderful textbooks are the way to deliver learning (i.e., facts plus inspiration) while cost-effectively expanding our mass system of higher education.

In fact, whether students were on campus or off, I believed, faculty and students would soon combine to make the whole notion of 'delivering' education obsolete. “Delivery” was an image that had never matched reality – learning never has been “transmitted” into student minds by talking into their ears.


Each year, I was persuaded that the newest technology would finally be the key to unlocking all of this. The results would be greater achievement, a larger and more varied student body, and, most important of all, students who would, far more than in previous decades, be committed to learning.


This vision of transformation could be achieved, I assumed, by allowing the various specialists the freedom to do their various jobs:
  • Faculty would teach their courses. But big new instructional packages and powerful tools in student hands would result in a wave of exciting changes in the nature of each of those courses.
  • Information technology specialists would explain to faculty and students how to use each new technology. Then the faculty would take over to redesign the courses.
  • Evaluation specialists would measure progress: percentage improvements in performance, more students (and more kinds of students), cost savings...

Want more? Read entries 1-13 of this series. How well does this summary describe what you and your colleagues once thought about the coming computer revolution in higher education? Was this where you were? Did you start at a different place?

Summary of What I Now Suggest

What I Now Believe about Transforming Teaching and Learning (with Technology)
Stephen C. Ehrmann, December 6, 2009


This brief essay summarizes several of the most important and controversial arguments made in a series of blog posts entitled "Ten Things I (no longer) Believe about Transforming Teaching and Learning with Technology" (http://bit.ly/ten_things_table). The things I now believe are lettered in the original essays; I've used those letters below as references [in brackets], so that this summary can easily be used as a print handout. The original essays provide more detailed arguments, plus examples to illustrate each point I've tried to make, as well as covering other components of the argument not included in this brief summary. I'd like to acknowledge and thank my colleague, Steven W. Gilbert; the thoughts that follow are at least as much his as mine. However, I take sole responsibility for how they're expressed.


This summary is in two parts. Part I may be of greater interest to faculty and others who are responsible for what students learn. Part II may be of greater interest to those staff who have responsibility for the facilities and services that faculty and students use to achieve such goals.




I. A VISION WORTH WORKING TOWARD


What students should learn: Obviously in a digital age, students need to use computers and the Web in order to prepare for the world of work.


Equally important: technology often gives people choices (as workers, citizens, and individuals), if they have the wit, skill and wisdom to take advantage of selected possibilities.


Academic programs ought to use similar technologies to prepare students to make such choices and to cope with the dangers that such choices can create. One way to do that is to take advantage of technology to enlarge various forms of active learning: student research, learning by designing and composing, field work, considering how prior experiences provide evidence of changing skills... [B]


How Students Should Learn: To support that kind of emphasis on active learning, students ought to learn to use digital tools and resources that can then be employed frequently, in college and afterward. Pay particular attention to technologies that can potentially save time on the mechanics of thought and action. Word processing, for example, was initially appealing because it saved a lot of time in making simple revisions. As revision and other elements of writing became easier and quicker, writing became more like sculpting in clay (rather than in stone); many students learned to refine their thinking by refining their writing. When computing makes the mechanics easier, the attention of faculty and students can turn to more sophisticated skills and more complex phenomena. This is just one of several reasons why these suggestions for improving teaching and learning so often mention the motive of saving time for students and faculty. [E]


Assessing and evaluating what students learn: When technology is used to diversify learning, assessment and evaluation can't merely be organized around preset goals for what all students must learn (the “uniform impact” perspective). The complementary approach to assessment and evaluation (“unique uses”) focuses on what each person actually did do with their opportunities, and why. This kind of unique uses assessment is especially important, for example, in helping faculty respond to differences among their students, in order to help all students learn. [L] (For more on how faculty can use technology to respond to student differences, see http://www.tltgroup.org/resources/diversity/.)


The interdependence of what, how and who: This approach to education requires both students and faculty to reach outside the classroom, and outside traditional classroom hours of teaching, for fresh resources and new options for active learning. The technology used to bridge space and time for those purposes can also be used to help current students carry heavier loads and engage other students who wouldn't otherwise have had the motivation or means to learn. The goal of improving what is learned and how it's learned should be inseparable from the goal of improving who can learn. [C]


II. HOW THE INSTITUTION CAN ORGANIZE TO ACHIEVE AND SUSTAIN SUCH IMPROVEMENTS


It's not possible to achieve this constellation of improvements through great leaps forward. A three-year grant to redesign each course in the curriculum won't do the trick. Staff are already too busy to spare the time, or to take the risks of dropping balls already in the air. That's because, in any non-profit organization with committed staff, staff's work will already have expanded to fill available time and budget (Parkinson's Law and the Revenue Theory of Costs). Furthermore, courses that don't continually evolve will soon become outdated and then abandoned.


Therefore a promising way to foster continual improvement is for all, or most, faculty to take a large number of small, safe steps: enough of them that staff can gradually and cumulatively improve practice and results without jeopardizing either their sanity or their institution's budget.


For this strategy to work, it's also important for the program to set one, or a few, overall directions for improvement that can be advanced through such steps.


For example, staff and stakeholders in an engineering program might agree on a ten-year campaign to help students to understand climate and sustainability issues well enough so that, by the time they graduate, the students will have each learned to respond to those issues in a manner of their own choosing. Such an ambitious goal can't possibly be achieved either by a) a one-time curriculum reform or b) devoting just one course to that outcome. A better way to achieve the goal is for many program faculty to gradually improvise, changing their courses bit by bit over the years so that, ultimately, students graduate with sophisticated skills and experience in this area.


It's not easy to maintain focus for the five to ten years needed to achieve visible, meaningful improvements in program outcomes. Higher education has always suffered from attention deficit disorder. And unpredictable changes in technology use in the disciplines, and in the world, make it even harder to maintain attention on one or two directions for cumulative change. Sustained attention on a particular vision worth working toward requires:
  1. a broad base of faculty and staff with a history of believing this particular goal is crucial for the program and meaningful for them personally;
  2. stable, patient, dogged leadership;
  3. continual scrounging around the world for the next set of small steps that each faculty member and staff member may want to try or adapt [I];
  4. developing relationships with other organizations needed to operate and sustain the program (for example, the OneMBA and iLabs examples described in [C]);
  5. an unusual degree of collaboration and information sharing [F];
  6. as opportunities become available, hiring and retaining staff who have the motivation and skills to make lasting contributions to the program (which will almost inevitably include an inclination to work collaboratively with other faculty and staff); and
  7. a program of continual evaluation that helps faculty and staff detect quickly whether their small steps are adding up or not (and, if not, why not). Some of this evaluation will be done in teams, some by individuals. [H]


A pluralistic approach to formative evaluation: One of the skills that almost everyone will need to learn over time is inquiry: assessment and evaluation. The most important function of evaluation is to help each person see what he or she is doing, and the best way to get such information is to look for yourself. So the institution must (via small steps) help each staff member with the training, tools, and support to ask the right questions about his or her own practice. This kind of inquiry will usually be carried out by individuals working alone and in small groups (e.g., faculty learning communities). [K]


Taking advantage of these improvements: It inevitably takes years for a program to gradually, cumulatively reorganize its work, the skills of its people, and its relationships with other organizations in the world. The good news: it also takes years for competing programs to catch up, if they ever do. And, especially if the program has been using evaluation to guide and document improvement, its achievements and strengths can be used to increase the program's visibility, help it attract the staff and students it wants, and also attract support from benefactors, grants and other sources. [A]

Tuesday, December 01, 2009

Why Invest in Info Tech in Higher Ed? 1994!

 AAHESGIT Listserv Retrospective November 21, 1994

This Google Doc:  http://bit.ly/TLTG-Why-Invest-94RETRO

"Why Invest in Info Tech?"  

  • How much have/haven't conditions and rationales changed in 15 years?  
  • How well are we doing?

Below is the full unexpurgated text of a listserv posting from Nov. 21, 1994 - written by the listserv moderator, Steven W. Gilbert.

[Thanks again to Chuck Ansorge for keeping and compiling the AAHESGIT listserv messages from 1994-2006!]




WHY SHOULD A COLLEGE OR UNIVERSITY INVEST (MONEY, STAFF TIME, BUILDING SPACE, ETC.) IN EDUCATIONAL USES OF INFORMATION TECHNOLOGY? WHY SHOULD AN INDIVIDUAL?

I. RATIONALE: General

o The Transformation is Inevitable Anyway (The transformation of teaching and learning based in part on the integration of applications of information technology: this transformation has already begun. The challenge is to direct this transition and make it less painful and more consistent with YOUR educational mission.)

o Students Have Already Changed (Most institutions report significant changes in the composition of their student bodies. Many faculty report that many students are more resistant to reading and don't respond as well to traditional courses.)

o Build More Effective Communities of Learners-Scholars (There are already many examples of communities of scholars that support mutual learning via the Internet in ways and on a scale that go beyond anything that ever happened before.)

o Regain Public Trust (While our leading universities are still considered the best in the world, support for many higher education institutions has eroded.)

o LONG (not short) RANGE: "ACADEMIC PRODUCTIVITY" (Whatever we mean by "academic productivity," we aren't going to achieve it in colleges and universities in just a few more years.)

II. RATIONALE: Faculty

o Personal Productivity (Word-processing, EMail) (Almost no one is arguing about the value of this any more. [Clarification: In the late 1980s and early 1990s there were serious arguments about the potential damage that word-processing would do to writing skills. By late 1994, almost no one was still arguing against using these tools. SWG 20091201] EMail may be the application in the 90s that brings people to the Internet in the way that word-processing brought people to microcomputers in the 80s.)

o Clearer, Easier Presentation (Using presentation and image manipulating software and computer-driven projection devices in the classroom to display information more clearly or easily. Doesn't change the basic approach to teaching or what is taught.)

o Widen Instructional Bottlenecks (Experienced faculty know the topics in their courses where many students have trouble and fall behind. Teachers develop or find applications of information technology that can help meet these specific learning challenges.)

o Better Communication with Wider Range of Students (Use information technology to offer information in a variety of formats to meet the learning needs and preferences of a widening range of students.)

o Teach New Content Better (Emerging from the recent work of scholars in some disciplines are some topics and approaches that can be represented and communicated much more effectively using information technology than using conventional print. I've got examples from geography/geology, American Lit., some foreign language -- more examples welcome!)

III. RATIONALE: Institutional Leaders

o Competition for Students, Faculty ("If we don't have computers in our ...., we'll lose students to ..." "When prospective students or faculty members visit our campus, they always ask to see the computer facilities.")

o Improve Quality of Teaching, Learning (The experience base of the "early adopters" has convinced more and more of them that information technology really can be used to improve student motivation and learning. Developing quantitative data to confirm this is quite difficult. The anecdotal evidence and public demand may outweigh the need for data -- for a while?)

o Better Access to Education for Wider Range of Students (Information technology -- especially "distance education" -- seems to be the ONLY way that the increasing variety of students can have access to a high quality college education. This applies to students whose location, finances, or work schedules don't permit them to participate easily in conventionally scheduled and conventionally presented courses.)

o Information Literacy for Students (The library community is leading this campaign and reminding learners and teachers how important it has become for all to master skills of finding, evaluating, and using information -- especially via information technology.)


Thursday, November 05, 2009

M. Improve infrastructure conservatively, cumulatively

This week, we're discussing a program's foundational technologies for teaching and learning: learning management systems, suites of productivity software, and basic, semi-specialized tools that are widely used, such as mathematics software and online databases. Monday's post described a traditional strategy: upgrade these utilities periodically, in order to keep up with, or pass, one's competitors by providing the best, fastest, richest tools and resources that the program can afford for its faculty, students, and staff.

Today I would add that there's a creative tension that needs to be managed very carefully between:
  • The need to help your faculty and students make progress, especially improvements in practice and outcomes selected as long-term priorities by their departments and by the institution as a whole.
  • The need to avoid disrupting delicate patterns of incremental, cumulative improvement of teaching/learning skills and materials.
One way to reduce the chances (or frequency) of disruption is to consider how to minimize that risk right from day #1.  


For example, over the years, many mathematics tutorials and modeling packages have come and gone. Sometimes they disappear and no new technologies provide an escape route.  Yesterday's faculty are left to mourn now-obsolete curricular materials that had been built to take advantage of the vanished software.  Software designed for instructional purposes, especially when using proprietary materials, poses a known risk of disruption when the package becomes obsolete. 

In contrast, since about 1980, students in a variety of disciplines have been using spreadsheets to do calculations, modeling, gaming, and simulations. Any faculty member who began developing curricular materials based on VisiCalc (a spreadsheet of that period) could have made incremental curricular progress from that day to this, even though, over the decades, their tools might have shifted from VisiCalc (which no longer exists) to Lotus 1-2-3 (which no longer exists), then to Excel, and then to OpenOffice or Google Docs. The transitions could happen with minimal disruption because new spreadsheets are usually able to read the files of old spreadsheet programs and because the interfaces are usually similar. So materials and skills could develop incrementally and cumulatively, even while one technology was being swapped for another. And it's a reasonably good bet that an instructional practice based on spreadsheets will continue to have good chances for incremental, cumulative improvement as unpredictable new, more powerful spreadsheets make their appearance in future years.
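The continuity described above rests on a simple property: spreadsheet work reduces to rows and columns that almost any successor tool can ingest. Here is a minimal sketch of that portability in Python (the file name, column names, and growth rate are hypothetical, invented only for illustration), reading a table exported from any spreadsheet era and recomputing a simple projection:

    import csv

    # Read a table exported from any spreadsheet (VisiCalc, Lotus 1-2-3,
    # Excel, and Google Docs can all produce delimited text of some form).
    # "tuition_model.csv" is a hypothetical file with columns: year, cost.
    with open("tuition_model.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Recompute the kind of projection the original worksheet might have
    # held: each year's cost grown by an assumed 5% annual rate.
    GROWTH = 1.05
    for row in rows:
        projected = float(row["cost"]) * GROWTH
        print(f"{row['year']}: {float(row['cost']):.2f} -> {projected:.2f}")

Because the data outlives any one program, the curricular materials built around it can, too.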


Consider a current challenge to smooth progress: electronic portfolios. Some uses of ePortfolios imply that the institution will, over a period of years, store and use student works, reflections, faculty assessments, and other feedback about the student's work. These records are all linked together. So, for example, the institution will need to know for years to come which project each comment was about, which projects were for the same assignment in the same course, and, of course, which projects were done by the same student.



Suppose, for example, a department decides today to buy ePortfolio technology #1 in order to organize, use and store all that interlinked information. That institution must be able to use those records, and add to them, even when ePortfolio technology #1 is replaced by technology #2.  Therefore the export of those files and their linkages from technology #1 to technology #2 must be easy, quick, and inexpensive.


What I'm suggesting, in other words, is that the institution think cautiously about using ePortfolios for this purpose if there isn't high confidence that it can export all those records and relationships from technology #1 and then import them easily into a hypothetical technology #2. The sustainability of the innovation and the continuity of the technology are related issues. (Notice that this is only sometimes a problem. For example, if ePortfolios are used in courses and are finished when the course ends, one probably only needs to be able to export the web of project plus comments, without a need to import the record into any other ePortfolio system. It's the academic record uses of ePortfolios that pose a more daunting requirement for technological continuity across vendors.)
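To make that portability requirement concrete, here is a minimal sketch of what a vendor-neutral export might look like (the field names and structure are hypothetical, invented for this illustration rather than drawn from any real ePortfolio product or standard). The essential property is that every record carries a stable ID and every link is a reference to another record's ID, so a hypothetical technology #2 could rebuild the same web of records:

    import json

    # Hypothetical, vendor-neutral export of interlinked ePortfolio records.
    # Links are expressed as references to stable IDs, preserving which
    # comment was about which project, which projects answered the same
    # assignment, and which projects were done by the same student.
    export = {
        "students":    [{"id": "s1", "name": "A. Learner"}],
        "assignments": [{"id": "a1", "course": "BIO101", "title": "Lab report"}],
        "projects":    [{"id": "p1", "student": "s1", "assignment": "a1",
                         "artifact": "lab-report-v3.pdf"}],
        "comments":    [{"id": "c1", "project": "p1", "author": "faculty",
                         "text": "Stronger analysis than the first draft."}],
    }

    # Writing the records out in a documented plain-text format is what
    # makes the eventual move from technology #1 to technology #2 easy,
    # quick, and inexpensive.
    with open("eportfolio_export.json", "w") as f:
        json.dump(export, f, indent=2)

If a vendor can't demonstrate an export at least this complete, and an import to match, the long-term record-keeping uses described above are at risk.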

Is the technology that you might buy today likely to give way gracefully to successor technologies, without disrupting the innovative educational practices that are the justification for today's purchase?


PS That risk of disruption is the bad news. Here's the good news. Every year we have more technologies that are useful, reliable, inexpensive, and sustainable (and thus almost beneath notice).  So our options for technological foundations for sustainable educational improvements are multiplying every year.


What have you seen? Do these observations mirror what your program already does? Contradict current practice? 


Sunday, November 01, 2009

13. Make great leaps forward in technology infrastructure (??)

I once would have agreed that the big job for any chief technology officer is to assess the adequacy of major systems: the foundation on which the faculty teach and the students learn. If it's possible to do the job better now, replace the old systems. If there are important things that people can't yet do, provide new technology to help them do those things. Obviously, if you're going to make such changes, you should be ambitious, right?  Your faculty and students deserve the best, most modern, and exciting tools that your institution can reasonably afford. And, if you make a splash in the process or get big discounts from vendors introducing a new line, so much the better.

The important thing is to get things done, and get them done quickly. Consultation, especially too much consultation, will bog you down, and then nothing will happen. So check with a few tech-savvy friends among the faculty and staff, buy the new technology, and then announce what you've done. In the end, hopefully, they'll thank you for it.

That kind of strategy used to make sense to me.

However, one impact of this process, at too many institutions, has been festering resentment among many faculty and staff, who feel repeatedly betrayed. The skills and materials that relied on the old technology were largely rendered useless with each leap forward. Suddenly there was new, often unreliable, expensive, mysterious technology to learn. Once again the experts had to become novices. And, when faculty members ask the institution to support technologies Y or Z, they are told, "Sorry, but we already bought technology X. Use that instead."

Even worse, the previous leap forward in infrastructure -- the advance made a few years back -- was supposed to promote a certain kind of educational improvement.  However, some of those betrayed faculty now simply drop off the bandwagon rather than retool yet again.  Others are now distracted by the demands of the new technology and the need to rework those parts of their courses that had been supported by the old infrastructure - instructions, assignments, assessments. And, of course, the new leap forward may be justified by a new educational priority which implicitly displaces the old one.  

I've been writing this series of blog posts because of the important improvements in teaching and learning that technology makes possible. But the sad truth is that many such TLT improvements have been derailed by rapture of the technology. Thoughtless changes in infrastructure can disrupt the very improvements that they are supposed to support.

Is this an accurate description of how technology infrastructure has been built at your institution? Have you seen infrastructure changes that disrupt the educational improvements they were supposed to support? If your institution has been able to make steady educational progress while periodically updating its infrastructure, how was this done? Please share your experiences below. If this series of blog posts has been good food for thought, please tell your colleagues.

Tuesday, October 27, 2009

Evaluation Methods: Each user is unique. Assess each one first, then look for patterns

On Monday, I talked about my belief, as a novice evaluator and educator, that evaluation (and teaching) should be organized around programmatic goals: describe what every student should learn, study each student's progress toward those goals, and study the program activities that are most crucial (and perhaps most risky) for producing those outcomes.

After some years of experience, however, first at Evergreen and then as a program officer with the Fund for the Improvement of Postsecondary Education (FIPSE), I realized that this Uniform Impact perspective was valuable but limited.

In fact, I now think there are two legitimate, valuable ways to think about, and evaluate, any educational program or service:
  • Uniform Impact: Pay attention to the same learning goal(s) for each and every student (or, if you're evaluating a faculty support program, pay attention to whether all the faculty are making progress in a direction chosen by the program leaders).
  • Unique Uses: Pay attention to the most important positive and negative outcomes for each user of the program, no matter what those outcomes are.
You can see both perspectives in action in many courses. For example, if an instructor gives three papers an “A,” and remarks, “These three papers had almost nothing in common except that, in different ways, they were each excellent,” the instructor is using a Unique Uses perspective to do assessment.

Each of these two perspectives focuses on things that the other perspective would miss. A Unique Uses perspective is especially important in liberal and professional education: they both want to educate students to exercise judgment and make choices. If every student had the same experiences and outcomes, the experience would be training, not liberal or professional education.

Similarly, Unique Uses is important for transformative uses of technology in education, because many of those uses are intended to empower learners and their instructors. For example, when a faculty member assigns students to set their own topics and then use the library and the Internet to do their own research, some of the outcomes can only be assessed through a Unique Uses approach.

What are the basic steps for doing a Unique Uses evaluation?
  1. Pick a selection, probably a random selection, of users of the program (e.g., students).
  2. Use an outsider to ask them what the most important consequences have been from participating in the program, how they were achieved, and why the interviewee thinks their participation in the program helped cause those consequences (evidence).
  3. Use experts with experience in this type of program (Eliot Eisner has called these kinds of people 'connoisseurs' because they have educated judgment honed by long experience) to analyze the interviews. For each user, the connoisseur would summarize the value of the outcomes in the connoisseur's eyes, using one or more rating scales.
  4. The connoisseur would also comment on whether and how the program seems to have influenced the outcome for this individual, perhaps with suggestions for how the program could do better next time with this type of user.
  5. The connoisseur(s) then look for patterns in these evaluative narratives about individuals (a toy tabulation of this step follows the list). For example, the connoisseur(s) might notice that many of the participants encountered problems when, in one way or another, their work carried them beyond the expertise of their instructors, and that instructors seemed to have no easy strategy for coping with that.
  6. Finally, the connoisseur(s) write a report to the program with a summary judgment, recommendations for improvement, or both, illustrated with data from relevant cases.
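As a toy illustration of step 5 (all field names, ratings, and issue labels below are hypothetical, invented for this sketch), the connoisseurs' per-user summaries can be tabulated so that recurring problems surface quickly:

    from collections import Counter

    # Hypothetical per-user summaries, as a connoisseur might record them
    # after analyzing the interviews (steps 3 and 4).
    records = [
        {"user": "s1", "rating": 4, "issues": ["outpaced instructor expertise"]},
        {"user": "s2", "rating": 5, "issues": []},
        {"user": "s3", "rating": 2, "issues": ["outpaced instructor expertise",
                                               "unclear assignment"]},
    ]

    # Step 5: look for patterns across the individual evaluative narratives,
    # here by counting how often each kind of problem recurs.
    issue_counts = Counter(issue for r in records for issue in r["issues"])
    for issue, n in issue_counts.most_common():
        print(f"{issue}: reported for {n} of {len(records)} users")

The tabulation doesn't replace the connoisseurs' judgment; it simply helps them notice which individual stories share a theme worth reporting.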
To repeat, a comprehensive evaluation of almost any academic program or service ought to have both Uniform Impact and Unique Uses components, because each type of study will pick up findings that the other will miss. Some programs (e.g. a faculty development program that works in an ad hoc manner with each faculty member requesting help) are best served if the evaluation is mostly about Unique Uses. A training program (e.g., most versions of Spanish 101) is probably best evaluated using mainly Uniform Impact methods. But most programs and services need some of each method.

There are subtle, important differences between these two perspectives. For example,
  • Defining “excellence”: In a Uniform Impact perspective, program excellence consists of producing great value-added (as measured along program goals) regardless of the characteristics or motivations of the incoming students. In contrast, program excellence in Unique Uses terms is measured in part by generativity: Shakespeare's plays are timeless classics in part because there are so many great, even surprising ways to enact them, even after 400 years. The producer, director and actors are unique users of the text.
  • Defining the “technology”: From a Uniform Impact perspective, the technology will be the same for all users. From a Unique Uses perspective, one notices that different users make different choices of which technologies to use, how to use them, and how to use their products.
For more on our recommendations about how to design evaluations, especially studies of educational uses of technology, see the Flashlight Evaluation Handbook. The Flashlight Approach, a PDF in Section I, gives a summary of the key ideas.

Have any evaluations or assessments at your institution used Unique Uses methods? Should they in the future? Please click the comments button below and share your observations and reactions.

PS We're over 3,300 visits to http://bit.ly/ten_things_table. So far, however, most people seem to look at the summary and perhaps one essay. Come back, read more of these mini-essays, and share more of your own observations!

Monday, October 26, 2009

12. To evaluate ed tech, set learning goals & assess student progress toward them (OK but what does this approach miss?)

It's Monday so let's talk about another one of those things I no longer (quite) believe about evaluation of educational uses of technology. Definition: “Evaluation” for me is intentional, formal gathering of information about a program in order to make better decisions about that program.

In 1975, I was the institutional evaluator at The Evergreen State College in Olympia, Washington. I'd offer faculty help in answering their own questions about their own academic programs (a “program” is Evergreen's version of a course). Sometimes faculty would ask for help in framing a good evaluative question about their programs. I'd respond, “First, describe the skills, knowledge or other attributes that you want your students to gain from their experience in your program.”

“Define one or more Learning Objectives for your students” remains step 1 for most evaluations today, including (but not limited to) evaluating the good news and bad news about technology use in academic programs. In sections A-E of this series, I've described five families of outcomes (goals) of technology use, and suggested briefly how to assess each one.

However, outcomes assessment by itself provides little guidance for how to improve outcomes. So the next step is to identify the teaching/learning activities that should produce those desired outcomes. Then the evaluator gathers evidence about whether those activities have really happened, and, if not, why not. Evidence about activities can be extremely helpful in a) explaining outcomes, b) improving outcomes, and c) investigating the strengths, weaknesses and value of technology (or any sort of resource or facility) for supporting those activities.

Let's illustrate this with an example.

Suppose, for example, that your institution has been experimenting with the use of online chats and emails to help students learn conversational Spanish. As the evaluator, you'd need to have a procedure for assessing each student's competence in understanding and speaking Spanish. Then you'd use that method to assess all students at the end of the program and perhaps also earlier (so you could see what they need at the beginning, how they're doing in the middle, and what they've each gained by the end).

You would also study how the students are using those online communications channels, what the strengths and weaknesses of each channel are for writing in Spanish, whether there is a relationship between each student's use of those channels and their progress in speaking Spanish, and so on.
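As a toy sketch of that relationship study (every number and field name below is hypothetical, and a real evaluation would ask why as well as whether), pre/post speaking assessments can be paired with each student's channel use and checked for association:

    # Hypothetical records: pre/post speaking scores plus hours spent in
    # the online chat channel.
    students = [
        {"pre": 40, "post": 70, "chat_hours": 12},
        {"pre": 55, "post": 68, "chat_hours": 4},
        {"pre": 35, "post": 75, "chat_hours": 15},
        {"pre": 50, "post": 60, "chat_hours": 2},
    ]

    gains = [s["post"] - s["pre"] for s in students]
    hours = [s["chat_hours"] for s in students]

    # Pearson correlation between channel use and gain, computed by hand
    # so the sketch needs no external libraries.
    n = len(students)
    mg, mh = sum(gains) / n, sum(hours) / n
    cov = sum((g - mg) * (h - mh) for g, h in zip(gains, hours)) / n
    sd_g = (sum((g - mg) ** 2 for g in gains) / n) ** 0.5
    sd_h = (sum((h - mh) ** 2 for h in hours) / n) ** 0.5
    print(f"correlation(chat hours, gain) = {cov / (sd_g * sd_h):.2f}")

A correlation alone can't establish that the channels caused the progress, which is why the interviews and activity studies matter.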

Your findings from these studies will signal whether online communications are helping students learn to speak Spanish, and how to make the program work better in the future.

Notice that what I've said so far about designing evaluation is entirely defined by program goals: the definition of goals sets the assessment agenda and also tells which activities are most important to study. I've labeled this the Uniform Impact perspective, because it assumes that the program's goals are what matter, and that those goals are the same for all students.

Does the Uniform Impact perspective describe the way assessment and evaluation are done? Do any assessments and evaluations that you know go beyond the suggestions above? (Please add your observations below by using the “Comments” button.)

PS. “Ten Things” is gaining readers! The alias for the table of contents – http://bit.ly/ten_things_table – has been clicked over 3,200 times already. Thanks! If you agree these are important questions for faculty and administrators to consider, please add your own observations to any of these posts, old or new, and spread the word about this series.

Wednesday, October 21, 2009

K. Evaluation should be mainly formative and should begin immediately.

Earlier, I described some old beliefs about program evaluation. I used to assume that evaluation of TLT had to be summative ("What did this program accomplish? Does that evidence indicate this program should be expanded and replicated? continued? or canceled?"). The most important data would measure program results (outcomes). You've got to wait years to achieve results (e.g., graduating students), and the first set of results may be distorted by what was going on as the program started up. Consequently, I assumed, evaluation should be scheduled as late as possible in the program.

Many people still believe these propositions and others I mentioned earlier this week. I still get requests: "The deadline for this grant program is in two days. Here's our draft proposal. Out of our budget of $500,000, we've saved $5,000 for evaluation. If you're able to help us, please send us a copy of your standard evaluation plan."

Yug!

Stakeholders need to see what's going on so they can make better, less risky decisions about what to do next. Getting that kind of useful information is called “formative evaluation.” (By the way, a stakeholder is someone who affects or who is affected by a program: its faculty, the staff who provide it with services, its students, and its benefactors, for example.)

In the realm of teaching and learning with technology (TLT), formative evaluation is even more important than in other realms of education. The program is likely to be novel and rapidly changing, as technologies and circumstances change. So the stakeholders are on unfamiliar ground. Their years of experience may not provide reliable guides for what to do next. Formative evaluation can reduce their risks, and help them notice and seize emerging opportunities.

Formative evaluation also can attract stakeholders into helping the evaluator gather evidence. In contrast, summative evaluation is often seen as a threat by stakeholders. “The summative evaluation will tell us we're doing well (and we already know that). Or perhaps the evaluator will misunderstand what we're doing, creating the risk that our program will be cut or canceled before we have a chance to show what this idea can really do. And no one reads those summative reports anyway unless they're looking for an axe to use on a program. So, no, I don't want to spend time on this and, if I'm forced to cooperate, I have no reason to be honest.” In contrast, formative evaluations should be empowering - a good evaluation usually gives the various stakeholders information they need in order to get more value from their involvement with the program.

What many folks don't realize is that formative evaluation requires different kinds of data than summative evaluation does.

Summative evaluation usually must focus on data about results -- outcomes. But outcomes data by itself has little formative value. If you doubt that, consider that a faculty member has just discovered that the class average on the midterm exam was 57.432. Very precise outcomes data. But not very helpful guidance for figuring out how to teach better next week.

In contrast, a formative evaluation of an educational use of technology will often seek to discover a) what users are actually doing with the technology, and b) why they acted that way (which may have nothing to do with the technology itself). (For more guidance on designing such evaluations, see "The Flashlight Approach" and other chapters of the Flashlight Evaluation Handbook.)

Corollary #1: The right time to start evaluating is always "now!" Because the focus is likely to be on activities, not technology, the evaluation of the activity can begin before new technologies or techniques go into use. Baseline data can be collected. And, even more importantly, the team can learn about factors that affect the activity (e.g., 'library research') long before new technologies (e.g., new search tools) are acquired. This kind of evaluation can yield insights to assure that the new resources are used to best effect starting on day 1 of their availability.

Corollary #2: When creating an action plan or grant proposal, get an evaluator onto your planning team quickly. An experienced, skillful evaluator should be able to help you develop a more effective, safer action plan.

Corollary #3: When developing your budget, note that the money and effort needed for evaluation (what the military might call 'intelligence gathering') may be substantial, especially if your program is breaking new ground.


What are the most helpful, influential evaluations you've seen in the TLT realm? Did they look like this? What kind of information did they gather? Next week, I'll discuss how our current infatuation with learning objectives has overshadowed some very important kinds of evidence, and potentially discouraged us from grabbing some of the most important benefits of technology use in education, benefits that can't be measured by mass progress on outcomes.

Monday, October 19, 2009

11. Evaluating TLT: Suggestions to date, and some old beliefs

For the next couple weeks, I'll be writing about evaluation of eLearning, information literacy programs, high tech classrooms, and other educational uses of technology.

Actually, I've been commenting on evaluation in many of the prior posts, so let's begin with a restatement of suggestions I've made over the last 2 months in this blog series:
  1. Focus on what people are actually doing with help from technology (their activities, especially their repeated activities).
  2. Therefore, when a goal for the technology investment is to attract attention and resources for the academic program, gather data about whether program activities are establishing a sustainable lead over competitors, a lead that attracts attention and resources.
  3. When the goal for technology use is improved learning, focus on whether faculty teaching activities and student learning activities are changing, and whether technology is providing valuable leverage for those changes. (Also assess whether there have been qualitative as well as quantitative improvements in outcomes.)
  4. When the goal is improved access (who can enter and complete your program), measure not only numbers and types of people entering and completing but also study how the ways faculty, staff and students are using technology make the program more (or less) accessible and attractive (literally).
  5. When the goal is cost savings, create models of how people use their time as well as money. And focus on reducing uses of time that are burdensome, while maintaining or improving uses of time that are fulfilling.
  6. When the goal is time-saving, also notice how the saving of time may transform activities, as in the discussion of Reed College in the 1980s, where saving time in rewriting essays led to subtle, cumulative changes in the curriculum and, most likely, in the outcomes of a Reed education.
  7. Gains (and losses) in all the preceding dimensions can be related. So your evaluation plan should usually attend to many, or all, of these dimensions, even if the rationale for the original technology use focused on only one. For example, evaluations of eLearning programs should examine changes in learning, not just access. Evaluations of classroom technology should attend to accessibility, not just learning.
Years ago, I might have looked at a list like this, and also agreed that:
  1. Evaluation should assess outcomes. (how well did we do in the end?)
  2. Evaluation should therefore be done as late as possible in the life of the initiative or project, in order to give those results a chance to become visible (and to resolve startup problems that might have initially obscured what the technology investment could really achieve).
  3. Corollary: When writing a grant proposal, it's helpful to wait until you've virtually completed the project plan and budget before calling in someone like me to write an evaluation plan for you. Just ask the evaluator to contribute the usual boilerplate by tomorrow; after all, evaluation plans are pretty much alike, right?
  4. Corollary #2: If the project succeeds, it will be obvious. If it fails, evaluation can be a threat. So, when developing a budget for your project or program, first allocate every available dollar for the real work. Then budget any dollars that remain for the evaluation.
Do those last four points sound familiar? Have any of those four ideas produced evaluation findings that were worth the time and money? (Tell us about it.) When you plan a project, what purposes do you have for the evaluation? In a couple days, I'll suggest some alternative ideas for evaluation.

PS. This Friday, October 23, at 2 PM ET, please join Steve Gilbert and me online for a live discussion of some of these "Ten Things". Please register in advance by going to this web page, scrolling down to October 23, and following the instructions. And, to help us plan the event, tell us which ideas you'd especially like us to discuss.

Friday, October 16, 2009

Fundamental Question for Massive, Sudden Transition to Online Teaching/Learning


Suppose you are a faculty member teaching an on-campus course that has already begun and suddenly find that you cannot meet with your students for the next 3 weeks. What can you do online that would be better than this "generic assignment"?

Generic Assignment
For the next 3 weeks, read, watch, reflect/do, and write a paper - based on the syllabus you have already received for this course.
  • Read: Specific selections of text - in books, other printed format, or available on the Web;
  • Watch: Videos or other media available on the Web or television;
  • Reflect/Do: Answer these questions or do these problems; 
  • Write: A paper or report on a topic covered by the readings & media. 
What 3 things could most faculty members learn easily and quickly so that they and their students can:

1. Use online options that most experience as worthwhile improvements on the generic assignment described above?
2. Worry less about being embarrassed by their first efforts to teach/learn online under these conditions?

NOTE: These "3 things" would be helpful for almost any efforts to enable and encourage more faculty members to try teaching online for the first time. Even just trying one or two online additions to a course they already teach on campus.

Thursday, October 15, 2009

J. Support: Teach Faculty to Solve Problems

“It just struck me the other day...Life is adversity. That is the meaning of life. We crave adversity. We need to get into trouble and stay in trouble...Teachers who retire go back to teaching because they need to be in trouble again."  - Garrison Keillor, “News from Lake Wobegon,” Feb 25, 2008


On Monday, I summarized my former belief that TLT units should teach faculty two things about emerging TLT ideas and material:
  1. Teach them enough about a new technique or technology so that they can decide whether to learn how to use it ("why") and, for those interested,
  2. 'How' to use it.
And my post assumed that it would be specialized, paid staff who would teach both kinds of lessons.

Those two kinds of support, why and how, aren't enough, however. Nor does any university have enough staff to provide the teaching and help needed for continual improvements in teaching and learning (with technology) across the curriculum. Let's start with the missing links in the content of support; then we'll conclude with a fresh look at who should provide that support.

WHY, HOW, AND (?)

To improve educational results, it's usually necessary to help faculty and students make qualitative changes in what they have been doing. Putting 'old wine into new bottles' isn't enough. Unfortunately, when faculty use technology to alter course activities, they can easily be ambushed.

Consider difficulties such as these:
  • A faculty member begins teaching online. Some students begin to fall behind.
  • An instructor adds some challenging new online assignments as homework, with the intent of building on that experience in class; but more students than usual arrive admitting that they haven't done their homework.
  • Students begin discussing issues in a chat room. The conversation splinters. Two students get into a violent argument.
  • In response to an assignment, students create web sites. Some projects are strong on content but badly organized. Others are well organized and easy to navigate, but the substance is shallow. How should the projects be graded?
  • Each student or small group is working on a different topic of their own choosing. Many choose to work on problems that are each outside the faculty member's comfort zone. The instructor doesn't have time to do the reading needed to become sufficiently expert in all of these areas.
  • The instructor's teaching takes an adventurous turn. However, some students object that 'this is not how this course is supposed to be taught,' and complain to the dean. Student ratings of the course take a dive, and the faculty member's tenure case is coming up soon.
Many faculty resist a new TLT approach because they sense that it could lead to unexpected problems, and most professors and instructors know they're being offered no preparation for coping with those TLT dilemmas.

Here are a couple suggestions to help faculty deal with such problems:
  1. Organize faculty seminars to discuss case studies that each describe one such problem. Cases might be just a paragraph or two, briefly describing a problem, or a bit more elaborate (video clip; artifacts such as transcripts of online discussions). For each case, participating faculty discuss their own experiences: how they interpreted their version of that situation, how they responded, and what happened next. Usually most participants are surprised at how many different ways there are to interpret such a situation, and how many options there are to respond.
  2. After a little practice with such disguised case studies, it's easier to do what Steve Gilbert calls a 'clinic.' One of the participating faculty members describes a problem that he or she has seen personally, perhaps something that's troubling them now. Then the other participants share their own experiences with similar problems, and their suggestions for how to respond now.
Technology's role in academic improvement is analogous to the role of yeast in baking a cake. The staff in TLT support units need to be cake specialists, not just yeast specialists: they need extensive personal experience in using various technologies, old and new, for teaching and learning. But I don't know of any institution that has remotely enough staff to serve their faculty. That's especially true for programs that want to engage most or all of their faculty in improving teaching and learning (with technology).

FACULTY MUST SUPPORT ONE ANOTHER

If a program or institution is to improve teaching and learning on the large scale that technology enthusiasts hope to see, much of the help needs to come from the faculty themselves. That's true even if the TLT improvements are usually low-risk, low-cost increments. The professional TLT staff's role should be to support, organize and sustain those faculty-to-faculty efforts. The only way for such mass engagement to happen is if faculty want it, and if their departments and the institution recognize and reward faculty who help their colleagues. [The focus of this post is how faculty can help one another. But that faculty effort can also be complemented with support from trained student technology assistants.]

Not all faculty need do the same things to help their colleagues. The scholarship of teaching and learning provides one set of possibilities. The teaching case study seminars above are another; the cases should be created and published by faculty (the clinic discussions should help identify candidates) and the seminars should be led by faculty. Similarly, 'scrounging' for TLT ideas and materials needs to be done mainly by faculty. And to develop the constellation of support workshops described earlier requires faculty participation as well.

I'm curious. Does your institution's support service for faculty go beyond the 'why' and the 'how?' Does your program encourage faculty to help each other? Can such faculty engagement be scaled up enough so that, over the years, a large proportion of the faculty can comfortably, cumulatively improve their courses?

Monday, October 12, 2009

Improving Teaching and Learning with Technology: Conflicting(?) Schools of Thought

Interesting post by Phil Long (University of Queensland, TLT Group Senior Consultant, and formerly Senior Strategist at MIT) about how to think about improving teaching and learning (with technology).

I think Phil, Steve Gilbert, and I each have slightly different views about how to proactively improve teaching and learning with technology (TLT) in an academic program. Dramatizing our disagreement will, I hope, be an aid to deepening and widening the conversation. Here's my summary of what each of the three of us currently think:
  1. Wait until external conditions are really demanding (a near crisis, perhaps). Then marshal your forces and push for a big change that responds to that crisis. A big change might be, for example, a combination of a curricular redesign, a fresh approach to teaching and learning, and the facilities to support both of them.
    If there is no external pressure, try rallying staff effort and resources around an inspiring vision of the future. Use that enthusiasm to create change that will last. Change will come faster when change agents can take advantage of a crisis, however. An evolutionary metaphor is suggested by Phil and by a comment by Trent Batson about Phil's post. I think that's misleading, however: evolution is a 'mindless' metaphor to apply to programs, while Phil and I each tend to think in terms of faculty and staff who are trying to change the larger institution or program of which they are a part. (Phil Long, as translated by SteveE)
  2. In contrast, Steve Gilbert has been working to promote evolution in small steps, an inductive approach to improvement that emerges from relatively independent actions taken by each faculty member. SteveG suggests that staff help each faculty member find or invent small steps that make sense to that individual faculty member. Then help them use feedback to guide what they're doing. Finally, help them each to share their ideas and materials with a few more colleagues who can quickly adapt them with little or no risk or expense.
    SteveG rarely talks about helping faculty to change in any particular direction. I think he's wary of the lure of Big Changes. Remember what Newton said: every action causes an equal and opposite reaction. Big pushes create big pushback. The small-steps approach is sneakier, producing change that is too invisible, and too grounded in faculty freedom, for anyone to oppose. (Steve Gilbert, as translated by SteveE)
  3. Here's my perspective: identify small steps being made by faculty (here SteveG and I agree). Then try to spot a subset of those changes that could be the beginning of something big and important for the program's students, faculty and other stakeholders. Then start consciously supporting progress in that direction through small steps and, where warranted, big steps. When identifying directions for improvement, pay special attention to outside pressures and rewards: e.g., falling enrollments and the potential to increase enrollment; trends in thinking in the discipline. (SteveE)
Do you buy any of these strategies? Have a fourth to suggest? or perhaps you think the whole idea of a proactive strategy to improve teaching and learning is futile?

10. TLT support: Why and How

You can't understand teaching if you ignore learning. And you can't understand either unless you pay attention to the facilities, resources, and tools used to accomplish them: classrooms and computers, libraries and the web, and other such 'technologies.' At one time staff could ignore classrooms, textbooks, and other traditional technologies because the choices were few, and universally familiar. That's no longer true. Especially in the last decade, the options have multiplied. Because these technology options are not equally good, equally easy, or equally inexpensive, the choice of technologies requires conscious attention, just as teaching and learning themselves do.

That close relationship of teaching, learning, and their technologies is one reason why it's important for institutions to have units that function as TLT Centers, real or virtual. A virtual TLT Center is a constellation of two or more units such as faculty development, technology support, the library, the facilities program that supports classrooms, distance learning, and departmental TLT experts -- units that work so closely together that they act like a single service provider. For example, their staffs continually learn about each other's resources and from one another's experiences; that way each staff member can draw on all the capabilities of the virtual center.
These things I do believe.

But some of my beliefs have changed. I once believed that, when helping faculty, TLT staff needed to focus on (just) two things:
  1. WHY: Teach enough about a new technology and its teaching/learning uses so that instructors would want to learn more, and, for those who are persuaded,
  2. HOW to teach in those ways.
Is that a good summary of the kinds of help that TLT staff provide faculty at your institution? Or is there something additional that faculty are taught about emerging TLT topics? Please post your observation by clicking 'comments' below.

My second old belief was that support for faculty should be provided directly and entirely by experts in TLT support. At your institution are there people in addition to TLT staff who provide such support?

My third old belief was that this training should be entirely interdisciplinary: faculty are specialized by discipline, but TLT staff are not. So this faculty support service should be 'one size fits all departments.' Is that true at your institution?

PS Anyone who knows the work of "the Steves" knows how many of the thoughts in this series come wholly or partly from Steve Gilbert. Our thinking has been so intertwined over so many years that it's not even possible to point out which of the observations in this series originated from him and which from me.

PPS You probably know that this post is part of a series called 'Ten Things I (no longer) Believe about Transforming Teaching and Learning with Technology.' If you like these posts, please spread the word. Perhaps you can use these ideas to help with a more intentional approach to TLT planning.

And join us online for a free, live discussion of these issues on Friday, October 23, at 2 PM ET. It's part of our FridayLive series. If you don't already have a FastPass, click here to register. Thanks!

Wednesday, October 07, 2009

I. Programs make faster, better educational progress when they're world-class scroungers

Earlier this week I described my mistaken belief that you should pay the most attention to the newest ideas, especially if you can create your own idea or phrase, or at least your own wrinkle, and then claim the credit for being first.

The folly of that belief was pounded home for me in 1996. That was the year that Arthur Chickering suggested that we write an article on how to use technology to implement the 'seven principles of good practice in undergraduate education.' He and Zelda Gamson had summarized these seven lessons from educational research a decade earlier.

I replied that such an article was unnecessary. "Everyone knows how to do this already," I told him. "According to your seven principles, when students cooperate, educational outcomes usually improve. Anyone can see that using email can provide new avenues for students to cooperate. And the kinds of complex, real-world projects made possible by computing often compel students to work in teams. Who needs an article to tell them that? It's old news." Chickering persisted. So we wrote the article, got it published in a little newsletter, and soon put it on the web. Very quickly, 100 people per month were visiting our article. Then 200, and 400. A decade later, over 5,000 people per month were taking a look at it. Not bad for an article about ideas so obvious that I'd thought an article totally unnecessary.

Crucial question: how do you spread ideas and skills from the 5% of faculty for whom they're old news to the many others who would respond, "That's wonderful!" if they ever heard about the idea or tried the skill? These blog posts are about using technologies in ways that can improve what's learned, who learns, and how they learn. To achieve that kind of change, engaging large numbers of mainstream faculty can be important. No one of them may need to change what they're doing very much, but each would probably need to change a little. Suppose it's a change they'd like if they ever heard about it; how can we help them notice the possibility in time?

Steve Gilbert, Flora McMartin and I did a major research study for MIT and Microsoft several years ago. Microsoft had made a multi-year, $25 million grant to MIT, and chunks of that money were being awarded to MIT faculty to do pioneering projects involving educational uses of technology. Our research: discover factors that influenced whether the best of these innovations were ever used by faculty other than their original developers.

One story from this MIT/Microsoft study suggests an important lesson for any program that wants to accelerate the pace of improving teaching and learning with technology.

Pete Donaldson is a Shakespeare scholar at MIT. For years before the Microsoft grant became available to MIT faculty, Pete had been experimenting with ways for his students to use film clips (without violating copyright) in their papers and online discussions. He'd had some success, enough to give workshops on the topic and to be a keynoter at the Shakespeare Association of America, where he gave a spectacular demonstration. His use of video clips, however, relied on an assembly of expensive equipment. Then he received a grant from the MIT/Microsoft iCampus program. The support enabled programmers to work with him and to figure out a much less expensive strategy. The resulting software was called the Cross Media Annotation System (XMAS). Pete used SHAKSPER, a popular listserv in the Shakespeare community, and a mailing list of people who had attended his prior workshops, to ask whether anyone would like to use this free service to incorporate film clips into their Shakespeare courses. Quite a few did, especially because they knew and trusted Pete. One comment we heard from several adopters: Pete wasn't threatening because he wasn't a techie himself. He was like them. So if he could use XMAS, so could they.

The story is not all success. XMAS ought to be a great tool for film courses taught by film scholars, even more than for Shakespeare courses taught by English professors.

But Pete Donaldson is not a member of that community of film scholars, doesn't go to their conferences, doesn't know their listservs, and doesn't write in their journals. Nor do the other English faculty he has helped.

At some point, XMAS and Donaldson's techniques for using it may be adapted by a film scholar who, like Pete, uses the idea for teaching and for research and who, like Pete, has a yen to help his or her colleagues. And then the use of XMAS may begin spreading like a virus in that community.

Let's pull these threads together.

In the real world, instructors rarely have much time to uncover new ideas. Nor can they take many risks (e.g., fear of embarrassment, wasted time when they're already over-committed, risk to a tenure case). That's one reason why new ideas about teaching and learning tend to spread so slowly. However, it can help to hear about such ideas from peers with a reputation for this kind of improvement (especially from peers who teach similar courses to similar students, even at other institutions).

Therefore, I suggest that any institution that wants to make unusual progress in TLT ought to help create and sustain faculty learning communities whose members often (a) teach similar courses, and (b) come from different institutions. If those similar courses have similar students, and the faculty have similar styles, so much the better. That way, if one faculty member has an idea, or uses a technology, or has a puzzling experience, it should be relatively easy for others to emulate. And, by including faculty from other institutions, you and your colleagues will hear about new low threshold steps much more quickly.

You can't search everywhere for everything. That's another reason why it's so important to set one or two focused priorities. Those priorities should help faculty and staff focus their searches for ideas. Become a world-class scrounger and borrower of appropriate teaching ideas and materials from around the world! Ironically, that's also a great way for faculty members and their program to get a reputation as world-class innovators.

PS. If you don't have much money, search for great ideas in countries where money has been scarce for some time.

Monday, October 05, 2009

Group Nanovation = Open House?

From Steve Gilbert
Extending impact beyond the event.
An "Open House" can extend faculty sharing beyond the location and schedule of the event itself.  What could be done DURING and AFTER the event to enable and encourage MORE faculty members to take advantage of the options offered in that event to improve teaching and learning with technology?  To try some of those improvements more than once?  To collect feedback about their own attempts?  To help some colleagues do the same?
Dave MacInnes (Guilford College) described several key factors during our online discussion of successful Nanovation last Friday (10/2/2009). For a few other lessons we learned (obstacles, ideas, strategies), plus the digital archive, text chat transcript, etc., from that session, click here. And watch for future Frugal Innovation sessions. WHAT OTHER WAYS COULD HELP EXTEND THE REACH OF AN "OPEN HOUSE"?

Dave's recommendations for a Fluidly Structured Event with Carefully Selected Faculty Presenters.
  • Schedule:  1-2 hours?  No absolute starting or ending time for participants - can enter or leave whenever they wish, stay as long or briefly as they wish - low risk of being "trapped" and wasting time!  Offer enough variety to engage most participants for 20-40 minutes if they wish to stay that long.
  • "Presenters":  Feature faculty members who are already recognized for strong personal interests in relevant topics, issues, etc..  Identify and invite 6-8 faculty members to be presenter/mentors DURING the event.  Faculty members who are likely to be respected by colleagues and who are likely to be willing and able to respond to colleagues' subsequent requests for help with similar tasks AFTER the event.  
  • Mixture of major and minor presentations:  Include some presenters/presentations about
    A.  "big" topics - activities or skills that take some substantial effort, time and obviously result in substantial changes in teaching/learning;  and
    B.  "small" topics - LTAs, potential Nanovations - that can be introduced or "gotten" in a few minutes
  • Like Poster Sessions:  Encourage some presenters to prepare as if they were offering a poster session at an academic conference. Prepare some visual display and/or handouts to enable passers-by to make a quick decision and quickly get enough information to permit follow-up activities; prepare to introduce the main ideas and resources in a few minutes (<5).  [This item added after the online session by Steve Gilbert]
  • Advertise:  Use email to advertise to the whole faculty - emphasize flexible timing
  • Location:  Multiple rooms - further reduces fear of getting stuck in a session;  emphasizes idea of multiple options available to meet varied needs, interests
  • Repetition:  At least once per year. Build expectations and a reputation for providing useful info without wasting the time of presenters or participants.

9. We are unique. Avoid 'not invented here.' (NOT)

Monday posts in this series describe things I no longer believe, things that relate to making major improvements in teaching and learning by taking advantage of technology. Here's a big one.

"Our program is unique. And, so far as we know, no one else is yet doing what we propose to do." So far as I know, I am the inventor of that phrase. I coined it in 1977, while writing a grant proposal. Our proposal emphasized our college's uniqueness in higher education, and the uniqueness of our proposed project.

I was especially proud of that phrase, 'so far as we know': it was a truthful way of halfway admitting that my 'literature search' had not been very thorough. Today, I still remember worrying that, if I were to search more energetically, I might discover that someone else had already used the educational idea that we were proposing. And if someone else were already doing it, we'd have to abandon my grant proposal, right? What funder would be interested in supporting the second institution to try a not-absolutely-new idea?

That was part of a cluster of related beliefs that I held:
  1. My institution is unique (or at least highly unusual).
  2. The newest idea is the most important idea. Even if it's not truly new, pretend it is. In higher education, we get energy from changing what we do from time to time, even if we change from A to B and, after memories have faded and new people have joined the staff, back to A again.
  3. To get a grant, it's important to be first (at least the first of any program like yours, of which there are almost none). Here's more on the goal of being first with a new technology, another belief that I now think is deceptive.
  4. Don't do anything that was not invented here; we're unique so it won't work here (or, by admitting it came from elsewhere, we lose the chance to say we invented this version ourselves).
  5. A technology correlate: when a new technology or teaching technique appears, investigate it by spending time and money to pilot test it locally. What you can learn from a single local pilot test is far more valuable and relevant than what you could learn by spending time and money to discover what 50 people learned by testing it at other institutions.
Do these propositions make sense to you? Why or why not? (Click "COMMENT" below to leave a post.) Later this week I'll discuss what I'd suggest now, instead, about 'not invented here' and its implications for a counter-intuitive approach to course improvement. That post will also build on last week's post recommending the Treblig Cycle.

Saturday, October 03, 2009

H. Faculty support for programmatic improvement: The Treblig Cycle

This "Ten Things" series of posts is discussing some counter-intuitive ideas about how technology can enable major, long term improvement in academic programs: improvements in what is learned, who learns, and how they learn.

Such deep programmatic improvements are more likely to develop when most faculty feel that a change is important enough to warrant patient, persistent effort over a period of years. These days, when money is tight and competition is ferocious for many academic programs, an unusual number of faculty may feel this way.

What kind of faculty support could help such sweeping programmatic improvements develop?

The most problematic requirement for such faculty support is scale: the need to involve most faculty in this academic program. If the program's leaders hope to improve what's learned, who learns, and how they learn, they need to help most faculty develop some new skills, tools, and materials.

My colleague, Steve Gilbert, has been developing the concept of 'frugal innovations': innovations that can work, and spread, when time and money are scarce. He recommends a cycle of individual improvement and peer-to-peer sharing of those experiences and materials. He has called this process 'nanovation.' Lewis Hyde would call it a 'circle of gifts.' I call it the Treblig Cycle (pronounced treb'lig). If you're curious why I suggest that name, read this article.

As I interpret what Steve has been saying, the Treblig Cycle consists of five steps:
  1. A faculty member learns about (or invents, or reinvents) an improvement for teaching and learning with technology. The materials or tools needed should be free or nearly free to this faculty member and his/her colleagues. To make this cycle work, the improvement should also be low risk, obviously rewarding, possibly time-saving, and easy to learn. Steve has called such tools and materials “Low Threshold Applications” and such improvement ideas “Low Threshold Activities.” We usually refer to both as “LTAs.” “Low threshold” is a relative term, not an absolute: something that is low threshold for some people in their institutional context may be expensive, high risk, or too hard to learn for faculty in a different institutional context. For the Treblig Cycle to work, however, the improvement must be low threshold for most people who learn about it. And, for the Treblig Cycle to help the strategic change of interest, this particular LTA should be an incremental step in that direction. If the faculty are trying to slowly and, eventually, dramatically improve the creative skills of their graduates, for example, then this LTA should help advance that effort just a little bit: a tiny step in the right direction. The fact that many faculty agree that this programmatic change is important is one of the things that attracts their attention to this LTA.
  2. The instructor tries the improvement, and finds it rewarding. (If it weren't rewarding for him or her, the process would stop here.)
  3. He or she tries the idea again, gathering feedback to guide the activity and/or to describe its outcomes;
  4. In the process of trying the idea, he or she may also tweak, personalize or otherwise improve it;
  5. He or she helps at least two colleagues inside or outside the institution to begin this same cycle; in other words, these colleagues are now at step 1 of the Treblig Cycle. If each of them in turn gets two or more colleagues to enter the cycle, the low threshold improvement will spread in an accelerating way (see the sketch after this list).
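That "accelerating way" is just geometric growth. Here's a minimal back-of-the-envelope sketch in Python (my illustration, not anything from Steve Gilbert's materials); the assumption that every adopter successfully helps exactly two colleagues per cycle, and all the names in the code, are hypothetical:

    # Illustrative sketch (an assumption, not Gilbert's model): if each faculty
    # member who completes the Treblig Cycle helps two colleagues begin it,
    # the number of new adopters roughly doubles each cycle.

    def treblig_spread(cycles: int, helped_per_adopter: int = 2) -> list[int]:
        """Cumulative adopters after each cycle, starting from one pioneer."""
        new_adopters = 1   # the first faculty member to try the LTA
        total = 1
        totals = []
        for _ in range(cycles):
            new_adopters *= helped_per_adopter  # each new adopter helps colleagues
            total += new_adopters
            totals.append(total)
        return totals

    print(treblig_spread(6))  # [3, 7, 15, 31, 63, 127]

Real spread is messier, of course: some adopters help no one, and each hand-off takes a semester or more. But the arithmetic shows why even two successful hand-offs per adopter could reach most of a program's faculty within a handful of cycles.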
The Treblig Cycle is more likely to work if the environment rewards (a) faculty who share information with colleagues inside and outside of the institution, and (b) improvements that aid the chosen programmatic goal.

Obviously, many improvements can't be spread by the Treblig Cycle. Some improvements don't meet a widely felt need, so they won't be rewarding enough to excite their users to share them. Other improvements aren't low threshold for most people.

So, if your academic program is developing a strategic academic/technology plan for the next 5-10 years, or considering which of several strategic options to choose, ask whether each proposed strategic change could be implemented with the help of the Treblig Cycle.

Relying on the Treblig Cycle does not eliminate the need for faculty support units. Quite the contrary: faculty support units can use the Treblig Cycle as a tool for supporting faculty. For example, a faculty support unit could search for relevant LTAs, create materials describing them, and find more ways to encourage faculty to share such ideas. (We'll return to some of these themes in coming weeks.)

To summarize: the crucial elements for applying the Treblig Cycle to transformative uses of technology are (a) agreeing on a direction for change that reflects widely felt needs among the faculty, (b) collecting Low Threshold Applications and Activities that many faculty would find rewarding, and (c) encouraging the sharing of such ideas and movement in those directions.

Your comments? Can you imagine an academic program or institution using the Treblig Cycle to support a 5-10 year effort to transform itself, e.g., internationalizing its curriculum? Developing a world-class reputation for the design skills of its graduates? Does the Treblig Cycle suggest a reasonable route to a slow revolution?

Thursday, October 01, 2009

Ever Nanovated?

From Steve Gilbert
Have you ever nanovated?    
  • Tried an improvement in teaching/learning with technology - more than once?
        [Alternative:  Tried an improvement once and never again?]  
  • Gotten some feedback about that improvement and changed it?
        [Alternative:  Didn't get any feedback or ignored feedback?]
  • Helped at least two colleagues make similar improvements - in ways that made it likely they would try it more than once, collect feedback, help at least two more colleagues... etc.?
        [Alternative:  Didn't help anyone?  Helped some colleagues but they didn't help others?]
Help!  Survey!  
Please respond to our brief online survey about nanovation. 
Your responses will help us prepare for online discussions of examples and factors that support or hinder nanovations.  

Join the first, probably most exploratory, of these online discussions tomorrow: 
Note:  Do you have other ways of describing or confirming the successful dissemination and use of an improvement in teaching and learning with technology? What ways have you used to identify such successes?