Monday, August 31, 2009

4. To help your bottom line, offer distance learning (10 things I no longer believe)

I used to believe that buying into technology for distance learning would help higher education budgets by expanding revenues while simultaneously cutting costs (staff costs per student and also facilities spending per student).

Revenue: use distance learning to bring in new students, each paying tuition and/or driving additional state funds.

Cutting costs: The more students per faculty, the lower the cost of faculty per student, so tuition revenue per student comes closer to covering costs. That's simple math. I didn't like the idea of giant distance learning classes. So I urged that distance learning programs focus on attracting more students to under-enrolled courses and degree programs, increasing class sizes from, say, 5-10 students up to 15-20, cutting staff costs/student in half.
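That "simple math" can be sketched in a few lines. The per-section cost below is a hypothetical figure, chosen only to illustrate how doubling enrollment halves the staff cost per student:

```python
# Hypothetical: a course section costs $6,000 in instructor time,
# whether 5 or 20 students enroll (an assumed fixed cost, for illustration).
SECTION_COST = 6000

def staff_cost_per_student(enrollment):
    """Fixed section cost spread across the enrolled students."""
    return SECTION_COST / enrollment

for n in (5, 10, 15, 20):
    print(f"{n:2d} students -> ${staff_cost_per_student(n):,.0f} per student")
```

Going from 10 students to 20 drops the figure from $600 to $300 per student, which is exactly the halving the argument relies on; tuition revenue per student stays the same while the denominator doubles.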

Many of us pointed out that distance learning could cut capital costs as well. The institution could serve more students without building and maintaining more classroom buildings, for example.

Similar arguments are made for potential economic gains from hybrid courses and degree programs: ones that reduce but do not eliminate use of classrooms.

Spend on technology in order to save money - it's obvious.

If skeptics doubted the argument, I could point to an example such as the Open University in the United Kingdom, a gigantic institution devoted purely to distance learning. The way it organizes courses and supports learners is entirely different from traditional institutions. The UK does a careful job of rating instructional quality, and the OU's programs were rated comparable to those of other bachelor's degree programs. But its cost per student was only about 60% that of campus-based institutions in the UK.

Buy technology and develop distance learning in order to make your budget healthier. Evaluative measure: total revenue; operating and capital costs per student. (That's what I used to believe.)

What have you seen? What did you believe, and what do you believe now?
1. Have you seen any examples of technology use saving money on the teaching/learning side of the house? distance learning? on campus? hybrid?
2. Are there any kinds of technology investment in teaching/learning that can predictably save money?

Please click the comment button below (Click the word "Comments" a bit to the right of "Posted by Steve_Ehrmann," below) and tell us what you've seen and what you think.

Later this week I'll summarize what I now believe about technology and how to control costs.

Wednesday, August 26, 2009

C. Distance (and campus) learning reconceived (10 things I believe)

My conception of ‘distance learning’ has changed over the last thirty years. My old vision, still held by many folks, was explained in the previous post. To explain why and how I’ve changed my mind, let me tell you about five programs I’ve seen over the years.

As you think about each story, consider that a large fraction of your “distance learning” students may well also take courses on your campus. They’re actually not ‘distant.’ But online learning offers them flexibility to have a part-time or full-time job to cover educational costs, to study when and where they want, to carry heavier loads, and to graduate sooner.

1. Bridging distance to include new students, and teachers: About twenty years ago, my friend Prof Nick Eastmond of Utah State taught an educational technology course to distant learners using audioconferencing. Nick invited me to be a guest lecturer for an upcoming session. “You want me to come to Logan?” I asked. “No, just call this phone number at the beginning of the class hour,” he responded, “and talk with my students about evaluation.” I was startled by what I found when I ‘arrived.’ If the students had all been in a room on campus, listening to my disembodied voice coming from a speaker, I probably would have felt distant. But the students, Nick, and I were just talking together on the phone. It was quite intimate. He invited a different expert from around the country to talk with his students by phone every week.

2. Reaching different kinds of students so all students can learn better:
In the early 1990s, Leslie Harris was teaching a composition course at a small college. Students would read articles and then write an essay, summarizing evidence and arguing their own points of view about an issue. His students were reading about family and cultures. But they knew relatively little about families or cultures outside their own small world. In addition, the students shared too many beliefs and preferences. In short, there was little room for a real exchange of facts and views.

So, with a friend teaching a similar course at a dissimilar college, Leslie scheduled a series of online chats. At designated times each week, students from the two colleges would enter the same chat room in order to discuss a topic drawn from shared course readings. The students would also collectively contribute to a listserv discussion between the two classes. These conversations could easily grow impassioned. I recall one student writing in the chat room, “I am punching you in the nose!” That provided plenty of energy and direction to the essays students wrote later.

Although difficult to implement, the idea worked well enough that Harris organized several such pairs of composition courses in the early 1990s. Differences among students can be an asset when those differences can lead to productive dialogue, helping students become more self-aware. Bridging distance can help faculty increase such differences by design.

3. Slowing down the conversation in order to improve learning: During that same period, Karen Smith tried a new technique in foreign language learning at the University of Arizona. About forty students in Spanish IV spent some course time using a computer discussion board instead of going to a language laboratory. They wrote to each other in Spanish about many topics, including some issues they chose for themselves (e.g., planning a class party; advising a fellow student who was depressed). Initially these students used the bulletin board by going to a computer lab at scheduled class times. Soon they realized that they could add entries whenever they had access to a computer and a modem. Their contributions to the online conversation were graded on their fluency, not their grammar.

Meanwhile, many other students were assigned to use word processing to write in Spanish instead. And a third large group continued to use the language laboratory. In other respects, all three groups were taught in the same way.

Outcomes of Karen’s experiment: I visited the course that semester, and it was the most enthusiastic group of students I’ve ever met, before or since. First, several said, “This is the first time I’ve actually used Spanish!” Second, they liked the deliberate pace of conversation. They pointed out that face-to-face exchange allows little time for silent thought. However, when using the discussion board, these students had plenty of time to interpret what had just been ‘said,’ and then to thoughtfully compose their replies. Third, they could focus on what to say in Spanish without being distracted by worries about pronunciation.

Amazingly, oral examination revealed that these students had learned to converse in Spanish better than students who had trained in the language laboratory. The ‘discussion board’ students had already become adept at thinking in Spanish, so that was no problem for them during the oral exam; instead they could focus on pronunciation and speed, and they did well. Karen concluded that these students had been motivated to invest more time and energy in the course, which would also help explain their superior performance.

Even for undergraduates, joining any discipline is akin to learning a second language and culture. It’s about learning to communicate with a community, learning its language and adopting its values. How do you teach your students to ‘speak’ math, political science, or art? How do they learn to talk together as they do the work of the discipline? Is face-to-face always ideal? Or, sometimes, would a slower tempo and a little distance help them learn new conversational skills? Could you use those technologies to give each student different conversational partners, such as more advanced students or experts? Do participants in such conversations also need specialized tools? For example, should a participant be able to call up a painting, X-ray, or clip of news footage, and then point to elements of that image or video as they discuss it?

4. Global diversity can be a defining strength for a program: My first three stories were from decades past – their lessons do not depend on glitzy new technology. My fourth story is from today, but it’s still not high tech: five leading institutions on four continents collaborate in offering an executive MBA in global management. The US partner in “OneMBA” is the University of North Carolina, Chapel Hill. Students enroll at one of the five institutions and come to campus monthly for meetings of half their courses. Meanwhile, the other half of the curriculum is taught online, each course led by a faculty team from the participating universities. Students from the five institutions work together in virtual teams. In addition, four times during their program, OneMBA students gather to learn and do research together. Significantly, when they meet in the US, they do not come to Chapel Hill; instead, they work together in Washington DC. The world is the laboratory for the students and faculty in OneMBA. By bridging time and space, they’ve created a program that could never have been offered on campus.

5. Including new learners and more diverse resources while saving money: Several institutions have recently developed laboratory equipment that can be used by undergraduates from a distance. Students can plan and carry out their experiments over the Web, and then download their data for analysis. They are running real, physical experiments, not simulations. Zhejiang University in China has pioneered the use of such online laboratories for distance learning in engineering.

MIT’s iLabs software enables institutions to network their online laboratory equipment together, supporting a multi-institution constellation of laboratories with shared tools for scheduling equipment use, storing student data, and other logistics.

However, iLabs has implications beyond reaching ‘distant learners.’ In a traditional undergraduate laboratory, twenty or thirty students might share ten experimental stations, equipment used intensively for the hour or two of the laboratory session and then lying idle most of the week. That idle, costly, aging equipment has been one reason why undergraduate laboratories are so rare. In contrast, with iLabs, only one experimental station may be needed for a large group because students can take turns using that station, day and night. In fact, a single iLab may have so much capacity that students from other institutions can use the equipment as well: students in Africa have been using MIT laboratories, and vice versa, at hours when the host institution makes little use of its own equipment.

MIT’s reorganization of the concept of laboratory learning has implications:
• for access (learners halfway around the world; learners on campus doing research at night),
• for the character and quality of education (with iLabs as with the web, the more institutions create iLabs, the greater the variety of laboratory equipment available to all participating institutions), and
• for costs (each station and its laboratory space can be used far more cost-effectively because they are used 24x7; even if institutions share the costs of expensive laboratory equipment, paying part of the cost is less than paying the full cost).
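The utilization claim in that cost bullet can be made concrete with some back-of-the-envelope figures. The station counts and hours below are assumptions for illustration, not data from MIT:

```python
# Traditional lab (assumed): 10 stations, used only during one 3-hour
# weekly lab session, then idle the rest of the week.
traditional_station_hours = 10 * 3        # usable station-hours per week

# Networked iLab (assumed): a single station, schedulable around the clock.
ilab_station_hours = 1 * 24 * 7           # usable station-hours per week

print(traditional_station_hours, ilab_station_hours)
```

Under these assumptions, one always-on station offers several times the weekly capacity of ten stations that sit idle outside the scheduled session, which is why a single shared iLab can serve students at many institutions.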

Summing up this essay: the terms “distance learning” and “online learning” suggest to many people that (a) all the students are far away, and (b) the goal is to offer courses ‘comparable’ to campus-bound offerings.

But these stories all contradict that stereotype.
First, each improves on what was previously possible when students were either in a classroom or studying alone, using only campus resources.

Second, in each case, the improvements in content, access (who can learn), and methods all resulted from the same technology-supported strategy. Eastmond’s use of audioconferencing brought in outside experts and updated content, provided access for distant or busy learners, and built a tighter community, for example. Smith’s use of online discussion made education more accessible and also powered a far better, more engaging approach to learning Spanish.

Third, all but one of the stories involved a hybrid strategy: distance and time were used to improve learning, but those elements were complemented by some kind of face-to-face interaction, sometimes nontraditional (e.g., OneMBA faculty and students doing field research together).

Your institution probably offers at least some online learning courses. It is certainly using technology in some ways to enrich education on campus. I suggest bringing those two efforts closer together. Faculty and support services should work together, using available technology, to help improve courses and programs in all three of these dimensions simultaneously:
What: How can we update and improve course content, even just a little?
Who: How can we help even just a few additional people learn (including people whose differences might be assets for our program)?
How: How can each of these people learn a little more effectively than before?

Be pragmatic. When designing a program or improving a course, start with where you need to start (e.g. we want to serve this group of learners, or we want to make this change in content, or we want to make teaching and learning more effective). But then immediately consider how to use technology so that one change can achieve not only your initial aim but also net gains in the other two dimensions as well.

Things I hope you’ll comment about, below; to the right of 'posted by Steve Ehrmann,' click the word "Comments":
1. Does your institution already have any courses or academic programs whose technology-enabled design improves “what, who, and how” simultaneously?
2. If you're a leader at your institution, what kinds of changes in support services could make such a comprehensive approach work? What roles can evaluative feedback play in guiding such academic improvement so that it does succeed in all three dimensions?

PS If you like this series of posts, 'Ten Things I (no longer) Believe About Transforming Teaching and Learning with Technology' and think others should be debating their value and implications, please spread the word. If you got the word by Twitter, please retweet. Thanks!

Sunday, August 23, 2009

3. Teach distant learners as much like f2f as possible (10 things I no longer believe)

Here's what I once believed about distance learning (e.g., telecourses, online learning):

Who learns: Distance learning was for isolated students who lived many miles from campus. Each student worked alone, connected by a slender technological bridge to (some of) the intellectual riches housed on that particular campus.

Teaching methods: The roots of distance teaching lay in old technologies:
  • the book (distance learning began when a teacher first said to a student, ‘go away, read this book, think about it, come back, and we’ll talk’) and
  • the auditorium (‘go sit in the twentieth row and take notes while I talk’).
Because those presentational teaching methods were OK on campus, they were also OK for distance learning, we believed. Yes, computer courseware and online discussion provided some enrichment. But distant students learned mostly by reading text, watching lectures (on a screen), and doing homework, just as resident students mostly did. Perhaps that's why hundreds of studies could detect no statistically significant difference in student test scores. It's what students do that mostly determines what students learn. And, if you strip away the appearances, 'distant learners' and 'campus learners' were doing much the same thing.

Threat to quality? Nonetheless, some people argued, these new students were isolated from each other, deprived of the faculty member’s personalized attention, and unable to use the campus’s most prized resources (its library, laboratories, playing fields, and dormitory bull sessions, for example). Distance learning would necessarily be at best a little inferior, at worst a fraudulent education tempting needy students to take the easy way to an empty credential.

Evaluation criteria for this use of technology:
  1. Outcomes comparable to those from courses taught on campus?
  2. Net gains in numbers of students enrolling and graduating?
  3. Net economic gains? (additional tuition and fees paid by these extra learners; lower costs of teaching students off campus).
About this series of posts: One at a time, we’re discussing ten things I once believed about transforming teaching and learning with technology. The first five beliefs are strategies for using technology: 1. to attract resources, 2. to improve learning outcomes, 3. to increase the number of students enrolling and graduating (this post), 4. to increase revenue while cutting costs, and 5. to make work easier. I post one old belief every Monday. On Wednesday or Thursday of that week, I post what I now believe instead.

Later this week, I’ll suggest that the label “distance learning” (or ‘online learning’ when used as a synonym) has become dangerously misleading: most of the students aren't distant. And trying to give them the same kind of education that the campus has provided wastes an exciting opportunity.

To see a table that summarizes all ten old and new beliefs, with links to the posts that have appeared so far, see

Monday, August 17, 2009

B. To improve what's learned, change goals. To assess progress, compare apples and oranges (10 things I believe)

 Major revision: December 6, 2009

Belief #2 (of my ten old beliefs) asserted that, to improve what students learn, we should use technology to change the teaching/learning activities (e.g., active learning, faculty-student contact, and the rest of the seven principles of good practice). And, I believed, the only practical, acceptable way to measure such changes was through changes in 'test' scores (i.e., whatever means faculty had traditionally used to estimate student learning in that program).

Belief #2 isn't false, but I now think it falls way short of describing how to improve what students learn, and how to evaluate how valuable that investment may have been.

So here's Belief B (my new beliefs are lettered A-J): The value of using technology in academic programs usually stems from changing what students learn, not just changing the activities by which they learn the old material. Obviously, new, technology-dependent content has appeared in almost every field (e.g., the widespread use of geographic information systems and other computer databases as tools for scholarly research). Less obvious, perhaps, are changes in traditional goals of liberal learning such as 'writing' or 'research skill.' This chunk of our web site sketches ways in which almost all the basic goals of college education could be reexamined.

Among the important changes in "what" should be learned: uses of technology in the wider world expand what each person can do professionally, as a citizen, and personally. People's need for wise judgment is therefore greater. And, as they age and the world changes, they will need to learn new skills more frequently. So the academic program should invest more time in student research, creative work, fieldwork, etc. One consequence of this emphasis: more variety of learning, within courses, within majors, and across the institution.

Obviously, when students have more options, there's more chance that they'll flounder or misuse the freedom that they're gradually being given. The faculty response to floundering ought to be to help students gradually take responsibility for their own learning, individually and in teams. (Similarly, faculty and staff will need to collaborate in order for programs to succeed. More on how to achieve such collaboration below).

How to evaluate possible gains from such changes in what students learn: If the goals and content have changed, you can't use scores on last year's test. It probably tested some things you no longer teach, and some of your exciting new stuff isn't on last year's test. What do you do?

My suggestion:
  1. Each year, collect both the 'tests' (what students are asked to do in this degree program) and a random sample of their results (what students did on those 'tests') across the courses in the program. This collection of evidence would include exams, student projects, homework assignments, online discussions in courses, and student portfolios, for example. The evidence might come only from senior courses (to gauge program outcomes), or from courses across the curriculum (to help look at student learning as they progress through the program). 
  2. Commission a set of stakeholders to serve as an evaluation team (e.g., representatives from faculty, benefactors, students, employers, graduate schools)
  3. Ask your team to critique your evidence, year by year.
  4. After examining how 'tests' and performance have changed, does your team favor the evolution in what students have been learning? What do they see as the strengths and weaknesses of your evolving program? What advice do they offer?
We'll return to this kind of evaluation in section L as we discuss the 'unique uses' approach to assessment and evaluation.

What do you think? Has your program ever looked at evidence about the results of teaching improvement? What do you think of my suggestion? What do you think accreditors would think of this approach?

PS. If you'd like to see a table with my ten old beliefs on the left, and my 10 new ones on the right, see:

Next week: old and new beliefs about distance learning.

Saturday, August 15, 2009

2. Improving learning: Use technology to improve 'test' scores

Of the ten things I once believed (beliefs I now consider misleading or false), #2 is 'If you want to improve what people learn in a demonstrable way, use technology to improve test scores.'

For decades, skeptics about the value of each new technology have challenged its proponents to show that the use of that technology causes gains in test scores.

Accepting the terms of that question, the proponents of distance learning have boasted that their students score just as well on tests as students in comparable courses on campus: 'no significant difference'. And I remember feeling great when seeing Kulik's meta-analysis of research on computer-aided learning showing that, typically, students using computers learned about 1/3 faster than students who did not use computers.

At first, no one questioned the terms of the question itself: Does technology X (e.g., facilities on a campus) cause better learning than technology Y (e.g. some distance learning infrastructure)?

Do you see the fallacy? Well, consider this version of that same question about the learning impact of a more familiar technology, paper: "Let's measure educational achievement in two sets of courses. One set of courses will use paper. The other set will have no paper. Will the paper-aided learners score higher on exams, on average, than the paperless learners? How much higher?"

Silly questions. Although paper has valuable uses for learning, sheets of paper don't cause anyone to learn. (Try taping a sheet of paper on your head, and see how much you learn if you wear it there all day.) 'Tell us whether the paper is used for textbooks,' you might insist. Even then, you'd probably hesitate about predicting gains in test scores; you'd want to know how good the textbook was, and whether students actually read it or not. And you'd also have a right to ask how the paperless group was studying.

Well, paper is a technology. A textbook, campus, a computer, and the Web are all technologies, too. None of them 'cause' learning.

Technology is just a tool. Its value for learning lies in what teachers and students do, thanks to their use of that technology: their teaching/learning activities. How much learning results from making a technology available? That depends on the activity and on the circumstances.

I used to talk about two ways that teaching/learning activities could be enhanced by using the right technology:
  1. "Help a popular teaching/learning activity occur better, more frequently, or with less effort (.g., using PowerPoint to improve the legibility of a faculty member's notes on the board)" and/or

  2. Make a hitherto little-used activity so much easier or richer that the instructor or student changes the course activities themselves. For example, in the 1980s, distant learners rarely communicated with each other. Today, thanks to email, discussion boards, and chat rooms, discussion among distant learners is common, and research suggests that such discussion improves learning outcomes.

Which activities are most likely to improve outcomes? In 1986, Chickering and Gamson answered that question by describing 'seven principles of good practice.' A decade later, in 1996, Chickering and I wrote a widely read article summarizing how each of those seven principles could be implemented with technology. In recent years, I've greatly expanded those seven sets of suggestions.

(Notice that all this still accepts the basic terms of the original question: 'When trying to demonstrably improve the value of what students learn, the goal should be to improve performance on traditional tests of learning outcomes. That's the only practical, politically feasible way to show that computer use can improve what students learn.')

Well, what do you think of my old belief?
  • Have you seen any such gains in test scores resulting from the use of digital technologies such as computers, clickers, portfolios or the web itself? evidence of a lack of such gains? or even lower test scores when digital technologies are used in certain ways in courses?
  • In your program, has any such evidence ever played a role in budgeting for technology or planning teaching improvement?
  • Are there other ways in which technology use has improved what students learn in your program? If so, suppose someone challenged you to provide evidence that the student learning had improved and you couldn't cite improvements in test scores. What evidence would you gather instead?

PS. If you'd like to see a table with my ten old beliefs on the left, and my 10 new ones on the right, see:

Monday, August 10, 2009

A. How to use TLT to Attract People & Resources (10 Things I Believe)

To attract attention, people, and money to your program (and to do some other good things), use technology to support important changes in how students learn, what they learn, who can learn, and what it costs for them to learn. This series of posts is about how to do that successfully. Each week, I explain why a commonsense suggestion now seems misleading or even false to me; the earlier post this week was about attracting people and resources by spending big bucks on a hot new technology. The second post each week suggests an alternative strategy that is more likely to achieve long-term success. Today's counter-suggestion: to attract resources, use technology to help support an emerging academic strength of your program. This strength may depend crucially on some use of technology but will usually not be defined by that technology.

For example, in 1973, Alverno College was a small, obscure Catholic women's college in Milwaukee. That year they began to develop an abilities-focused curriculum. In the decades since, Alverno has become a global leader in this arena. That's one reason the College was recently named one of the nation's top ten institutions for teacher preparation by the George Lucas Educational Foundation, and why so many educators and policy makers visit Alverno each year. In the early years of this work, Alverno pioneered the use of video recording for assessing student performance (e.g., skills of group work). In recent years, they have developed a distinctively useful electronic portfolio system for their students. They used technology as one of many tools to develop their academic strength.

Another example: Over the last decade, Worcester Polytechnic Institute (WPI) has become well known for its popular study abroad program. Thanks in part to adroit use of common technology, most WPI juniors now work on team projects while studying abroad for a term. It took many years for WPI to build the relationships, skill sets, and reputation needed for the program to expand to its current scale. But, for that reason, its lead over competing institutions in this area can't be erased just because another university has bought into a new technology (or built an expensive building, or hired a high-profile professor). And WPI's home page today reports that WPI graduates are in the top 10 nationally for starting salaries.

A third example: In 1969, MIT became one of the first institutions in the country to create an Undergraduate Research Opportunities Program. Computing and the Internet have vastly expanded the variety and importance of these undergraduate projects. Over the years, a growing fraction of MIT's staff and students have become participants, and UROP has gradually become a major asset to MIT's recruitment.

Technology isn't the star in any of these programs. But each institution has made pragmatic use of technology to continually develop the programmatic strength that has made it so visible and attractive over recent decades.

So focus on how students learn as you think about using technology as a lever to make your program more attractive to students, potential staff, benefactors, and others. I stress 'how students learn' because we're talking technology, and technology is a tool for doing things. So what do you want your graduates, students, and faculty to become known for doing? Undergraduate research? 21st century communications skills? Teamwork? Their response to climate change? Ethical behavior? Community engagement? Every year, new academic options are opened, and others are enriched, by the increasing range of inexpensive, flexible technologies.

Achieving this kind of academic success will usually require a range of improvements across your program: staff skills, curriculum, library and other support resources, relationships outside the institution, and more. It can take many years before your success is real, visible, and widely valued by the world. By that time, this year's hot new technology will long since have become irrelevant, so it's usually a bad bet to start your planning with such a technology.

Instead, ask yourselves, "How can we use today's (or yesterday's) technology to take the next step toward distinctive strength for our program?" Consider technology that is easy to learn, inexpensive to maintain, and flexible for use in changing circumstances. Rely only on technologies that are likely to be painless to replace and/or extremely long-lived; if the vendor goes out of business, would your staff or faculty need to develop new materials or skills?

Measure of success for this strategy: is the academic program developing sufficient visibility and reputation to attract people and resources (relative to competing programs), over many years? (But don't expect your admirers and supporters to know exactly what technologies you're currently using!)

QUESTION: Have you seen degree programs or institutions that have used technologies as key ingredients to enhance distinctive educational strengths (what their students do as they learn, and what their students can do after graduation)? What lessons do you learn from how they achieved that technology-enabled success?

PS Next week: what I once believed, and what I now believe, about how to use technology to improve learning outcomes, and what kind of evaluation is most likely to reveal the value (or lack of value) of those changes in learning.

Sunday, August 09, 2009

1. Be the First to Buy Hot New Technology (#1 of Ten Things I No Longer Believe)

(Revised November 22, 2009)

This series of posts begins with five goals for programmatic change, each of which can be advanced by some use of technology (e.g., digital technologies, new buildings, academic resources; in this series we focus mainly on digital technologies, but the logic will often apply to other facilities as well). Those five goals are using technology to:
  1. Attract outside attention, good people (students and faculty), and money to the program
  2. Improve learning outcomes (results)
  3. Enroll more (and more kinds of) students
  4. Save money
  5. Save staff time
We begin with using technology to attract attention, good people and money to your program. The old strategy for doing so: Be the first. Your program should leap to spend big bucks on the hot new technology that everyone has just started talking about. Be #1 before your competitors can claim that honor. Use the purchase to identify your program with that technology in order to make your program more visible and thereby attract good people (students and staff) and gain financially (grants, gifts, discounts from the vendor of that technology).

For example, promise that every new student or staff member will get popular technology X, or have access to service Y. Or, if technology Z is in hand, publicize how many of your people or courses use it. Cite the amount of money spent on the technology to dramatize your program's commitment. Use the name of the technology or its vendor in conjunction with the name of your program in recruitment and grant proposals (e.g., "We are the Z University").

Obviously, programs also hope to use their technology somehow for educational gains. But what distinguishes this strategy from the four I'll describe in coming weeks is the hope that the acquisition of the technology will itself lead to visibility and thereby to material gains.

More often than not, this strategy seems to be associated with an educational theory that might be labeled, "Technology is magic." Advocates might simply assert, "This technology is so exciting in its potential and is easy enough to use that faculty and students are bound to embrace it and do any number of good things with it!!!"

HOW TO EVALUATE THIS STRATEGY'S SUCCESS: Did your program actually achieve lasting gains in visibility and thereby material gains, relative to its competitors? (Each of these posts will include brief notes about how to evaluate its strategy.) That's strategy #1.

Think back five or more years ago; do you know of an academic program that has followed this strategy? Did they achieve the results they sought? To write about it (and feel free to leave the program anonymous), click the 'comment' button below and respond to any of the following questions:
  • Were there gains in visibility, staffing or money? Did those gains diminish quickly as some competitors waited for prices to drop and then bought the same technology? Did other competitors wait until the buzz about technology X faded, and then buy hot new technology Y?
  • Worse, were their efforts to use technology X for programmatic improvement sabotaged by technological change itself? For example, if hot new technology X was great for visualization, two years later did the program's innovative staff shift focus to a newer technology Y, a technology that promised a very different educational improvement? Did each of these promises whip by so quickly, each new one distracting attention from the last new one, so that none were ever fully realized?
  • Once the program bought into the new technology, was it difficult to find enough qualified support staff who also understood education well enough to help staff learn to use it effectively for teaching?
In most programs I've seen over the last thirty years, these big investments in a hot new technology often failed to produce the material or educational gains that had been the topic of so much hype a few years earlier. My next post in this series suggests an alternative way to use technology to increase a program's visibility and thereby to attract people and resources: a strategy much more likely to produce lasting gains.

Saturday, August 08, 2009

Ten Things I (no longer) Believe About Transforming Teaching and Learning with Technology: Introduction

(revised November 23, 2009)

Working on a curriculum committee? Planning a new academic building? Are you a chief information officer? A director of a teaching center? Welcome to this discussion!

I've been writing a series of posts about how to improve (or even transform) college education.  I'm 60 now, and I've begun to realize that I've changed my mind about a lot of things I used to believe in that area.

I first heard about an imminent technology revolution in higher education when I was a senior in high school.  Initially the change was to affect how people learned and what they learned (e.g., programming). Later the hopes also grew to include who could learn (e.g., distance learning) and a reduction in the costs of instruction.

I heard more about this coming transformation when I was in college, in graduate school, as a program officer for the Fund for the Improvement of Postsecondary Education (1978-85), as a senior program officer with the Annenberg/CPB Projects (1985-96), at the American Association for Higher Education (1996-97), and since Steve Gilbert and I founded the Teaching, Learning, and Technology Group (1998- ), a non-profit that supports over 100 colleges and universities around the world.

The buzz continues, continually changing its details. The catalyst would be mainframe engineering analysis and design programs. It would be simulations, computer-aided instruction, microcomputers, computer-aided design, videodisc, email, distributed laboratories, hypertext, Gopher servers, math tools, the Web, asynchronous learning networks, national learning infrastructure, streaming video, ePortfolios, eLearning, blended courses, course redesign, iPods, clickers, lecture capture, smart phones, digital cameras, social networking,...

Each new technology changed important elements of the direction -- students learning to program in BASIC on microcomputers would learn to think logically and creatively for themselves, videodisc would make education more visual and interactive, HyperCard would trigger a wave of hypertextual, interdisciplinary thinking...

There are common threads to this rapidly flickering vision, ideas that I've heard repeatedly over the decades:
  • Education will shift the student's role from passive to active. 
  • Education will become more self-paced. 
  • Many new kinds of students will be involved. 
  • As more of the explaining function of education becomes embodied in technological materials, the faculty role will shift from always being the 'sage on the stage' toward spending more time coaching and managing the learning process ("a guide on the side"). 
  • Although the upfront costs will be large, there will be cost savings, too. 
  • And the exponential improvement in the power and efficiency of computer chips will help assure that the wave of change we can already see will soon accelerate. 
Those are frequently repeated hopes.


Since 1966, most predictions I've heard for technology-driven, paradigm-shifting change in the "how" of learning have not come true - not pervasively, not yet.

For example, most of today's students still seem to do most of their learning through what instructors and textbooks explain to them. The students' job is still mainly to take notes, preparing to verify on the test that they have understood what they have been told. (Do you agree that this was, and is, the norm in colleges? If not, what was the norm 20-40 years ago? Has the norm changed?) I think the norm has changed a little, but not in the ways most people predicted years ago.

Please add your own observations (click the 'comment' button below).

What claims for technology-driven change in learning did you hear, 5-10 years ago or more?

What did you think that academics would need to do in order to achieve that improvement in learning on a programmatic scale?

Have any of those hoped-for improvements in learning actually happened across your program?
Which predicted changes in learning have not yet occurred on that scale? Why not?
  • Were those goals for changes in learning actually unattainable or over-valued?
  • Were there problems with the proposed strategy for achieving those changes?
  • Was the new technology of the day inadequate to the task?
Link to: The first thing I no longer believe: to attract attention, people and resources, be the first to buy into the hottest new technology.

Friday, August 07, 2009

Priorities for TLTG Online Symposium 2009 - Fac/Pro Dev/Support

Priorities for Faculty/Professional Development/Support Online Symposium

August, Sept, 2009, the TLT Group, Inc.
Frugal Innovations: Faculty Roles and Programmatic Support
What should we focus on first, next?


This Web Page:

Try to Include topics, issues, questions, activities, challenges, strategies - In what order?

We seek participants' input for determining the order in which we address some of the following in this 2009 Symposium. We will also welcome polite responses to "How could you have been so stupid as to omit XXXYYYZZZZZ, which should be one of the top priorities for this Symposium?"

  • Cope with $ Cuts
    How to cope with current budget cuts just for now? or forever? Planning/hoping for return to "normalcy"?

  • Info for Decisions
    What kinds of information or data will really affect related decisions (by whom?) in the next 12 months?

  • Nanovation, Extermissions, & Milli-Everetts
    See also: "Milli-Everetts - Smallest Essential Steps for Incremental Exponential Education Revolution: Nanovation?"
    Nanovation = Intentional use of new resources for "sharing forward" improvements and innovations in ways that multiply impact (like "viral marketing" - a morally and intellectually responsible version of a "Ponzi scheme"/pyramid/chain letter).
    Extermission = Intentional, informal outreach - used during the TLT Group's online sessions about Twitter in July, 2009.
    Milli-Everett = The smallest effort required to enable and encourage at least one teacher to at least:

  • Make one improvement in one course,
  • Get a little feedback and improve that improvement,
  • Help at least two colleagues each make similar improvements - and each help at least two more colleagues ....
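The "each help at least two more colleagues" idea is geometric growth. As a back-of-the-envelope sketch (my illustration, not from the original post - the function name and numbers are hypothetical), here is how far that pass-it-forward chain could reach if it were never broken:

```python
def teachers_reached(rounds: int, fanout: int = 2) -> int:
    """Total teachers reached, starting from one teacher, after `rounds`
    cycles in which every new adopter helps `fanout` colleagues."""
    total, new = 1, 1
    for _ in range(rounds):
        new *= fanout        # each newcomer recruits `fanout` more
        total += new         # running total of everyone reached
    return total

print(teachers_reached(3))   # 15 teachers after 3 rounds
print(teachers_reached(10))  # 2047 teachers after 10 rounds
```

Of course, real chains break (people get busy, improvements stall), which is exactly why the post emphasizes making each step - each "Milli-Everett" - as small as possible.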
  • Mid-Level Roles
    Role of mid-level academic professionals (dept chairs, division leaders, deans, …) in the success/failure of dissemination, innovation, ...
  • Gmail Decision
    Tools that appear to have great potential for teaching/learning and become even more accessible when all students and others have Gmail accounts - implications for budget, policy, operations, teaching/learning
  • Using Un-Owned Resources
    Web 2.0 - the difference between "free beer and a free puppy"; who is responsible for what kinds of support, usage policies, and standards when the college or university cannot own or control the resource? - implications for budget, policy, operations, teaching/learning
  • Handheld Devices
    Love 'em or leave 'em? Use in courses or forbid in the classroom? Clickers, PDAs, smart phones, netbooks, audio & video recorders - implications for budget, policy, operations, teaching/learning
  • Back-Channel Communication
    Constructive "back-channel" communication - anonymous or not, sanctioned or not!
  • Brief Hybrids
    Taking advantage of these compact combinations of plans, media, activities, and references to make small, low-risk, low-cost course improvements: dipping a toe in the shallow, quiet end of the raging technology sea!
  • Textbooks: Evolution or Extinction?
    Emerging alternatives and variations. How changes in technology are both impeding and facilitating opportunities for students to take their own notes and engage actively with readings and other learning resources. The myth of standardized resources for independent learning.

  • Fundamental Paradox of Faculty Development
    Recognize and try to reconcile these two approaches:
    A. Rational: emphasizes data, assessment, "scalable" strategies
    B. Personal: emphasizes trust, individual differences, relationships and growth

    Address these provocative-but-useful questions:
    Why is Faculty Development a low institutional priority?

    Why are so many faculty members still, in 2009, "assessment-phobic" and "rubric-ignorant"? [Maybe we need to find Latin or Greek terms for "assessment-phobic" and "rubric-ignorant" - ha! ha??]

    What could be done to avoid or reduce disproportionately large budget cuts for faculty development, so that when we return to better economic times, a larger portion of institutional resources will be committed to it?
  • Collection & Dissemination Challenge
    Agree/disagree/deal with: Most conscientious and valuable efforts to collect, assemble, publish, and get others to use valuable practices, policies, and ideas for improving teaching and learning in higher education fail to reach more than a few percent of all faculty - ever, and certainly not in a year or two. It's getting even more difficult to get anyone to pay attention to any message or information - even about resources they need and are already entitled to!
    How can we enable and encourage effective exchange of information and innovations among Symposium participants, their own institutions, and even more widely?
  • Faculty Learning Communities - Communities of Practice

  • Even More Important!
    "How could you have been so stupid as to omit XXXYYYZZZZZ, which is the one that is REALLY important to me? It should be one of the top priorities for this Symposium!"


Try to Exclude the Seductively Plausible and Unrealistically Contingent: topics, issues, activities, challenges, strategies

We'll try to exclude from our discussions topics, etc. that are:

Seductively Plausible - but we already know they just don't work, or they're unlikely to be accomplished in a short time with scarce resources.

Unrealistically Contingent - dependent on major changes in campus culture that are still not within reach (e.g., changing the promotion/tenure system).
