Sunday, September 4, 2005

Online Classroom as a Hybrid Place

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

From Cats in the classroom: Online learning in hybrid space, published in First Monday:
It is not necessary for all of us to look at the same ugly carpet to create knowledge together successfully.
In this article by Michelle Kazmer, there's an interesting exploration of the idea that people who are learning together online are sharing an online space (or "place," as Michelle prefers to call online classrooms) while at the same time all occupying physical spaces. She points out that these physical spaces have an effect on what the virtual learning space becomes:
students occupy online space at the same time they are occupying and engaging with their local physical space; and the circumstances of their physical surroundings shape the shared online space.
This article made me think of my previous post, in which I pointed out that I cross paths with all these people who are busy doing other things, but then we all end up in a meeting together. People in online classrooms are crossing paths, too. And just as our awareness, in the physical world, of where people have been and where they are going can help us shape community, I suspect that awareness of others' physical spaces can contribute to community building, too. I wonder if Nancy White would call this another community indicator?

I'm not sure I do this anymore (or if I'm just not conscious of it), but in phone calls I used to always picture the place where the person on the other end of the line was while talking to me. If it was someone whose home or office I hadn't seen, I'd sometimes get flustered not being able to "see" the person in their environs. Other times, I'd just make up their home or office. (As I type, I'm realizing that I do indeed still do this--and every poor telemarketer or customer service person I talk to shares the same boring cubicle with every other one.)

Don't know what all this means. Maybe we really need to have a sense of physical space. Actually, I think Michelle's got it right in saying that it's not about space, but about place.

In any case, it's fun to think about the online classroom as a kind of virtual nexus of physical worlds, and about what that implies for community building.

(Here's an idea I posted in my e-teaching coach blog related to this.)


Tuesday, August 30, 2005

Awareness as a Community Indicator

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

Inspiration this morning came from discovering Nancy White's liberal use of the Technorati tag 'community_indicators'. I enjoyed reading a blog entry she pointed to about "community of the path" (Debra Roby), and it got me thinking about how I cross paths with so many people at work, usually in meetings, how we all get intertwined through a few overlapping responsibilities, and how we choose to interact with one another at those crossover points.

I've been very aware lately of the ways in which I'm interacting with others, and sometimes not liking what I see in how I've chosen to interact. But as I've become more aware, I've started to notice something. Somehow, my interactions feel more "connected." I guess what I mean is that with some of the folks I'm interacting with, I don't feel as much that I'm just crossing paths; I'm seeing more of where they're coming from and where they're going (what those other responsibilities are), and how that plays a role in how they choose to interact with me. I'm more aware, too, of where I'm coming from and how that helps determine the choices I make in interaction.

Friday, August 26, 2005

Top Myths about Online Learning: Disembodiment

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

A few months ago, I was reading the book Stiff: The Curious Lives of Human Cadavers, by Mary Roach. It's a great book, and has nothing to do with educational technology, but the word "disembodied" appeared in it a lot. The use of that word was quite literal in this book, but it kept reminding me of the less literal use of the word as it's often bandied about in describing the experiences of online learners.

I did a quick Google search for +"disembodied" +"online learning", which at the time turned up 597 hits. Today it's turning up 677. Are there more skeptics out there? Or more people fighting the skeptics? Only digging through the search results will tell. It's worth looking at a few of the hits. One is a review of a book called On the Internet, which takes a negative stance on the Internet in general and on online learning in particular; the reviewer at one point draws on the same philosopher the author does to refute the idea that engaging with the Internet is an act of disembodiment:
For Merleau-Ponty, there can be no experience outside the body and he would conclude that any warning about the dangers of disembodied experiences are pointless because such a thing is not possible.
The reviewer's overall point is that any new technology goes through this long period of misconception and myth. Can we please move beyond this period soon for online learning? It makes my job difficult when I discover, as I talk with colleagues about how we might advance our goals for online learning, that many of them hold this mythical mental model of it. As I'm talking with them about which courses and programs we might offer online, I often forget that in their minds this is a second-rate learning approach, one that will end forever the close, personal relationships between students and faculty.

When I finally snap out of it and remember what they're thinking, I do my little song and dance, reminding them that in online courses students and instructors alike report that they're able to develop closer and more meaningful relationships with more of their classmates/students. I try to explain that the asynchronous nature of online learning does wonders to enhance interpersonal contact, reflection, and critical thinking. I expound on the wonder of the tools in helping to better facilitate collaborative learning. And still, I know they're thinking that online learning is about disembodiment.

I think the best thing that can happen for these folks is that they take, and then teach, an online course themselves. But not all of them will, so I'll keep doing my song and dance.

Actually, more and more I can already see a shift in thinking, and the disembodiment myth seems less prevalent among my colleagues. However, I'm still mindful, when I'm talking to someone who's never experienced online learning, to deliver the "elevator pitch" part first--trying to make sure we have a shared picture of what I mean by online learning. Every time I do this, I confirm that the myth is alive and well, because as I describe my view and experience of online learning I get a lot of "really?" and "oh, I didn't realize that."

So practice your anti-disembodiment elevator pitch. Let's see if we can kill off this myth in short order.

Wednesday, August 24, 2005

Blogs versus Discussion Boards, Continued

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

I've been enjoying a current thread on the EDUCAUSE Instructional Technology (INSTTECH) listserv about blogging versus discussion boards as tools for teaching and learning. The whole thread harkens back to earlier posts here and elsewhere on this topic (see my posts of 9/8/2004, 9/12/2004, and 9/14/2004). I remain fascinated by the thinking going into the use of blogs as instructional tools, and also as tools for the development of communities.

(Maybe this interest will be enough to get me back to blogging regularly! I've been away from this for so long.)

A colleague of mine had a great insight about this thread when I sent her a link to it. She pointed out that something might be missing when folks compare blogs to discussion boards: the "blog culture." Blogs, she argued, can't be looked at as just a tool. I think she has an interesting point--we have to consider the weight of the "cultural shift" that might be necessary to use blogs effectively.

But then, can't the same be said for using discussion boards effectively? More exploration of this to come...

Saturday, August 20, 2005

Origins, Part V: Exploring Institutional Research

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.


The kinds of challenges I came up against while thinking through the issues I describe in Origins posts III (quality in distance learning) and IV (inputs versus outputs) have led me, over the last year, to an interest in the world of Institutional Research (click to read the Wikipedia entry on I.R.).

I recently applied to the graduate certificate program in I.R. at Penn State University. I hope to complete the program online as a way of further exploring the field and finding out whether it's what will hold my interest for the rest of my career in higher education. I'm delaying my plans for a doctoral program right now, hoping that the certificate program will give me a chance to home in on some ideas for my dissertation, assuming I end up doing something related to I.R...

Perhaps the best thing for this post would be to simply include here the personal statement I wrote for my application to Penn State. This is the long form--I discovered after I wrote it that I had to limit it to one page, so the version I submitted is a bit shorter. In any case, this provides some good background on why I'm interested in I.R. (and repeats some of what I've said in earlier "Origins" posts):

I never would have predicted that I’d become interested in data. In my positions of increasing responsibility related to technology in education, though, I have faced continued pressure to justify the use of technological tools for teaching and learning. I’ve been asked to demonstrate the effectiveness of these tools in achieving intended outcomes, and have often struggled to put together the data to provide such illustrations. My search for the right information, and enough of it, has gradually led me to a fascination with data and its implications for educational decision-making.

This search also raised for me a lot of questions about data and effectiveness. As I began to undertake the job of illustrating technology’s value, I discovered that data alone didn’t tell the whole story. In fact, I often found data meaningless in the absence of other important information: What are the goals and intended outcomes? What are the specific strategies and applications we’ve deployed toward reaching these outcomes? How do we know which inputs are supporting, or creating barriers to, our success? What about information and data from other institutions—can this tell us something about ourselves? These questions have often been harder to answer than I thought they would be, but I learned that the real value of the data was not only in what it could tell you, but also in what it made you question.

One example of learning the value of data—specifically its relationship to goals and outcomes—comes from Marlboro College in Vermont, where I led curriculum development for a graduate program focused on technology in education. Marlboro College was at the time a school decidedly non-technological and more than a bit resistant to technology, but I feel that we succeeded in communicating the value of technology in education. We did this by asking what the core institutional values and goals were. A student-centered curriculum is a fundamental part of the college’s mission, and we were able to show how technology increases the institution’s and instructors’ ability to be student-centered. Since many of our students in this program were teachers themselves, we also focused the curriculum on developing goal setting and problem solving skills. Students were asked to develop technology-based solutions to real educational problems and eschew using technology for its own sake. I had learned that data must be viewed within a context of institutional goals, and applied this insight to my work.

As director of distance education at Southern New Hampshire University, I continued to work on tying my efforts to institutional objectives. I also began to find that data could be tremendously useful in plotting strategies for working toward objectives. Enrollment, student satisfaction, and student success (drop/withdrawal/failure rates and grade distributions) all became important indicators to me, but their utility went beyond providing evidence for a match with the strategic plan. What these data started to do was to tell me stories of the experiences that our students and faculty were having. I was able to use the stories that the data were telling to look at our inputs—the technology we were deploying, the marketing strategies, student support services, and faculty development. I learned that the data could give us hints about how to design the right combination of inputs to grow the programs.

At North Shore Community College, I am on the senior staff of the college’s Academic Affairs component. I continue to employ the lessons I’ve learned about data and asking the right questions. My job is largely about setting the strategic direction for educational technology at the institution, and connecting that strategy to other areas in Academic Affairs, Student and Enrollment Services, Administrative Affairs, and Institutional Advancement. I’m a little closer to a 33,000-foot view of the planning process now, and I’ve discovered that information about goals and inputs, while valuable, is not all that’s needed to make decisions. Also necessary is the ability to see that although these things drive the institution’s planning efforts, strategy requires more than a simple laundry list of goals, inputs, and outcomes. A process must be applied for all this data to be turned into strategy.

In an effort to advance my strategic efforts, I’m looking outward. I want to know what the data are “out there.” What have other institutions set as their objectives? What are their inputs? How well are they doing in reaching their outcomes? As an advisory board member of the National University Telecommunications Network (NUTN), I’ve been contributing my ideas to a distance learning benchmarking initiative. The initiative seeks to scaffold individual institutional strategies by providing results data, in the aggregate, from a wide variety of participating schools. Through my participation in the effort, I’m trying to remind the developers that results can’t be “unbundled” from the best practices. Both the outcomes data and the inputs must be a part of the planned benchmarking tool because either one of these alone loses meaning.

All too often, I’ve witnessed institutional leaders, policy makers and legislators confuse data with real outcomes. The statistics provide an important part of the story, but they are only a representation, and the real story includes so much more. While in graduate school, I took a course at MIT in system dynamics. After we spent a semester building complex models and simulating the behavior of economic systems over time, our instructor included a lecture on “Truth and Beauty.” The simulations, he warned us, may have been beautiful, and they may even have told us a powerful story. However, they shouldn’t be confused with reality—which is far more complex than we’d ever be able to model.

I worry that this kind of confusion is sometimes at the heart of educational decision-making. In numerous recent speaking engagements, United States Department of Education Secretary Margaret Spellings has said, "In Texas we like to joke 'In God we trust. All others, bring data.'" As it turns out, that phrase can be attributed to the late W. Edwards Deming, father of Total Quality Management (TQM) and continuous improvement—a pioneer in scientific management. Deming has also been reported to have said, "Data will provide you with three percent of what you really need to know."

My experience confirms that, in an age of accountability in education, it’s worth trying to find and record the data. The ability to search for the other 97% “…of what you really need to know,” however, must be part of the academy’s institutional research and effectiveness capacity. I hope to be able to explore these ideas, and translate them into practice, as part of Penn State’s graduate certificate program in Institutional Research.

Friday, August 19, 2005

Origins, Part IV: Inputs versus Outputs, versus Process

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.


As I've learned the ways that the academy is viewed, it seems that a lot of it boils down to inputs and outputs--inputs being the "stuff" that makes up a college, from buildings to curriculum to faculty and support staff; outputs being student success rates, research, etc.

It appears to me that in this age of accountability in education, there is some confusion about which of these things we ought to be paying the most attention to. What really matters about a college? If you look at what accreditors have looked at for many years, you'd probably say that what matters are the inputs. Accreditors have traditionally looked primarily at things like how well the curriculum is structured, how governance works, how many volumes there are in the library, how much faculty and administrators are paid, and other things related to the composition of the institution.

More and more, however, we're starting to see a trend toward looking more closely at the outputs. Especially for public institutions, there is an increasing pressure to track and report student success and other outcome measures. Public and private institutions alike are finding that both regional and national/professional accreditors are putting more and more outcomes assessment into their accreditation standards.

I think that both of these views on what makes for a good institution come up short. For me, the sustainable success of an institution lies neither in its inputs nor in its outcomes. Instead, what matters is the institution's capacity to really understand the connection between its inputs and outputs. An institution that's doing poorly on outcomes needs to be able to assess why--which of its strategies or tactics (inputs) are acting as barriers to success? An institution that's doing well needs to know how to sustain success by understanding which inputs are bolstering it.

In either situation, an institution has to have the capacity to understand that over time there are likely to be shifts in what sustains or blocks success, and it needs good planning capacity to lay out evolving strategies and tactics.

Maybe this capacity--the ability to assess and to plan--is itself an input. But I think that we should look at the institutional research and assessment capacity as the third leg of a tripod that includes inputs, outcomes and this process of analyzing and planning.

Saturday, July 23, 2005

Origins, Part III: Demonstrating Quality in Distance Learning

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.


Part of what's been going on in my professional life that has led me to this interest in and exploration of Institutional Research and Effectiveness has been increasing pressure (to some extent self-imposed) to demonstrate the quality and effectiveness of distance learning initiatives. Since online teaching and learning is still so new, there seems to be quite a lot of scrutiny. Questions abound about whether online courses are as effective as traditional classroom learning.

As a manager of distance learning initiatives, I've struggled to find ways to demonstrate the quality and effectiveness of these programs. I know they're high quality, and I know that the faculty and students involved in them generally find them even more effective than their traditional courses. But how to demonstrate this to the skeptics?

It seems that one direction this has led folks in the field is toward creating frameworks for quality measurement in distance learning. It's interesting to me that, rather than doing extensive studies and data gathering about teaching and learning outcomes, we've decided to create models that help us evaluate the quality of the inputs to distance learning courses. But then, it seems that the history of assessment in higher education is all about this balance between evaluating inputs and looking at outcomes.

More on this in Origins: Part IV.

Friday, July 8, 2005

Origins, Part II: Why "Truth and Beauty"?

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.


Why call this blog "Truth and Beauty"?

When I was pursuing my master's at the Harvard Graduate School of Education, I cross-registered for a course in System Dynamics at MIT's Sloan School of Management. I had worked with Peter Senge while at PBS and was intrigued by his idea of systems thinking. I thought the course would help me better understand this concept and the whole idea of organizational learning.

It did indeed give me insights into Senge's ideas. What I didn't know, though, when registering for the course, was that it was going to be highly technical. We would use specialized software (STELLA and iThink) to build complex business models and run simulations of these models to observe how they behaved over time. There was a lot of equation writing and playing with different data inputs. We were tweaking the models to see what it would take to get them to behave in just the way we wanted them to. It was a lot of fun, and an incredible learning experience.
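
For anyone who's never seen system dynamics in action, here's a rough sketch of what those models boil down to numerically. This is a toy stock-and-flow model of my own invention (a single "enrollment" stock with word-of-mouth growth and attrition), written in plain Python rather than STELLA or iThink, and nowhere near the complexity of the models we built in the course:

```python
# A toy stock-and-flow model in the spirit of STELLA/iThink: one stock
# (enrollment), fed by a word-of-mouth inflow and drained by attrition.
# The graphical tools draw this as boxes and pipes; numerically it's
# just Euler integration of the net flow over small time steps.

DT = 0.25                 # time step, in semesters
STEPS = 160               # 40 semesters of simulated time

enrollment = 50.0         # initial value of the stock
capacity = 2000.0         # saturation level for word-of-mouth growth
contact_rate = 0.8        # recruiting strength, per semester
attrition_rate = 0.10     # fraction of students leaving, per semester

for step in range(1, STEPS + 1):
    # Flows are recomputed each step from the current stock value.
    inflow = contact_rate * enrollment * (1 - enrollment / capacity)
    outflow = attrition_rate * enrollment
    enrollment += (inflow - outflow) * DT   # the stock accumulates net flow

    if step % 20 == 0:                      # report every 5 semesters
        print(f"t = {step * DT:4.0f} semesters: enrollment = {enrollment:7.1f}")
```

Tweak contact_rate or attrition_rate and watch where the curve levels off--that's exactly the kind of playing-with-the-inputs the course was full of, and exactly what made the models feel so convincingly real.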

Toward the end of the semester, our instructor included a lecture on "Truth and Beauty." The models that we'd labored over, he warned, were not to be confused with the truth. They may have been beautiful, telling interesting stories and even giving us some insight into decision-making. They were not, however, reality--something far more complex than could ever be modeled.

I'm fascinated by this paradox about data and decision-making. We often treat data as gospel. But usually, when we really look closely, there are more questions than answers. So are the data pointing us to the truth, or just to beautiful possibilities? Or are these the same?

I guess this is what Keats meant when he wrote "'Beauty is truth, truth beauty,'--that is all ye know on earth, and all ye need to know" (Ode on a Grecian Urn).

All ye need to know, indeed. There's way more I gotta know, and I'm going to keep looking.

Monday, June 20, 2005

Origins, Part I: Why a Blog for Effectiveness?

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.

Recently, a confluence of events in my professional life has led me to explore the worlds of educational assessment and institutional effectiveness. I'll describe more about these events in follow-up "Origins..." posts.

Basically, though, I'm hoping this blog can become a resource for folks working in postsecondary education—with responsibility for or just an interest in institutional research/effectiveness, policy analysis, planning, and evaluation/assessment. There are a wide variety of resources out there on these topics, but I haven't yet come across a commentary-type editorial resource. This is not to say I'm the first to be blogging or providing commentary on activities in this world—only that I haven't yet discovered these kinds of resources.

As my disclaimer under "About This Blog" says, I am not by any means an expert in the field. I'm just an academic technology administrator with a great deal of interest in institutional research. Seems like these days, it behooves us all in the field of education to have such an interest... I hope that my naive perspective on all this can be a helpful resource.

I've simply been doing a lot of thinking about the related issues lately (for reasons upon which I'll soon elaborate), and I thought it might be best to do some of this thinking out loud and share it with others.

Welcome to "Truth and Beauty." I hope you find it interesting and engaging.

Monday, January 10, 2005

Re-Invention and Diffusion of Innovation

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

A while back, a colleague and I wrote about Diffusion of Innovation theory and how it might relate to faculty development. Truth be told, we were familiar with Everett Rogers's work on the diffusion of innovations and had read a lot of secondary sources related to it, but neither of us had ever picked up the book Diffusion of Innovations and read it. I finally bought it recently, and I'm finding it infinitely readable for an academic book. And the most recent edition just came out, so it's got lots of up-to-date examples and includes references to recent adaptations of the theory, including Malcolm Gladwell's stuff (The Tipping Point) and others.

One of the pieces I missed by not having read the book before is the idea of "re-invention." This is how an innovation is changed or modified by an adopter in the process of taking on the innovation and implementing it. Rogers says in the book that many researchers have largely ignored re-invention (so I don't feel so bad for missing it), and that much diffusion research has focused only on innovations that are adopted as developed.

What's interesting to me, working with teachers, who are famous for "adopting and adapting," is that it's virtually impossible to track the diffusion of innovations among faculty if you don't pay attention to re-invention. Teachers develop their own ways of using an innovation so that it fits with their own approach and their students' learning styles and abilities.

But it's not just that we should pay attention to re-invention. I think we should encourage it when pushing to have an innovation adopted. For the latent adopters (whom Rogers calls the "early majority" and "late majority"), knowing that re-invention is an option may make the difference between adopting and not.
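
As an aside, the adoption dynamics themselves are easy to play with. Here's a little sketch using the Bass model--a standard formalization of the diffusion S-curve from the marketing literature, not Rogers's own framework or anything from his book--with entirely made-up parameters, flagging when each of Rogers's adopter categories comes on board:

```python
# A toy Bass-model diffusion run, with Rogers's adopter categories
# marked as cumulative-adoption thresholds. All numbers are invented.

M = 200.0     # total potential adopters (say, a faculty body)
p = 0.01      # "innovation" coefficient: adoption from outside influence
q = 0.40      # "imitation" coefficient: adoption driven by peers
DT = 0.1      # time step, in semesters

# Rogers's categories, as cumulative shares of eventual adopters.
thresholds = [(0.025, "innovators"), (0.16, "early adopters"),
              (0.50, "early majority"), (0.84, "late majority")]

adopters = 0.0
t = 0.0
while adopters < 0.99 * M:
    # Bass adoption rate: external influence plus peer imitation.
    rate = (p + q * adopters / M) * (M - adopters)
    adopters += rate * DT
    t += DT
    while thresholds and adopters / M >= thresholds[0][0]:
        share, label = thresholds.pop(0)
        print(f"t = {t:5.1f}: {share:.1%} have adopted ({label} on board)")
```

The slow start and steep middle are the familiar S-curve. In these terms, one way to read my argument above is that encouraging re-invention effectively raises q, the peer-influence term, for exactly those majority segments.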

Let me take online courses as an example. I think most people who haven't taken or taught an online course have a mental picture of what an online course is, and that picture is either wrong or very fuzzy. Because early adopters have a high tolerance for ambiguity, they jump in and teach an online course even though they aren't clear on it. For those other folks (I'm talking about teachers of online courses now, not students), we try to develop specific models and formats and templates for an online course--this gives them a better picture and makes adoption easier. But once they've adopted, a good teacher isn't going to want to stick with that format or template. The degree to which re-invention is encouraged here could make the difference between that teacher sticking with it or not.

The more I write, the more I feel like I'm stating the obvious here. But this is a new insight that's emerging for me.

Earlier today, I was reading about "folksonomies" versus controlled vocabulary, and I think there's a similar re-invention dynamic here. But I'll have to write more about that later. Gotta get to work!

Sunday, January 9, 2005

Back at 'em...

This post originally appeared in my blog Digital Amalgam, and was moved to this blog in July, 2007.

Happy New Year!

Man, did I ever get consumed at the end of the year! Between the new job and the holidays... Yikes! Has it really been six weeks since I last posted an entry? Well, here's my first resolution for the year--get back to blogging!

More to come...