Saturday, August 20, 2005

Origins, Part V: Exploring Institutional Research

This post originally appeared in my short-lived blog Truth and Beauty, and was moved to this blog in June, 2007.


The kinds of challenges I came up against while thinking through the issues I describe in Origins posts III (quality in distance learning) and IV (inputs versus outputs) have led me, over the last year, to an interest in the world of Institutional Research (see the Wikipedia entry on I.R.).

I recently applied to the graduate certificate program in I.R. at Penn State University. I hope to complete the program online as a way of further exploring the field and finding out if it's what will hold my interest for the rest of my career in Higher Education. I'm delaying my plans for a doctoral program right now, hoping that the certificate program will give me a chance to home in on some ideas for my dissertation, assuming I end up doing something related to I.R...

Perhaps the best thing for this post would be to simply include here the personal statement I wrote for my application to Penn State. This is the long form--I discovered after I wrote it that I had to limit it to one page, so the version I submitted is a bit shorter. In any case, this provides some good background related to why I'm interested in I.R. (and repeats some of what I've said in earlier "origins" posts):

I never would have predicted that I’d become interested in data. In my positions of increasing responsibility related to technology in education, though, I have faced continued pressure to justify the use of technological tools for teaching and learning. I’ve been asked to demonstrate the effectiveness of these tools in achieving intended outcomes, and have often struggled to put together the data to provide such illustrations. My search for the right information, and enough of it, has gradually led me to a fascination with data and its implications for educational decision-making.

This search also raised a lot of questions for me about data and effectiveness. As I began to undertake the job of illustrating technology’s value, I discovered that data alone didn’t tell the whole story. In fact, I often found data meaningless in the absence of other important information: What are the goals and intended outcomes? What are the specific strategies and applications we’ve deployed toward reaching these outcomes? How do we know which inputs are supporting, or creating barriers to, our success? What about information and data from other institutions—can this tell us something about ourselves? These questions have often been harder to answer than I expected, but I learned that the real value of the data was not only in what it could tell you, but also in what it made you question.

One example of learning the value of data—specifically its relationship to goals and outcomes—comes from Marlboro College in Vermont, where I led curriculum development for a graduate program focused on technology in education. Marlboro College was, at the time, a decidedly non-technological school, more than a bit resistant to technology, but I feel that we succeeded in communicating the value of technology in education. We did this by asking what the core institutional values and goals were. A student-centered curriculum is a fundamental part of the college’s mission, and we were able to show how technology increases the institution’s and instructors’ ability to be student-centered. Since many of our students in this program were teachers themselves, we also focused the curriculum on developing goal-setting and problem-solving skills. Students were asked to develop technology-based solutions to real educational problems and to eschew using technology for its own sake. I had learned that data must be viewed within a context of institutional goals, and I applied this insight to my work.

As director of distance education at Southern New Hampshire University, I continued to work on tying my efforts to institutional objectives. I also began to find that data could be tremendously useful in plotting strategies for working toward objectives. Enrollment, student satisfaction, and student success (drop/withdrawal/failure rates and grade distributions) all became important indicators to me, but their utility went beyond providing evidence for a match with the strategic plan. What these data started to do was to tell me stories of the experiences that our students and faculty were having. I was able to use the stories that the data were telling to look at our inputs—the technology we were deploying, the marketing strategies, student support services, and faculty development. I learned that the data could give us hints about how to design the right combination of inputs to grow the programs.

At North Shore Community College, I am on the senior staff of the college’s Academic Affairs component. I continue to employ the lessons I’ve learned about data and asking the right questions. My job is largely about setting the strategic direction for educational technology at the institution, and connecting that strategy to other areas in Academic Affairs, Student and Enrollment Services, Administrative Affairs, and Institutional Advancement. I’m a little closer to a 33,000-foot view of the planning process now, and I’ve discovered that information about goals and inputs, while valuable, is not all that’s needed to make decisions. Also necessary is the ability to see that although these things drive the institution’s planning efforts, strategy requires more than a simple laundry list of goals, inputs, and outcomes. A process must be applied for all this data to be turned into strategy.

In an effort to advance my strategic efforts, I’m looking outward. I want to know what the data are “out there.” What have other institutions set as their objectives? What are their inputs? How well are they doing in reaching their outcomes? As an advisory board member of the National University Telecommunications Network (NUTN), I’ve been contributing my ideas to a distance learning benchmarking initiative. The initiative seeks to scaffold individual institutional strategies by providing results data, in the aggregate, from a wide variety of participating schools. Through my participation in the effort, I’m trying to remind the developers that results can’t be “unbundled” from the best practices. Both the outcomes data and the inputs must be a part of the planned benchmarking tool because either one of these alone loses meaning.

All too often, I’ve witnessed institutional leaders, policy makers and legislators confuse data with real outcomes. The statistics provide an important part of the story, but they are only a representation, and the real story includes so much more. While in graduate school, I took a course at MIT in system dynamics. After we spent a semester building complex models and simulating the behavior of economic systems over time, our instructor included a lecture on “Truth and Beauty.” The simulations, he warned us, may have been beautiful, and they may even have told us a powerful story. However, they shouldn’t be confused with reality—which is far more complex than we’d ever be able to model.

I worry that this kind of confusion is sometimes at the heart of educational decision-making. In numerous recent speaking engagements, United States Department of Education Secretary Margaret Spellings has said, “In Texas we like to joke ‘In God we trust. All others, bring data.’” As it turns out, that phrase can be attributed to the late W. Edwards Deming, father of Total Quality Management (TQM) and continuous improvement—a pioneer in scientific management. Deming has also been reported to say, “Data will provide you with three percent of what you really need to know.”

My experience confirms that, in an age of accountability in education, it’s worth trying to find and record the data. The ability to search for the other 97% “…of what you really need to know,” however, must be part of the academy’s institutional research and effectiveness capacity. I hope to be able to explore these ideas, and translate them into practice, as part of Penn State’s graduate certificate program in Institutional Research.
