Before I started working in libraries, I taught research methods (and statistics) for over a decade to undergraduate and graduate students. I conducted my own research in the field of social science, presenting it at conferences and in publications. I currently assist two different library publications in their peer review process. I actually like research and statistics. Over the last few years, I've noticed an increase in focus on research in library circles. As it has become more necessary to focus on outcomes, progress, and effects, rather than simply usage, research projects have become a focal point. I think this is a worthwhile trend.
However, it concerns me that in many cases the rules of research, and the conceptual issues that make research valuable, do not seem to be acknowledged. For example, consider the concept of generalizability. Generalizability refers to the ability to extend the findings and conclusions of a study, typically conducted on a sample, to the larger population. In other words, if I conduct a survey at a library or two, generalizability refers to whether my results could apply to other libraries. In most library research I see today, the answer is no.
One reason for this is inadequate sample size. The results of a survey conducted at your library can only apply to your library. Yet I often hear of people taking the results from one location and applying them to another. Statistically, this is not valid. Did you know that, as a common rule of thumb, most statistics are not considered valid or generalizable if the sample size is less than 120?
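To make the sample-size point concrete, here is a minimal sketch using the standard margin-of-error formula for a proportion from a simple random sample at roughly 95% confidence. The specific numbers are illustrative only, not drawn from any actual library study:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample of size n, at ~95% confidence (z = 1.96).
    p = 0.5 is the worst case, giving the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

# A survey of 120 patrons carries roughly a +/- 9 percentage point margin:
print(round(margin_of_error(120) * 100, 1))  # -> 8.9

# Reaching the conventional +/- 5 points requires a sample near 385:
print(round(margin_of_error(385) * 100, 1))  # -> 5.0
```

Note that even with an adequate sample, the margin of error applies only to the population actually sampled; a solid result at one library still says nothing about patrons at another.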
Another concern of mine is definition, or in research lingo, the conceptualization and operationalization of variables. Conceptualization refers to how a concept, such as "outcomes" or "learning," is defined. That is, what do these terms mean to the researcher? Operationalization is how that concept is measured. In other words, how do you measure whether something was learned or had an effect?
With some variables this is simple and straightforward. We all understand time and how we measure it. But some concepts can mean different things to different people, not to mention in different circumstances. Social science has struggled with this issue forever: how does one define and measure abstract concepts, such as love? In libraries, we all understand and generally agree on what we mean by "a program" or even "attendance" and possibly even "library user." But what about concepts such as "outcome," "satisfaction," or "learning"?
In many library studies, the definitions of concepts and their measurement become circular. An outcome of library use is learning. Learning can be measured by library use. As a researcher, I find this tells us nothing. Likewise, I've seen many studies that measure concepts based on perception.
For example, the library, having run a series of programs, asks attendees or library users what they thought. Did they view themselves as learning something at the program/library? Do they feel that it was valuable time spent?
These results are then taken as outcomes. When the public reports that they viewed themselves as learning something at the library, some present the information as a positive outcome: people learn things at the library! In fact, this is inaccurate. This example measures perception, what people think and feel, not what actually occurred (learning or not learning).
Don't get me wrong, people's perceptions are very important! We want people to perceive us this way, and these perceptions help us a great deal. The problem is with our presentation and conclusions. Our outcomes should be "people feel they have learned," not "people have learned."
The importance of this lies with credibility. Those who know and understand research may grasp what is trying to be communicated, but it isn't accurate. A supporter may forgive this, but for those who do not support libraries, this kind of misrepresentation can be fatal.