Friday, August 23, 2013

Comp N: Evaluation



Evaluate programs and services based on measurable criteria

Introduction

Evaluation of services and programs is a necessary component of the library profession. We must assess the quality, impact, and necessity of these services in order to demonstrate their worth and keep patrons coming back for more. Data gleaned from evaluation can help in planning future programs and services and can serve as a basis for constant improvement. It can help determine whether goals have been met, and if not, why not and what could be changed in the future. Evaluation has become even more important as new technologies become available to libraries; because not every technology is right for every library, assessment must be done to make sure technologies are useful and cost-effective.

Evaluation can be done either at the end of a program, such as a satisfaction survey after a day-long workshop, or on an ongoing basis. Library staff must be knowledgeable about evaluation techniques, and they must be given the time and support to apply them properly. Library workers must also understand the context of the evaluation: who wants the information, and for what purpose? How will the evaluation affect planning for the institution? These questions will determine what kind of evaluation will be done, how often it will be done, and how the results will be reported. Some evaluations may be small, for internal use only; others may be large, ongoing efforts conducted for a governing body.

McClure (2008) lists six specific criteria that staff may want to assess in a program or service:

  • Extensiveness, or the scope of a service
  • Efficiency, or the time and money involved in the program
  • Effectiveness, or how closely the goals were achieved
  • Service quality, or whether the service met the needs of the patrons
  • Impact, and
  • Usefulness, or how appropriate the service is (pp. 182-183).
LIS professionals can create assessments using these criteria, or they can use evaluation tools provided by many professional associations. For instance, the Association of Research Libraries has adopted LibQUAL+, a commercial service that focuses on outcomes and service quality. LibQUAL+ includes an annual survey, completed by member libraries, that measures user satisfaction.

For some libraries, evaluation is part of a larger management process such as Total Quality Management (TQM). This process guides a library through stages of improving services, from initial exploration to evaluation and expansion; it is generally a very complex process that requires help from an outside consultant. Other management processes that include evaluation are Six Sigma, Lean Thinking, and the Balanced Scorecard. I learned about Six Sigma in LIBR 204, as well as in LIBR 282, the IT Management seminar. In my career as a retail manager, I oversaw the change from the Balanced Scorecard to Lean Thinking at my store. Constant evaluation and improvement of work processes, as well as continuous customer survey input, were part of the Lean Thinking philosophy. For smaller-scale evaluations, an LIS professional can use standards such as the Reference and User Services Association’s guidelines for reference service or the Association of College and Research Libraries’ standards for information literacy competency.

According to Evans and Ward (2007), people judge services on four criteria: “They expect to get what they want, when they want it, at a cost that is acceptable to them, and delivered in a way that meets their expectations” (p. 224). Evaluation tells us whether these service criteria are being met. If libraries do not conduct evaluations, they risk being left behind by businesses or organizations that better meet these expectations.

Evidence

Evidence 1, from LIBR 251, is an evaluation of four different library websites based on the design heuristics of Bruce Tognazzini and Jakob Nielsen, called 251 Homework 1a Website Evaluation. I chose two criteria from their list and assessed two websites for each: one that followed the design principle and one that did not. The first measure was consistency. One website did this well; it was stylistically consistent, and all parts of the site behaved in a reliable manner. The second website did very poorly: it was inconsistent and unreliable. The second measure I used was visible navigation. For this measure, I used a checklist from James Kalbach’s Designing Web Navigation to evaluate how well the websites met good navigation guidelines. I liked being able to use a checklist from an expert to evaluate the websites, since I was just starting to learn about the topic. This assignment helped me learn how to assess a library service (in this case, library websites) against expert guidelines to see where the service needs improvement.

Evidence 2 is also from LIBR 251, called 251 DrewieskeHomework3. This is a PowerPoint presentation I gave in Blackboard Collaborate (although I am not including the link to the session because it was full of technical problems); the assignment was to take the SJSU library LOTSS tutorial, use one of the web design principles to improve it, and conduct user testing. I then took the data from the first testing sessions, improved the prototype once more, and conducted another round of user testing. I chose to work on the help and documentation features that the tutorial was lacking. The presentation describes my prototype, the various testing sessions, and the results of each. This evidence combines my own evaluation of the LOTSS tutorial, using the web design standards set by Tognazzini and Nielsen, with user evaluations of my prototype. I had the users go through sections of the tutorial, then answer a set of open-ended questions I had prepared beforehand, such as “How did you feel throughout this tutorial?” and “Are there any help features you prefer over others?” From this project, I learned how to use evaluations to actually improve a service, as the second round of users rated the prototype more favorably than the first group did.

Evidence 3, from LIBR 287, is called Drewieske Instruction Observation. I have used this piece of evidence before, in Competency B, but here I want to focus on using the ACRL standards for information literacy in higher education and Universal Design (UD) principles to evaluate the teaching session I observed. I attended a stand-alone information literacy session that an instructional librarian at the local university gave to a freshman English class. In my observation, I note that she met at least one ACRL standard, as well as many of the UD guidelines, and I give examples for each. This paper was a precursor to a final paper on UD and information literacy, into which I incorporated lessons from the observation session. This evidence shows that I can evaluate a program involving people, since my other two pieces of evidence dealt with computer-based services. I used criteria set forth by the ACRL and by leaders in UD to measure how well the librarian did in her session. Even though my findings were not used to improve the program, I did learn how this kind of evaluation would be useful in a real-world situation for making improvements.

Conclusion

The ability to evaluate a service or program is an important skill to have no matter the job. I have used it for many years in employee reviews and service changes in the retail sector, and I anticipate using it even more often as new technologies come into use and need to be tested for effectiveness.

References

Evans, G. E., & Ward, P. L. (2007). Management basics for information professionals (2nd ed.). New York, NY: Neal-Schuman Publishers.

McClure, C. R. (2008). Learning and using evaluation: A practical introduction. In K. Haycock & B. E. Sheldon (Eds.), The portable MLIS: Insights from the experts (pp. 179-191). Westport, CT: Libraries Unlimited.
