...

  • A Correlation Method for Collecting Reference Statistics (from College & Research Libraries, 1999, v.60, no.1)
    • U of South Carolina tried a sampling approach for ref stats and did some math to support the sample, correlating ref stats with door count.
    • Eventually they used door count to estimate in-person ref stats fairly reliably, but they couldn't count phone or electronic transactions.
    • Pretty interesting...  pretty mathematical...
  • Reference Use Statistics: Statistical Sampling Method Works (p.45-57; from Southeastern Librarian)
    • Includes a lit review describing what other schools are doing; the LSU example was interesting: they asked their statistics department for help and ended up sampling 60 of the 4,103 total service hours for the year. The error rate was 11.23%, which is actually pretty good.
    • Focuses on in person ref stats
  • A tool for all places: a web-based reference statistics system (from Reference Services Review)
    • First few pages do a great job of summarizing our concerns and questions.
    • Then talks about the online form that Texas A&M developed in HTML and ASP code, with screenshots and code included.
    • Too clunky for us.
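The LSU-style sampling design above (staff a tally sheet for a random subset of service hours, then scale up) can be sketched numerically. This is a minimal illustration, not the article's actual method or data: the per-hour transaction counts are made up, and the margin-of-error formula is the standard normal approximation with a finite-population correction.

```python
import math
import random

random.seed(1)

TOTAL_HOURS = 4103   # service hours in the year (from the LSU example)
SAMPLE_HOURS = 60    # randomly sampled hours covered by a tally sheet

# Hypothetical per-hour reference transaction counts for the sampled hours.
sample_counts = [random.randint(0, 12) for _ in range(SAMPLE_HOURS)]

mean = sum(sample_counts) / SAMPLE_HOURS
variance = sum((c - mean) ** 2 for c in sample_counts) / (SAMPLE_HOURS - 1)

# Scale the sample mean up to an annual estimate.
estimated_total = mean * TOTAL_HOURS

# 95% margin of error for the annual total (normal approximation, with a
# finite-population correction since 60 of the 4103 hours were sampled).
fpc = math.sqrt((TOTAL_HOURS - SAMPLE_HOURS) / (TOTAL_HOURS - 1))
se_total = TOTAL_HOURS * math.sqrt(variance / SAMPLE_HOURS) * fpc
margin = 1.96 * se_total

print(f"estimated annual transactions: {estimated_total:.0f} +/- {margin:.0f}")
print(f"relative error: {margin / estimated_total:.1%}")
```

The relative error shrinks as more hours are sampled, which is presumably how the statistics department arrived at 60 hours for their target error rate.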

Heather:

  • Miller, J. (2008). Quick and Easy Reference Evaluation: Gathering Users' and Providers' Perspectives. Reference & User Services Quarterly, 47(3), 218-222. Retrieved November 11, 2009, from Social Science Module. (Document ID: 1473405461).
  • Evaluation of Reference Services--A Review. (from The Journal of Academic Librarianship v. 33 no. 3 (May 2007) p. 368-81, Pali U. Kuruppu)
    • Not as current or in-depth as you might expect.
    • Good background; covers more than just reference statistics, so it explains some of the older arguments about various evaluation techniques for reference services. The older arguments and concerns are similar to the new ones; people have been trying to figure this out since the '70s.
    • Not necessary reading for this project, but it provides some things to think about.
  • Introducing the READ scale: Qualitative statistics for Academic Reference Services (Bella Karr Gerlich and G. Lynn Berard, Georgia Library Quarterly, Winter 2007)
    • The READ scale, a pilot project at Carnegie Mellon, characterizes reference transactions on a scale from 1 (least effort, no specialized knowledge) to 6 (in-depth research and instruction, collaboration, most time and effort) and records that number as the data point for each reference transaction.
    • Strengths include the ability to show that the majority of the more complicated reference transactions take place away from the desk. It also requires no coding or infrastructure and could still be recorded on paper. Minuses include: difficult to implement in a larger institution (from both a data-integrity and a participant buy-in perspective), and you still need to record other data (time of day, at/away from desk, method of transaction such as in person or email), so the READ level would just be another piece of information to collect.
    • Article loses credibility points for not mentioning any challenges, downsides, or arguments against it.
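As a thought experiment, recording the READ level alongside the other data points the notes above say we'd still need could be as simple as one row per transaction. A hypothetical Python sketch (the field names and sample rows are our own, not from the article):

```python
from dataclasses import dataclass

# READ scale per Gerlich & Berard: 1 = least effort, no specialized
# knowledge ... 6 = in-depth research and instruction, most time and effort.
@dataclass
class Transaction:
    read_level: int   # 1-6
    at_desk: bool     # at the reference desk, or away (office, roving, etc.)
    method: str       # "in person", "phone", "email", "chat"
    hour: int         # hour of day, 0-23

# A few invented example transactions.
log = [
    Transaction(2, True, "in person", 10),
    Transaction(5, False, "email", 14),
    Transaction(6, False, "in person", 15),
    Transaction(1, True, "phone", 11),
]

# The article's claim could then be checked directly: do the higher-effort
# (READ 4-6) transactions happen away from the desk?
complex_away = sum(1 for t in log if t.read_level >= 4 and not t.at_desk)
complex_total = sum(1 for t in log if t.read_level >= 4)
print(f"{complex_away} of {complex_total} READ 4-6 transactions were away from the desk")
# prints "2 of 2 READ 4-6 transactions were away from the desk"
```

This also matches the minus noted above: the READ number alone isn't enough, so any recording scheme (paper tally or form) ends up carrying the same extra fields we'd need anyway.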

...