
Library & Information Science Research 28 (2006) 350–373

Motivations and uses: Evaluating virtual reference service from the users' perspective

Jeffrey Pomerantz ⁎, Lili Luo

School of Information and Library Science, University of North Carolina at Chapel Hill, CB 3360, 100 Manning Hall, Chapel Hill, NC 27599-3360, USA

Available online 10 August 2006

⁎ Corresponding author. E-mail addresses: [email protected] (J. Pomerantz), [email protected] (L. Luo).

0740-8188/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.lisr.2006.06.001

Abstract

The questions of whether chat reference service is beneficial enough to users to justify the costs of offering it, and how valuable it is to users in fulfilling their information needs, have been primary concerns for librarians providing the service, for library administrators managing the service, and for funding agencies paying for it. The present study combines a traditional evaluation of the user's satisfaction with the reference encounter with details of the user's information use and the user's motivation for using the chat reference service. This evaluation study assesses the effectiveness of chat reference service in meeting users' information needs.

© 2006 Elsevier Inc. All rights reserved.

1. Introduction

With the increasing availability of computers and Internet access both within libraries and in modern society at large, online services have become among the most heavily used services libraries offer. Chat reference is one such service enabled by computing and networking, the use of which is increasing. The implementation, management, and evaluation of chat reference service have attracted much attention from library practitioners and researchers over the past several years. The questions of whether chat reference is beneficial enough to users to justify the costs of offering it, and how valuable it is to users in fulfilling their information needs, have been primary concerns for librarians providing the service, for library administrators managing the service, and for funding agencies paying for it. Evaluating chat reference services from the users' perspective, and exploring users' feedback on their use of the service, offers one way to gauge the value and utility of the information that such a service provides to users.

Library reference service, whether at the desk, by asynchronous media such as e-mail, or by synchronous media such as chat, is generally provided in an interactive setting that involves two parties, the librarian and the user. Evaluations of the success of a reference transaction have therefore traditionally taken into consideration the points of view of both of these parties: the quality of the answer provided by the librarian and the user's satisfaction with this answer and other aspects of the reference encounter. Both of these evaluation metrics are useful for evaluating a reference service; it is necessary both that answers provided by a service be accurate and complete, and that the user be satisfied. These evaluation metrics are limited, however, in that they assess the outcome of the reference interaction immediately following its conclusion. These metrics are concerned with a single point in time and do not look backwards or forward in time: neither has the ability to identify what information need brought the user to the service in the first place, or to assess the user's use of the information provided and its impact over the long term.

This methodological limitation is in part an artifact of reference work traditionally being performed at a desk: once a user leaves the library, it is difficult or impossible to follow up to ask about her use of the information provided. With the advent of virtual reference services, however, it became possible to follow up with the user, as in this type of service the user is asked to provide an e-mail address. This made it possible to get in touch with users some time after they obtained answers to their questions, to determine the use to which they put this information. Even given this, however, few studies of virtual reference services have investigated users' uses of the information provided by the service, or identified the information needs that give rise to users' questions to the service.

The present study combines a traditional evaluation of the user's satisfaction with the reference encounter with details of the user's information use and the user's motivation for using the chat reference service. This combination of methodologies may serve as a model for the evaluation of other virtual reference services, alongside other evaluation methods and metrics. The goal of this evaluation study was to assess the effectiveness of chat reference service in meeting users' information needs. This was addressed by the investigation of three research questions:

1. What is the users' level of satisfaction with chat reference service?
2. What motivates users to use chat reference service?
3. How are users using the information provided by chat reference service?

This study was conducted in the context of NCknows (www.ncknows.org), a collaborative statewide chat-based reference service. NCknows was launched for an 18-month pilot phase in February 2004. Nineteen libraries participated in the pilot phase, and librarians in all nineteen of these libraries staffed the service during the pilot. These libraries included public, research university, community college, and government libraries and spanned the state to include urban, suburban, and rural, and large and small libraries. The NCknows project was started with funding from an Institute of Museum and Library Services (IMLS) Library Services and Technology Act (LSTA) grant and is coordinated by the State Library of North Carolina's Library Development Section (statelibrary.dcr.state.nc.us). The pilot phase of NCknows has been completed, and in July 2005 the service had its final launch.¹

2. Literature review

Evaluation of chat reference services is important, given the mixed response that such services have received, from both librarians and users. Coffman and Arret (2004a, 2004b), for example, discuss the failure of many commercial reference services, and question whether chat is a viable medium for offering library-based reference service. Horowitz, Flanagan, and Helman (2005), for another example, describe the rise and fall of a chat reference service offered by the Massachusetts Institute of Technology's libraries. Given that the viability of chat reference for libraries has still not been definitively established, it is essential for libraries offering chat reference services to evaluate the service.

Metrics for the evaluation of reference services fall into two broad categories, defined by the perspective taken on the service: from the point of view of the user, and from the point of view of the service itself. Saxton and Richardson (2002), in their review of the many measures that have been used in the evaluation of library reference services, refer to these perspectives as the "obtrusive user-oriented approach primarily concerned with testing for levels of user satisfaction with the service," and the "query-oriented approach primarily concerned with testing the accuracy of answers to reference questions" (p. 33). In addition to answer accuracy, other metrics from the service perspective include the size of the library's collection, the resources used in the answer, and the cost of the service (Murfin, 1993), among others.

Two approaches to evaluating reference service from the user's point of view have traditionally been employed: obtrusive and unobtrusive. Obtrusive evaluation employs observation of the reference transaction, so that both the user and the librarian know that they are being observed. Unobtrusive evaluation is a "secret shopper"-style methodology in which the researcher or a proxy asks a question as a user, so that the librarian does not know that he or she is being observed. Crowley (1971) made the first recorded use of unobtrusive evaluation, and this method has been employed for many reference evaluations since (Dewdney & Ross, 1994; Durrance, 1989; Ross & Nilsen, 2000; Weech & Goldhor, 1982). Though all of these data collections were anonymous, it was possible for the researchers to collect data some time subsequent to the reference transaction only because the proxies were known and available to the researchers.

¹ More libraries around the state have joined the project since the conclusion of the pilot phase. The list of participating libraries can be found on NCknows' Web site.


Although proxies have been employed in evaluations of digital reference services (Kaske & Arnold, 2002), it is not necessary to do so, because it is possible to elicit data from the user and still maintain an unobtrusive methodology. This is accomplished through the use of an exit survey that the user fills out upon completion of the reference transaction. Exit surveys have been utilized for evaluations of e-mail reference services: Pomerantz (2004) describes surveys that studied users' satisfaction with the AskERIC e-mail service and the composition of the AskERIC user population. Exit surveys are more common, however, for evaluations of chat reference services (Hill, Madarash-Hill, & Bich, 2003; Marsteller & Neuhaus, 2001; Ruppel & Fagan, 2002). Immediately following the reference transaction is the easiest, and perhaps the only, time to collect data from the user of a digital reference service: the user is "present" at that moment, and once the user disconnects from the service it may be difficult or impossible to contact her to collect more data. Although it may be possible to contact a user of a digital reference service after the conclusion of the reference transaction (via the user's e-mail address), this is rarely done.

The use of a reference service from the users' perspective may be broken down temporally into three stages: prior to the use of the service, during and immediately following the use of the service, and subsequent to the use of the service. Scriven uses the term evaluand to refer to "that which is being evaluated" (Scriven, 1991, p. 139). Thus, three evaluands may be posited within the timespan of a user's use of a reference service: the user's motivation for using the service, the user's perception of the service, and how the user uses the information provided by the service.

These three evaluands correspond to the three stages of an individual's movement through a situation, as posited in Dervin's theory of sense-making (1983). Dervin suggests that the ultimate goal of a user's search for information is to bridge a gap between her internal cognitive state and external reality in order to make sense of the world in addressing some issue or concern in her life. The gap is the core premise of sense-making theory: Dervin (2003) posits that "discontinuity is an assumed constant of nature generally and the human condition specifically" (p. 271), which unavoidably places a gap between one's internal world and external reality. This gap provides the underlying motivation for the user to seek information from an external source. Bridging this gap provides the motivation for the user to use particular information sources. How the information helped (or hurt) the user to bridge that gap may reflect on the source that provided that information. One such information source may be another person, in the form of a librarian in a virtual reference service.

There have not been many studies that examine users' motivations for using library reference services. Among the few studies that have interviewed library users concerning their motivations for using library services in general, Marchant (1991) explored the motivations for adults' use of public libraries and categorized these motivations as being derived from four major life interests: family life, vocational growth, religion, and politics. Saracevic and Kantor (1997) concluded that users' reasons for using library services fall into three categories: for a task or a project; for personal reasons; and to get an object or information, or to perform an activity. It is reasonable to assume that the information needs that motivate use of a library-based virtual reference service would fall into the same categories that motivate use of library services in general.


Mitchell (1982) defines motivation as "the degree to which an individual wants and chooses to engage in certain specified behaviors" (p. 82), which can be further divided into two categories: intrinsic and extrinsic. Intrinsic motivation refers to "a drive arising within the self to carry out an activity whose reward is derived from the enjoyment of the activity itself" (Davis & Wiedenbeck, 2001, p. 554). Extrinsic motivation refers to external incentives and rewards that motivate an individual to perform an activity (Shapira, Kantor, & Melamed, 2001). For this study, motivation is characterized as either intrinsic or extrinsic, both of which drive the user to seek information. An example of an intrinsic information need is personal curiosity; an imposed query, defined by Gross (1999) as a question from an individual seeking information on behalf of another person, is an example of an extrinsic information need.

Whether and how a user's information need has been satisfied by using an information service may be taken as indicators of both the value of the service and the utility of the information provided by the service. Variables for measuring information value and utility include affective results, accomplishment of information tasks and changes in work performance, mental and physical effort expended in searching for information, and monetary value (Broadbent & Lofgren, 1993; Saracevic & Kantor, 1997; Su, 1992). In studies examining these variables, users were interviewed and surveyed immediately following their use of an information service; this time constraint, however, only allows for collection of data on the perceived value of the information to the user. Users had not yet had a chance to apply the information in solving their problems. What has been missing in these studies is a more realistic elicitation of users' valuing of information, that is, how users actually use information to address concerns or solve problems.

Information use is defined here as the real-life utility value of information for the user. This value is taken as an indicator of NCknows' success in providing help that fulfills users' information needs. Ahituv (1980) studied three approaches to valuing information. One of these is the "realistic value" approach, which attempts to measure the impact of information on the outcomes of decisions and on the performance of decision makers before and after gaining the information. This approach has been adopted by the present study to allow users to comment on how they used the information provided to them by NCknows to solve problems that arose from situations in their lives, in an effort to analyze how users' information needs have been fulfilled.

3. Methodology

Two distinct methods were employed to collect data at different stages of users' use of the NCknows service and the information provided to them. First, an exit survey popped up for the user at the conclusion of a chat session. NCknows uses the 24/7 Reference application to provide chat service (www.247ref.org). The exit survey was implemented using the functionality of the software itself: when the user left the chat session by clicking on the Exit button, the exit survey popped up. Unfortunately, if the user left the session in some other way – by closing the browser window, or if the browser crashed, or if technical problems cut the chat session short – the exit survey would not pop up.


The exit survey created for this study did not pop up for users who came into the NCknows service through the Public Library of Charlotte & Mecklenburg County (PLCMC)'s queue. Instead, at the conclusion of chat sessions, PLCMC's users received the default exit survey provided by the 24/7 Reference company to their customers. This was the case because PLCMC had a contract with 24/7 Reference, separate from the rest of NCknows, due to the fact that PLCMC launched their chat reference service in February 2002, two years prior to NCknows' launch. To have the exit survey created for this study pop up for PLCMC's users would have required a separate negotiation between the evaluation team, PLCMC, and 24/7 Reference, which was unfeasible at the time that this data collection effort was launched. PLCMC is the largest contributor to NCknows' volume of chat sessions, accounting for 43% of NCknows' total volume. Thus, a large percentage of NCknows' users were not surveyed for this data collection effort. This will not be an issue in future evaluations of NCknows, however, as PLCMC did not renew their individual contract with 24/7 Reference, and instead joined the contract between NCknows and 24/7 Reference as of February 2005.

The drawback of creating a customized exit survey, however, is that the exit survey responses were not automatically linked to chat session transcript numbers, a functionality that is available to services that use 24/7 Reference's default exit survey. The research team was able to manually link most (70%) of the exit survey responses to chat session transcript numbers. This linking was performed using two criteria: the date and time of the chat session and of the submission of the exit survey, and comparison of the e-mail addresses that users provided to the NCknows service at login and on the exit survey. If a user did not provide an e-mail address either at login or on the exit survey, transcripts could sometimes be linked using the date and time alone. The remaining 30% of exit surveys were those that could not be linked to transcripts by either criterion. The NCknows service has, since the conclusion of this evaluation effort, gone back to using 24/7 Reference's default exit survey, precisely so that this linking is performed automatically.
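For illustration, the two linking criteria can be sketched in code. This is a minimal reconstruction, not the team's actual procedure (which was manual); the record fields, the five-minute time tolerance, and the tie-breaking rule are all assumptions:

```python
from datetime import timedelta

# Assumed tolerance between the end of a chat session and the
# submission of its exit survey; the paper does not specify one.
MAX_GAP = timedelta(minutes=5)

def link_survey_to_transcript(survey, transcripts):
    """Return the transcript that best matches an exit survey, or None.

    Both arguments use hypothetical fields: transcripts have
    'ended_at' (datetime) and optionally 'email'; surveys have
    'submitted_at' (datetime) and optionally 'email'.
    """
    # Criterion 1: the session ended close in time to the survey submission.
    candidates = [t for t in transcripts
                  if abs(t["ended_at"] - survey["submitted_at"]) <= MAX_GAP]
    # Criterion 2: prefer a match on the e-mail address, when one was
    # provided both at login and on the exit survey.
    if survey.get("email"):
        for t in candidates:
            if t.get("email", "").lower() == survey["email"].lower():
                return t
    # Fall back on date and time alone, but only if the match is unambiguous.
    return candidates[0] if len(candidates) == 1 else None
```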

The questions on the custom exit survey designed for this study were derived from Hirko (2004) and McClure, Lankes, Gross, and Choltco-Devlin (2002), both of whom make recommendations for measures that may be collected when performing evaluations of digital reference services. These sources are based on both best practice and the American Library Association's guidelines (2004) for providing digital reference service.²

The second method to collect data on users' use of the NCknows service was semi-structured interviews with NCknows users. The final two questions on the exit survey asked whether the user was willing to be contacted for a follow-up interview, and for his or her e-mail address. If the user was willing to be contacted, the researchers sent the user an invitation e-mail asking to set up a telephone interview. This e-mail message also contained the text of the interview questions, so that the user would have advance knowledge of the questions that the researchers would ask. This message was sent to the user two weeks after their chat session because this amount of time was judged to be long enough for the user to have applied the information obtained from NCknows but short enough for the user to still have a clear memory of the chat session. A time for a telephone interview was set up by e-mail; these telephone interviews therefore took place two to three weeks after the user's chat session. The interviews consisted of five questions. Two primary types of data were collected in these interviews: users' motivation for using the NCknows service and how users used the information provided to them.³

² The exit survey can be found online at: http://purl.oclc.org/NET/POMERANTZ/NCKNOWS/exitsurvey.html.


4. Results

From March through November 2004, a total of 4563 users submitted questions to the NCknows service via queues other than PLCMC's. Of these, 392 users (8.6%) submitted the exit survey, and 284 users (6.2% of all users, 72.4% of users who submitted the exit survey) indicated willingness to be contacted. Of the users willing to be contacted, 73 (25.7%) replied to the invitation e-mail and answered the interview questions either in a telephone interview or by e-mail: a total of 24 telephone interviews were conducted and 49 e-mail responses were received. These low response rates raise the issue of the effect of unit non-response bias on these results, which will be discussed below.

4.1. Prior to use of the service

4.1.1. Use of other reference services

A total of 67 users answered the follow-up interview question about whether they had ever used any of the other reference services offered by their library. Of these respondents:⁴

• 48% (32) had used their library's desk reference service,
• 19% (13) had used the e-mail reference service,
• 33% (22) had used the telephone reference service,
• 19% (13) had never used any other reference services,
• 7% (5) did not answer the question directly, and
• 7% (5) were librarians or library staff.

The purpose of this question was to determine if NCknows users were the same individuals who used other library reference services, or if NCknows was reaching new user communities. The results indicated that 19% of the users had never used any of the library reference services before, which means that this percentage of NCknows users are new. Whether NCknows users are new or existing library users, however, it was important to determine how users were learning about the service. This motivated the following question.

³ The interview protocol can be found online at http://purl.oclc.org/NET/POMERANTZ/NCKNOWS/patron_followup.html.

⁴ Note that these numbers sum to greater than 100%. This is due to the fact that some users had used more than one of the other reference services offered by their library. Other purposes for which users used the library included to check books out, to bring their children for story time and other children's programs, to conduct genealogy research, and to use public access computers.


4.1.2. Discovery of NCknows

Another follow-up interview question asked how the user had learned about the NCknows service. A total of 68 users replied to this question, by both telephone and e-mail. Four routes by which users learned about the service emerged from responses:

1. The local library, or library-produced publicity materials (47, 69%): A librarian recommended the service to users during a physical visit to their local library, or users came across a mention of the service in materials prepared by their local library (e.g., library newsletters, flyers, Web site, or radio news coverage).

2. Online searching (8, 12%): The user came across the service either by searching in a search engine for some topic and serendipitously retrieving a link to the service, or the user was using another Web site that contained a link to the service. These users did not know about the service until they came across it online.

3. School system (8, 12%): The service was recommended to the user in a school setting (e.g., at a college orientation, mentioned by a teacher or professor in class, or a link from a school's Web site).

4. Friends and colleagues (5, 7%): Users had the service recommended to them by another individual who knew about the service (e.g., a friend, classmate, or someone on a listserv to which the user subscribes).

4.1.3. Motivation for using the service

The question as to users' previous use of other reference services had a follow-up question that concerned the user's motivation for using NCknows rather than any of the other available reference services. A total of 68 users, among them five librarians, replied to this question. Seven overarching categories of motivation emerged from users' responses:

• Convenience (32, 47%): Users used NCknows because of their perception that chat reference service is fast, efficient, and questions may be answered immediately; that the service is easy to use, always available, and accessible from any computer with Internet access, unrestricted by physical location; and users found it to be less trouble than other forms of reference service.

• Other means of seeking information were not helpful (10, 15%): Before trying NCknows, users had already used other means to search for information but did not get their questions answered by those means.

• Curiosity (9, 13%): Users were curious about the service and wanted to try it out.

• Serendipity (8, 12%): Users came across the service online, often via a search engine, thought it might be useful in answering their questions, and gave it a try.

• Recommended by others (5, 7%): NCknows was recommended to the user by another user or reference service. The user chose to use NCknows because of this recommendation.

• Personal characteristics/habits (5, 7%): Users chose to use NCknows due to some personal characteristic or habit, such as English not being the user's native language, shyness, or an affinity for computers.


• Other reference services were not available (1, 1.5%): One user used NCknows because other reference services were not available at the particular moment that she was seeking information.

Some of these categories naturally overlap: one user, for example, indicated that he was unable to find the information he was seeking via other means, came across NCknows serendipitously, and then decided to use it out of curiosity. Another user indicated that NCknows was recommended to him by another user who apparently came across the service serendipitously.

It is worth noting that the librarians who responded to this question used the service for one of two reasons: to seek help in answering questions they had received in their own reference services, or out of curiosity. Those librarians who used the service out of curiosity said that their library did not offer chat reference service, and they wished to report on the NCknows service to their colleagues and library directors. The follow-up interviews also identified 3 users (4% of the total number interviewed) who used the service specifically for the purpose of testing it: a student in an MLS program, a librarian, and a library director. Most of the librarians who used NCknows were from within North Carolina. It was known within the library community in the state that the NCknows pilot was going on, and that if it was deemed successful, additional libraries would be asked to join. It is therefore little surprise that librarians and library directors from around the state would be curious about the service and would want to find out more about it before deciding whether or not to join.

4.1.4. Motivation for asking the question

Users were asked about their motivation for asking the question. A total of 72 users replied, and one user answered this question for two different chat sessions, bringing the total number of responses to 73. Six categories of motivation emerged from users' responses:

• To answer a work-related question (51%): These questions concern activities, projects, or problems in which the user is engaged. These questions are split fifty/fifty between business-related and school-related questions. Users who asked business-related questions were mostly businesspeople who asked questions related to a current project. Users who asked school-related questions were split between students and educators (both K-12 teachers and professors in higher education). Students asked questions related to a current course or assignment; educators asked questions both related to courses and assignments, and for purposes of planning courses or coursework.

• To answer a question that arose from the user's personal life (32%): Questions of this type broke down into several sub-categories: genealogy, hobbies or other activities, the pursuit of vocational or academic change, and plain curiosity.

• To conduct a known-item search (8%): In these questions, the user either knew the name of a source but could not locate it, or the user knew the name of some piece of content contained within a source (e.g., the name of a poem or short story) but not the name of a source that contains that content.


• To answer a question about the library itself (3%): These questions concerned library policies or services available online.

• To help others look for information (3%): These were imposed queries (Gross, 1998), where the questioner stated that he/she was asking on behalf of another.

• Other (3%): Users gave no indication of the motivation of the question. All of these users responded to the follow-up interviews by e-mail.

Fig. 1 presents some examples of users' questions.

4.2. The point of service

Chat sessions are handled for the NCknows service by both librarians in libraries participating in NCknows and the staff of the 24/7 Reference company. Because of this, the exit surveys naturally include responses from users who chatted with both NCknows librarians and 24/7 staff. Of all exit surveys:

• 37.2% of responses are from users who chatted with NCknows librarians,
• 19.3% chatted with 24/7 Reference staff,
• 13.7% chatted with both, and
• 29.7% were indeterminate.

The 13.7% of users who chatted with both NCknows librarians and 24/7 Reference staff were disconnected or logged out, and then reconnected to NCknows right away and were picked up by a different librarian. The 29.7% of exit surveys that were indeterminate were so because, as mentioned above, the research team was able to manually link only 70% of the exit survey responses to chat session transcript numbers.

That users chatted with NCknows librarians almost twice as frequently as with 24/7 staff reflects two facts about the NCknows service. First, the greatest volume of users used the service during NCknows' hours of service. The hours during which librarians from the participating libraries staff the NCknows service are 10 AM to 8 PM on weekdays, and 1–6 PM on weekends; 73% of NCknows users connect during these hours of service on weekdays, and 40% on weekends. Weekends, however, account for only 18% of the total volume of sessions received by NCknows. Thus, given the hours that NCknows librarians are staffing the service and the patterns in incoming traffic, there is more opportunity for a user to chat with an NCknows librarian than with a 24/7 employee. Second, one of the terms of NCknows' contract with 24/7 dictates that, during NCknows' hours of service, 24/7 staff cannot "pick up" a user until 90 seconds after the user has logged in. This agreement is an attempt to ensure that NCknows librarians will chat with as many NCknows users as possible, but that if NCknows librarians are very busy, users will be picked up by 24/7 staff.
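The pickup rule in that contract term can be expressed as a short sketch. This is a hypothetical reconstruction for illustration only; the function name and parameters are not drawn from the 24/7 Reference software:

```python
from datetime import datetime, timedelta

# The 90-second delay is the one contract term described above.
PICKUP_DELAY = timedelta(seconds=90)

def may_pick_up(staffer_is_247: bool, in_ncknows_hours: bool,
                logged_in_at: datetime, now: datetime) -> bool:
    """Hypothetical routing rule: during NCknows' hours of service,
    24/7 Reference staff must wait 90 seconds before picking up a
    user, giving NCknows librarians the first chance to respond."""
    if staffer_is_247 and in_ncknows_hours:
        return now - logged_in_at >= PICKUP_DELAY
    return True  # NCknows librarians, or off-hours pickups, may respond at once
```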

Table 1 presents data on measures of users' satisfaction with various aspects of the NCknows service. At the conclusion of a session, the user has not had sufficient time to use the information provided by the librarian, and so can only evaluate the reference service on the basis of immediate impressions. It is the limitation of these immediate impressions as evaluation metrics for reference work that led the research team to conduct follow-up interviews. Because the overarching project was an evaluation effort of the NCknows service, however, it was necessary to establish that users were satisfied with their interaction with the service.

Table 1
Users' satisfaction on aspects of the chat service (n, %)

                    Satisfaction    Speed           Helpfulness     Ease of use
Very satisfied      265, 68.48%     258, 67.72%     310, 81.15%     320, 82.69%
Satisfied            91, 23.51%      95, 24.93%      50, 13.09%      55, 14.21%
Dissatisfied         21, 5.43%       20, 5.25%       13, 3.40%        7, 1.81%
Very dissatisfied    10, 2.58%        8, 2.10%        9, 2.36%        5, 1.29%

Note that users' satisfaction on all of these measures is high. People tend, when reporting on their satisfaction with a service, to be generous, especially when that service is provided by another human being with whom the respondent has had some personal contact. This is one explanation for why reports of user satisfaction with reference services are so high, even when users report that the answer provided by the service was not complete (see, for example: Durrance, 1989; Richardson, 2002). This may also be the case here.

When users were asked to rate the speed of the service, their subjective perception played an important role. Of the 256 users who rated the service as "very quick," 148 chat sessions could be linked to their transcript numbers; the average length of these sessions was 20.2 minutes. Only 3 transcripts could be identified for which users rated the service as "very slow"; the average length of these sessions was 24.7 minutes. Due to the small number of "very slow" ratings, no statistical analysis is possible to determine whether these two groups of users are significantly different. But users' narrative responses indicate that the definitions of quick and slow are largely a matter of subjective perception.

Table 2 presents the results from an alternative measure of satisfaction, the user's willingness to recommend the service. NCknows scored very high on this measure. The willingness to recommend something – a product, a service, etc. – is a high bar; people are generally willing to complain in public but tend to be positive less often (International Customer Service Association, 2005; Society of Consumer Affairs Professionals in Business, 2003). The fact that the overwhelming majority of respondents are very likely to recommend NCknows is a credit to the service.

Table 2
Users' willingness to recommend the chat service

Recommend      Number    Percentage
Very likely    337       87.53
Maybe           39       10.13
Unlikely         6        1.56
Never            3        0.78

4.2.1. Role of the user

Table 3 presents demographic information about the user. It was important, in this evaluation effort during NCknows' pilot phase, to determine the composition of the service's primary user communities. These data will assist the librarians providing the service to conduct reference transactions appropriate to the user, and it will also impact the marketing of the service after its launch. Combining the student categories accounts for 47% of all users, and combining the educator categories (teacher, faculty, librarian) accounts for 18.6% of all users. The "other" category unfortunately accounts for the largest category of users; nearly a third (32.7%) of users who specified "other," however, also provided an alternative role. Among these were genealogist (19.4%), retired (13.9%), active readers (11.1%), unemployed (8.3%), legal worker (5.5%), and business person (5.5%). Other "other" categories specified by only one or two users included artist, journalist, volunteer, grant evaluator, medical patient, church worker, and grandparent.

Table 3
Role of the user

Role                        Number    Percentage
Other                       110       29.65
Student: undergraduate       86       23.18
Student: graduate            50       13.48
Parent                       29        7.82
Student: K-12                27       10.34
Librarian                    20        7.66
Teacher: K-12                13        3.59
Administrator                10        2.70
Higher education faculty      9        3.45
Adult educator                8        3.07
Medical professional          6        6.25
Teacher: pre-school           3        0.81
Policymaker                   0        0.00
Politician                    0        0.00

As discussed above, students accounted for nearly half of all NCknows users. Students are also the largest group of users that had used the information provided to them by NCknows by the time of the follow-up interview. This breaks down further as follows: of the users who had used the information provided, 18.6% were graduate students and 9.3% were undergraduates. The fact that students are the group that most immediately used the information provided may indicate that students seek information close to their immediate point of information need (for example, at the last minute before an assignment is due). The same might be said of administrators, who accounted for 11.6% of users who had used the information provided.

Students are the largest group of users that had used the information provided by the time of the follow-up interview. Undergraduate students are also one of the largest groups that had not used the information provided (17.6%). Undergraduates form a bimodal distribution: those who used the information provided quickly, or not at all. One possible explanation for this bimodality is that some undergraduates may seek information at the last minute before an assignment is due and then use that information quickly, whereas others may change the direction of their research after having already sought some information on a topic.

Interestingly, the librarians who used the NCknows service were one of the largest groups of non-users of the information provided (17.6%). All of the librarian-users interviewed were from libraries other than the 18 participating in NCknows, and as discussed above, used the service for one of two reasons: to learn more about the service itself by using it, or to seek help from other librarians to answer questions they had received in their own reference services. The term "use" here is ambiguous: in the former case, the librarian-user was experimenting with the service, asking a question that did not necessarily reflect a genuine information need. Instead, the librarian-user's information need was fulfilled by the use of the service rather than by the information provided by the service, so they may have had no need to actually use the information provided. In the latter case, the librarian-user was asking an imposed query, and so they were not the ultimate end-user of the information; the librarian-user may not have interpreted passing information along to the end-user as "use" of the information.

4.2.2. Users' comments

The exit survey contained the typical end-of-survey question, an open-ended question to elicit any additional comments from the user about the service. Users' additional comments are mainly related to the various aspects of the service and reiterate their satisfaction or dissatisfaction with the service. These additional comments will be presented here simply as positive and negative.

Mon and Janes (2003) counted unsolicited "thank you" messages received in response to answers provided to e-mail questions received by the Internet Public Library, and report a 20% "thank you rate." The additional comments provided by NCknows users showed a 13.4% thank you rate, counting only those comments in which the user used the words "thank you" or "thanks" or a number of misspellings of those words (e.g., "thankyou" or "thanx"). Counting those comments in which the user made other positive comments (such as about the speed or efficiency of the service, or the helpfulness of the librarian, or how great the service is), the additional comments show a 74.6% "positive comment rate." The additional comments also show a 27.6% "negative comment rate." The percentages of positive and negative comments sum to greater than 100% because many comments were mixed.
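A counting rule of this kind is straightforward to express in code. The sketch below is an illustration, not the authors' instrument; the regular expression is an assumption covering only the variants the paper names:

```python
import re

# Match "thank you", "thanks", and the misspellings named in the text
# ("thankyou", "thanx"); other variants could be added to the pattern.
THANKS = re.compile(r"\b(thank\s*you|thanks|thanx)\b", re.IGNORECASE)

def thank_you_rate(comments: list[str]) -> float:
    """Fraction of exit-survey comments containing a 'thank you' variant."""
    hits = sum(1 for comment in comments if THANKS.search(comment))
    return hits / len(comments)

# Example: yields 0.5 for these two hypothetical comments.
print(thank_you_rate(["Thankyou, very fast service!", "The chat kept freezing."]))
```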

Fig. 2 presents some examples of the more glowing of the positive, the more critical of the negative, and the most thoughtful mixed comments.

Two overarching themes emerged from the users' comments. These themes are likely to be familiar to all reference librarians: the interaction between the user and the librarian, and the usability of the software. These are the primary issues that affect users' immediate impressions of the reference service. As discussed above, however, immediate impressions are limited as evaluation criteria, so follow-up interviews were conducted to collect data about the users' use of the information provided by the service.

4.3. Use of the information provided by the service

One of the questions on the follow-up interview concerned whether and how the user had used the information provided by the librarian in response to the question. A total of 67 users replied to this question. One user answered this question for two different chat sessions, bringing the total number of responses to 68. Three categories of use emerged from users' responses:

1. Use: by the time they were interviewed, the user had used the information provided (63%).
2. Partial use: the user had used the information provided and had found it partially useful, or the user had partially used the information provided (12%).
3. No use: the user had not used the information provided at all (25%).

Users who had used the information provided stated that the information helped them in accomplishing one of five types of tasks:

1. Work-related uses: the information provided filled in a gap in the user's knowledge that allowed her to accomplish a business-related or a school-related task (69% (n = 22) were business-related tasks, 31% (n = 22) were school-related tasks).
2. Personal uses: the information provided filled in a gap in the user's knowledge that allowed her to achieve a personal goal (33%, n = 43).
3. Found a known item: the information provided allowed the user to find the specific information source or piece of information for which she was searching (7%, n = 43).
4. Helping others: only two respondents used NCknows because they were helping other people seek information, but three (7%, n = 43) said that they shared the information that they received with others.
5. In one case (2%, n = 43) the user simply pointed out that he used the information provided, but did not provide details as to how.

These uses, as might be expected, mirror users' motivations for asking a question. This may be the ideal situation for a reference service, or indeed any service that provides information: that the information provided actually fulfills the information need that motivated the use of the service. Not all users, however, had had the opportunity or motivation to fully use the information provided.


Users who had partially used the information provided stated that there were three reasons why this was so:

• the user had gone to the information source provided and found this source to be unhelpful;
• the user's question was not fully answered, but relevant information sources or useful leads were provided, so that the user believed that a complete answer might ultimately result with more research; and
• the user had not yet found the time to fully use the information provided, but what had already been used was useful.

There were three reasons why users had not used the information provided:

• The user had not yet had a chance to use the information provided, because they had not yet gotten to the stage in their work where they needed it.

• The information was provided too late, after the user no longer needed it. As might be expected, this issue was raised only by users for whom the librarian could not provide a complete answer during the chat session. In such cases, librarians asked for the user's e-mail address and followed up some time later by e-mail with more information.

• The information provided did not fulfill the user's information need, or the user's question was not answered. The information provided was either entirely incorrect, or insufficient in breadth or depth to satisfy the user. There were two causes for the question not being answered: the user's question was so difficult or specific that the librarian was unable to find a complete answer or any answer at all, and limitations in access to fee-based services limited what the librarian could provide to the user. An example of this latter case is that a librarian and a user on the same university campus would have access to the same set of database subscriptions, whereas an off-campus user would not have the same access. In these cases, librarians relied more heavily on resources on the free Web, which naturally restricted the breadth and depth of information that the librarian was able to provide on certain topics.

Fig. 3 presents some examples of users' comments on their use of the information provided to them.

5. Discussion

5.1. Non-response bias

Both the exit survey and the follow-up interviews suffered from low unit response rates: 8.6% of NCknows users who came into the service via queues other than PLCMC's submitted the exit survey, and 18.6% of users who submitted the exit survey (1.6% of all NCknows users) were interviewed. As a result, it is imperative to investigate the effect of unit non-response bias on these results.

One method for investigating the effect of non-response is to collect data from the non-respondents to enable a comparison between the respondents and the non-respondents according to relevant characteristics. It was unfortunately not possible for the researchers to conduct such a comparison. There are two types of non-respondents in this study: NCknows users who did not submit an exit survey, and users who submitted an exit survey but were not willing to be contacted for a follow-up interview. It was impossible for the researchers to reach many of the former category of users; it would have been illegal to contact the latter category of users.

The NCknows service asks, but does not require, users to submit their e-mail address when they connect to the service. If an e-mail address is provided, the 24/7 application e-mails a copy of the chat transcript to the user. Because it is not required, only 72% of users do in fact provide an e-mail address. Of these, a percentage are likely to be invalid, either because the user deliberately submits a fake e-mail address, or because the user accidentally makes a typo. It is not known what percentage of the e-mail addresses submitted by users is invalid, because NCknows does not validate these addresses; however, 4% of the e-mail addresses provided by users on the exit survey were different from the e-mail address provided by the same user when they connected to the service. The non-respondents for the exit survey fall into two categories: users who submitted a valid e-mail address when they connected to the service and users who did not. Users who did not submit a valid e-mail address were of course impossible to contact.

It would seem, however, that it would be possible to contact those users who did submit a valid e-mail address when they connected to the service. This is true technically (i.e., the researchers had a valid e-mail address at which to contact the user) but not legally. These users could not be contacted according to the requirements of the institutional review board at the authors' institution. According to these requirements, any study involving human subjects must allow participants to opt out of the study at any time. Because the exit survey was Web-based, the researchers could not collect a signed consent form from the user. Instead, the exit survey itself became the consent form, by including the following statement: "Clicking the Submit Form button at the end of this survey constitutes consent to participate in this study." If the user did not submit an exit survey, that must therefore be taken as a refusal to consent to participate in the study. The users who opted out of the exit survey could not, according to the institutional review board's requirements, thereafter be contacted for any other reason.

Similarly, users who opted out of the follow-up interviews could not be contacted. As discussed above, the final two questions on the exit survey asked the user if she would be willing to be contacted for a follow-up interview, and asked for the user's e-mail address. Of the users who submitted an exit survey, 72.5% indicated that they were willing to be contacted for a follow-up interview. These users were of course contacted. The remaining 27.5% of users, however, by indicating unwillingness to be contacted, had opted out of the study. Thus, due to both technical and legal reasons, it was impossible to assess the effect of non-response bias on the results of the exit survey and the follow-up interviews by collecting data from the non-respondents.

Another method for investigating the effect of non-response is to compare respondents and non-respondents according to those characteristics that are known about each group. Nothing is known about those users who did not submit an exit survey, except what they submitted when they connected to the service (a deliberately minimal set of data, including the user's name, e-mail address, zip code, and question, all of which are optional), and what they divulged in the chat session itself.

Considerably more is known about users who submitted an exit survey but were not willing to be contacted for a follow-up interview. The chi-square statistic was used to determine whether there were any significant differences in responses to questions on the exit survey between those users who agreed to be contacted for a follow-up interview and those who did not. Significant differences were identified for three of the exit survey questions: the user's satisfaction with the completeness of the answer, the helpfulness of the librarian, and the speed of the service (χ² = 14.33, df = 3, n = 373, p = 0.002; χ² = 10.62, df = 3, n = 366, p = 0.014; and χ² = 14.29, df = 3, n = 368, p = 0.003, respectively). Users who were willing to be interviewed evaluated the service provided by NCknows significantly more positively than users who were unwilling to be interviewed. One possible explanation for this is that users who were unsatisfied with the service might be disinclined to talk about it further, whereas users who were satisfied might be more willing to share this with the evaluation team. This interpretation actually flies in the face of the findings by the International Customer Service Association (2005) and the Society of Consumer Affairs Professionals in Business (2003), which both found that dissatisfied customers are significantly more likely to talk about their negative experience with a company or a service than satisfied customers are to talk about their positive experience. Those studies, however, are concerned with word-of-mouth marketing and investigated customers' communication with friends, colleagues, and the like, and not with the companies themselves. The findings of the present study may indicate that users are willing to communicate their satisfaction or dissatisfaction to differing degrees, depending on with whom that communication is being conducted.
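For readers who wish to reproduce this kind of comparison, a chi-square test of independence on a 2 × 4 contingency table can be run as sketched below. The counts here are invented for illustration (chosen only to sum to the reported n = 373 for the completeness question); the paper reports only the resulting statistics:

```python
from scipy.stats import chi2_contingency

# Rows: users willing vs. unwilling to be interviewed.
# Columns: very satisfied, satisfied, dissatisfied, very dissatisfied.
# These counts are hypothetical; only their total (373) matches the paper.
observed = [
    [190, 65, 10, 5],  # willing to be contacted
    [60, 30, 8, 5],    # unwilling to be contacted
]

chi2, p, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.3f}")  # df = 3 for a 2 x 4 table
```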

5.2. Self-selection bias

There is a further methodological problem faced by exit surveys: the users who submit exit surveys are self-selected. Even though the exit survey pops up at the completion of every chat session, the user must voluntarily fill it out and submit it. There is no way to force users to do this. After having just spent time in the chat session itself (an average of 14.6 minutes for NCknows sessions), many users are apparently unwilling to spend even an extra minute or two filling out a survey.

Though there is no evidence to support the assertion that this is the case, the possibility that the exit survey respondents are significantly different from non-respondents along some dimension cannot be ruled out. Respondents therefore may not be representative of the general user population of NCknows, or of the population of users of chat reference services in general. Because, as discussed above, there is no way to obtain data from users given the existing methodological constraints other than by allowing users to self-select, the demographics of NCknows' user population are unknown. Indeed, the only characteristic of this population that is known for certain is the use of NCknows. Further, while there have been many studies of the user populations of individual digital reference services, there have been no studies of the user populations of digital reference services in general. The lack of data about this overarching user population calls out for future research to determine the "demographics" of the population of digital reference service users.

It should be pointed out that exit surveys for other services do not have much better response rates than this study's 8.6%: Hill et al. (2003) report a 14.2% response rate, whereas Marsteller and Neuhaus (2001) report an approximately 20% response rate to "at least part of the survey," indicating that they too encountered unit non-response. Ruppel and Fagan (2002) do not report their response rate, but they do report that they received 340 exit survey responses. They also report an average of 9.5 questions per day for the semester in which they conducted their study; for a 16-week semester (including reading and finals periods) this allows the estimation of a 32% response rate.
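That estimate can be reconstructed as follows, assuming the service ran seven days a week over the 16-week semester: 9.5 questions/day × 112 days ≈ 1064 questions, and 340 / 1064 ≈ 32%.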

5.3. Discussion of findings

Two recent studies of users of chat reference services asked respondents what reference and other library services they had used previously to seek help (Horowitz et al., 2005; Johnson, 2004). These studies used different categories of library services and so are not directly comparable. Some categories did overlap, however, and also overlap with the categories used in the present study. All three studies found that when users had used another reference service, the most common form used was desk reference, followed distantly by telephone, with use of an e-mail service and no previous use of any reference service roughly tied for third place. While the percentages of the user population found by these three studies do not match, the rank order of these categories of previous use is identical.

The motivation for asking users whether they had ever used any of the other reference services offered by their library was to determine if NCknows users were the same individuals who used other library reference services, or if NCknows was reaching new user communities. The majority of interviewees were already users of other forms of reference service, but nearly 20% had never used any other reference service. An interesting avenue for future research will be to determine the effect of marketing campaigns for the NCknows service on the percentages of existing and new users.

Campaigns to market chat reference services should be targeted to user groups that are likely to have a need for the service, such as in grade schools, colleges, and universities; businesses and other corporate settings; and social organizations. Johnson (2004) found that the largest group of users of the service he studied was undergraduates, followed by graduate students, with faculty as the smallest user group. This is echoed by the findings of Ciccone and VanScoy's (2003) study. Other evaluation studies of chat reference services also reported that students (both undergraduates and graduates) are the largest group of users (Curtis & Greene, 2004; Kibbee, Ward, & Ma, 2002; Lee, 2004; Marsteller & Neuhaus, 2001). The present study offered the respondents more categories for their role in the use of the service, but again, the rank order of these categories of the users' roles is identical with the above studies. Johnson, too, suggests that chat reference services may benefit from marketing efforts; these findings indicate which user groups a service might wish to target in marketing campaigns.

Horowitz et al. (2005) report that the single greatest reason why users used the service was its convenience: users did not want to or could not physically come to the library, and users believed that the service could provide them with information rapidly. This was the reason given by approximately 50% of all users who responded both to Horowitz, Flanagan, and Helman's and the present study's surveys. The availability of the service at all hours of the day and night is one factor in the convenience of the service. Horowitz, Flanagan, and Helman suggest several more, including that the software be simple to use and that it work well across diverse networks.

Recall that the follow-up interviews, whether they were conducted by telephone or e-mail, took place two to three weeks after the user's chat session. The authors had judged this to be an adequate amount of time for users to have made use of the information obtained from NCknows. The fact that a percentage of users had not used the information provided to them after two to three weeks indicates that there is a set of users who are planning very far ahead, and seeking information considerably in advance of their point of need. This raises the question of whether using a chat reference service (that is, a synchronous medium) is the best option for these users. It may be that an e-mail service would be preferable, as e-mail is asynchronous and these users clearly can afford to wait, if the payoff is that the librarian will be able to spend more time formulating an answer and provide greater breadth and depth of information.

This raises the question of which medium is most appropriate for which types of reference service, and how to encourage users to use a reference service in the most appropriate medium. Horowitz et al. (2005) report that a majority of users believe that chat is a useful medium for simple questions. Kern (2003), however, reports that requests for research assistance are highest in chat services. It thus appears that there may be a mismatch between users' expectations of chat reference services and what these services are actually able and best suited to provide. This too may be an area where chat reference services may benefit from improved marketing efforts. Before chat reference services can market themselves as vehicles for specific types of assistance, however, librarians will need to identify which types of assistance really are best suited to chat and which to other types of reference services.

5.4. Methodological enhancements

In future research, more effort needs to be devoted to solving the methodological issues involved in recruiting users as study subjects, in order to make the sample more representative of the population of digital reference users. Other possible recruiting methods might be considered for use. For example, instead of asking in the exit survey whether the user is willing to have a follow-up interview, a follow-up survey could be automatically e-mailed to users after two weeks. Alternative methods for recruiting subjects would need to be tried to determine what garners the best response rate without violating users' privacy.
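
A minimal sketch of the automatic follow-up mailing suggested above is given below. All names here (the sessions table, its columns, the sender address, the SMTP host, and the two-week delay constant) are hypothetical, and any real deployment would require users' prior consent to be contacted.

```python
# Hypothetical sketch: e-mail a follow-up survey two weeks after each chat
# session. Table and column names, addresses, and the SMTP host are invented.
import sqlite3
import smtplib
from email.message import EmailMessage
from datetime import datetime, timedelta

FOLLOW_UP_DELAY = timedelta(weeks=2)

def send_due_follow_ups(db_path: str, survey_url: str) -> None:
    conn = sqlite3.connect(db_path)
    cutoff = (datetime.now() - FOLLOW_UP_DELAY).isoformat()
    # Sessions that ended at least two weeks ago and have not been followed up.
    rows = conn.execute(
        "SELECT session_id, user_email FROM sessions "
        "WHERE ended_at <= ? AND followed_up = 0 AND user_email IS NOT NULL",
        (cutoff,),
    ).fetchall()
    with smtplib.SMTP("localhost") as smtp:
        for session_id, user_email in rows:
            msg = EmailMessage()
            msg["Subject"] = "Follow-up on your recent chat reference session"
            msg["From"] = "evaluation@example.org"
            msg["To"] = user_email
            msg.set_content(
                "Two weeks ago you used our chat reference service. "
                f"Please tell us how you used the information: {survey_url}"
            )
            smtp.send_message(msg)
            conn.execute(
                "UPDATE sessions SET followed_up = 1 WHERE session_id = ?",
                (session_id,),
            )
    conn.commit()
```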

Another interesting area of research is the question of what users gain from using the information provided by a reference service. For example, do they find uses for the information other than the original intended use? Are there physical or psychological benefits brought by application of the information? Answers to these questions would provide a way to further quantify the value and utility of the information or the service.

6. Conclusion

This study combines traditional evaluation methods of the user's satisfaction with the reference encounter with a new approach made possible by the online nature of the service: elicitation of the user's information use and motivation for using the chat reference service. This analysis of users' motivations for seeking information, and of their use of it, facilitates understanding of the complete process of reference interactions: motivations as prior to a reference session and usage as an extension of it. Placing the evaluation within the context of a user's information need and how it has been fulfilled allows a holistic assessment of the value of chat reference services.

This combination of methodologies may serve as a model for evaluation of online reference services, and reference services in general: an exit survey to elicit the user's immediate satisfaction with the reference transaction, and a follow-up interview to elicit data about the user's actual use of the information and how that information fulfilled the user's information need. The major issue that needs to be addressed for this methodology to be truly useful is that of non-response: a method for eliciting a greater response rate on both exit surveys and follow-up interviews needs to be developed. Once this is developed, the methodology presented here, in combination with other evaluation methods and metrics, will enable a broad and deep evaluation of the context, performance, and impact of reference service.

Acknowledgments

The authors wish to extend thanks to several individuals and groups. To Jeanne Crisp, Phil Blank, and the State Library of North Carolina's Virtual Reference Advisory Committee, for allowing the authors this opportunity to work on the evaluation of the NCknows service. To all of the librarians participating in NCknows, for taking the time to participate in this evaluation effort. And to Charles McClure, for his invaluable assistance in conducting this evaluation effort.

References

Ahituv, N. (1980). A systematic approach toward assessing the value of an information system. MIS Quarterly, 4(4), 61–75.

Broadbent, M., & Lofgren, H. (1993). Information delivery: Identifying priorities, performance, and value. Information Processing & Management, 29, 683–702.

Ciccone, K., & VanScoy, A. (2003). Managing an established virtual reference service. Internet Reference Services Quarterly, 8(2), 95–105.

Coffman, S., & Arret, L. (2004). To chat or not to chat: Taking another look at virtual reference, Part 1. Searcher, 12(7), 38–49. Retrieved November 16, 2005, from http://www.infotoday.com/searcher/jul04/arret_coffman.shtml

Coffman, S., & Arret, L. (2004). To chat or not to chat: Taking yet another look at virtual reference, Part 2. Searcher, 12(8), 49–56.

Crowley, T. (1971). The effectiveness of information service in medium size public libraries. In his Information service in public libraries: Two studies (pp. 1–96). Metuchen, NJ: The Scarecrow Press.

Curtis, D., & Greene, A. (2004). A university-wide, library-based chat service. Reference Services Review, 32, 220–233.

Davis, S., & Wiedenbeck, S. (2001). The mediating effects of intrinsic motivation, ease of use and usefulness perceptions on performance in first-time and subsequent computer users. Interacting with Computers, 13, 549–580.

Dervin, B. (1983, May). An overview of sense-making research: Concepts, methods, and results to date. Paper presented at the annual meeting of the International Communication Association, Dallas, TX.

Dervin, B. (2003). From the mind's eye of the user: The sense-making qualitative-quantitative methodology. In B. Dervin, L. Foreman-Wernet, & E. Lauterbach (Eds.), Sense-making methodology reader: Selected writings of Brenda Dervin (pp. 270–292). Cresskill, NJ: Hampton Press.

Dewdney, P., & Ross, C. S. (1994). Flying a light aircraft: Reference service evaluation from a user's viewpoint. RQ, 34(2), 217–229.

Durrance, J. C. (1989). Does the 55 percent rule tell the whole story? Library Journal, 114(7), 31–36.

Gross, M. (1998). The imposed query: Implications for library service evaluation. Reference & User Services Quarterly, 37, 290–299.

Gross, M. (1999). Imposed versus self-generated questions: Implications for reference practice. Reference & User Services Quarterly, 39(1), 53–61.

Hill, J. B., Madarash-Hill, C., & Bich, N. P. T. (2003). Digital reference evaluation: Assessing the past to plan for the future. Electronic Journal of Academic and Special Librarianship, 4(2–3). Retrieved June 24, 2005, from http://southernlibrarianship.icaap.org/content/v04n03/Hill_j01.htm

Hirko, B. (2004). VET: The virtual evaluation toolkit. Olympia, WA: Washington State Library.

Horowitz, L. R., Flanagan, P. A., & Helman, D. L. (2005). The viability of live online reference: An assessment. Portal: Libraries and the Academy, 5, 239–258.

International Customer Service Association. (2005). Benchmarking study of electronic customer care. Chicago: International Customer Service Association.

Johnson, C. M. (2004). Online chat reference: Survey results from affiliates of two universities. Reference & User Services Quarterly, 43, 237–247.

Kaske, N., & Arnold, J. (2002). An unobtrusive evaluation of online real time library reference services. Retrieved June 24, 2005, from http://www.lib.umd.edu/groups/digref/kaskearnoldunobtrusive.html

Kern, M. K. (2003, June). What are they asking? An analysis of questions asked at in-person and virtual service points. Paper presented at the ALA Annual Conference, 9th Annual Reference Research Forum, Toronto, Ontario, Canada.

Kibbee, J., Ward, D., & Ma, W. (2002). Virtual service, real data: Results of a pilot study. Reference Services Review, 30(1), 25–36.

Lee, I. J. (2004). Do virtual reference librarians dream of digital reference questions? A qualitative and quantitative analysis of email and chat reference. Australian Academic & Research Libraries, 35(2), 95–109.

Marchant, M. P. (1991). What motivates adult use of public libraries? Library & Information Science Research, 13, 201–235.

Marsteller, M. R., & Neuhaus, P. (2001). The chat reference experience at Carnegie Mellon University. Retrieved June 24, 2005, from http://www.contrib.andrew.cmu.edu/~matthewm/ALA_2001_chat.html

McClure, C. R., Lankes, R. D., Gross, M., & Choltco-Devlin, B. (2002). Statistics, measures and quality standards for assessing digital reference library services: Guidelines and procedures. Syracuse, NY: Information Institute of Syracuse.

Mitchell, T. R. (1982). Motivation: New directions for theory, research, and practice. Academy of Management Review, 7(1), 80–88.

Mon, L., & Janes, J. W. (2003). The thank you study: User satisfaction with digital reference service. Retrieved June 24, 2005, from http://www.oclc.org/research/grants/reports/janes/jj2004.pdf

Murfin, M. E. (1993). Cost analysis of library reference services. Advances in Library Administration and Organization, 11, 1–36.

Pomerantz, J. (2004). A repeated survey analysis of AskERIC user survey data, 1998–2002. In R. D. Lankes, J. Janes, L. C. Smith, & C. M. Finneran (Eds.), The virtual reference experience: Integrating theory into practice (pp. 11–41). New York: Neal-Schuman.

Richardson, J. V., Jr. (2002). The current state of research on reference transactions. In F. C. Lynden (Ed.), Advances in librarianship, Vol. 26 (pp. 175–230). New York: Academic Press.

Ross, C. S., & Nilsen, K. (2000). Has the Internet changed anything in reference? The library visit study, phase 2. Reference and User Services Quarterly, 40(2), 147–155.

Ruppel, M., & Fagan, J. C. (2002). Instant messaging reference: Users' evaluation of library chat. Reference Services Review, 30(3), 183–197.

Saracevic, T., & Kantor, P. B. (1997). Studying the value of library and information services. Part II. Methodology and taxonomy. Journal of the American Society for Information Science, 48, 543–563.

Saxton, M. L., & Richardson, J. V., Jr. (2002). Understanding reference transactions: Transforming an art into a science. Amsterdam, NY: Academic Press.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage Publications.

Shapira, B., Kantor, P. B., & Melamed, B. (2001). The effect of extrinsic motivation on user behavior in a collaborative information finding system. Journal of the American Society for Information Science & Technology, 52, 879–887.

Society of Consumer Affairs Professionals in Business. (2003). SOCAP customer contact study: Contact channel use in customer relations. Alexandria, VA: Society of Consumer Affairs Professionals in Business.

Su, L. T. (1992). Evaluation measures for interactive information retrieval. Information Processing & Management, 28, 503–516.

Weech, T. L., & Goldhor, H. (1982). Obtrusive versus unobtrusive evaluation of reference service in five Illinois public libraries: A pilot study. The Library Quarterly, 52, 305–324.