portal: Libraries and the Academy, Vol. 18, No. 4 (2018), pp. 759–779. Copyright © 2018 by Johns Hopkins University Press, Baltimore, MD 21218.
Proceed with Caution: Deepening Practitioner Concerns about Social Tagging within Digital Collections
Edward Benoit III and Amanda L. Munson
abstract: Based on a comparison of two nearly identical surveys in 2010 and 2016, this study examines the deepening practitioner concern over the use of social tagging within digital collections during the past six years. While participants reported a greater level of experience and increased use of social tagging, concerns regarding control, consistency, and potential abuse strengthened over the six years of the study. The authors discuss several potential causes for this increase and suggest future directions for social tagging.
Introduction
The number of digital collections has quickly grown over the past decade. As the participatory movement took hold between about 2005 and 2010, some institutions began to allow social tagging within their digital collections—that is, they employed keywords generated by users rather than specialists to classify and describe their online content. Early in the development of social media, the research community began exploring social tagging, its potential benefits, and its limitations. The research soon shifted to include tagging within traditional information retrieval systems such as databases, online public access catalogs (OPACs), and digital libraries.2 The studies continued expanding through explorations of tagging development, organization, and the taggers themselves.3 While several studies highlight problems with tagging consistency and tagging abuse,4 few considered the practitioner’s perception of social tagging itself.5
Considering the opinions and concerns of digital librarians and archivists serves a vital role in the development of new digital platforms that better integrate tagging and quality assurance tools. A 2010 study of practitioners’ views regarding social tagging in digital collections found a generally positive perception, with limited concerns regarding control and consistency. With the further implementation of tagging in digital collections during the past six years and the increased use of social media, one might expect a continued favorable view of social tagging. However, by definition, early adopters view new technology positively, while the early and late majority adopters are more skeptical.6 Likewise, the participants in the 2010 study had little practical experience with tagging in their collections. Their opinions might have shifted as theoretical opportunities encountered experiential realities.
The following article compares the findings of the original 2010 study with one conducted in 2016 and addresses the following research question: How have digital librarians’ and archivists’ perceptions regarding the integration of social tagging within digital collections changed between 2010 and 2016? What concerns, if any, intensified?
Literature Review
There have been several studies on the use of social tagging in the digital collections of libraries, archives, and museums. Many investigations focus on measuring the benefits and limitations of tagging or on ways to improve it. Most studies center on one of several themes: comparing tags with controlled vocabularies, tagging as a method to expand traditional metadata (or fill in gaps), the use of tags to increase collection access, and concerns about the use of tags. The following section highlights studies related to each of these themes.
Testing Uncontrolled Vocabulary
One of the largest areas of tagging research explores how user-generated tags compare with more traditional controlled vocabularies, such as Library of Congress Subject Headings (LCSH). Harry van Vliet and Erik Hekman conducted one such study to determine if tags generated by nonprofessionals differed in quality from those created by experts. They found that experts and nonprofessionals used different vocabulary. Museum personnel ranked the expert tags as more informative and the nonprofessional tags as more effective search terms. The museum personnel may have had a slight bias, however. When the researchers compared the tags on their own, they found the tags created by both groups equally informative.7
Van Vliet and Hekman were not the only researchers to question whether professional index terms differed from user-generated terms. Christine DeZelar-Tiedman compared user-generated tags in academic library catalogs with LCSH to determine the benefits of using tags to catalog items. The records she examined had six times as many tags as subject headings, and 53.9 percent of the tags contained concepts that the LCSH did not cover.8 In Athens, Greece, Constantia Kakali used data from Panteion University Library and its social OPAC as well as the results of a national survey of 81 cataloging librarians to study social tags.9 She found that 57.82 percent of the tags in the study
matched LCSH terms.10 She also determined that catalogers frequently agreed with the taggers’ choice.11 Unlike many other researchers, who declared controlled vocabularies better than social tags or vice versa, Kakali concluded that new and improved controlled vocabularies could be created from tags.
Praveenkumar Vaidya and N. S. Harinarayana conducted a study like those by DeZelar-Tiedman and Kakali. They compared the tags assigned to 100 titles on LibraryThing, a social tagging Web application, with the LCSH terms that appeared on the machine-readable cataloging (MARC) records for the same titles. They found that 78.5 percent of the tags had no equivalent LCSH term, 46.9 percent of the LCSH terms were also used by the taggers, and 88 percent of the titles had at least one tag that matched an LCSH term used in the MARC record.12 The tags Vaidya and Harinarayana observed contained bibliographic information, personal references, and opinions about the books. The tags that did not overlap with LCSH terms, they said, revealed “much information about the sources” and acted “as a bridge between the professional terms.”13 They named “homographs, synonyms, and polysemy” as the main limitations of the tags and an increase in access points as the main benefit.14
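The overlap percentages reported in these studies can be illustrated with a toy sketch. The following is not any study’s actual method—the researchers used more careful term normalization—but it shows how such figures are derived from exact matching on lowercased terms; the sample tags and headings are hypothetical.

```python
# Toy illustration (not any cited study's actual method) of deriving
# tag/LCSH overlap percentages via exact matching on lowercased terms.

def overlap_stats(tags, lcsh_terms):
    """Return the share of tags with no LCSH equivalent and the share
    of LCSH terms also used by taggers."""
    tags_set = {t.strip().lower() for t in tags}
    lcsh_set = {h.strip().lower() for h in lcsh_terms}
    return {
        "tags_without_lcsh": len(tags_set - lcsh_set) / len(tags_set),
        "lcsh_covered_by_tags": len(lcsh_set & tags_set) / len(lcsh_set),
    }

# Hypothetical data for one title:
stats = overlap_stats(
    tags=["cats", "humor", "memoir", "pets"],
    lcsh_terms=["Cats", "Pets--Anecdotes"],
)
print(stats)  # tags_without_lcsh = 0.75, lcsh_covered_by_tags = 0.5
```

Note that exact matching undercounts overlap ("pets" does not match "Pets--Anecdotes"), which is one reason studies differ in how much agreement they find.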
In 2006, Krystyna Matusiak tested the ability of user-created tags to make images in digital collections more accessible through user searches. Describing image collections has long been a difficult task due to the lack of text and the many possible interpretations of an image. Matusiak believed that a tagging system would allow users to describe images from their own unique perspective, which would create more searchable metadata and more access points. She compared the description of images in a digital library to tags applied to images on the photo-sharing website Flickr and found that tags alone failed to provide adequate descriptions but could supplement metadata and include more diverse points of view.15
Expanding Metadata
The item-level description used by most digital collections requires significant labor from metadata specialists. Likewise, not every item’s content can be identified or described within a limited metadata framework. Social tags can help expand traditional metadata, identify the unidentified, and fill in gaps. The Library of Congress (LOC) conducted its own experiment with tagging and user comments to see if it could improve the visibility and accessibility of its image collections. Michelle Springer and her LOC colleagues uploaded images from two of the library’s historical photographs collections to Flickr. Within less than a year, users created more than 67,000 tags and left over 7,000 comments, enhancing the records of more than 500 items in the LOC’s Prints and Photographs Online Catalog.16
Increased visibility of digital resources is far from the only recorded benefit of tagging. Marek Sroka reported on the use of social tagging by Jewish cultural institutes to identify Holocaust victims in photographs taken before and during World War II. He noted two institutions that used this method to great success: The first was the Center
for Jewish History in New York City, which placed its photos on Flickr for users to tag with information. The second was the Virtual Shtetl—an online, Polish-Jewish history archive that allowed users to add descriptions to photos. Tagging has also benefited music cataloging.17 Paul Lamere explained that music catalogers have difficulty creating a genre hierarchy to use for taxonomic purposes. He stated that tagging systems allow users to tag pieces of music with multiple genres, thereby creating their own hierarchy.18 Jean-Yves Delort also noted that social tags could make automatic indexing systems more effective and accurate by revealing which terms users employ more frequently to describe certain items.19
Finally, tagging can help complete description for items with minimal metadata. Edward Benoit III, one of the authors of this article, examined the use of domain-expert tags as supplemental metadata for minimally processed digital archives.20 The study participants created tags for a set of photographs and letters based on their assessed prior domain knowledge. Although Benoit concluded that domain knowledge could not serve as a controlling mechanism, the generated tags could expand the existing metadata.
Increasing Access and Use
Since the adoption of social tagging by libraries, archives, and museums, its advocates have promoted its ability to increase access to and use of collections. Not surprisingly, several studies explore how users leverage tags for increased access to digitized materials using natural language rather than traditional controlled vocabularies. Sebastian Chan stated that serendipitous discovery in libraries, archives, and museums was an important aspect of user searches. He noted that tagging systems have the benefit of re-creating serendipitous discovery when integrated into library online public access catalogs.21 Linda Zajac studied the Philadelphia Museum of Art’s website, which included a user tagging system and a list of randomly selected tags for users to browse. Though Zajac did not mention serendipitous discovery in her analysis, the browsing feature she described was possibly meant to re-create the sense of serendipity in a digital environment.22
Jen Pecoskie, Louise Spiteri, and Laurel Tarulli studied the effect of user-generated content on readers’ advisory services in Canadian libraries. They found that titles had more tags than subject headings assigned to them overall, but just under one-third of the bibliographic records in the study had no tags at all. The tags included single categories, compound categories, and atypical compound categories. Some tags consisted of administrative notes or personal notes about the titles but lacked information that was standard in MARC records, such as authors’ names. Pecoskie and her colleagues stated that tags improved the ability to search for titles based on subject, protagonists, awards won, and the tone or mood of the story. The tags also tended to focus on the emotional aspects of the stories, whereas subject headings concentrated more on objective facts. Pecoskie, Spiteri, and Tarulli explained that the inclusion of the emotional information
serves as a reader advisory because users often want that type of information before deciding what to read. They also stated that tagging and other user-generated content create a sense of community for the readers.23
Lucy Clements and Chern Li Liew studied the effects of social tagging in the Auckland Libraries system in New Zealand. They specifically looked at the ways in which library staff used the tags in their work and any problems they experienced. Clements and Liew interviewed 12 staff members ranging from library assistants to specialized librarians. The interviewees stated that they used the tags for keyword searching and browsing. One said that it made searching for works by a certain author easier because titles were tagged with the author’s name. Another said that performing several quick keyword searches for tags was easier than looking up the proper LCSH and searching for materials cataloged with that heading. Other librarians used the tagging system to add tags to titles when they felt that no subject heading fit the exact genre of a work. Still others mentioned that the tags allowed staff and users to apply local slang and spellings to the resources, which made them more culturally relevant to New Zealanders than the American-created LCSH. Some interviewees disliked the lack of structure in the tags and preferred to use LCSH instead, but others found that sorting features built into the tagging system reduced these issues. Clements and Liew stated that most of the interviewees believed the tags benefited the library system.24
Tagging Issues
The final theme of tagging research focuses on perceived (or realized) issues of tags and proposed solutions to address these issues. Although some archivists and librarians complain about the inconsistencies in user-created tags, Margaret Kipp and D. Grant Campbell argue that these inconsistencies, when made by multiple users, indicated relationships between the tags and shared index terms. In other words, users tried to tag items with the same term or taxonomy, but due to common misspellings, grammatical errors, and the like, they created separate tags instead.25 In a solo study, Kipp also found that tags often represented emotions, tasks to be completed, and time.26 Seth van Hooland, Eva Méndez Rodríguez, and Isabelle Boydens observed similar results in their study of the steve.museum project, a social tagging effort to improve public access to art in United States museum collections. Van Hooland and his coauthors reached a different conclusion than Kipp did, however. They argued that social tagging did not benefit libraries, archives, and museums because the user-generated metadata were too contemporary. Tags that reflect the perspectives and search
methods of one generation might not be useful to future generations, who would likely have different perspectives and search methods. Van Hooland, Méndez Rodríguez, and Boydens also found that many tags contained terms too general to be useful in a search and that many users created tags in only one session, showing that users were less engaged with the collections than steve.museum researchers liked to believe. Van Hooland’s team instead recommended user comments, stating that comments allowed users to provide information that could correct erroneous metadata or tell a story about an item in the collection, thus leading to a higher level of engagement.27
Some researchers have tested methods for correcting the issues van Hooland and his colleagues reported. For instance, Brian Matthews, Catherine Jones, Bartłomiej Puzoń, Jim Moon, Douglas Tudhope, Koraljka Golub, and Marianne Lykke Nielsen tested whether knowledge organization systems such as the Dewey Decimal Classification system could limit the creation of general or misspelled tags by controlling the vocabulary with preset terms.28 They found that participants liked the freedom and ease of use of the uncontrolled tagging system but felt that the system needed to suggest terms.29
Participants who used a knowledge organization system liked the consistency of the controlled vocabulary but only appreciated the suggested terms when they believed them useful.30 Some participants complained that irrelevant suggestions and the number of steps required to tag documents made the enhanced tagging system inefficient. Some believed the vocabulary too restrictive, and others stated that the limits on vocabulary would make the users think more deeply about the subjects of the papers.31 Overall, the researchers found that most participants liked the consistency of a controlled vocabulary and found tagging in general provided more ways to access items than regular subject indexing did.
Several years before the Matthews team’s study, Marieke Guy and Emma Tonkin argued that system-suggested tags reduced inconsistencies inherent in user-created terms but also encouraged hegemony, thereby reducing the diversity in the tags.32 Clearly, a need exists for a less biased way of making tags more consistent. In a study of the National Library of Australia’s Australian newspapers collection, Rose Holley revealed that users made a concerted effort to create consistency in tags when no tagging rules existed. The users developed their own guidelines designed to ensure their fellow researchers could find the information they needed. These guidelines included using natural order for names and phrases as opposed to listing the surname or most general term first, putting spaces between words and punctuation marks when necessary, employing hyphens to indicate subject hierarchy (similar to LCSH), using terms that were meaningful to others, adding occupations and dates of birth and death after the names of individuals, and allowing the creation of tags for personal reference.33 Another proposed solution to the problem of inconsistency is Spiteri’s suggestion that tagging systems adhere to National Information Standards Organization (NISO) guidelines. However, Spiteri also notes that NISO does not allow slang terms or jargon, which would limit the amount of diversity in tags much as Guy and Tonkin claimed system-suggested terms would.34
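Guidelines like the ones Holley describes can be partially automated by a tagging system. The sketch below is a hypothetical illustration, not the National Library of Australia's code; both helper names are invented. It shows two of the conventions: converting an inverted name to natural order, and reading a hyphen-delimited tag as a subject hierarchy.

```python
# Hypothetical helpers illustrating two of the user-developed tagging
# guidelines Holley reports; NOT the National Library of Australia's code.

def natural_order(name):
    """Convert an inverted name ('Smith, John') to natural order."""
    if "," in name:
        surname, given = (part.strip() for part in name.split(",", 1))
        return f"{given} {surname}"
    return name

def hierarchy_levels(tag):
    """Split a ' - '-delimited tag into its hierarchy levels (similar to
    LCSH subdivisions); spaces around the hyphen keep hyphenated words intact."""
    return [level.strip() for level in tag.split(" - ")]

print(natural_order("Smith, John"))                       # John Smith
print(hierarchy_levels("Ships - Steamships - Interiors"))  # three levels
```

A real system would also need to handle single-word names, corporate names, and tags containing commas, which this sketch deliberately ignores.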
Another issue with social tagging is the potential for user abuse. Georgia Koutrika, Frans Adjie Effendi, Zoltán Gyöngyi, Paul Heymann, and Hector Garcia-Molina studied the impacts of spam and user abuse on tagging systems and possible ways to prevent them. The researchers created an algorithm to calculate to what degree each user’s tagging
activity conformed to the activities of other taggers. They then used this algorithm to locate tags created by malicious users. Koutrika and her team found that a small number of malicious users operating in an extensive digital collection with many non-abusive taggers did not significantly affect search results when they used their algorithm. Spam affected Boolean searches more. However, when the researchers introduced a moderator into the system to vet the tags, the Boolean searches improved. Koutrika and her coauthors also simulated a large-scale, coordinated spamming attack that negatively affected both Boolean search systems and search systems based on the concentration of tags. The algorithm proved inadequate as a countermeasure in that instance, but Koutrika and her team stated that moderators employed to check the tags would likely be an effective means of combating targeted attacks.35
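The conformity idea behind this approach can be conveyed with a much-simplified sketch. This is an assumption-laden toy, not the algorithm Koutrika and her colleagues published: it scores each tagger by how many other users independently applied the same tag to the same item, so a user whose assignments no one else shares stands out as suspicious.

```python
from collections import defaultdict

# Much-simplified sketch of conformity-based spam scoring; NOT the published
# algorithm of Koutrika et al., only the underlying intuition: taggers whose
# (item, tag) pairs no one else uses look suspicious.

def conformity_scores(postings):
    """postings: iterable of (user, item, tag) triples.
    Returns each user's average count of OTHER users who applied the
    same tag to the same item."""
    supporters = defaultdict(set)           # (item, tag) -> users who applied it
    for user, item, tag in postings:
        supporters[(item, tag)].add(user)
    agreement = defaultdict(list)
    for user, item, tag in postings:
        agreement[user].append(len(supporters[(item, tag)]) - 1)
    return {user: sum(counts) / len(counts) for user, counts in agreement.items()}

posts = [
    ("alice",   "photo1", "harbor"),
    ("bob",     "photo1", "harbor"),
    ("spammer", "photo1", "buy-pills"),
]
print(conformity_scores(posts))  # alice and bob score 1.0, spammer scores 0.0
```

The limitation the authors observed falls out of the sketch directly: a coordinated attack in which many accounts apply the same spam tag inflates the spammers' mutual agreement, which is why a human moderator remained necessary.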
Despite the abundance of research on social tagging’s applications in libraries, archives, and museums and its benefits and limitations, few studies focus on the perceptions and experiences of the library and archival professionals who work with digital collections. Like Clements and Liew’s examination of how general librarians view and use tags, this study focuses on practicing digital librarians and archivists, exploring their perceptions of the use of tagging in digital collections, its benefits, and its limitations.
Methodology
As noted earlier, this article compares the findings from two nearly identical studies conducted in 2010 and 2016. The original study began with the analysis of 10 semi-structured interviews of digital librarians regarding their personal and professional use of social tagging as well as their opinion of its place within digital collections. The interviews elicited mixed opinions, suggesting the need for further data gathering through a more substantial survey.
The 2010 survey included a combination of open- and closed-ended questions (30 total) based on the semi-structured interviews and was hosted on the Qualtrics online survey platform. Participation invitations were posted on eight different e-mail lists popular among digital librarians and archivists.36 The survey remained open for three weeks and collected data from 112 participants. The second study, in 2016, closely followed the earlier study with some minor adjustments to its 32 questions. For example, open-ended questions regarding the location of participants’ repositories were replaced with dropdown lists. The remaining questions for the 2016 version were based on the original 2010 study, and semi-structured interviews were not conducted. Participant invitations for the 2016 version used five e-mail lists.37 The survey remained open for three weeks and garnered 102 total respondents, with 73 completing the survey in its entirety.
The research question requires a longitudinal analysis of data from both surveys. A comparison of the demographic information of both sample populations indicates a relatively homogeneous cross sample population. Table 1 shows the comparative percentages for participant gender, age, and educational background. While the demographic data from the two surveys are similar, the types of institution where participants are employed differ slightly between 2010 and 2016 (see Table 2). The more recent study included more academic libraries and fewer institutions listed as “other” than in 2010. Several of the 2010 participants indicated working for library consortiums under the “other” category. Perhaps the transition toward more in-house digital collections departments within academic libraries explains the difference between 2010 and 2016.
Table 1. Demographics of the sample population, 2010 versus 2016

                                 2010     2016
Gender
  Male                           29.5%    20.5%
  Female                         70.5%    72.6%
Age
  18–25                           3.6%     4.1%
  26–35                          39.3%    37.0%
  36–45                          18.6%    17.8%
  46–55                          25.0%    26.0%
  56–65                          13.4%    12.3%
  65+                             0.0%     2.7%
Education
  Some college                    0.9%     1.4%
  4-year college degree           9.8%     5.5%
  Master’s degree                79.5%    82.2%
  Doctoral degree                 8.0%     9.6%
  Professional degree             1.8%     1.4%
Table 2. Workplaces of the sample population, 2010 versus 2016

                                 2010     2016
Institution type
  Academic library               46.4%    62.4%
  Archives/Historical society    14.3%    12.9%
  Museum                          6.3%     2.0%
  Public library                  4.5%     5.9%
  Special library                 1.8%     4.0%
  Other                          26.8%    12.9%
Institution location
  United States                  90.2%    83.2%
  Non-United States               9.8%    16.8%
The remaining data from the two studies were combined into a single data set for further comparative analysis. Several of the variables required recoding with SPSS, a software package used for statistical analysis, due to reversed Likert scales between 2010 and 2016. This was the only data manipulation required during the merging process. Since the surveys used ordinal dependent data (that is, noncontinuous scales such as Likert scales) and the data were not normally distributed, the comparative statistical analysis used Mann-Whitney U tests to determine whether the two studies were significantly different from each other.
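For readers unfamiliar with the procedure, the sketch below works through the Mann-Whitney U computation in plain Python, together with the reverse-scale recoding mentioned above. It is an illustration under stated assumptions: it uses the normal approximation without the tie correction that SPSS applies, so its z-scores would differ slightly from the article's reported values, and the sample data are invented.

```python
import math

# Illustrative pure-Python Mann-Whitney U. Uses the normal approximation
# WITHOUT the tie correction SPSS applies, so z differs slightly from
# values a statistics package would report for heavily tied Likert data.

def average_ranks(values):
    """1-based ranks of a combined sample, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(sample_a, sample_b):
    """Return (U, z) comparing two independent ordinal samples."""
    n1, n2 = len(sample_a), len(sample_b)
    ranks = average_ranks(list(sample_a) + list(sample_b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, (u - mu) / sigma

def reverse_likert(responses, points=5):
    """Recode a reversed Likert item so both surveys run the same direction."""
    return [points + 1 - r for r in responses]

print(mann_whitney_u([1, 2, 3], [2, 3, 4]))  # invented Likert samples
print(reverse_likert([1, 5, 2]))             # [5, 1, 4]
```

With the merged data set, such a test is run once per survey item, comparing the 2010 responses against the 2016 responses as the two independent samples.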
Results
This article presents the results of the comparative analysis in four sections: use of social tagging, institutional use of social tagging, perceptions of social tagging within digital collections, and views on potential system features.
Use of Social Tagging
The surveys asked participants to indicate their knowledge of social tagging as well as their experience using tagging personally, professionally, or both. When asked to rank their knowledge of social tagging (1 = no knowledge; 5 = very knowledgeable), the 2010 participants indicated a significantly higher median (4) than the 2016 participants (3), U = 3722.5, z = –2.625, p = 0.009, where U assesses whether the two studies were significantly different from each other; z measures how far the score diverges from the mean; and p estimates the probability that the result has occurred by statistical accident. A low level of p indicates a high level of statistical significance. Personal use of social tagging increased slightly from a median of once per month to once per week, but the increase was not significant, U = 1956.5, z = –1.842, p = 0.065. The comparison of professional use of social tagging, however, found a statistically significant increase in use from once per six months to once per week, U = 865, z = –3.089, p = 0.002. Participants who indicated they had neither professional nor personal experience using social tagging were asked to indicate their reasons for not using tagging from a list of five possibilities. Significantly more participants agreed with four of the justifications in the earlier survey. Table 3 reports the associated Mann-Whitney U test results for these statements with statistically significant results highlighted in gray.
Institutional Use of Social Tagging
Participants answered several questions regarding their institution’s use of social tagging within and outside of digital collections. Figure 1 shows the comparative percentage of affirmative responses for each question. Between 2010 and 2016, participants reported a statistically significant increase in the number of institutions allowing social tagging or user commenting, U = 3314, z = –2.244, p = 0.025. The increase in the percentage of institutions using social tagging within digital collections was also statistically significant, U = 3319.5, z = –2.431, p = 0.015. Although the inclusion of user commenting within and outside of digital collections also rose between 2010 and 2016, the increase was not statistically significant.
Table 3. Reasons for not tagging, 2010 versus 2016
(1 = no; 2 = yes)

Reason                                      U*    z†       p‡     2010 median  2016 median  2010 mean  2016 mean
I haven’t had the opportunity.               7    –3.651   0.001       2            1          1.86        1
I am not familiar with tagging.             15    –2.581   0.026       2            1          1.64        1
It doesn’t benefit my life or work.         15    –2.581   0.026       2            1          1.64        1
I don’t use systems that allow tagging.     22    –3.162   0.005       2            1          1.71        1
I have privacy concerns.                    12    –1.368   0.3         1            1          1.43        1

Statistically significant results are highlighted in gray.
*U assesses whether the two studies were significantly different from each other.
†The z-score is a measure of how far the score diverges from the most probable result, the mean.
‡The p-value is an estimate of the probability that the result has occurred by statistical accident; a low level of p indicates a high level of statistical significance.
Perceptions of Social Tagging
The surveys asked participants to indicate their agreement or disagreement with 17 statements focused on the benefits or limitations of social tagging within digital collections. The mean responses for all statements increased, indicating a higher level of agreement in 2016. Comparative analysis of their responses found significant differences between 2010 and 2016 for five statements (see Table 4). The benefit statements account for only one of the significant increases, “Social tags identify the previously unidentifiable,” U = 3595, z = 2.151, p = 0.031. Participants were more concerned with potential issues and abuses of tagging systems in 2016, as shown by the significant increase in agreement with statements regarding uncontrollability, troublesome users, consistency issues, and sensitive topics.
The open-ended responses echo several of these findings. Both surveys asked participants for their general opinions regarding the use of social tagging within digital collections. A comparative analysis of the responses identified key differences between 2010 and 2016. The initial survey participants in 2010 framed their responses in terms of the possibilities of social tagging, with many stating that they lacked direct experience
Figure 1. Comparison of results from 2010 and 2016 regarding institutional use of social tagging.
Table 4. Perceptions of social tagging, 2010 versus 2016
(1 = not important; 5 = extremely important)

Statement                                                                   U*       z†       p‡      2010 median  2016 median  2010 mean  2016 mean
Social tagging engages users with the collection(s).                        3169      0.498   0.618        4            4          3.95       4.01
Social tags identify the previously unidentifiable.                         3595      2.151   0.031        4            4          3.73       4.00
Social tags expand the metadata for a given item.                           3415      1.49    0.136        4            4          3.94       4.12
Social tags complete the metadata for a given item.                         3537.5    1.877   0.063        3            3          2.68       2.96
Social tags allow inter-user conversation related to the
  collection/items.                                                         3031.5   –0.038   0.969        4            4          3.75       4.00
Social tags should be integrated with metadata.                             3204      0.625   0.532        3            3          3.24       3.34
Social tags cannot be trusted.                                              3382.5    1.299   0.194        3            3          2.73       2.91
Social tagging is too new of a technology to spend time
  implementing within collections.                                          3224.5    0.693   0.693        2            2          2.30       2.42
Social tags are too uncontrolled to be used as access points
  for items.                                                                3650.5    2.27    0.023        2            3          2.58       2.96
Allowing social tagging will overload the collection with
  information.                                                              3326.5    1.074   0.283        2            2          2.29       2.44
I am concerned over possible troublesome users.                             3513.5    2.093   0.036        3            4          3.14       3.43
Users will input inappropriate terms (i.e., expletives).                    3430      1.818   0.069        3            3          2.99       3.21
I am concerned over the consistency of tags.                                3484      1.993   0.046        3            4          3.34       3.65
Tagging within sensitive topic collections should be limited.               3719      2.882   0.004        3            3          3.00       3.41
Older users will not understand a tagging system.                           3155.5    0.74    0.459        3            3          2.80       2.92
I am concerned about the migration of tags to future systems.               3281      1.232   0.218        4            4          3.47       3.69
Social tags should remain separated from the official metadata
  of a collection.                                                          3124      0.621   0.534        3            4          3.37       3.48

Statistically significant results are highlighted in gray.
*U assesses whether the two studies were significantly different from each other.
†The z-score is a measure of how far the score diverges from the most probable result, the mean.
‡The p-value is an estimate of the probability that the result has occurred by statistical accident; a low level of p indicates a high level of statistical significance.
but viewed the potential benefits positively. For example, one participant said, "Social tagging is potentially useful for metadata augmentation, comments are definitely a useful metadata feedback mechanism if the resources are available to respond and implement change." Another 2010 participant stated, "I think there is potential, but no one is sure how to make it work yet." In contrast, more of the 2016 participants reflected on their own experiences with tagging. As one noted, "I am very interested in social tagging as a crowd-sourcing tool, and we undertook an outreach project to try and get alumni to comment and tag university photos through our digital collections, but response has been very limited." Another stated, "I think it is immensely valuable to digital collections" (emphasis added).
Reflecting the Likert-based questions, more participants in 2016 (34.6 percent) than in 2010 (23.4 percent) noted concerns regarding quality control and consistency. These issues mainly focused on perceptions rather than experience. As one 2016 participant stated, "I think it would be wonderfully useful and encourage the cross-pollination of ideas across disciplines. Unfortunately, there is a fear that we would lose control of cataloging at our institution if we opened it up to the 'masses' despite my assurances to the contrary." Others focused on how tagging "can sometimes get really messy, with misspellings or mis-identification of people" and complained, "Vetting that info can be problematic, as is getting user participation." A participant in the 2016 study said, "My thoughts are somewhat mixed. While we have received some really useful information from users . . . we've also received relatively useless information that still requires staff time to review/ve/manage [sic]."
The comparison of open-ended responses also found a decrease in the number of participants with system-based issues preventing them from implementing social tagging, from 23.4 percent in 2010 to 15.4 percent in 2016. This drop reflects technological development since 2010, when major digital collection content management systems, such as CONTENTdm®, did not support social tagging as they do now. Other issues arose in the responses in low numbers, such as calls for additional research, lack of user response, and the need for longitudinal development of tagging within systems.
Potential System Features
Based on the findings of the original pilot project, the surveys presented participants with a range of potential features within a digital collection system that could help ease their concerns. Table 5 lists the presented features with their associated comparative analyses. Participants ranked each feature on a scale from 1 (not important) to 5 (extremely important). Only two of the 14 features received a lower mean score in 2016 than in 2010—a user monitoring system and new tag notification—neither of which was statistically significant. The remaining 12 features were all viewed as more important in 2016 than in 2010, with more than half (seven) experiencing statistically significant increases. Those seven features were
Table 5. Perceptions of potential tagging system features, 2010 versus 2016

(1 = not important; 5 = extremely important)

| Feature | U* | z† | p‡ | 2010 median | 2016 median | 2010 mean | 2016 mean |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A user monitoring system | 2644 | –0.921 | 0.357 | 4 | 4 | 3.65 | 3.59 |
| Notification every time a new tag is added to the collection | 2758.5 | –0.476 | 0.634 | 3 | 3 | 3.25 | 3.21 |
| Approval required for every tag added to the collection | 3272.5 | 1.476 | 0.14 | 3 | 3 | 2.61 | 2.89 |
| Ability to block specific users | 3507.5 | 2.432 | **0.015** | 4 | 4 | 3.71 | 4.21 |
| Requiring users log in prior to creating tags | 3218 | 1.275 | 0.202 | 3 | 4 | 3.23 | 3.49 |
| Automatic transfer of tags into searchable metadata | 3251 | 1.408 | 0.159 | 3 | 3 | 3.00 | 3.34 |
| Requiring indexing approval prior to transfer of tags into searchable metadata | 3420 | 2.051 | **0.04** | 3 | 4 | 3.20 | 3.67 |
| Independent search systems for official metadata and social tags | 3468 | 2.216 | **0.027** | 2 | 3 | 2.37 | 2.78 |
| Spell-checking software within the tagging system | 4403.5 | 5.767 | **0** | 3 | 4 | 2.56 | 3.77 |
| Displaying previously added tags | 3249.5 | 1.43 | 0.153 | 4 | 4 | 3.39 | 3.78 |
| A user approval system (such as thumbs up or thumbs down) | 3162.5 | 1.064 | 0.287 | 3 | 3 | 2.80 | 3.05 |
| Social tagging tutorials | 3542 | 2.494 | **0.013** | 3 | 4 | 2.99 | 3.59 |
| Separate systems for digital collection and tagging | 3490 | 2.326 | **0.02** | 2 | 3 | 2.30 | 2.70 |
| Integrated systems for digital collection and tagging | 3819 | 3.575 | **0** | 3 | 3 | 2.58 | 3.34 |

Statistically significant results are shown in bold. *U assesses whether the two studies were significantly different from each other. †The z-score is a measure of how far the score diverges from the most probable result, the mean. ‡The p-value is an estimate of the probability that the result has occurred by statistical accident; a low level of p indicates a high level of statistical significance.
1. ability to block specific users;
2. requiring indexing approval prior to transferring tags into searchable metadata;
3. independent search systems for official metadata and social tags;
4. spell-checking software within the tagging system;
5. social tagging tutorials;
6. separate systems for digital collections and tagging; and
7. integrated systems for digital collections and tagging.
The final two features appear contradictory; however, individual participants rarely ranked both statements as important—instead selecting one over the other. Spell checking (+1.21), integrated systems (+0.76), social tagging tutorials (+0.6), and user blocking (+0.5) received the highest average increased rankings. Interestingly, the ability to block specific users received the highest mean ranking for both 2010 (mean = 3.71) and 2016 (mean = 4.21). Likewise, separate systems for digital collections and tagging received the lowest average rankings for both 2010 (mean = 2.3) and 2016 (mean = 2.7).
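Two of the highest-ranked safeguards in Table 5, blocking specific users and requiring approval before tags enter searchable metadata, can be sketched as a simple moderation queue. The class and method names below are hypothetical illustrations, not features of any existing digital collection platform:

```python
# Sketch of a tag-moderation queue combining two safeguards practitioners
# rated highly: a user blocklist and staff approval before a tag becomes
# searchable. All names here are hypothetical, for illustration only.
class TagModerationQueue:
    def __init__(self):
        self.blocked_users = set()
        self.pending = []        # tags awaiting staff review
        self.searchable = []     # approved tags, exposed to search

    def submit(self, user, item_id, tag):
        """Queue a tag for review; silently drop blocked users' input."""
        if user in self.blocked_users:
            return False
        self.pending.append((user, item_id, tag))
        return True

    def approve(self, user, item_id, tag):
        """Staff action: move a pending tag into searchable metadata."""
        entry = (user, item_id, tag)
        if entry in self.pending:
            self.pending.remove(entry)
            self.searchable.append((item_id, tag))

queue = TagModerationQueue()
queue.blocked_users.add("spam_account")
queue.submit("spam_account", "photo-042", "buy cheap pills")  # dropped
queue.submit("alumna01", "photo-042", "homecoming 1962")
queue.approve("alumna01", "photo-042", "homecoming 1962")
print(queue.searchable)  # [('photo-042', 'homecoming 1962')]
```

The design keeps unreviewed tags out of the search index entirely, which addresses the "uncontrolled access points" concern in Table 4 at the cost of the staff review time participants also complained about.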
Discussion
The comparison of how practitioners perceive social tagging in digital collections provided surprising results. The expectation was that participants in the 2016 study would report fewer concerns and exhibit a more positive view of social tagging than those in the 2010 study since people tend to become more comfortable with technology over time. The results do not support this expectation. As noted earlier, the more recent study found a significant increase in concerns regarding abuse, consistency, and descriptive control since 2010. Additionally, participants strengthened their calls for tagging system features that specifically address such concerns. The personal and professional use of social tagging has significantly increased since 2010, and more institutions now allow social tagging or user commenting within digital collections than did so then.
Although a direct explanation for the shift in perception remains unknown, several possibilities exist. One view might be that practitioners are reluctant to relinquish descriptive control over their collections or are fearful of new technologies. This view is not reflected in the open-ended responses, nor does it address the change over time. The consideration of when the original study was conducted might provide more insight. Many of the original participants were likely "early adopters" of new technologies who have a generally more positive view of technological developments and their potential benefits.38 In this case, as social tagging became popularized through social media development in the past six years, more late adopters with lingering concerns about tagging participated in the 2016 study.
The open-ended responses provide another possible explanation. In the original study, participants reflected on the potential use of tagging and often complained about the lack of tagging support in existing systems. This means most participants spoke not from actual experience but instead described what tagging could theoretically achieve. Such a focus on potential is reflective of the literature at the time. Early tagging studies highlighted success stories such as the Library of Congress Flickr project and Sroka’s positive results from the Center for Jewish History and the Virtual Shtetl projects.39 Even studies that recognized potential issues suggested that the usefulness of tags as supplemental metadata would exceed the risks.40
By 2016, many practitioners had moved from viewing tagging’s potential to a more experiential understanding with the growth of digital collections. Therefore, more of the 2016 participants referenced their experience with real collections. People who had a poor
experience tended to focus on their individual case rather than consider the broader aspects. Indeed, several of the more recent participants cited a lack of tag development within their digital collections as the justification for their negative views. Not surprisingly, larger institutions such as the Library of Congress have a more extensive user base to generate tags than smaller, more localized repositories have. Additionally, research on promoting the generation
of tags remains lacking. Some more recent studies on gamification and the creation of tagging games show promise, but they are still too limited and lack large-scale testing.41
While some studies such as those of Koraljka Golub and her colleagues and of Guy and Tonkin found the use of system-suggested tags can help standardize tagging and generate more tags through prompting, these types of systems have not been adequately
built into digital collection platforms. Likewise, practitioners cannot readily employ potential solutions such as the algorithm of Koutrika and her colleagues—and some may question if the time required to do so would outweigh the benefits of tagging itself. Overall, the deepening concerns of practitioners are not adequately addressed in either the current literature or the development of new tools and digital collection platforms. Although digital collection
management systems like CONTENTdm® allow user-generated tags and comments, they remain difficult to moderate and offer few of the features listed in Table 5. Additionally, customizations to proprietary software require significant time and expertise. System designers should work more closely with practitioners to identify potential solutions to the issues with tagging as new updates are developed.
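The consistency concerns raised above could be partly addressed by the kind of system-suggested, normalized tagging Guy and Tonkin describe: mapping variant spellings onto existing vocabulary before a tag is stored. A minimal sketch using Python's standard-library fuzzy matcher, with an illustrative (invented) tag vocabulary:

```python
# Sketch: normalizing incoming tags against an existing vocabulary so that
# near-misspellings collapse onto one form. difflib is Python's stdlib
# fuzzy matcher; the vocabulary and tags here are illustrative only.
import difflib

existing_tags = ["homecoming", "graduation", "stadium", "marching band"]

def normalize(raw_tag, vocabulary, cutoff=0.8):
    """Lowercase the tag and map close misspellings onto known tags."""
    tag = raw_tag.strip().lower()
    matches = difflib.get_close_matches(tag, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else tag

print(normalize("Homecomming", existing_tags))  # -> "homecoming"
print(normalize("tailgate", existing_tags))     # -> "tailgate" (kept as a new tag)
```

A step like this would reduce the "misspellings and messy variants" practitioners reported without requiring staff review of every tag, though the cutoff threshold would need tuning to avoid merging genuinely distinct terms.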
Finally, while most statistically significant findings emphasize growing concerns, one should not discount the positive aspects. All the statements of perceived benefits of
tagging, as shown in Table 4, increased from 2010 to 2016. Therefore, it is not necessary to throw the baby out with the bathwater. While there is room for additional quality control measures, practitioners still see the potential benefits of tagging.
Conclusion
The use of social tagging in digital collections, as well as other participatory applications, continues to increase. As a result, digital librarians and archivists increasingly move toward engaging users with various aspects of their collections. This engagement is not without issues. While many practitioners initially viewed social tagging as promising, the past years saw an increase in their concerns regarding its application. A comparison of practitioner perceptions of social tagging between 2010 and 2016 found stronger calls for incorporating safeguards within tagging systems, including spell-checking features, tutorials, the ability to block users, and requiring that staff review tags before allowing them into the system. Despite these calls for safeguards, digital librarians and archivists remain optimistic about the potential benefits of tagging but also wary of uncontrolled and unmediated tags.
The comparative study identified several potential contributing factors for the change in perception. As more digital collections incorporate user tagging, commenting, or both, system developers need to address the practitioners' concerns and desire for additional safeguards within their systems. Likewise, developers should encourage more direct feedback and utilize methods such as focus groups to elicit more specific desired features and options.
Academics and researchers must also further engage with practitioners by conducting more studies in partnership with them and on larger scales. Such studies would help move discussions from the theoretical realm to practical applications. Additionally, future research should address different motivating techniques to increase the number of participating users, as well as the quality and quantity of their tags. These studies could include reward mechanisms such as point-based systems toward institutional membership or events, or recognition-based systems.
Moving forward, users will increasingly expect participatory aspects in their engagement with digital content. Therefore, it is imperative to address professional concerns now, before systems become overly complicated. Additional qualitative and quantitative studies should explore concerns and solutions at greater depths through combinations of focus groups, interviews, experiments, and case studies. Only through a mixed methods approach that brings all stakeholders to the table will true solutions be uncovered.
Edward Benoit III is an assistant professor and coordinator of the Archival Studies program in the School of Library and Information Science at Louisiana State University in Baton Rouge; he may be reached by e-mail at: [email protected].
Amanda L. Munson received a master’s degree in library and information science in 2017 from the School of Library and Information Science at Louisiana State University in Baton Rouge.
Notes
1. For a more detailed discussion of early social tagging research, see Jane Hunter, “Collaborative Semantic Tagging and Annotation Systems,” Annual Review of Information Science and Technology 43, 1 (2009): 1–84. Additional examples include Chufeng Chen, Michael Oakes, and John Tait, “A Location Annotation System for Personal Photos,” in SIGIR ‘06: Proceedings of the 29th Annual International ACM SIGIR [Association for Computing Machinery Special Interest Group on Information Retrieval] Conference on Research and Development in Information Retrieval (New York: ACM, 2006), 726, http://doi.acm.org/10.1145/1148170.1148339; P. Jason Morrison, “Tagging and Searching: Search Retrieval Effectiveness of Folksonomies on the World Wide Web,” Information Processing & Management 44, 4 (2008): 1562–79; Peyman Sazedj and H. Sofia Pinto, “Time to Evaluate: Targeting Annotation Tools,” in Proceedings of the 5th International Workshop on Knowledge Markup and Semantic Annotation, Galway, Ireland, November 2005, http://ceur-ws.org/Vol-185/semAnnot05-04.pdf; and Edith Speller, “Collaborative Tagging, Folksonomies, Distributed Classification or Ethnoclassification: A Literature Review,” Library Student Journal 2 (2007), http://www.librarystudentjournal.org/index.php/lsj/article/view/45.
2. Jean-Yves Delort, “Automatically Characterizing Salience Using Readers’ Feedback,” Journal of Digital Information 10, 1 (2009), http://journals.tdl.org/jodi/index.php/jodi/article/view/268; Alton Y. K. Chua and Dion H. Goh, “A Study of Web 2.0 Applications in Library Websites,” Library & Information Science Research 32, 3 (2010): 203–11; Dimitris Gavrilis, Constantia Kakali, and Christos Papatheodorou, “Enhancing Library Services with Web 2.0 Functionalities,” in Research and Advanced Technology for Digital Libraries: 12th European Conference, ECDL 2008, Aarhus, Denmark, September 2008 Proceedings, ed. Birte Christensen-Dalsgaard, Donatella Casteli, Bolette Ammitzbøll Jurik, and Joan Lippincott (Berlin: Springer, 2008), 148–59, http://link.springer.com/chapter/10.1007/978-3-540-87599-4_16; Luiz H. Mendes, Jennie Quiñonez-Skinner, and Danielle Skaggs, “Subjecting the Catalog to Tagging,” Library Hi Tech 27, 1 (2009): 30–41; Tom Steele, “The New Cooperative Cataloging,” Library Hi Tech 27, 1 (2009): 68–77; Jezmynne Westcott, Alexandra Chappell, and Candace Lebel, “LibraryThing for Libraries at Claremont,” Library Hi Tech 27, 1 (2009): 78–81; Jennifer Trant, “Tagging, Folksonomy and Art Museums: Early Experiments and Ongoing Research,” Journal of Digital Information 10, 1 (2009), http://journals.tdl.org/jodi/index.php/jodi/article/view/270; Christine DeZelar-Tiedman, “Exploring User-Contributed Metadata’s Potential to Enhance Access to Literary Works: Social Tagging in Academic Library Catalogs,” Library Resources & Technical Services 55, 4 (2011): 221–33; Constantia Kakali, “A Utilization Model of Users’ Metadata in Libraries,” Journal of Academic Librarianship 40, 6 (2014): 568; Krystyna K. 
Matusiak, “Towards User-Centered Indexing in Digital Image Collections,” OCLC [Online Computer Library Center] Systems & Services: International Digital Library Perspectives 22, 4 (2006): 283–98; Michelle Springer, Beth Dulabahn, Phil Michel, Barbara Natanson, David Reser, David Woodward, and Helena Zinkham, “For the Common Good: The Library of Congress Flickr Pilot Project,” 2008, http://www.loc.gov/rr/print/flickr_report_final.pdf.
3. Alla Zollers, “Emerging Motivations for Tagging: Expression, Performance, and Activism,” in WWW2007: Proceedings of the 16th International World Wide Web Conference, Banff, Canada, May 8–12, 2007; Chei Sian Lee, Dion Hoe-Lian Goh, Khasfariyati Razikin, and Alton Y. K. Chua, “Tagging, Sharing and the Influence of Personal Experience,” Journal of Digital Information 10, 1 (2009), http://journals.tdl.org/jodi/index.php/jodi/article/view/275; Morgan Ames and Mor Naaman, “Why We Tag: Motivations for Annotation in Mobile and Online Media,” in CHI ’07 Proceedings of the SIGCHI [Special Interest Group on Computer-Human Interaction] Conference on Human Factors in Computing Systems (New York: ACM, 2007), 971–80; Tony Hammond, Timo Hannay, Ben Lund, and Joanna Scott, “Social Bookmarking Tools (I): A General Review,” D-Lib [digital library] Magazine 11, 4 (2005), http://www.dlib.org/dlib/april05/hammond/04hammond.html; Pauline Rafferty and Rob Hidderley,
“Flickr and Democratic Indexing: Dialogic Approaches to Indexing,” Aslib [Association for Information Management] Proceedings 59, 4–5 (2007): 397–410; Louise F. Spiteri, “The Structure and Form of Folksonomy Tags: The Road to the Public Library Catalog,” Information Technology and Libraries 26, 3 (2007): 13–25; Scott A. Golder and Bernardo A. Huberman, “Usage Patterns of Collaborative Tagging Systems,” Journal of Information Science 32, 2 (2006): 198–208; Margaret E. I. Kipp and D. Grant Campbell, “Patterns and Inconsistencies in Collaborative Tagging Systems: An Examination of Tagging Practices,” in Proceedings of the Annual Meeting of the American Society for Information Science and Technology, Austin, TX, November 3–8, 2006, http://eprints.rclis.org/archive/00008315/.; Margaret E. I. Kipp, “@toread and Cool: Subjective, Affective and Associative Factors in Tagging,” Proceedings of the 36th Conference of the Canadian Association for Information Science/L’Association canadienne des sciences de l’information (CAIS/ACSI 2008), Vancouver, Canada, June 5–7, 2008, http://eprints.rclis.org/11749/1/cais2008presentation.pdf.
4. Marieke Guy and Emma Tonkin, “Folksonomies: Tidying up Tags?” D-Lib Magazine 12, 1 (2006), http://www.dlib.org/dlib/january06/guy/01guy.html; Kipp and Campbell, “Patterns and Inconsistencies in Collaborative Tagging Systems”; Georgia Koutrika, Frans Adjie Effendi, Zoltán Gyöngyi, Paul Heymann, and Hector Garcia-Molina, “Combating Spam in Tagging Systems,” in Proceedings of the 3rd International Workshop on Adversarial Information Retrieval on the Web, Banff, Canada, May 8, 2007, http://dl.acm.org/citation.cfm?id=1244420.
5. Lucy Clements and Chern Li Liew, “Talking about Tags: An Exploratory Study of Librarians’ Perception and Use of Social Tagging in a Public Library,” Electronic Library 34, 2 (2016): 289–301.
6. Everett M. Rogers, Diffusion of Innovations, 5th ed. (New York: Free Press, 2003).
7. Harry van Vliet and Erik Hekman, "Enhancing User Involvement with Digital Cultural Heritage: The Usage of Social Tagging and Storytelling," First Monday 17, 5 (2012), http://firstmonday.org/ojs/index.php/fm/article/view/3922/3203.
8. Christine DeZelar-Tiedman, “Exploring User-Contributed Metadata’s Potential to Enhance Access to Literary Works: Social Tagging in Academic Library Catalogs,” Library Resources & Technical Services 55, 4 (2011): 221–33.
9. Constantia Kakali, “A Utilization Model of Users’ Metadata in Libraries,” Journal of Academic Librarianship 40, 6 (2014): 568.
10. Ibid.
11. Ibid., 571.
12. Praveenkumar Vaidya and N. S. Harinarayana, "The Comparative and Analytical Study of LibraryThing Tags with Library of Congress Subject Headings," Knowledge Organization 43, 1 (2016): 35–43.
13. Ibid., 41.
14. Ibid.
15. Matusiak, "Towards User-Centered Indexing in Digital Image Collections."
16. Springer, Dulabahn, Michel, Natanson, Reser, Woodward, and Zinkham, "For the Common Good."
17. Marek Sroka, "Identifying and Interpreting Prewar and Wartime Jewish Photographs in Polish Digital Collections," Slavic & East European Information Resources 12, 2–3 (2011): 175–87.
18. Paul Lamere, “Social Tagging and Music Information Retrieval,” Journal of New Music Research 37, 2 (2008): 101–14.
19. Jean-Yves Delort, “Automatically Characterizing Salience Using Readers’ Feedback,” Journal of Digital Information 10, 1 (2009), http://journals.tdl.org/jodi/article/view/268/274.
20. Edward Benoit III, “#MPLP [more product, less process] Part 1: Comparing Domain Expert and Novice Social Tags in a Minimally Processed Digital Archives,” American Archivist 80, 2 (2017): 407–38.
21. Sebastian Chan, “Tagging and Searching—Serendipity and Museum Collection Databases,” in Jennifer Trant and David Bearman, eds., Museums and the Web 2007: Proceedings (Toronto: Archives & Museum Informatics, 2007), http://www.archimuse.com/mw2007/papers/chan/chan.html.
22. Linda Zajac, “Social Metadata Use in Art Museums: The Case of Social Tagging,” PNLA [Pacific Northwest Library Association] Quarterly 77, 2 (2013): 66–77.
23. Jen Pecoskie, Louise F. Spiteri, and Laurel Tarulli, “OPACs [online public access catalogs], Users, and Readers’ Advisory: Exploring the Implications of User-Generated Content for Readers’ Advisory in Canadian Public Libraries,” Cataloging & Classification Quarterly 52, 4 (2014): 431–53.
24. Clements and Liew, "Talking about Tags."
25. Kipp and Campbell, "Patterns and Inconsistencies in Collaborative Tagging Systems."
26. Kipp, "@toread and Cool."
27. Seth van Hooland, Eva Méndez Rodríguez, and Isabelle Boydens, "Between Commodification and Engagement: On the Double-Edged Impact of User-Generated Metadata within the Cultural Heritage Sector," Library Trends 59, 4 (2011): 707–20.
28. Koraljka Golub, Jim Moon, Douglas Tudhope, Catherine Jones, Brian Matthews, Bartłomiej Puzoń, and Marianne Lykke Nielsen, “EnTag [enhanced tagging]: Enhancing Social Tagging for Discovery,” Proceedings of the 9th ACM/IEEE-CS [Association for Computing Machinery/Institute of Electrical Engineers-Computer Society] Joint Conference on Digital Libraries (JCDL ’09) (New York: ACM, 2009), 163–72.
29. Golub, Moon, Tudhope, Jones, Matthews, Puzoń, and Nielsen, “EnTag”; Brian Matthews, Catherine Jones, Bartłomiej Puzoń, Jim Moon, Douglas Tudhope, Koraljka Golub, and Marianne Lykke Nielsen, “An Evaluation of Enhancing Social Tagging with a Knowledge Organization System,” Aslib Proceedings 62, 4–5 (2010): 447–69. The results of the study were published in two separate reports, with half of the results appearing in Golub, Moon, Tudhope, Jones, Matthews, Puzoń, and Nielsen, “EnTag,” and the other half appearing in Matthews, Jones, Puzoń, Moon, Tudhope, Golub, and Nielsen, “An Evaluation of Enhancing Social Tagging with a Knowledge Organization System.” Some overlap between the reports exists.
30. Golub, Moon, Tudhope, Jones, Matthews, Puzoń, and Nielsen, "EnTag."
31. Golub, Moon, Tudhope, Jones, Matthews, Puzoń, and Nielsen, "EnTag"; Matthews, Jones, Puzoń, Moon, Tudhope, Golub, and Nielsen, "An Evaluation of Enhancing Social Tagging with a Knowledge Organization System."
32. Guy and Tonkin, "Folksonomies."
33. Rose Holley, "Tagging Full Text Searchable Articles: An Overview of Social Tagging Activity in Historic Australian Newspapers August 2008–August 2009," D-Lib Magazine 16, 1–2 (2010), http://www.dlib.org.libezp.lib.lsu.edu/dlib/january10/holley/01holley.html.
34. Spiteri, "The Structure and Form of Folksonomy Tags."
35. Koutrika, Effendi, Heymann, and Garcia-Molina, "Combating Spam in Tagging Systems."
36. The 2010 study used the following e-mail lists: DIGLIB ([email protected]), Imagelib ([email protected]), ContentDM-L ([email protected]), SAA [Society of American Archivists] Metadata & Digital Object Discussion List ([email protected]), SAA Visual Materials List ([email protected]), A&A [Archives & Archivists] List ([email protected]), LOOKSEE ([email protected]), and Digital Preservation ([email protected]).
37. The 2016 survey used the following e-mail lists: Code4Lib, Digital Libraries Research’s DIGLIB, Digital Library Foundation’s DLF-Announce, Library and Information Technology Association’s LITA-L, and SAA A&A List.
38. Rogers, Diffusion of Innovations.
39. Springer, Dulabahn, Michel, Natanson, Reser, Woodward, and Zinkham, "For the Common Good"; Sroka, "Identifying and Interpreting Prewar and Wartime Jewish Photographs in Polish Digital Collections."
40. Matusiak, "Towards User-Centered Indexing in Digital Image Collections."
41. Riste Gligorov, Michiel Hildebrand, Jacco van Ossenbruggen, Guus Schreiber, and Lora Aroyo, "On the Role of User-Generated Metadata in Audio Visual Collections," Proceedings of the Sixth International Conference on Knowledge Capture (New York: ACM, 2011), 1.