Paul Matthews – University of the West of England (Bristol), England.

Knowledge Organisation Systems for Chatbots and Conversational Agents: A Review of Approaches and an Evaluation of Relative Value-Added for the User

Abstract: Chatbots and smart digital assistants usually rely on some form of knowledge base as a source of answers to user queries. These may be more or less structured, ranging from free text and keyword search to interconnected knowledge graphs where answers are surfaced through structured query and/or dynamic logics. This paper reviews the state of the art and probes the added value to the end user of using more structured KOSs as the knowledge base component of agent architectures. In addition to the extant literature, experimental development of a chatbot for student information based on a partial knowledge graph is described. A key conclusion is that ontology in a broad sense can be used to drive dialogue as much as to derive factually correct or organisationally approved responses.

1.0 Introduction

Text and speech-based dialogue agents are growing in popularity as a new or complementary interface to organisational systems and services. They offer high availability, convenience and the potential to identify, triage and satisfy common queries and interactions. As a window into an organisation’s knowledge, it seems natural that knowledge organisation systems (KOSs) should play a major role in supporting and directing exchanges, though this is a more sophisticated role than they usually play. For the sake of interoperability and portability, organisations have often erred on the side of simplicity and relatively semantically shallow KOSs to support discovery (Isaac and Baker 2015). However, this restraint neglects a wealth of experience in the implementation of Knowledge Representation and Knowledge Engineering coming from the computer science / AI disciplines (Giunchiglia, Dutta, and Maltese 2014). These somewhat deeper approaches allow the formal expression of conceptual structures and their interrelation, as well as the application of automated logical inference over a system’s knowledge base to arrive at evidence-based conclusions. These affordances once again become potentially useful when it comes to conversational agents, so it is a good time to revisit standards and tools developed for knowledge representation and apply them to chatbots (Cameron et al. 2018).

A similar driver has been developments in natural language interfaces, where some progress has been made in mapping unstructured queries to formal data relations. Indeed, there was a call to action at ISKO 2004 for better connection between KO and computational linguistics (Jones, Cunliffe, and Tudhope 2004). Jones et al.’s exploratory work showed how term disambiguation and the mapping of user queries to KO structures could be improved by using a general-purpose thesaurus and expanding matching to include thesaurus scope notes.

The application of KOS to conversational interfaces requires an active approach and a pragmatic view of semantics (as the concepts are action-oriented and application-specific), in addition to a unification of user and organisational warrant (Hjørland 2007), as

© Ergon - ein Verlag in der Nomos Verlagsgesellschaft


the goal is to allow the user to interact successfully in real-time with an organisation’s knowledge structures.

This paper will look at ways in which KOS can both fuel and drive dialogue in conversational agents and chatbots. This will be based on an exploration of ways in which the user experience at the dialogue interface is affected by bot behaviours and capabilities. We will then look at examples of how KOSs have been used in domain-specific dialogue systems, before exploring in more detail a case study on a bot for student information. It is hoped that these sections will serve to highlight challenges and opportunities in KOS-driven agents. Firstly, however, we will look at a broad typology of bot technical architectures to help illustrate where KOS traditionally sits.

2.0 Dialogue System Typologies and Components

A common distinction is made between agents that are goal-driven or rules-based and those that are more open-domain and able to tackle a wider variety of topics (Altinok 2018). The former are perhaps more likely to end in satisfactory completion due to having a more predictable, constrained structure that can also be driven in part by menus and fixed choices. They are, however, restricted in requiring explicit representation of the dialogue structure (Augello et al. 2013) and in only being able to perform a limited set of functions – also, if the pattern matching fails, the bot may flounder. In contrast, in some of the more open AI-enabled systems, the most suitable response is predicted based on models trained on a large dataset of human dialogue. Here, some precision is often sacrificed in an attempt to provide the user with a more open-ended, flexible conversation experience.

In most cases, where a dialogue requires an informational response, a data or knowledge base is central to the architecture (Figure 1), whether this is the endpoint of a retrieval-based dialogue fragment or the model-guided prediction of an AI-based agent.

Increasingly, bot architectures include natural language processing to elucidate the “things” or entities being talked about and the goals or intents of the user in relation to them. These enriched structures are used to (perhaps fuzzily) match to entries in the knowledge base using similarity or information retrieval algorithms. The best matches can then be used to derive the system response.
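As a concrete illustration, this fuzzy matching of an enriched query against knowledge-base entries can be as simple as bag-of-words cosine similarity. The knowledge-base entries below are invented for illustration; production systems typically use embeddings or a trained NLU component rather than this minimal sketch.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity over simple bag-of-words token counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(query, kb):
    """Return the (question, answer) pair whose question best matches the query."""
    return max(kb.items(), key=lambda qa: cosine_similarity(query, qa[0]))

# Invented knowledge-base entries, for illustration only.
kb = {
    "where is the self-certification form": "The form is on the student portal.",
    "what is the deadline for a personal circumstances claim":
        "Within five working days of the assessment date.",
}
question, answer = best_match("where can I find the self certification form", kb)
```

The best-scoring match then drives the system response; a score threshold would normally gate whether the match is trusted or a clarification is sought instead.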

In recent years we have also seen an increase in hybrid architectures, including both rules-based and generative components, and it is likely that this trend will continue, as the two approaches can to some extent make up for each other’s shortcomings. Certainly, new dynamic architectures are needed to help lift chatbots out of the “trough of disillusionment” in which many still lie – as the user experience at the interface falls short of expectations.


Figure 1: Generic Agent Architecture (Maroengsit et al. 2019)

3.0 User Interaction with Conversational Interfaces

In terms of interaction patterns, Hill, Ford and Farreras (2015) looked at interaction logs from conversations with chatbots in comparison to human partners, and found that chatbot interactions contained fewer words per message but a greater number of messages overall compared to humans. This indicates that a system needs to be able to proactively seek clarifications, in the same way that humans do in everyday interactions (Schlangen 2004). These may be to enable the completion of semantic underspecification, or to clarify intent.

A related issue is dialogue initiative. Finite state (rule-based) systems tend to dominate the initiative, whereas in real conversation, initiative alternates between participants (Jurafsky 2018). One-sided initiative feels inflexible and less natural, and cannot handle statements containing multiple pieces of information.

Reasons for using a spoken agent include time saving and multi-tasking (Luger and Sellen 2016). Luger and Sellen noted that regular users invest time in understanding the agent’s strengths and capabilities, including initial playful interactions. Interestingly, factors affecting interaction included having sufficient understanding of an agent’s capabilities and internal workings, including “whether or not its capabilities altered over time” and the extent to which the agent was learning from the user.

Maintenance of context is a fundamental strength of human conversation, and an automated agent’s limited abilities in this area can be a source of frustration (Vtyurina et al. 2017). Anaphora, or maintaining the implicit referent of the conversation, continues to be a technical challenge for agent developers.

Vtyurina et al.’s study also highlighted the need for information credibility to be strengthened in an agent’s answers, perhaps by providing links where information could be verified; similarly, where a variety of opinions existed on a topic, users asked for summaries of these.


4.0 Evaluation Criteria for Conversation-Oriented KOS

Several dimensions have been proposed to organise KOSs themselves, one of the more common spectra being semantic strength – itself correlated with the time and money required (Souza, Tudhope, and Almeida 2012). At the higher semantic strength end, formal theories and ontologies are heavier and provide richer expressivity, logics and entailments, or automated reasoning. Conversely, weaker semantics brings the opportunity for more interoperability at the syntactic level.

Ontologies themselves can vary according to domain or application specificity, and in the area of chatbots this maps quite well onto generalist versus specialist conversational capabilities.

Soergel (2001) proposes a set of evaluation characteristics for KOS, including purpose, coverage, conceptual structure, extent of precombination, access and display, and degree of updating.

In terms of coverage, Tennis (2013) notes the temporal limitations of correctness and that dynamic theories are necessary given the inevitability of change and the need to cater for an ever-changing and evolving knowledge domain. A geologic or archaeological view might best reflect how KOSs change over time in practice, though the degree to which change is incremental or immediate may vary between systems and stakeholders. In the more radical change scenario, interaction with users may be seen to drive immediate change.

In sum, many existing criteria for KOS evaluation focus purely on aspects of the KOS itself rather than its wider position within an organisational information system. This neglects pragmatic aspects such as ease and automation of updating, degree of fit with legacy corporate systems, and degree of redundancy. These latter considerations are surely essential if the KOS is to be sustainable and to fulfil a central role in corporate digital communications.

5.0 KOS Approaches for Agents

Perhaps the simplest form of data store to support an agent is question-answer pairs, where the user query is compared to the knowledge base to find the closest matching question. This kind of architecture was used as a core component by Carisi, Albarelli, and Luccio (2019) to build an airport information chatbot, though their bot was also able to connect to other airport databases to report flight information. Such a knowledge base can be bootstrapped using FAQ lists from web pages or from call centre records.
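A question-answer store of this kind can be sketched with simple fuzzy string matching; the FAQ pairs and cutoff value below are purely illustrative, not drawn from the cited system.

```python
import difflib

# Hypothetical FAQ pairs, e.g. bootstrapped from web pages or call-centre records.
FAQ = [
    ("What terminal does my flight leave from?",
     "Check the live departures board; terminal assignments can change."),
    ("Where can I find lost property?",
     "The lost property office is in the Arrivals hall."),
]

def answer_query(user_query, cutoff=0.4):
    """Return the answer for the closest matching FAQ question, or None."""
    lowered = {q.lower(): a for q, a in FAQ}
    match = difflib.get_close_matches(user_query.lower(), list(lowered), n=1, cutoff=cutoff)
    return lowered[match[0]] if match else None
```

A `None` return is the natural hook for asking a clarifying question or handing over to a human advisor.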

More “capable” bots may be closely linked to an underlying ontology. Athreya et al.’s DBpedia bot (2018) used third-party natural language parsing to match queries to DBpedia URIs and literals, with literals given precedence as more likely to correspond to factual answers.

Altinok (2018) describes a system for banking services that uses an ontology both to represent the bank’s services and their features and to structure the dialogue. Noun phrases in user queries are scanned for known entities by matching to ontology classes and individuals (with use of a synonym list to capture alternative phrasing). Matched entities are then used to provide the context for further queries, somewhat mitigating the anaphora problem. Conversation memory is also used to keep track of ontology nodes that have been visited, in order to avoid repetition and provide the next dialogue prompt.
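The visited-node memory described above might be sketched as follows; the ontology fragment, synonym list and prompt wording are invented for illustration and do not reproduce the published system.

```python
# Toy ontology fragment: node -> related nodes. All names are invented.
ONTOLOGY = {
    "CurrentAccount": ["OverdraftLimit", "MonthlyFee"],
    "OverdraftLimit": ["InterestRate"],
    "MonthlyFee": [],
    "InterestRate": [],
}
SYNONYMS = {"checking account": "CurrentAccount", "overdraft": "OverdraftLimit"}

visited = set()  # conversation memory: ontology nodes already discussed

def match_entity(utterance):
    """Scan an utterance for known entities via the synonym list or class names."""
    text = utterance.lower()
    for phrase, node in SYNONYMS.items():
        if phrase in text:
            return node
    for node in ONTOLOGY:
        if node.lower() in text:
            return node
    return None

def next_prompt(node):
    """Mark a node as visited and suggest an unvisited related node, avoiding repetition."""
    visited.add(node)
    unvisited = [n for n in ONTOLOGY[node] if n not in visited]
    return f"Would you like to hear about {unvisited[0]}?" if unvisited else None

entity = match_entity("Tell me about my checking account")
prompt = next_prompt(entity)
```

The matched entity supplies the context for follow-up queries, while the visited set ensures the bot proposes something new at each turn.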


For their bot to discuss issues in climate change, Groza and Toniuc (2017) used an ontology but converted it to an API.AI pattern-matching model for use in bot interaction. Classes were mapped to entities and roles to intents. However, when no intent was matched (i.e. queries were not easily matched to models), entailment rules derived from a debate corpus were used.

Roca et al. (2019) propose a general microservice architecture for their prototype chatbot to support patients with chronic illness. The argument is that such bots are easily extensible without disruption to service. In terms of communication and storage, the bot used AIML to represent the dialogue logic, with FHIR (Fast Healthcare Interoperability Resources) as the data store for the patient and associated data entities (e.g. Care Plan, Practitioner, Organization, Questionnaire), which could be used to generate monitoring questions – via dynamic AIML generation – and personalise the conversation.

6.0 Case Study: Student Personal Circumstances Procedure

We have developed a prototype chatbot for students, based around common queries connected to timetabling, choice of optional modules, changing course and so on. In doing so, we have worked closely with our enquiry services to understand both the kinds of enquiry received and how students are subsequently advised in accordance with university regulations.

One of the most common sources of enquiry is around problems taking exams or submitting coursework due to illness, bereavement or other life events. These cases may be eligible for special consideration in accordance with the university’s marks removal or personal circumstances procedures, depending on whether or not the student has submitted the work or attempted the exam. To be eligible, students need some evidence to support the claim, or they may self-certify once per academic year without supporting evidence. Examples of the types of queries around these processes are shown in Table 1.

Table 1: Sample queries around personal circumstances supplied by information point advisors or taken from web search logs (in some cases these have been amended slightly to preserve anonymity). System-appropriate response noted.

| Source | Example | Appropriate Response |
|--------|---------|----------------------|
| Information Point Advisors | “I want to self-certify but where is the form?” | From QA knowledge base |
| | “I don’t have any evidence, what should I do?” | Triage Self-Certification |
| | “When is the deadline to submit a PC form?” | Triage Personal Circumstances |
| | “Can I make a request after the deadline?” | Clarification of Intent |
| | “I sat an exam but I shouldn’t have, what can I do?” | Triage Marks Removal |
| | “I handed in the work but I shouldn’t have, what can I do?” | Triage Marks Removal |
| | “How do I submit a request?” | Clarification of Intent |
| | “I applied for missed assessment but I would like to cancel it now, what should I do?” | Triage Process Cancelation |
| Web Search Logs | “personal circumstances evidence submission” | Clarification of Intent |
| | “personal circumstances but can’t attend campus” | Triage Personal Circumstances |
| | “personal circumstances reasons” | From QA knowledge base |
| | “I have backache and find it hard to sit in an exam” | From QA knowledge base (but suggests process outside of domain knowledge) |
| | “missing exams” | Clarification of Intent |
| | “ill on exam day” | Clarification of Intent |
| | “late personal circumstance” | Triage Marks Removal |
| | “student illness” | Clarification of Intent |

Our initial prototype used a rule-based decision tree for personal circumstances to guide the user through the process, once applicability had been established by identifying that intent (using a set of intent patterns such as “sick exam” or “missed deadline”). The logical dialogue structure for this is shown in Figure 2. Additionally, a “one-off” question-answering dataset is used when there is a strong match to a more specific question relating to the topic (here the user “passes through” to a direct answer rather than entering a protracted dialogue).
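The intent-pattern triage stage can be sketched as a small set of keyword patterns; the patterns below are illustrative and do not reproduce the prototype’s actual pattern set.

```python
import re

# Illustrative intent patterns only; the real prototype's patterns are not published in full.
INTENT_PATTERNS = {
    "personal_circumstances": [r"\bsick\b.*\bexam\b", r"\bmissed\b.*\bdeadline\b"],
    "marks_removal": [r"\bsat\b.*\bexam\b.*\bshouldn'?t\b",
                      r"\bhanded in\b.*\bshouldn'?t\b"],
}

def identify_intent(utterance):
    """Return the first intent whose patterns match, or None to trigger clarification."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None  # no match: ask a clarifying question instead
```

When no pattern fires, the bot falls back to a clarification of intent, as in the responses listed in Table 1.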

Problems with the approach taken include that, while the structure correctly formalises the regulations, a) the structure is hard-coded and any changes to the process require reworking at the code level; and b) it follows a set script or funnel down which all users are sent. It therefore does not adequately accommodate short-cut, clarification and process-related questions such as “I have submitted my work and don’t have any evidence, what should I do?”, “What counts as evidence?”, “What do I do if I don’t have evidence?” or “How many times can I self-certify?”. While problem a) was addressed by allowing new dialogue structures to be loaded from external files at run-time, the essential inflexibility of this approach remained.


Figure 2: Dialogue flow for personal circumstances, first prototype

We have therefore commenced work on a second prototype using a domain ontology to represent the enquirer (student), the processes we want to represent, and the requirements for these processes. The dialogue can then be built around the completeness of the entity (its object properties) as constructed during the conversation, a more “frame-based” approach. If required properties are missing, this triggers information-gathering questions. Once enough information is available, the enquirer can be channelled into the application procedure for the relevant process.
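This frame-based, completeness-driven dialogue loop can be sketched as follows; the slot names and question wording are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical required slots per process and the questions that fill them.
REQUIRED_SLOTS = {
    "PersonalCircumstances": ["affected_assessment", "has_submitted", "has_evidence"],
}

QUESTIONS = {
    "affected_assessment": "Which assessment was affected?",
    "has_submitted": "Have you already submitted the work or sat the exam?",
    "has_evidence": "Do you have supporting evidence, such as a doctor's note?",
}

def next_question(process, frame):
    """Return the information-gathering question for the first missing slot."""
    for slot in REQUIRED_SLOTS[process]:
        if slot not in frame:
            return QUESTIONS[slot]
    return None  # frame complete: channel enquirer into the application procedure

frame = {"affected_assessment": "exam1"}
```

Each user turn adds properties to the frame; a `None` result signals that the frame is complete and the enquirer can be routed onward.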

In order to facilitate maintenance, we store the natural language options used to identify intents in the entities’ RDF labels, and the queries to generate in the entity and property RDF comments. We also take advantage of SWRL rules which capture the formal regulations (e.g. Listing 1). Here, the student is flagged as eligible for marks removal if they have a) already submitted the work and b) admissible supporting evidence.
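A minimal sketch of this label/comment convention follows, with triples as plain tuples rather than a full RDF store; the URIs and phrasings are illustrative only.

```python
# Intent phrases live in rdfs:label; the question to generate lives in rdfs:comment.
# Triples are plain tuples here; a real system would use an RDF library.
TRIPLES = [
    ("ex:SelfCertification", "rdfs:label", "self-certify"),
    ("ex:SelfCertification", "rdfs:label", "self certification"),
    ("ex:SelfCertification", "rdfs:comment",
     "Have you already submitted the work or attempted the exam?"),
]

def objects(subject, predicate):
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def matched_entity(utterance):
    """Match an utterance to an entity via its rdfs:label phrases."""
    text = utterance.lower()
    for s in {s for s, p, _ in TRIPLES if p == "rdfs:label"}:
        if any(label in text for label in objects(s, "rdfs:label")):
            return s
    return None

entity = matched_entity("I want to self-certify for my exam")
question = objects(entity, "rdfs:comment")[0] if entity else None
```

Keeping trigger phrases and generated questions in the ontology itself means domain staff can update the bot’s language without touching code.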

Listing 1: Example SWRL rule for Marks Removal Eligibility

Student(?s) ^ Assessment(?a) ^ Process(?p) ^ Documentation(?d) ^
hasSubmitted(?s, ?a) ^ hasDocumentation(?s, ?d) ^
isAdmissibleForProcess(?d, ?p) -> isEligibleForProcess(?s, MarksRemoval)
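For readers less familiar with SWRL, the rule in Listing 1 can be restated as a plain predicate; the data structures below are hypothetical and simply mirror the rule’s atoms, rather than reflecting our implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Documentation:
    admissible_for: set = field(default_factory=set)  # process names, e.g. {"MarksRemoval"}

@dataclass
class Student:
    submitted: set = field(default_factory=set)        # assessments already submitted
    documentation: list = field(default_factory=list)  # Documentation items held

def eligible_for_marks_removal(student, assessment):
    """Mirrors: hasSubmitted(?s, ?a) ^ hasDocumentation(?s, ?d) ^
    isAdmissibleForProcess(?d, MarksRemoval) -> isEligibleForProcess(?s, MarksRemoval)."""
    return assessment in student.submitted and any(
        "MarksRemoval" in d.admissible_for for d in student.documentation
    )

s = Student(submitted={"exam1"}, documentation=[Documentation({"MarksRemoval"})])
```

The advantage of the SWRL form over such hand-written code is that the regulation remains declarative in the ontology, where a reasoner can evaluate it alongside other rules.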


7.0 Discussion

Our case study is typical of many agent interaction settings in that: a) the initial user input may be minimal and require clarification; and b) it is very important that the user is advised appropriately once the system is sufficiently apprised of their circumstances. The system therefore needs to match user status to organisational policies and processes – a strong argument for use of a formal, rule-based ontology as a core architectural component. That said, for it to fulfil the requirements of a conversational interface, it does need to be sufficiently detailed, pointing toward the expensive, resource-intensive end of the KOS spectrum (Souza et al. 2012).

Considering Soergel’s (2001) evaluation criteria for KOS in the light of agent applications, we see that the purpose requires relatively high specificity and that, in terms of sources and coverage, while preferred terms can be those defined and used by the organisation, they should be comprehensively mapped to the natural language of the user in order to be correctly identified, matched and prioritised.

In terms of conceptual structure, while some depth of class hierarchy seems useful for organisational purposes, it seems less likely that these are very useful for satisfying user needs, as queries that are applicable to ancestor classes are probably more unusual. Instead, the properties of a specific atomic entity seem to be more useful to guide slot filling and frame completion.

Finally, and perhaps most important, is Soergel’s criterion of “updating”, which we should elaborate to include not only frequency but ease of updating, along with the centrality of the KOS itself to the overall organisational information system. It is here that ontology-based agents seem to be very early-stage – very little is said about how they can be maintained or embedded within an organisation, though semantic wikis offer a useful opportunity to bridge the gap between knowledge engineering and domain expertise. On the ontology creation side, there is a relatively large amount of work looking at how ontologies can be derived from textual sources, but it is almost inevitable that further quality assurance steps will be needed and that this cannot be fully automated. Architecturally speaking, Roca et al.’s example (2019) seems to point the way, in describing an approach that allows the knowledge base to be incrementally enhanced without loss of service. In their case, however, the underlying knowledge structures are relatively partitioned or siloed, which would seem to risk increasing the problem of maintainability. On the flip side, use of a single integrated ontology will require version control and continuous validation to ensure that extensions do not introduce errors in existing rules.

8.0 Conclusions

This paper has looked at KOSs within chatbot and agent architectures in order to understand what the KOS itself can contribute to the end user experience. Clearly, there are demonstrable ways that a KOS-driven service can not only represent the domain knowledge with which a user is interacting but can also shape and facilitate the dialogue itself. Here there are potential advantages in goal-driven or regulatory-oriented applications in maintaining formal correctness, where “generative” AI-based systems cannot necessarily guarantee such accuracy.

While individual prototypes and technical approaches are relatively well documented, less has been published on how to get to a state where suitable ontologies are embedded within an organisation and maintained as a core system. While perhaps less superficially exciting, this work can establish a basis from which conversational interfaces can evolve productively.

References

Altinok, Duygu. 2018. “An Ontology-Based Dialogue Management System for Banking and Finance Dialogue Systems.” arXiv:1804.04838.

Athreya, Ram G., Axel-Cyrille Ngonga Ngomo, and Ricardo Usbeck. 2018. “Enhancing Community Interactions with Data-Driven Chatbots – The DBpedia Chatbot.” In WWW '18: Companion Proceedings of The Web Conference 2018, edited by Pierre-Antoine Champin, Fabien Gandon, Lionel Médini, Mounia Lalmas, and Panagiotis G. Ipeirotis. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee, 143–146. https://doi.org/10.1145/3184558.3186964.

Augello, Agnese, Giovanni Pilato, Giorgio Vassallo, and Salvatore Gaglio. 2013. “Chatbots as Interface to Ontologies.” In Advances onto the Internet of Things, Advances in Intelligent Systems and Computing 260, edited by S. Gaglio and G. Lo Re. Cham: Springer, 285–299.

Cameron, Gillian, David Cameron, Gavin Megaw, R. R. Bond, Maurice Mulvenna, Siobhan O'Neill, C. Armour, and Michael McTear. 2018. “Back to the Future: Lessons from Knowledge Engineering Methodologies for Chatbot Design and Development.” In Proceedings of the 32nd International BCS Human Computer Interaction Conference (HCI-2018), edited by Raymond Bond, Maurice Mulvenna, Jonathan Wallace, and Michaela Black. Swindon, UK: BCS Learning & Development Ltd, 1–5.

Carisi, Matteo, Andrea Albarelli, and Flaminia L. Luccio. 2019. “Design and Implementation of an Airport Chatbot.” In GoodTechs '19: Proceedings of the 5th EAI International Conference on Smart Objects and Technologies for Social Good, edited by Armir Bujari, Pietro Manzoni, Anna Forster, Edjair Mota, and Ombretta Gaggi. New York: Association for Computing Machinery, 49–54.

Giunchiglia, Fausto, Biswanath Dutta, and Vincenzo Maltese. 2014. “From Knowledge Organization to Knowledge Representation.” Knowledge Organization 41: 44–56.

Groza, Adrian, and Daniel Toniuc. 2017. “Explaining Climate Change with a Chatbot Based on Textual Entailment and Ontologies.” In 2017 IEEE 13th International Conference on Intelligent Computer Communication and Processing. Cluj-Napoca: IEEE.

Hill, Jennifer, W. Randolph Ford, and Ingrid G. Farreras. 2015. “Real Conversations with Artificial Intelligence: A Comparison Between Human-Human Online Conversations and Human-Chatbot Conversations.” Computers in Human Behavior 49: 245–250.

Hjørland, Birger. 2007. “Semantics and Knowledge Organization.” Annual Review of Information Science and Technology 41: 367–405.

Isaac, Antoine, and Thomas Baker. 2015. “Linked Data Practice at Different Levels of Semantic Precision: The Perspective of Libraries, Archives and Museums.” Bulletin of the American Society for Information Science and Technology 41, no. 4: 34–39.

Jones, Iolo, Daniel Cunliffe, and Douglas Tudhope. 2004. “Natural Language Processing and Knowledge Organisation Systems as an Aid to Retrieval.” In Knowledge Organization and the Global Information Society: Proceedings of the Eighth International ISKO Conference, 13–16 July 2004, London, UK, edited by Ia C. McIlwaine. Advances in Knowledge Organization 9. Würzburg: Ergon Verlag, 315.

Jurafsky, Dan. 2018. Conversational Agents AKA Dialog Agents. https://web.stanford.edu/class/cs124/lec/chatbot.pdf

Luger, Ewa, and Abigail Sellen. 2016. “‘Like Having a Really Bad PA’: The Gulf Between User Expectation and Experience of Conversational Agents.” In CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, edited by Jofish Kaye, Allison Druin, Cliff Lampe, Dan Morris, and Juan Pablo Hourcade. New York: Association for Computing Machinery, 5286–5297.

Maroengsit, Wari, Thanarath Piyakulpinyo, Korawat Phonyiam, Suporn Pongnumkul, Pimwadee Chaovalit, and Thanaruk Theeramunkong. 2019. “A Survey on Evaluation Methods for Chatbots.” In ICIET 2019: Proceedings of the 2019 7th International Conference on Information and Education Technology. New York: Association for Computing Machinery, 111–119. https://doi.org/10.1145/3323771.3323824.

Roca, Surya, Jorge Sancho, José García, and Álvaro Alesanco. 2019. “Microservice Chatbot Architecture for Chronic Patient Support.” Journal of Biomedical Informatics 102: 103305.

Schlangen, David. 2004. “Causes and Strategies for Requesting Clarification in Dialogue.” In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004. Cambridge, Massachusetts: Association for Computational Linguistics, 136–143.

Soergel, Dagobert. 2001. “Evaluation of Knowledge Organization Systems (KOS): Characteristics for Describing and Evaluating KOS.” In JCDL 2001 NKOS Workshop, Roanoke, VA, 5.

Souza, Renato Rocha, Douglas Tudhope, and Mauricio B. Almeida. 2012. “Towards a Taxonomy of KOS: Dimensions for Classifying Knowledge Organization Systems.” Knowledge Organization 39: 179–192.

Tennis, Joseph. 2013. “Metaphors of Time and Installed Knowledge Organization Systems: Ouroboros, Architectonics, or Lachesis?” Information Research 18, no. 3: 1–8.

Vtyurina, Alexandra, Denis Savenkov, Eugene Agichtein, and Charles L. A. Clarke. 2017. “Exploring Conversational Search With Humans, Assistants, and Wizards.” In CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, edited by Gloria Mark, Susan Fussell, Cliff Lampe, M. C. Schraefel, Juan Pablo Hourcade, Caroline Appert, and Daniel Wigdor. New York: Association for Computing Machinery, 2187–2193.
