An Ontology of Trust – Formal Semantics and Transitivity

Jingwei Huang
Enterprise Integration Laboratory
University of Toronto
5 King’s College Road
Toronto, ON M5S 3G8, Canada
[email protected]

Mark S. Fox
Enterprise Integration Laboratory
University of Toronto
5 King’s College Road
Toronto, ON M5S 3G8, Canada
[email protected]

ABSTRACT
This paper formalizes the semantics of trust and studies the transitivity of trust. On the Web, people and software agents have to interact with “strangers”. This makes trust a crucial factor on the Web. Basically, trust is established in interaction between two entities, and any one entity has only a finite number of direct trust relationships. However, activities on the Web require entities to interact with other unfamiliar or unknown entities. As a promising remedy to this problem, social-network-based trust, in which A trusts B and B trusts C, so A indirectly trusts C, is receiving considerable attention. A necessary condition for trust propagation in social networks is that trust be transitive. However, is trust transitive? What types of trust are transitive, and why? No theories or models found so far answer these questions in a formal manner. Most models either directly assume trust is transitive or do not give a formal discussion of why trust is transitive. To fill this gap, this paper constructs a logical theory of trust in the form of an ontology that gives a formal and explicit specification of the semantics of trust. Based on this formal semantics, two types of trust, trust in belief and trust in performance, are identified; the transitivity of trust in belief is formally proved; and the conditions for trust propagation are derived. These results give theoretical evidence to support making trust judgments using social networks on the Web.

Categories and Subject Descriptors
J.1 [Computer Applications]: Administrative Data Processing—Business; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence; H.4 [Information Systems]: Miscellaneous

General Terms
Theory

Keywords
Trust Formalization, Transitivity of Trust, Trust Ontology, Web of Trust

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICEC’06, August 14-16, 2006, Fredericton, Canada.
Copyright 2006 ACM 1-59593-392-1.

1. INTRODUCTION
The Web has been becoming an open, dynamic, and decentralized information/knowledge repository, a distributed computing platform, and a global electronic market. In such a cyberspace, people, organizations, and software agents have to interact with “strangers” and are confronted with various information from unfamiliar sources. This makes trust a crucial factor on the Web.

In general, trust is “a basic fact of social life” [21]. For example, we trust the data that we use for a decision to be reliable; when we cross a street intersection, we trust the cars in other directions to follow the traffic signals. Trust is so pervasive in our social life that we may not even be fully aware of using it.

Trust is important to us because it is one of the foundations of people’s decisions. Generally, rational decisions made in the real world are based on a mixture of bounded rational calculation and trust. According to Simon [31], a decision-making process in the real world is limited by “bounded rationality”, i.e., the “rational choice that takes into account the cognitive limitations of the decision maker - limitations of both knowledge and computational capacity”. In a real decision situation, since we have only limited information/knowledge and computing capacity, as well as limited time available for decision analysis, a rational decision has to be partly based on bounded rational calculation and partly based on trust. As Luhmann revealed, trust functions as “reduction of complexity” in our social life [21].

Returning to the context of trust on the Web: interest in addressing the issue of trust on the Web has appeared under the umbrella of the “Web of Trust”. An important issue is how to transplant the trust mechanisms of the real world into cyberspace. To this end, many researchers attempt to construct formal models of trust and to develop tools that support people making trust judgments on the Web. Basically, trust is established in interaction between two entities. Many of the models proposed focus on how to calculate and revise trust degrees in interaction between two entities. However, one entity has only a finite number of direct trust relationships, which cannot meet the needs of the various interactions with unknown or unfamiliar entities on the Web. As a promising remedy to this problem, social-network-based trust, in which A trusts B and B trusts C, thus A indirectly trusts C under certain conditions, is receiving considerable attention. A necessary condition for trust propagation in social networks is that trust be transitive. In other words, without transitivity, trust cannot propagate in networks. However, is trust transitive? What types of trust are transitive, and why? No theories or models found so far answer these questions in a formal manner. Most models mainly focus on how to calculate trust degrees when trust propagates in social networks, and they either directly assume trust is transitive or do not give a formal discussion of why trust is transitive. To fill this gap, this paper aims to construct a logical theory of trust in the form of an ontology, which gives a formal and explicit specification of the semantics of trust. From this formal semantics, two types of trust, trust in belief and trust in performance, are identified; the transitivity of trust in belief is formally proved; and the conditions for trust propagation are derived. These results provide theoretical evidence for trust propagation in social networks. The proposed trust model can be used for trust judgments using social networks on the Web.

Our specific interest in trust comes from Knowledge Provenance (hereafter referred to as KP). KP is proposed in [8, 9] to create an approach to determining the validity of web information by means of modeling and maintaining information sources, information dependencies, and trust structures. Three KP models, static KP [8], dynamic KP [15], and uncertainty-oriented KP [16], have been studied. These models assume that trust relationships between information consumers and information creators have already been calculated, and they mainly focus on modeling and maintaining information sources and information dependencies. This paper focuses on modeling trust structures.

The contents of this paper are organized as follows: following the discussion of related research in Section 2, our view of trust is introduced in Section 3; our motivating scenarios are given in Section 4; the informal competency questions to be answered by the trust ontology under development are defined in Section 5; the methodology and terminology to be used are introduced in Section 6; the axioms and theorems of the trust ontology are developed in Section 7; an application example is given to demonstrate the potential uses of the trust ontology in Section 8; finally, in Section 9, we conclude and discuss further research.

2. RELATED RESEARCH
Trust has been receiving considerable attention from many directions. This section examines related research in two respects: conceptualization and formalization.

2.1 Trust Conceptualization
Trust is a complex social phenomenon. The concepts developed in the social sciences provide an important foundation for trust formalization.

A large body of research has contributed to the conceptualization of trust (refer to [20], [3], [10]). Rotter [29] defined trust as “expectancy”. Mayer et al. [23] defined trust as “the willingness to be vulnerable”. A typical definition is “trust is a psychological state comprising the intention to accept vulnerability based upon positive expectation of the intentions or behavior of another” [30]. Deutsch [6] studied trust in cooperation using game theory, and he built an important framework for formalizing interpersonal trust from the viewpoint of decision making. Luhmann [21] revealed “system trust”, the trust placed in the functioning of a system in society. Zucker [33] examined the evolution of the trust mechanism in the American economic system and identified three modes of “trust production” (trust building): (1) “process-based”, in which trust is built on a past history of interaction; (2) “characteristic-based”, in which trust depends on “social similarity”, such as culture, age, and gender; (3) “institutional-based”, in which trust is built on a “formal social structure” comprising “membership in a subculture” and “intermediary mechanisms”, such as regulation, legislation, and the functions of governments and banks.

Most of the study of trust in the social sciences focuses on inter-individual trust, but much attention has shifted to “system trust”, a different mechanism of trust that emerged at the turn of the last century to meet people’s increasing demands for more and more interaction with strangers in industrial society. This situation is very similar to the trust problem we are facing today in cyberspace.

2.2 Trust Formalization
In computer science, trust was initially a concern of the security community. For the purposes of secure web access control, Blaze et al. [2] first proposed “decentralized trust management” to separate trust management from applications, and developed the PolicyMaker system. Khare and Rifkin [19] proposed basic principles of trust management.

Trust is also a concern of the distributed AI community [26]. Marsh [22] constructed a formal model of trust to describe the relations among the major concepts of trust. His study is among the first to formalize trust. However, on the one hand, his work focuses solely on inter-individual trust; on the other hand, trust is represented with simple arithmetic formulas, and the logical relations in trust are not explicitly described. Demolombe [5] constructed a modal logic system of trust. This model defines the formal semantics of trust as belief based on the properties of sincerity, credibility, cooperativity, and vigilance. However, an important element of trust, context, is not considered in this semantics of trust, and the transitivity of trust is not studied either. Gans et al. [11] addressed distrust in trust modeling. Falcone & Castelfranchi [7] developed a formal trust model based on the BDI agent architecture. One of the features of this model is the representation of the competence and willingness aspects of trust. Perhaps because the trust problem in DAI arises in the interaction among agents, all the work discussed above focuses on inter-individual trust.

In recent years, trust models based on social networks have been receiving considerable attention. In particular, the trend is powered by the “web of trust”, which is identified as the top layer of the semantic web. The concept of a “web of trust” was perhaps first developed in PGP as a trust model used for public key validation by means of social networks. However, “trust” in PGP is specifically about public key validation. The FOAF project (http://foaf-project.org/) attempts to create social networks on the web by enabling people to describe acquaintance relationships in machine-readable web pages. Although acquaintance relationships are not trust relationships, FOAF is a good starting point. Recently, many models of trust on the web using social networks have emerged. For example, Yu and Singh [32] constructed a model of reputation (trust) updating and propagation using testimonies from the trustee’s neighbors; Golbeck et al. [12] extended the acquaintance relationships in the FOAF model by introducing levels of trust and applied the model to filtering emails; Richardson et al. [28] proposed an algebraic representation of trust propagation and applied it to bibliography recommendation; Guha and Kumar [14] constructed a trust propagation model considering distrust; Abdul-Rahman and Hailes [1] proposed a trust model including “direct trust” and “recommender trust”, in which trust propagates in social networks; Josang et al. [18] argued that trust is not always transitive and that “referral trust” (i.e., “recommender trust”) is transitive. However, they did not reveal in a formal manner why recommendation is transitive. The common perspective of all these models is trust propagation in social networks. However, is trust transitive? What types of trust are transitive, and why? No theories or models found have answered these questions in a formal manner. The models found either directly assume trust is transitive or do not give a formal discussion of why trust is transitive, because they lack a formal representation of the semantics of trust.

In order to formally study the transitivity of trust, a formal representation of the semantics of trust is necessary. However, no formal models found meet the needs of such an analysis. To fill this gap, we aim to identify the semantics of trust, to develop a logical model of trust that reveals the logical relations among the constructs of trust, and, from this formal semantics of trust, to prove the transitivity of trust.

3. WHAT IS TRUST?
Synthesizing the concepts of trust in the trust-related literature, we take the following view of trust.

Trust is the psychological state comprising (1) expectancy: the trustor expects a specific behavior of the trustee, such as providing valid information or effectively performing cooperative actions; (2) belief: the trustor believes that the expectancy is true, based on evidence of the trustee’s competence and goodwill; (3) willingness to be vulnerable: the trustor is willing to be vulnerable to that belief in a specific context where the information is used or the actions are applied.

In trust, two roles are involved: trustor and trustee. Furthermore, there are three aspects to the implications of trust. First, when it is said that entity A trusts B, one must ask “on what does A trust B?” This leads to the first implication of trust: expectancy, i.e., the trustor expects that the trustee behaves in a specific manner within a specific context. The expected behavior can be providing valid information or performing cooperative actions. Secondly, when trusting, the trustor must believe that the trustee will behave as expected, according to the competence and goodwill of the trustee. This is the most recognized aspect of the meaning of “trust”. Thirdly, the trustor not only believes that the trustee will behave as expected but also is willing to be vulnerable for that belief in a specific situation, i.e., the trustor is willing to assume the risk that the trustee may not behave as expected.

As many researchers have realized, trust is context-specific; for example, a person may trust his or her financial advisor about investment analysis but not trust the advisor about health care. There are two types of contexts related to trust. The first is the context in which the expected valid information is created, and the second is the context in which the trusted information (or action) will be applied by the trustor, or the situation in which the trustor is confronted with the trust judgment problem. These two contexts may not be the same. For example, a financial expert (the trustee) created a piece of information in the context of giving a financial investment seminar, and in another context, buying stocks, an investor (the trustor) attempts to use this information and needs to judge its validity.

In summary, the meaning of trust we adopt can be expressed as follows:

Trust = Expectancy

+ Belief in expectancy

+ Willingness to be vulnerable for that belief.
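The three components above can be sketched as a simple data structure. This is our own illustrative encoding; the class and field names are not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustRelation:
    """Illustrative encoding of the three components of trust (Section 3)."""
    trustor: str
    trustee: str
    expectancy: str        # the specific behavior expected of the trustee
    believes: bool         # the trustor believes the expectancy holds
    willing_to_risk: bool  # the trustor accepts vulnerability in this context
    context: str           # the context in which the trust applies

    def trusts(self) -> bool:
        # Trust = expectancy + belief in it + willingness to be vulnerable
        return self.believes and self.willing_to_risk

r = TrustRelation("A", "B", "provides valid product-quality judgments",
                  believes=True, willing_to_risk=True, context="porcelain quality")
print(r.trusts())  # -> True
```

Note that belief without willingness to be vulnerable does not yield trust under this reading: both flags must hold.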

4. MOTIVATING SCENARIOS
In order to construct a trust ontology, we start from motivating scenarios.

Consider the following use cases of the electronic business of a gift company (denoted as F). F was originally a floral company; now its online business includes both bouquets and other gift products. Web services technology enables F to extend its online store to a virtual omni-gift store that sells not only the products in its catalogues but also any other type of gift product available via web services.

Case 1: Trust in Business Partnerships
James, an old customer of F, wants to order from F’s online store a gift for a friend: a special, high-quality fine china tea set made in Jingdezhen, China. Assume this product is outside F’s catalogues. In F’s online store, a sales agent, a software robot representing F, first finds from web services registries (e.g., UDDI registries) a list of service providers who sell the requested products; second, from this list, as a general business rule, the agent needs to vet service providers whose products are trusted by F to have high quality; then, the agent presents to James a selected list of products provided by the trusted service providers; after James chooses a satisfactory product, the agent orders the product for him. Here, we ignore the details of the web service process and focus only on how the agent makes trust judgments.

A service provider that F found in a web services registry is J, a porcelain company in Jingdezhen city. In order to maintain F’s business reputation, before F recommends J’s fine china tea sets to customers, F needs to make sure that J’s products are of high quality. To make this judgment, the sales agent searches for J in F’s business relationship management system. Unfortunately, J is not in the system; that is to say, F does not know J at the time. However, F has a long-term business partner P. P frequently supplies porcelain products such as vases to F. Therefore, F trusts P’s judgments on porcelain product quality. Furthermore, P has a major trusted porcelain product supplier S in New York, so similarly P trusts S’s judgment on porcelain product quality. Finally, J is one of S’s trusted suppliers. In this last relationship, S trusts the product quality of J. From this chain of trust, the sales agent of F infers that the porcelain tea sets of J have high quality.

We now further analyze the roles and relationships demonstrated in this case.

James, a customer of F who trusts F’s services, wants to buy from F a special high-quality fine china tea set. This request activates web service discovery, trust judgments, and trading.

F, a gift company, trusts P’s judgment on porcelain product quality because P is a long-term professional business partner of F. In terms of trust, the expectancy F holds of P is that P’s judgment on porcelain product quality is correct, and F feels comfortable believing what P believes about porcelain product quality.

P, a porcelain product company, trusts S’s judgment on porcelain product quality, in the same way that F trusts P’s.

S, a porcelain product supplier, trusts J regarding the quality of the porcelain products J produces. In terms of trust, S expects that J’s porcelain products have high quality; based on J’s past performance, S is inclined to believe this expectancy.

J, a porcelain manufacturer in Jingdezhen, is unknown to F.

The chain of trust can be summarized as follows: (1) S trusts that the porcelain products of J have high quality; (2) P trusts S’s judgment on porcelain product quality, so P also trusts that the porcelain products of J have high quality; (3) similarly, F trusts P’s judgment on porcelain product quality. This leads F to trust that the porcelain products of J have high quality.
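This chain can be sketched as a small graph search. The relation names and the propagation rule below are our own illustration of the scenario, not the paper's formalism.

```python
# Trust-in-belief edges: X trusts what Y believes (about porcelain quality).
trust_in_belief = {("F", "P"), ("P", "S")}
# Trust-in-performance edges: X trusts Y's performance (product quality).
trust_in_performance = {("S", "J")}

def trusts_performance(x, y, seen=frozenset()):
    """Does x trust y's performance, directly or via trust-in-belief edges?"""
    if (x, y) in trust_in_performance:
        return True
    # If x trusts z's beliefs, x adopts z's (possibly derived) trust in y.
    return any(trusts_performance(z, y, seen | {x})
               for (a, z) in trust_in_belief if a == x and z not in seen)

print(trusts_performance("F", "J"))  # -> True: F indirectly trusts J's quality
```

The `seen` set guards against cycles in larger networks; in this scenario the chain F → P → S → J terminates at the single trust-in-performance edge.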

Case 2: Trust in System
Consider another situation in which F does not know J, and F also does not find any useful relationship in its business relationship management system connecting it to J. However, J shows that it has ISO 9000 quality certification. Assume that F trusts the ISO 9000 system; thus F trusts that J’s products meet ISO 9000 quality standards. This leads F to trust that the porcelain products of J have high quality.

Case 3: Trust in Business Partnerships (2)
Consider the business relationships in case 1. Assume that S trusts that P is able to and will pay for all its purchases, and so allows P to pay quarterly rather than in every transaction, and that P trusts F in the same way. Now consider the following situation. For the first time, F orders a considerable quantity of porcelain vases directly from S. Does S trust F on later payment as it trusts P? Common knowledge tells us that the answer is “no”. This is because the facts that S trusts P to pay later and P trusts F to pay later do not necessarily imply that S trusts F to pay later. In other words, this trust is not transitive.

Findings:
From these cases, a number of important facts and concepts can be revealed.

Trust Factor in Web-Services-Based B2B
First, in e-business based on web services technology, a web services user (here, F) accepts only trusted service providers retrieved from web services registries. Trust issues may arise in many aspects of e-business, including service quality, business credibility, and business information security and privacy.

Types of Trust
There are different types of trust.

In case 1, the trust that F places in P is trust in what P believes. The trust P places in S is of the same type. From these examples, we can identify a type of trust: trust in belief. The trust that P places in F in case 3 is trust in F’s performance. This is another type of trust: trust in performance. Other examples of this type include the trust that S places in P in case 3 and that S places in J in case 1.

Transitivity of Trust
In case 1, S’s trust in J propagates to P and then to F, because F trusts P’s belief and P trusts S’s belief on product quality. This leads us to a general fact: trust in belief is transitive. Because of this property, trust can propagate in social networks formed by business relationships.

However, trust is not transitive in general. Case 3 shows that trust in performance is not transitive.
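The contrast between the two cases can be illustrated by restricting which kinds of edges may be chained; again, this encoding is our own sketch, not the paper's axioms.

```python
# Case 1 edges: trust in belief (may be chained).
belief_edges = {("F", "P"), ("P", "S")}
# Case 3 edges: trust in performance on later payment (may NOT be chained).
perf_edges = {("S", "P"), ("P", "F")}

def derived_perf(x, y):
    # A trust-in-performance edge is usable only as the final step;
    # only trust-in-belief edges propagate further.
    if (x, y) in perf_edges:
        return True
    return any(derived_perf(z, y) for (a, z) in belief_edges if a == x)

# S trusts P to pay later, and P trusts F to pay later, but no rule lets
# S conclude that F will pay later: trust in performance is not transitive.
print(derived_perf("S", "F"))  # -> False
```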

Sources of Trust
Basically, trust comes from the experience of interaction between two parties. For example, F trusts P’s belief on porcelain product quality because P is F’s long-term business partner and F has gradually learned from their business interaction that P is very professional in the porcelain business. The experience of interaction is the essential source of trust. Trust derived from interaction experience is called inter-individual trust, or direct trust.

Interestingly, in case 1 we observed that trust may propagate in social networks. Because of trust propagation, F trusts J’s products. This leads to a new type of trust, whose source is trust propagation in social networks. In this paper, this type of trust is called relational trust or social-network-based trust.

Finally, in case 2, F trusts J’s products because F trusts the ISO 9000 quality management system and J complies with that system. This is an example of the system trust discussed in Section 2.

5. INFORMAL COMPETENCY QUESTIONS

In order to give a formal and explicit specification of trust so that it can be used for trust judgments in social networks on the web, we develop a logical theory of trust in the form of an ontology.


Following the ontology development methodology of Gruninger & Fox [13], we specify the trust ontology in four steps: (i) provide motivating scenarios; (ii) define informal competency questions for which the ontology must be able to derive answers; (iii) define the terminology; (iv) formalize the competency questions, and develop the axioms (i.e., semantics) of the ontology to answer these questions. We discussed motivating scenarios in the previous section. This section presents the informal competency questions.

In the context of making trust judgments in social networks, the trust ontology under development needs to answer the following competency questions:

Q1: In a specific context, can an entity (trustor) trust another entity (trustee) regarding what the trustee performs, particularly the validity of the information created by the trustee?

Q2: In a specific context, can an entity (trustor) trust another entity (trustee) regarding what the trustee believes?

Q3: When an entity has trust in another entity’s belief in a given context, and the second entity has trust in a third entity’s performance (or belief) in another context, can the first entity have trust in the third entity’s performance (or belief)? If so, in what context?

Q4: When an entity has trust in the performance of an organization (or a group whose members have a common set of characteristics related to the trust) in a specific context, can this entity have trust in the performance of the members of this organization or group?

Q5: When an entity has trust in another entity’s performance (or belief) in a specific context, given the information created by the second entity within that context, can the first entity believe this information?

After we define the terminology of the ontology under development, these informal competency questions will be formally represented, and then axioms will be developed to answer them.

6. METHODOLOGY AND TERMINOLOGY
6.1 Methodology
In order to define our trust ontology, we formalize trust using the situation calculus. The situation calculus is a logical language specifically designed for representing dynamically changing worlds [27]. It works in the following way: the changing world is represented by a set of fluents. A fluent is a property (of the world) whose value depends on situations; in other words, a fluent changes dynamically when the situation does. The situation, in turn, changes when an action is performed by agent(s) in the world.

Trust and belief are defined as fluents. We represent fluents in reified form [25]. That is to say, the fact that a relational fluent is true in a situation is represented by holds(f(x), s) rather than by the predicate f(x, s). In this way, a fluent is a term, so that a fluent may have other fluents as parameters.

Following Pinto [25], we treat the “logical operators” between fluents as functions. In other words, the “propositional logical expressions” of fluents are treated as functions rather than as logical expressions in the language of the situation calculus.

holds(f1 ∧ f2, s) =def holds(f1, s) ∧ holds(f2, s) (1)

holds(¬f, s) =def ¬holds(f, s) (2)

Based on the above definitions, other “logical operators” are also defined as functions of fluents. For example,

holds(f1 ⊃ f2, s) ≡ holds(f1, s) ⊃ holds(f2, s). (3)
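Definitions (1)-(3) can be sketched as a small interpreter over reified fluent terms. The tuple representation and function names below are our own illustrative choices, not the paper's notation.

```python
# A fluent is a term; compound fluents are nested tuples, e.g. ("and", f1, f2).
def holds(f, s, atomic):
    """Evaluate reified fluent term f in situation s.
    `atomic(f, s)` decides atomic fluents; compounds follow defs (1)-(3)."""
    if isinstance(f, tuple) and f[0] == "and":        # definition (1)
        return holds(f[1], s, atomic) and holds(f[2], s, atomic)
    if isinstance(f, tuple) and f[0] == "not":        # definition (2)
        return not holds(f[1], s, atomic)
    if isinstance(f, tuple) and f[0] == "implies":    # definition (3)
        return (not holds(f[1], s, atomic)) or holds(f[2], s, atomic)
    return atomic(f, s)  # atomic fluent

facts = {("raining", "s0"), ("wet", "s0")}
atomic = lambda f, s: (f, s) in facts
print(holds(("implies", "raining", "wet"), "s0", atomic))  # -> True
```

Because fluents are terms rather than predicates, a compound such as `("implies", f1, f2)` can itself appear as a parameter of another fluent, which is exactly what the reified form is meant to allow.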

The situation calculus is a many-sorted logical language. We will use the following sorts: A: the set of actions; S: the set of situations; F: the set of fluents; E: the set of entities; D: the set of domain objects.

Regarding context, we understand: (a) a context is associated with a domain constrained by a set of terminologies, assumptions, facts, theorems, and inference rules; (b) a context may also contain a goal (the problem to be solved) and the current situation towards achieving this goal. Context representation can be very sophisticated [24, 4]. In order to focus on trust and to make our model easier to use, we will represent contexts with fluents in the situation calculus.

The logical model of trust under development mainly focuses on the logical relations among trust-related fluents. These relations are called state constraints in the language of the situation calculus, and methods have been developed to solve the so-called “ramification problem” [25]. In order to focus on the logic of trust and to keep the theory easy to understand, this paper keeps the theory in the form of state constraints. Although actions, their preconditions, and successor state axioms are major components of the situation calculus, this paper will not discuss them, for the same reason; we will discuss them in another paper.

In this paper, we only consider the case of certainty; thus believing certainly leads to willingness to be vulnerable. For this reason, we will not explicitly include “willing to be vulnerable” in our formalization. In the cases of uncertainty, in which the degree of belief is considered, willingness to be vulnerable for that belief corresponds to a decision whether or not to trust (refer to [17]).

Finally, we follow the convention that all unbound variablesare universally quantified in the largest scope.

6.2 Terminology

We define the relational fluents, functional fluents and predicates to be used in our trust ontology in tables 1, 2 and 3 respectively.

7. AXIOMS AND THEOREMS

7.1 Formal Semantics of Trust

As revealed in the motivating scenarios, two types of trust are identified in accordance with the types of expectancies: trust in performance and trust in belief. We formally define them one by one as follows.


Table 1: Relational Fluents

believe(d, x) ⊆ E × F
    Entity d believes that thing x is true.

trust_p(d, e, x, k) ⊆ E × E × F × F
    In context k, trustor d trusts trustee e on thing x made by e;
    x is called the expectancy.

trust_b(d, e, x, k) ⊆ E × E × F × F
    In context k, trustor d trusts trustee e on thing x that e believes.

has_p_tr(d, e, x, k) ⊆ E × E × F × F
    Trustor d has a trust-in-performance type of inter-individual
    trust relationship with trustee e.

has_b_tr(d, e, x, k) ⊆ E × E × F × F
    Trustor d has a trust-in-belief type of inter-individual trust
    relationship with trustee e.

made(x, d, k) ⊆ F × E × F
    Information x is made by entity d in context k.

memberOf(e, o) ⊆ E × E
    Entity e is a member of o, an organization entity or an entity
    group with a same set of characteristics.

Table 2: Functional Fluents

info(x) : D → F
    x is a piece of information. This fluent is used when x is not a
    predefined fluent.

perform(e, x) : E × A → F
    Entity e performs action x.

Table 3: Predicates

holds(p, s) ⊆ F × S
    As defined in the situation calculus, fluent p holds in situation s.

Poss(a, s) ⊆ A × S
    It is possible to perform action a in situation s.

entail(q, k) ⊆ F × F
    Context q entails context k.

7.1.1 Trust in performance

Trust in performance is the trust in what the trustee performs, such as the information created or the actions performed.

Trust in performance has the following basic semantics: the trustor believes in a piece of information created by the trustee in a context within the trustor’s context of trust; or, the trustor believes in the performance of an action committed by the trustee in a context within the trustor’s context of trust. This semantics can be formally defined as the following axiom:

Axiom 1 (formal semantics of trust in performance):

holds(trust_p(d, e, x, k), s) ≡
    ∀q, (holds(made(x, e, q), s) ∧ entail(q, k)
        ⊃ holds(believe(d, k ⊃ x), s))    (4)

where predicate entail(q, k) is defined as follows.

Definition:

entail(q, k) ≡ ∀s, (holds(q, s) ⊃ holds(k, s)) (5)

In axiom (4), the expected thing (called the expectancy) is the fluent x; believe(d, k ⊃ x) represents that d believes x to be true when context k is true. In other words, d believes x in context k.

To express such a trust relationship in practice, variable d will be bound to a trustor; variable e will be bound to a trustee; variable x will be bound to a fluent representing the information created by the trustee; and variable k will be bound to a fluent representing the context of trust.

Example 1. Ben, a customer of Amazon, trusts Amazon regarding the message that his order status is “Shipped”, in the context of online trading. This trust relationship, which holds in a situation after Ben has made his order, can be represented as follows:

holds(trust_p(Ben, Amazon,
        order_st(#102579, Shipped),
        order_from(#102579, Amazon) ∧ order_st(#102579, Shipped)),
      S0)

According to axiom 1, this means that for any context q, if Amazon creates the information order_st(#102579, Shipped) in q, and q entails the context order_st(#102579, Shipped) ∧ order_from(#102579, Amazon), then Ben believes this information in this context of the trust. The formal representation of this meaning is as follows:

(∀q)(holds(made(order_st(#102579, Shipped), Amazon, q), S0)
        ∧ entail(q, order_st(#102579, Shipped)
                    ∧ order_from(#102579, Amazon))
    ⊃ holds(believe(Ben,
            order_st(#102579, Shipped) ∧ order_from(#102579, Amazon)
                ⊃ order_st(#102579, Shipped)),
        S0))

q can be any context that entails the context of the trust, for example,

order_of(#102579, Ben) ∧ order_from(#102579, Amazon)
    ∧ shipped_by(#102579, UPS) ∧ order_st(#102579, Shipped).
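Under the simplifying assumption that a context is a conjunction of atomic fluents, definition (5) reduces to a containment check on the conjuncts, which lets the entailment in this example be verified mechanically (a sketch of ours, not part of the formalism):

```python
# Sketch: model a conjunctive context as the frozenset of its atomic
# conjuncts. Then "q entails k" (definition (5)) holds iff every
# conjunct of k also appears among the conjuncts of q.

def entail(q, k):
    return k <= q  # subset test on the conjunct sets

k = frozenset({"order_st(#102579, Shipped)",
               "order_from(#102579, Amazon)"})   # context of the trust
q = k | {"order_of(#102579, Ben)",
         "shipped_by(#102579, UPS)"}             # a stricter context
print(entail(q, k))  # True: q entails the context of the trust
print(entail(k, q))  # False: the converse does not hold
```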

There are two different contexts in trust. q is the context for performing/making x, called the context of performance; and k is the context for using x, called the context of trust. When the expectancy (x) is a piece of information, the context in which this information is created and the context in which it is used are obviously different; when the expectancy is performing an action, these two contexts may overlap in the circumstance in which the action is performed. However, the trustor’s concerns, such as goals and utilities, may be quite different from the trustee’s concerns about the completion of this action. For this reason, these two contexts are also different in the latter case.

In our formalization, for the purpose of simplicity, the expectancy is defined as a fluent. When the expectancy is a piece of information, the information appears directly as a fluent to replace variable x in the axiom; when the expectancy is to perform an action, the expectancy on the trustee’s performance needs to be treated as a fluent. We suggest representing this expectancy as the fluent perform(e, a), which represents that trustee e performs action a, to replace x in axiom 1 as follows.

Corollary (trust in performing actions):

holds(trust_p(d, e, perform(e, a), k), s) ≡
    ∀q, (holds(made(perform(e, a), e, q), s) ∧ entail(q, k)
        ⊃ holds(believe(d, k ⊃ perform(e, a)), s))    (6)

This corollary states that, at any situation s, trustor d trusting trustee e on performing action a in context k is logically equivalent to: if e commits to do a in a context q that is within k, then d believes that e performs a in context k.

Example 2. Ben trusts Amazon regarding getting a refund for an order, in the context that he wants a refund for some reason and the order meets Amazon’s returns policy. This trust relationship can be represented as follows:

holds(trust_p(Ben, Amazon,
        perform(Amazon, return(#102579)),
        refund_asked(#102579) ∧ meets_r_p(#102579)),
      s)

The meaning of this trust relationship is similar to example 1, except that the expectancy is perform(Amazon, return(#102579)).
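Both examples instantiate the same inference pattern of axiom 1. A toy forward-chaining sketch (our assumptions: fluents as strings, contexts as conjunct sets, entailment as containment; all names are ours) makes the pattern concrete:

```python
# Toy sketch: derive believe(d, k > x) from trust_p(d, e, x, k) and
# made(x, e, q) when q entails k, following axiom (4).

from typing import NamedTuple, FrozenSet

class TrustP(NamedTuple):          # trust_p(d, e, x, k)
    d: str; e: str; x: str; k: FrozenSet[str]

class Made(NamedTuple):            # made(x, e, q)
    x: str; e: str; q: FrozenSet[str]

def entail(q, k):
    return k <= q                  # definition (5) for conjunctive contexts

def derive_beliefs(trusts, mades):
    """Each matching (trust, made) pair with entail(q, k) yields a
    belief fact (d, k, x), read as: d believes k implies x."""
    return {(t.d, t.k, t.x)
            for t in trusts for m in mades
            if m.x == t.x and m.e == t.e and entail(m.q, t.k)}

k = frozenset({"order_from(#102579, Amazon)"})
trusts = [TrustP("Ben", "Amazon", "order_st(#102579, Shipped)", k)]
mades = [Made("order_st(#102579, Shipped)", "Amazon",
              k | {"shipped_by(#102579, UPS)"})]
beliefs = derive_beliefs(trusts, mades)
print(len(beliefs))  # 1: Ben believes the order status in context k
```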

From its basic semantics, trust in performance can be extended to a stronger semantics: the trustor believes everything (rather than one specific thing) made by the trustee in a context within the trustor’s context of trust. This stronger semantics can be formally represented as the following theorem.

Theorem 1.

(∀x, holds(trust_p(d, e, x, k), s)) ≡
    (∀y, q)(holds(made(y, e, q), s) ∧ entail(q, kθ)
        ⊃ holds(believe(d, kθ ⊃ y), s))    (7)

Here,

θ = {x/y} (8)

is a substitution operator that replaces variable x with term y in the operand. In this theorem, the operand is the fluent to which k is bound.

(The proof of this theorem is omitted here.)

In this theorem, sentence

∀x, holds(trust_p(d, e, x, k), s)    (9)

represents the stronger or general trust relationship: trustor (d) trusts trustee (e) regarding any information (x) the trustee creates in the context of trust (k).

To express this stronger trust relationship, x needs to remain as a variable to represent the expectancy of the trust; the context of trust, to which variable k will be bound, may address the expectancy of trust (i.e. x).

Example 3. Sentence

∀x, trust_p(Bill, Greenspan, x, inFieldOf(x, Economic_Analysis))

represents that Bill trusts Greenspan regarding what Greenspan states in the field of economic analysis.

This trust relationship means: for every statement (denoted as y) made by Greenspan in a context q within the context of trust (i.e. in the field of economic analysis), Bill will believe this statement in that context.

The thing (denoted as x) on which Bill trusts Greenspan is the same thing (denoted as y) which Greenspan performs.


Technically, in order to match these two things as one, substitution θ is employed to unify x with y. After this substitution, the thing x addressed in the context of trust (k) is bound to the same thing y as addressed in the context of performance (q).

7.1.2 Trust in belief

Trust in belief is the trust placed on what the trustee believes.

The basic semantics of trust in belief is that the trustor believes a thing believed by the trustee in a context within the trustor’s context of trust. This is formally defined in the following axiom.

Axiom 2 (formal semantics of trust in belief ):

holds(trust_b(d, e, x, k), s) ≡
    ∀q, (holds(believe(e, q ⊃ x), s) ∧ entail(q, k)
        ⊃ holds(believe(d, k ⊃ x), s))    (10)

Example 4. In the context of our motivating example, F trusts what P believes regarding the quality of a porcelain product, e.g. “TeaSet-J1106b”. This trust can be represented as follows.

holds(trust_b(F, P, qual_grade(TeaSet-J1106b, A),
        in_topic(qual_grade(TeaSet-J1106b, A), Porc_Qual)), s)    (11)

where qual_grade(TeaSet-J1106b, A) represents that the quality grade of product TeaSet-J1106b is A.

Similar to trust in performance, trust in belief has the stronger semantics that the trustor believes everything believed by the trustee in a context that is within the trustor’s context of trust. The formal representation of this semantics is as follows.

Theorem 2.

(∀x)(holds(trust_b(d, e, x, k), s)) ≡
    (∀y, q)(holds(believe(e, q ⊃ y), s) ∧ entail(q, kθ)
        ⊃ holds(believe(d, kθ ⊃ y), s))    (12)

Example 5. In our motivating example, the trust relationship that F trusts what P believes regarding porcelain product quality can be represented as follows.

∀x, holds(trust_b(F, P, x, in_topic(x, Porc_Qual)), s)    (13)

7.2 Sources of Trust

In the last subsection, we discussed the semantics of trust, or what trust means. This subsection discusses where trust comes from.

Trust comes from three types of sources: (1) direct trust from the experience of interaction between trustor and trustee, called inter-individual trust; (2) trust derived through trust propagation in social networks, called relational trust; (3) trust via the trust placed on the stable or predictable functions/behaviors of a system, called system trust, typically including institution-based trust, professional-membership-based trust, as well as characteristics-based trust. The above (2) and (3) are also called indirect trust.

Inter-individual trust relationships are represented with fluents: ∀x, has_p_tr(d, e, x, k), which represents that trustor d has a trust-in-performance type of inter-individual trust relationship with trustee e in context k; and ∀x, has_b_tr(d, e, x, k), which represents that trustor d has a trust-in-belief type of inter-individual trust relationship with trustee e in context k.

Inter-individual trust also has the same semantics as the general concept of trust. Therefore, the general semantics of trust is a necessary condition of inter-individual trust, as described in the following axiom.

Axiom 3.

holds(has_p_tr(d, e, x, k), s) ⊃
    ∀q, (holds(made(x, e, q), s) ∧ entail(q, k)
        ⊃ holds(believe(d, k ⊃ x), s))    (14)

holds(has_b_tr(d, e, x, k), s) ⊃
    ∀q, (holds(believe(e, q ⊃ x), s) ∧ entail(q, k)
        ⊃ holds(believe(d, k ⊃ x), s))    (15)

The sufficient condition for inter-individual trust depends on how trust is built up in the process of interaction among entities. Many researchers have studied this problem (refer to Section 2). In this paper, we will not discuss how inter-individual trust relationships are built up; instead, we only assume that entities in social networks have developed their inter-individual trust relationships, from which the indirect trust relationships are derived.

The following theorem states that a trust relationship holds if the inter-individual trust relationship holds.

Theorem 3.

(∀x, holds(has_p_tr(d, e, x, k), s)) ⊃ (∀x, holds(trust_p(d, e, x, k), s))    (16)

(∀x, holds(has_b_tr(d, e, x, k), s)) ⊃ (∀x, holds(trust_b(d, e, x, k), s))    (17)

This theorem is used in trust inference.

7.3 System Trust

As discussed earlier, system trust is the trust placed on the stable or predictable functions or behaviors of a system. Case 2 in the motivating scenarios is an example of using system trust. The typical forms of system trust include professional-membership-based trust, characteristics-based trust, and institution-based or regulation-based trust. System trust is a wide topic which cannot be covered fully here. We introduce only a very basic and general form of system trust.

Axiom 4 (system trust):

holds(trust_p(d, o, x, k), s) ∧ holds(memberOf(e, o), s)
    ⊃ holds(trust_p(d, e, x, k), s)    (18)

This axiom states that, at any situation, if an entity (trustor) trusts in the performance of a system (e.g. an organization, or a group whose members have a common set of characteristics related to the trust) in a context, then the entity trusts in the performance of the members of the system in that context.

We tend to limit system trust to trust in performance only.

System trust makes trust in performance transferable from an organization or a group to its members. This axiom is illustrated in figure 1(c). This is one of the conditions for trust propagation in social networks.

An example of using system trust in a trust judgment is given in the second case of section 8.
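A small sketch of axiom 4 in code (our own data layout; the fact names are hypothetical) shows the transfer from a system to its members:

```python
# Toy sketch of axiom (18): if d p-trusts system o on x in context k,
# and e is a member of o, then d p-trusts e on x in k. Facts are plain
# tuples; the expectancy and context are left as opaque strings.

def apply_system_trust(trust_p, member_of):
    """trust_p: set of (d, o, x, k); member_of: set of (e, o).
    Returns trust_p enlarged with the member-level trust facts."""
    derived = set(trust_p)
    for (d, o, x, k) in trust_p:
        for (e, org) in member_of:
            if org == o:
                derived.add((d, e, x, k))
    return derived

facts = apply_system_trust(
    {("F", "ISO9000Ents", "hq_products", "Porcln_Qual")},
    {("J", "ISO9000Ents")})
print(("F", "J", "hq_products", "Porcln_Qual") in facts)  # True
```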

7.4 Reasoning with Trust Relationships

The axioms and theorems discussed above can be used to infer whether a thing can be believed. When an individual (trustor) is confronted with a thing (information or actions) made or believed by another individual (trustee), the trustor can infer whether or not to believe this thing by considering whether the trustor trusts the trustee. The following theorems 4, 5 and 6 provide rules for this type of reasoning. Theorems 4 and 5 can be proved directly from the axioms about the semantics of trust.

Theorem 4.

(∀x)(holds(trust_p(d, e, x, k), s))
    ∧ holds(made(y, e, q), s) ∧ entail(q, kθ)
    ⊃ holds(believe(d, kθ ⊃ y), s)    (19)

In this and the following theorems, substitution θ is the same as defined in (8).

Theorem 5.

(∀x)(holds(trust_b(d, e, x, k), s))
    ∧ holds(believe(e, q ⊃ y), s) ∧ entail(q, kθ)
    ⊃ holds(believe(d, kθ ⊃ y), s)    (20)

The following theorem states that if entity d believes fluent x in context k in a situation s, then when d believes k in s, d also believes x in s.

Theorem 6.

holds(believe(d, k ⊃ x), s)

∧ holds(believe(d, k), s)

⊃ holds(believe(d, x), s) (21)

The proof of this theorem is omitted here, as it relates to the representation of belief, and we do not discuss belief further in this paper.

Theorem 7 reveals that if d trusts e on thing x in context k, then d also trusts e on x in a stricter context (one that satisfies k).

Theorem 7.

holds(trust_p(d, e, x, k), s) ∧ entail(q, k)
    ⊃ holds(trust_p(d, e, x, q), s)    (22)

holds(trust_b(d, e, x, k), s) ∧ entail(q, k)
    ⊃ holds(trust_b(d, e, x, q), s)    (23)
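Read with conjunctive contexts as sets of conjuncts, theorem 7 is a guarded restatement of the trust fact; a sketch of ours (the extra conjunct made_in(x, Jingdezhen) is hypothetical, invented for illustration):

```python
# Sketch of theorem 7 (22)/(23): a trust relationship that holds in
# context k also holds in any stricter context q with entail(q, k).
# Contexts are frozensets of conjuncts; all fact names are ours.

def entail(q, k):
    return k <= q

def strengthen_context(trust, q):
    """trust = (d, e, x, k). Restate the trust in stricter context q;
    return None if q does not entail k."""
    d, e, x, k = trust
    return (d, e, x, q) if entail(q, k) else None

k = frozenset({"in_topic(x, Porc_Qual)"})
q = k | {"made_in(x, Jingdezhen)"}   # hypothetical extra conjunct
print(strengthen_context(("F", "P", "x", k), q) is not None)  # True
print(strengthen_context(("F", "P", "x", q), k))              # None
```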

7.5 Transitivity of Trust

Trust propagation in social networks is based on the assumption that trust is transitive. However, trust is not always transitive. For example, A trusts B to access A’s resources, and B trusts C to access B’s resources, but these do not necessarily imply that A trusts C to access A’s resources. The interesting question here is what type of trust is transitive.

Our answer to this question is that trust in belief is transitive. We give the formal description of this result as the following theorem.

Theorem 8 (Transitivity of trust in belief).

(a) In any situation s, if entity d trusts entity c on everything which c believes in context k, and c trusts entity e on everything which e performs in context q, then d trusts e on everything which e performs in the conjunction of contexts k and q.

(∀x)(holds(trust_b(d, c, x, k), s))
    ∧ (∀x)(holds(trust_p(c, e, x, q), s))
    ⊃ (∀x)(holds(trust_p(d, e, x, k ∧ q), s))    (24)

Proof. To prove this theorem, by theorem 1 of (7), we need to prove:

(∀x, p)(holds(made(x, e, p), s) ∧ entail(p, k ∧ q)

⊃ holds(believe(d, k ∧ q ⊃ x), s)), (25)

from the given premises:

(∀x)(holds(trust_b(d, c, x, k), s))    (26)

(∀x)(holds(trust_p(c, e, x, q), s)).    (27)

Note that all variables are arbitrary.

Assume

holds(made(x, e, p), s) (28)

and

entail(p, k ∧ q). (29)


From the definition of entail (5) and the “propositional functions” of fluents (refer to (1)), we have:

entail(k ∧ q, k) (30)

entail(k ∧ q, q) (31)

entail(k ∧ q, k ∧ q) (32)

From theorem (22)/(23), premises (26) and (27), and the above (30) and (31), we have,

(∀x)(holds(trust_p(c, e, x, k ∧ q), s))    (33)

(∀x)(holds(trust_b(d, c, x, k ∧ q), s))    (34)

From theorem (19), assumptions (28) and (29), as well as (33), we have

holds(believe(c, k ∧ q ⊃ x), s)    (35)

Note that, for simplicity, we use the same name (x) for the expectancies in (28) and (33). In this way, the unification of the expectancy need not use any substitution to change variable names.

From theorem (20), the derived sentences (34) and (35), and (32), we have

holds(believe(d, k ∧ q ⊃ x), s)    (36)

So, sentence (25) is proved. Note that all variables are arbitrary although x is unified. So, the theorem is proved.

(b) In any situation s, if entity d trusts entity c on everything which c believes in context k, and c trusts entity e on everything which e believes in context q, then d trusts e on everything which e believes in the conjunction of the contexts k and q.

(∀x)(holds(trust_b(d, c, x, k), s))
    ∧ (∀x)(holds(trust_b(c, e, x, q), s))
    ⊃ (∀x)(holds(trust_b(d, e, x, k ∧ q), s))    (37)

This part of the theorem can be proved with the same method and proof structure as part (a); we just need to replace trust_p with trust_b. This part of the proof is omitted.

This theorem shows that trust in belief is transitive. Regarding trust in performance, as revealed in case 3 of the motivating scenarios, this type of trust relationship generally is not transitive. However, trust in performance can be derived by using trust in belief (theorem 8(a)) and by using system trust (axiom 4 in section 7.3).

Theorem 8 together with axiom 4 also describes three conditions and forms of trust propagation: (1) A has a trust-in-belief relationship with B, and B has a trust-in-belief relationship with C; then we can derive that A has a trust-in-belief relationship with C. (2) A has a trust-in-belief relationship with B, and B has a trust-in-performance relationship with C; then it can be derived that A has a trust-in-performance relationship with C. (3) A has a trust-in-performance relationship with B, B is a system (organization or group), and B has member C; then it can be derived that A has a trust-in-performance relationship with C. These are illustrated in figure 1. For simplicity, we assume the context of trust is the same and omit it in the figure.

Figure 1: Three forms and conditions of trust propagation in social networks
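These three rules can be run as a small closure computation over a labeled trust graph (a sketch of ours; contexts are elided as in the figure, and all node names are invented):

```python
# Toy closure over trust edges. Labels: "b" = trust in belief,
# "p" = trust in performance, "m" = membership (x is a member of y).
# Rules: b then b gives b, b then p gives p (theorem 8), and trust in
# the performance of a system transfers to its members (axiom 4).

def propagate(edges):
    edges = set(edges)
    while True:
        new = set()
        for (a, l1, m) in edges:
            for (x, l2, y) in edges:
                if l1 == "b" and x == m and l2 in ("b", "p"):
                    new.add((a, l2, y))       # rules (1) and (2)
                if l1 == "p" and l2 == "m" and y == m:
                    new.add((a, "p", x))      # rule (3): system trust
        if new <= edges:
            return edges
        edges |= new

net = {("A", "b", "B"), ("B", "b", "C"),
       ("C", "p", "D"), ("E", "m", "D")}
closure = propagate(net)
print(("A", "p", "D") in closure)  # True: via rules (1) then (2)
print(("A", "p", "E") in closure)  # True: via rule (3), E a member of D
```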

8. APPLICATION EXAMPLE

In the following, to demonstrate the potential uses of the proposed trust ontology in trust judgments using social networks on the web, we apply the trust ontology to the trust judgment problem in web services based e-business introduced in the motivating scenarios.

Case 1: the trust relationships in case 1 of the motivating scenarios are formally represented as follows.


(1) F trusts in P’s belief on porcelain product quality.

holds(has_b_tr(F, P, x, in_topic(x, Porc_Qual)), s)    (38)

Here x is a free variable, so it is universally quantified in the largest scope.

(2) P trusts in S’s belief on porcelain product quality.

holds(has_b_tr(P, S, x, in_topic(x, Porc_Qual)), s)    (39)

Similar to (1), x is a free variable.

(3) S trusts in J’s performance on high-quality porcelain product manufacture.

holds(has_p_tr(S, J, perform(J, productOf(x, J) ∧ hq(x)),
        in_topic(perform(J, productOf(x, J) ∧ hq(x)), Porcln_Qual)),
      s)    (40)

Here, the term perform(J, productOf(x, J) ∧ hq(x)) represents J’s performance on high-quality porcelain product manufacture, i.e. any product of J is of high quality.

Now, we use the above facts and the proposed trust ontology to answer whether F trusts in J’s performance on high-quality porcelain product manufacture.

From (38) and theorem 3b (17), we have,

holds(trust_b(F, P, x, in_topic(x, Porc_Qual)), s)    (41)

From (39) and theorem 3b (17), we have,

holds(trust_b(P, S, x, in_topic(x, Porc_Qual)), s)    (42)

From (41), (42) and theorem 8b (37), we have,

holds(trust_b(F, S, x, in_topic(x, Porc_Qual)), s)    (43)

From (40) and theorem 3a (16), we have,

holds(trust_p(S, J, perform(J, prodOf(x, J) ∧ hq(x)),
        in_topic(perform(J, prodOf(x, J) ∧ hq(x)), Porcln_Qual)),
      s)    (44)

From (43), (44) and theorem 8a (24), we have,

holds(trust_p(F, J, perform(J, prodOf(x, J) ∧ hq(x)),
        in_topic(perform(J, prodOf(x, J) ∧ hq(x)), Porcln_Qual)),
      s)    (45)

This formula gives the answer that F can trust in J’s performance, i.e., that J’s porcelain products have high quality. Because of this trust, F presents J’s products to its customers.

Case 2: we can have the following formal representation.

F trusts in the product quality of the enterprises in compliance with ISO-9000.

holds(trust_p(F, ISO9000Ents,
        perform(ISO9000Ents, prodOf(x, J) ∧ hq(x)),
        in_topic(perform(ISO9000Ents, prodOf(x, J) ∧ hq(x)), Porcln_Qual)),
      s)    (46)

J has ISO-9000 certification; that is, J is an enterprise in compliance with ISO-9000.

holds(memberOf(J, ISO9000Ents), s) (47)

From (46), (47) and axiom 4 (18), we have (45). In this way, F also trusts in J’s product quality.
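Eliding contexts for simplicity, the Case 1 and Case 2 derivations above can be replayed mechanically with a small rule engine (a sketch of ours, not the paper’s proof procedure):

```python
# Closure under theorem 8 (b then b gives b, b then p gives p) and
# axiom 4 (system trust transfers to members). Edge labels: "b" trust
# in belief, "p" trust in performance, "m" membership.

def closure(edges):
    edges = set(edges)
    while True:
        new = set()
        for (a, l1, m) in edges:
            for (x, l2, y) in edges:
                if l1 == "b" and x == m and l2 in ("b", "p"):
                    new.add((a, l2, y))
                if l1 == "p" and l2 == "m" and y == m:
                    new.add((a, "p", x))
        if new <= edges:
            return edges
        edges |= new

# Case 1: facts (38), (39), (40) as F -b-> P -b-> S -p-> J
case1 = closure({("F", "b", "P"), ("P", "b", "S"), ("S", "p", "J")})
print(("F", "p", "J") in case1)  # True, matching conclusion (45)

# Case 2: facts (46), (47): F p-trusts ISO9000Ents; J is a member
case2 = closure({("F", "p", "ISO9000Ents"), ("J", "m", "ISO9000Ents")})
print(("F", "p", "J") in case2)  # True: F trusts J's performance
```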

This example shows that the proposed trust ontology can be used in trust judgments using social networks. This ontology can serve as the kernel logic of specific trust models for particular domains. In practice, a specific trust judgment model in a particular domain can be built by incorporating this trust ontology and domain-dependent knowledge.

9. CONCLUSION AND DISCUSSION

Making trust judgments by using social networks is a promising approach for addressing the trust problem on the Web. The necessary condition for trust propagation in social networks is that trust is transitive. This paper created a logical theory of trust in the form of an ontology that gives a formal and explicit specification of the semantics of trust. From this formal semantics, two types of trust – trust in belief and trust in performance – were identified, the transitivity of trust in belief was formally proved, and three conditions for trust propagation were derived. These results answer the questions “is trust transitive, what types of trust are transitive, and why?”, and provide theoretical evidence for trust propagation in social networks.

Although many formal trust models have been proposed (refer to section 2), the transitivity of trust has not been studied in a formal manner. [1] and [18] argued that “recommendation trust” is transitive, but neither gives a formal description. In addition, “recommendation trust” is a specific case of trust in belief; the latter is more general.

As illustrated in the application example of trust judgments in web services based e-business, the proposed trust ontology can be used as a logical tool to support trust judgments using social networks on the Web.

The proposed logic model of trust is general. There may be many different web implementations, but the logic behind these implementations is the same.

In this paper, we focused on the logical representation of the semantics of trust and the transitivity of trust, which is mainly targeted at supporting social-networks-based trust. This formal semantics also supports modeling inter-individual trust and system trust.


The model proposed in this paper is a certainty model. Although this model can be used in real trust judgments, it is important to extend it to an uncertainty model, because uncertainty exists widely in trust problems, especially in cyberspace. In fact, we are in the process of extending the proposed certainty model to an uncertainty model.

10. ACKNOWLEDGMENTS

This research was supported, in part, by Novator Systems, Ltd.

11. REFERENCES

[1] A. Abdul-Rahman and S. Hailes. Supporting trust in virtual communities. In Proceedings of the 33rd Hawaii International Conference on System Sciences, 2000.

[2] M. Blaze, J. Feigenbaum, and J. Lacy. Decentralized trust management. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, 1996.

[3] K. Blomqvist. The many faces of trust. Scandinavian Journal of Management, 13(3):271–286, 1997.

[4] S. Buvac and I. Mason. Propositional logic of context. In Proceedings of AAAI 1993, 1993.

[5] R. Demolombe. To trust information sources: a proposal for a modal logical framework. In Proceedings of the Autonomous Agents Workshop on Deception, Fraud and Trust in Agent Societies, pages 9–19, 1998.

[6] M. Deutsch. Cooperation and trust: Some theoretical notes. In M. Jones, editor, Nebraska Symposium on Motivation, volume X, pages 275–318, 1962.

[7] R. Falcone and C. Castelfranchi. Trust dynamics: How trust is influenced by direct experiences and by trust itself. In AAMAS’04, July 2004.

[8] M. S. Fox and J. Huang. Knowledge provenance: An approach to modeling and maintaining the evolution and validity of knowledge. http://www.eil.utoronto.ca/km/papers/fox-kp1.pdf, 2003.

[9] M. S. Fox and J. Huang. Knowledge provenance in enterprise information. International Journal of Production Research, 43(20):4471–4492, 2005.

[10] D. Gambetta. Trust: making and breaking cooperative relations. Blackwell, 1988.

[11] G. Gans, M. Jarke, S. Kethers, and G. Lakemeyer. Modeling the impact of trust and distrust in agent networks. In AOIS-01 at CAiSE-01, 2001.

[12] J. Golbeck, J. Hendler, and B. Parsia. Trust networks on the semantic web. 2002.

[13] M. Gruninger and M. Fox. Methodology for the design and evaluation of ontologies. In Workshop on Basic Ontological Issues in Knowledge Sharing, IJCAI-1995, 1995.

[14] R. Guha and R. Kumar. Propagation of trust and distrust. In WWW2004, 2004.

[15] J. Huang and M. S. Fox. Dynamic knowledge provenance. In Proceedings of the Business Agents and Semantic Web Workshop, pages 11–20, 2004.

[16] J. Huang and M. S. Fox. Uncertainty in knowledge provenance. In C. Bussler, J. Davies, D. Fensel, and R. Studer, editors, The Semantic Web: Research and Applications, Lecture Notes in Computer Science 3053, pages 372–387, 2004.

[17] J. Huang and M. S. Fox. Trust judgment in knowledge provenance. In Proceedings of DEXA 2005 (4th International Workshop on Web Semantics), pages 524–528. IEEE Computer Society, 2005.

[18] A. Josang, L. Gray, and M. Kinateder. A model for analysing transitive trust. 2005.

[19] R. Khare and A. Rifkin. Weaving a web of trust. World Wide Web Journal, 2(3):77–112, 1997.

[20] J. Lewis and A. Weigert. Trust as a social reality. Social Forces, 63(4):967–985, 1985.

[21] N. Luhmann. Trust and Power. John Wiley & Sons Ltd, 1979.

[22] S. P. Marsh. Formalising Trust as a Computational Concept. Ph.D. Thesis, University of Stirling, 1994.

[23] R. Mayer, J. Davis, and F. Schoorman. An integrative model of organizational trust. Academy of Management Review, 20(3):709–734, 1995.

[24] J. McCarthy. Notes on formalizing context. In Proceedings of IJCAI 1993, 1993.

[25] J. Pinto. Temporal Reasoning in the Situation Calculus. Ph.D. Thesis, University of Toronto, 1994.

[26] S. Ramchurn, H. Dong, and N. R. Jennings. Trust in multi-agent systems. The Knowledge Engineering Review, 19(1):1–25.

[27] R. Reiter. Knowledge In Action. The MIT Press, 2001.

[28] M. Richardson, R. Agrawal, and P. Domingos. Trust management for the semantic web. In Proceedings of the International Semantic Web Conference, pages 351–368, 2003.

[29] J. Rotter. A new scale for the measurement of interpersonal trust. J. Personality, 35:651–665, 1967.

[30] D. M. Rousseau, S. B. Sitkin, R. S. Burt, and C. Camerer. Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3):393–404, 1998.

[31] H. A. Simon. Models of Bounded Rationality, volume 3. The MIT Press, 1997.

[32] B. Yu and M. Singh. A social mechanism of reputation management in electronic communities. In Proceedings of the Fourth International Workshop on Cooperative Information Agents, pages 154–165, 2000.

[33] L. Zucker. Production of trust: Institutional sources of economic structure, 1840–1920. Research in Organizational Behavior, 8:53–111, 1986.
