Author ORCID Identifier

0000-0002-2718-5426

Document Type

Book Chapter


Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence


Disciplines

Computer Sciences


Abstract

An important class of distributed trust-based solutions is based on information sharing. A basic requirement of such systems is that participating agents can communicate effectively, sending and receiving messages that can be interpreted correctly. Unfortunately, in open systems it is not possible to postulate a common agreement about the representation of a rating, its semantic meaning, or the cognitive and computational mechanisms behind the formation of a trust rating. Social scientists agree that unqualified trust values are not transferable, but a more pragmatic approach would conclude that qualified trust judgements are worth transferring insofar as decisions taken by considering others' opinions are better than those taken in isolation. In this paper we investigate the problem of trust transferability in open distributed environments, proposing a translation mechanism that makes information exchanged between agents more accurate and useful. Our strategy requires the parties involved to disclose some elements of their trust models in order to establish how compatible the two systems are. This degree of compatibility is used to weight exchanged trust judgements; if agents are not compatible enough, transmitted values can be discarded. We define a complete simulation environment in which agents are modelled with characteristics that may differ. We show how differences between agents deteriorate the value of recommendations, to the point that agents obtain better predictions on their own. We then show how different translation mechanisms based on the degree of compatibility drastically improve the quality of recommendations.
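The abstract's core idea, weighting exchanged trust judgements by a degree of compatibility and discarding values from insufficiently compatible agents, can be sketched as follows. This is an illustrative assumption of how such a translation step might look, not the chapter's actual mechanism; all function names, the linear weighting, and the threshold value are hypothetical.

```python
def translate_rating(rating, compatibility, threshold=0.5):
    """Weight a received trust rating by the compatibility degree.

    Returns None when the sender's trust model is deemed too
    incompatible, so the recommendation is discarded (hypothetical
    thresholding scheme for illustration only).
    """
    if compatibility < threshold:
        return None  # discard: the two trust models are too dissimilar
    return rating * compatibility


def aggregate(own_estimate, received, own_weight=1.0):
    """Combine an agent's own estimate with translated recommendations.

    `received` is a list of (rating, compatibility) pairs from peers;
    the result is a compatibility-weighted average that falls back to
    the agent's own estimate when every recommendation is discarded.
    """
    total = own_weight * own_estimate
    weight_sum = own_weight
    for rating, compatibility in received:
        translated = translate_rating(rating, compatibility)
        if translated is None:
            continue
        total += translated
        weight_sum += compatibility
    return total / weight_sum
```

Under this sketch, a recommendation from a peer with compatibility below the threshold simply never enters the weighted average, which mirrors the abstract's claim that agents may do better ignoring incompatible peers than naively averaging their opinions.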