TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources
TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) models an agent's trust in an interaction partner based on past interactions between agents; when personal experience between agents is lacking, the model draws on reputation information gathered from third parties.
Coping with inaccurate reputation sources: experimental analysis of a probabilistic trust model
TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) is developed, which models an agent's trust in an interaction partner using probability theory, taking account of past interactions between agents.
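The probabilistic core of this kind of trust model can be sketched with a beta distribution over interaction outcomes. The sketch below is illustrative only, not the paper's code: the function name and the Beta(1, 1) prior are assumptions for the example.

```python
# Illustrative sketch (not the authors' implementation): trust from direct
# experience, modelling binary interaction outcomes with a beta distribution.
# Trust is the expected probability that the partner fulfils its obligations,
# given counts of successful and unsuccessful past interactions and a
# uniform Beta(1, 1) prior.

def expected_trust(successes: int, failures: int) -> float:
    """Mean of Beta(successes + 1, failures + 1)."""
    return (successes + 1) / (successes + failures + 2)

# With no experience, trust defaults to 0.5 (maximum uncertainty).
print(expected_trust(0, 0))   # 0.5
# Evidence shifts the estimate: 8 good outcomes, 2 bad ones.
print(expected_trust(8, 2))   # 0.75
```

As evidence accumulates, the estimate converges toward the observed success rate, which is what lets such a model weigh personal experience against third-party reports.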
Agent technology, Computing as Interaction: A Roadmap for Agent Based Computing
Plagiarism in programming assignments
The authors have developed a package that allows programming assignments to be submitted online and includes software to assist in detecting possible instances of plagiarism; they also consider its implications for large-group teaching.
Applying artificial intelligence to virtual reality: Intelligent virtual environments
This paper reviews the issues arising from combining artificial intelligence and artificial life techniques with virtual environments to produce intelligent virtual environments.
A Manifesto for Agent Technology: Towards Next Generation Computing
This paper describes the current state of the art in agent technologies and identifies the trends and challenges that must be addressed over the next ten years to progress the field and realise its benefits.
An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling
This paper presents HABIT, a Hierarchical And Bayesian Inferred Trust model, for assessing how much an agent should trust its peers based on direct and third-party information, and demonstrates its ability to predict agent behaviour in both a simulated environment and a real-world webserver domain.
Norm-based behaviour modification in BDI agents
This paper provides a technique to extend BDI agent languages, enabling them to enact behaviour modification at runtime in response to newly accepted norms, and demonstrates the viability of the approach through an implementation in the AgentSpeak(L) language.