From nobody Thu Oct 17 23:48 EDT 1996
Date: Thu, 17 Oct 1996 23:48:32 -0400 (EDT)
From: uid no body
To: techreps@cs.buffalo.edu
Subject: techrep: POST request
Content-Type: text
Content-Length: 2350
Comments: hadar
ContactPerson: hans@isi.edu
Remote host: hobbes.isi.edu
Remote ident: unknown

### Begin Citation ### Do not delete this line ###
%R 96-18
%U /projects/hans/Thesis/all.ps
%A Chalupsky, Hans
%T SIMBA: Belief Ascription by Way of Simulative Reasoning
%D January 31, 1996
%I Department of Computer Science, SUNY Buffalo
%K belief reasoning, agent incompleteness, cognitive modeling, nonmonotonic reasoning
%Y Nonmonotonic reasoning and belief revision; cognitive simulation
%X A key cognitive faculty that enables humans to communicate with each other is their ability to incrementally construct and use models describing the mental states of others, in particular, models of their beliefs. Not only do humans have beliefs about the beliefs of others, they can also reason with those beliefs even when they do not hold them themselves. If we want to build an artificial or computational cognitive agent that is similarly capable, we need a formalism that is fully adequate to represent the beliefs of other agents and that also specifies how to reason with them. Standard formalizations of knowledge or belief, in particular the various epistemic and doxastic logics, seem ill suited to serve as the formal device upon which to build an actual computational agent. They neglect representation problems, or the reasoning aspect, or the defeasibility inherent in reasoning about somebody else's beliefs, or they rely on idealizations that are problematic when confronted with realistic agents. Our main result is the development of SIMBA, an actually implemented belief reasoning engine that uses simulative reasoning to reason with and about the beliefs of other agents.
SIMBA is built upon SL, a fully intensional, subjective logic of belief that is representationally and inferentially adequate to serve as one of the main building blocks of an artificial cognitive agent. SL can handle agents that do not believe the consequential closure of their base beliefs, and even agents that hold inconsistent beliefs; put differently, it can handle the beliefs of realistic agents. It is also adequate for modeling introspection, it facilitates belief revision, and it has a more intuitive semantics than standard formalizations of belief.