From nobody@cs.Buffalo.EDU Tue Apr 14 17:37 EDT 1998
From: nobody@cs.Buffalo.EDU
Date: Tue, 14 Apr 1998 17:37:40 -0400 (EDT)
To: techreps@cs.Buffalo.EDU
Subject: techrep: POST request
Content-Type: text
Content-Length: 3138
ContactPerson: hexmoor@cs.und.edu
Remote host: dhcp216-57.cs.und.edu
Remote ident: unknown

### Begin Citation ### Do not delete this line ###

%R 98-04
%U /projects3/hexmoor/Diss/diss.ps
%A Hexmoor, H
%T Representing and Learning Routine Activities
%D December 1995
%I Department of Computer Science, SUNY Buffalo
%K Knowledge representation, robotics, intelligent agents
%X A routine is a habitually repeated performance of some actions. Agents use routines to guide their everyday activities and to enrich their abstract concepts about acts. This dissertation addresses how an agent engaged in ordinary, routine activities changes its behavior over time, how the agent's internal representations of the world are affected by its interactions, and what a good agent architecture is for learning routine interactions with the world. In it, I develop a theory that proposes four key processes: (1) automaticity, (2) habituation and skill refinement, (3) abstraction-by-chunking, and (4) discovery of new knowledge chunks. Automaticity caches the agent's knowledge about actions into a flat stimulus-response data structure that omits knowledge of action consequences; this structure produces a response to environmental stimuli in constant time. Habituation and skill refinement use environmental cues as rewards, which develop a bias among the agent's actions in competing action-situation pairs where the agent previously had no basis for choice.
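The automaticity and habituation processes above can be sketched as a flat, reward-biased stimulus-response table. This is an illustrative reconstruction under assumed names, not the dissertation's actual data structures:

```python
# Illustrative sketch only: a flat stimulus-response table in the spirit of
# the automaticity and habituation processes. Class and method names are
# hypothetical; the dissertation's implementation differs.

class StimulusResponseTable:
    def __init__(self):
        # stimulus -> {action: bias}. No action consequences are stored,
        # so responding is a constant-time dictionary lookup.
        self.table = {}

    def respond(self, stimulus):
        """Return the most strongly biased action for a stimulus, or None."""
        actions = self.table.get(stimulus)
        if not actions:
            return None
        return max(actions, key=actions.get)

    def reinforce(self, stimulus, action, reward):
        """Habituation: an environmental reward biases future choices
        among competing actions for the same stimulus."""
        actions = self.table.setdefault(stimulus, {})
        actions[action] = actions.get(action, 0.0) + reward
```

For example, after `reinforce("door-ahead", "open-door", 1.0)` and `reinforce("door-ahead", "turn-left", 0.2)`, a call to `respond("door-ahead")` selects `"open-door"`, the action with the larger accumulated bias.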
Abstraction-by-chunking monitors the agent's increasing reliance on stimulus-response data structures and turns the agent's complex actions into primitive acts, eliminating the need for the agent to elaborate its plans beyond a certain level. Discovery of knowledge chunks monitors the agent's use of stimulus-response data structures, constructs knowledge about action preconditions and consequences, and organizes patterns of interaction into plans. I have implemented several agents that demonstrate parts of my theory using an agent architecture I developed called GLAIR. GLAIR models agents that function in the world; beyond my theory of routines, it is used to demonstrate situated as well as deliberative action. Each of GLAIR's three levels (Sensori-actuator, Perceptuo-motor, and Knowledge) uses different representations and models different components of intelligent agency, and the levels operate semi-autonomously. Whereas intra-level learning improves an agent's performance, inter-level learning migrates know-how from one level to another. Using the GLAIR architecture, I model agents that engage in routines, use those routines to guide their behavior, and learn further concepts about acts. The significance of my theory, architecture, and agents is that they illustrate that computational agents can autonomously learn know-how from their own routine interactions with the world, as well as learn to improve their performance.
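The abstraction-by-chunking idea can be illustrated by a small sketch in which an action sequence, once executed often enough, is promoted to a single named primitive. This is a hypothetical reconstruction; the threshold and all names are invented for illustration:

```python
# Hypothetical sketch of abstraction-by-chunking: once the agent has relied
# on a cached action sequence often enough, the sequence is promoted to a
# single primitive act, so later planning need not elaborate below it.
# The threshold and naming scheme are invented for this example.

CHUNK_THRESHOLD = 3  # illustrative; not a value from the dissertation

class Chunker:
    def __init__(self):
        self.use_counts = {}   # action sequence (tuple) -> executions so far
        self.primitives = {}   # chunk name -> the sequence it abstracts

    def record_execution(self, sequence):
        """Count an execution; return a new chunk name when the sequence
        crosses the reliance threshold, else None."""
        seq = tuple(sequence)
        self.use_counts[seq] = self.use_counts.get(seq, 0) + 1
        if self.use_counts[seq] >= CHUNK_THRESHOLD and seq not in self.primitives.values():
            name = "chunk-" + "+".join(seq)
            self.primitives[name] = seq
            return name  # the sequence now behaves as one primitive act
        return None
```

Here the third execution of, say, `["approach", "grasp", "lift"]` yields the primitive `"chunk-approach+grasp+lift"`, after which a planner could treat it as an opaque single act.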