Publisher's Synopsis
One of the most essential properties of any intelligent entity is the ability to learn. "Explanation-based learning" is one recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalised into a form that can later be used to solve conceptually similar problems.

But in the solution of any specific task, those aspects that in general can be manifested an arbitrary number of times will be represented by a "fixed" number of occurrences. Quite often this number must be generalised if the underlying concept is to be correctly acquired.

A number of explanation-based generalisation algorithms have been developed. Unfortunately, most do not alter the structure of their explanation of the specific problem's solution; hence they do not incorporate any additional objects or inference rules into the concepts they learn. Instead, these algorithms generalise by converting constants in the observed example to variables with constraints.

However, many important concepts, in order to be properly learned, require that the "structure" of explanations be generalised. Generalising structure can involve generalising such things as the number of objects involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve differing numbers of guests.

Two theories of extending explanations during the generalisation process have been developed, and computer implementations have been created to computationally test these approaches.
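The constant-to-variable step described above can be sketched as follows. This is a minimal illustration only; the function name and the tuple-based fact representation are assumptions for exposition, not the algorithms described in the book:

```python
# Hypothetical sketch of classical explanation-based generalisation's
# constant-to-variable step: each constant in a specific explanation is
# replaced by a variable (the same variable for repeated constants), while
# the explanation's structure, i.e. the number of facts, is left unchanged.

def generalize(fact, table):
    """Replace each constant in a (predicate, *args) tuple with a variable,
    reusing one variable per distinct constant via the shared table."""
    pred, *args = fact
    new_args = []
    for a in args:
        if a not in table:
            table[a] = f"?x{len(table)}"  # fresh variable for a new constant
        new_args.append(table[a])
    return (pred, *new_args)

# A specific explanation that block A becomes clear once block B is removed:
specific = [("on", "B", "A"), ("removed", "B"), ("clear", "A")]

table = {}
general = [generalize(f, table) for f in specific]
print(general)  # → [('on', '?x0', '?x1'), ('removed', '?x0'), ('clear', '?x1')]
```

Note that the generalised result still contains exactly one "on", one "removed", and one "clear" fact: the structure is fixed. Generalising that structure, so the learned concept covers, say, any number of blocks stacked on A, is precisely the extension the book addresses.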
The PHYSICS 101 system utilises characteristics of mathematically-based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system and its successor BAGGER2 implement domain-independent approaches to generalising explanation structures.

This book describes all three of these systems, presents the details of their algorithms, and discusses several examples of learning by each. It also presents an empirical analysis of explanation-based learning. These computer experiments demonstrate the value of generalising explanation structures in particular, and of explanation-based learning in general.

They also demonstrate the advantages of learning by observing the intelligent behaviour of external agents. The book's conclusion discusses several open research issues in generalising the structure of explanations and related approaches to this problem. This research brings machine learning closer to its goal of being able to acquire all of the knowledge inherent in the solution to a specific problem.