Psychology and philosophy have long studied the nature and role of explanation. More recently, artificial intelligence research has developed promising theories of how explanation facilitates learning and generalization. By using explanations to guide learning, explanation-based methods enable reliable learning of new concepts in complex situations, often from observing a single example. The author of this volume, however, argues that explanation-based learning research has neglected key issues in explanation construction and evaluation. By examining these issues in the context of a story understanding system that explains novel events in news stories, the author shows that the standard assumptions do not apply to complex real-world domains. An alternative theory is presented, one demonstrating that context, involving both the explainer's beliefs and goals, is crucial in judging an explanation's goodness, and that a theory of the possible contexts can be used to determine which explanations are appropriate. This view is demonstrated through examples of the performance of ACCEPTER, a computer system for story understanding, anomaly detection, and explanation evaluation.