A Tool for Detecting Expert System Errors

  • Kang and A. Terry Bahill

Abstract

The most difficult part of expert-system design is testing. Brute-force enumeration of all inputs is impossible for most systems, so the traditional testing method has a human expert run many test cases on the expert system. This method is time-consuming and fallible. Furthermore, the knowledge engineer never knows when enough test cases have been run. Our run-time tool helps the knowledge engineer know exactly when to quit.

More mistakes could be corrected if many experts tested the system. It is often possible to find one expert who will devote a great deal of time to interviewing, debugging the knowledge base, and running test cases. It is much harder to find other experts with the time to test the final product: their time is expensive; there are few of them in any geographical area; and they lack a personal commitment to the project. Because experts' time is so constrained, there should be tools to help make judicious use of it.

Evaluating an expert system with test cases is certainly not original. P.G. Politakis [1] has shown how statistics gathered while running test cases can be used by the developer to modify the rules and improve the expert system. The technique described here comes into play at run time. Each rule firing is recorded. Rules that never succeed and rules that succeed for all test cases are probably mistakes, and the human expert is notified. "Succeed" means all the premises are true and the expression in the conclusion is assigned the appropriate value.
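The flagging heuristic the abstract describes can be sketched in a few lines. The sketch below is illustrative only: it assumes a hypothetical rule engine that reports which rules succeeded on each test case, and the names `RuleFiringMonitor`, `record_case`, and `suspects` are invented for this example rather than taken from the paper.

```python
from collections import defaultdict


class RuleFiringMonitor:
    """Records rule successes across test cases and flags suspect rules.

    A rule "succeeds" when all its premises are true and its conclusion
    is assigned the appropriate value. Rules that never succeed, or that
    succeed on every test case, are flagged as probable mistakes.
    (Hypothetical sketch; not the paper's actual tool.)
    """

    def __init__(self, rule_names):
        self.rule_names = set(rule_names)
        self.successes = defaultdict(int)  # rule name -> count of cases it succeeded on
        self.num_cases = 0

    def record_case(self, succeeded_rules):
        """Record one test case: the set of rules that succeeded on it."""
        self.num_cases += 1
        for name in succeeded_rules:
            self.successes[name] += 1

    def suspects(self):
        """Return rules that never succeeded and rules that always succeeded."""
        never = {r for r in self.rule_names if self.successes[r] == 0}
        always = {r for r in self.rule_names
                  if self.successes[r] == self.num_cases}
        return never, always


# Example: three rules observed over three test cases.
monitor = RuleFiringMonitor(["r1", "r2", "r3"])
monitor.record_case({"r1", "r2"})
monitor.record_case({"r2"})
monitor.record_case({"r1", "r2"})
never, always = monitor.suspects()
print("never succeeded:", never)    # {'r3'} -> probably dead or mistyped
print("always succeeded:", always)  # {'r2'} -> probably vacuous or too general
```

A tally like this also addresses the "when to quit" question: once additional test cases stop changing the set of flagged rules, further testing yields diminishing returns.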
