Systemic test and evaluation of a hard+soft information fusion framework: Challenges and current approaches
Increasing interest in human-centered information fusion systems involves: (1) humans as sensors (viz., "soft sensors"), (2) humans performing pattern recognition and participating in the cognitive fusion process, and (3) human groups performing collaborative analysis (viz., "crowd-sourcing" of analysis). Test and evaluation of such systems is challenging because we must develop both representative test data (involving both physical sensors and human observers) and test environments to evaluate the performance of the hardware, software, and humans-in-the-loop. This paper describes an experimental facility called an extreme events laboratory, a test and evaluation approach, and evolving test data sets for the evaluation of human-centered information fusion systems for situation awareness. The data sets include both synthetic data and data obtained from human subjects in campus-wide experiments.