A knowledge-based methodology for tuning analytical models
A wide variety of mathematical and statistical software tools are available today, and the list grows daily. Means for representing and storing data and scientific information are likewise increasing, from sophisticated database and knowledge base technologies to high-speed, high-resolution graphics. Imagine a computing environment that couples all of these in a user-friendly way, using the best mathematical, computing, and AI technology to allow a scientist to quickly and easily perform all types of modeling and simulation by computer to assist in research. Imagine also that this computing environment is implemented so that the human–computer interface resembles the laboratory or field environment in which real data might actually be obtained. Because of the range of computing capabilities at the scientist's disposal, and the ease with which those capabilities can be brought to bear on a research problem, a computing environment of this sort might be called an artificial laboratory. The artificial laboratory is, therefore, a computing environment that not only simulates the laboratory environment but also allows analysis of the data.

Most likely, the laboratory environment that is simulated would be one that is relatively stable or mature. Research using custom-designed instruments, for example, might make it difficult to say with confidence that the measurement portion of the scientist–instrument–object-of-study system does not influence the data obtained, and cutting-edge research might require frequent changes to the computer programs used for analysis. An artificial laboratory is better suited to situations where the laboratory technology is mature enough to permit focus solely on the object of study.

One of the major characteristics of modern science is that theory and experimentation drive one another in a cyclic process of progressive refinement, leading to new conclusions about the world around us.
Theory guides and directs the course of experimentation, and experimental results subsequently suggest ways in which theory must be modified; some theories are, in fact, discarded altogether. Over the past 30 years, computer modeling and simulation, analogous to theory and experimentation, have frequently guided scientific investigation. John von Neumann was one of the first to pioneer and promote the use of computers to study numerically the behavior of systems and to use the results as a "heuristic guide to theorizing" (Burks 1966, p. 3). The solutions provided by the computer can serve as "an aid to discovering useful concepts, broad principles, and general theories." Certainly, the use of computers in the modeling and simulation of static and dynamic systems has become a significant part of scientific endeavor. Computer modeling is an extremely powerful way of working with representations that help us better understand the systems we study.

We can guess that modeling and simulation will become ever more advanced in the years to come, but where does the future of computers in science lie? As rapidly as computer technology is developing, extrapolations from the present are numerous and dangerous. Although we might successfully guess that parallel processing and supercomputing will become cheaper, faster, and more widely accessible, many aspects of future computing would undoubtedly amaze us if we could see them in a crystal ball. Let us consider for a moment, though, one direction that the use of computers in science might take. An artificial laboratory is a hypothetical computing environment of the future that would integrate mathematical and statistical tools with AI methods to assist in computer modeling and simulation. An integrated approach of this kind has great potential for accelerating the rate of scientific discovery.
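To make the idea of numerical study as a "heuristic guide to theorizing" concrete, here is a minimal illustrative sketch, not taken from the article: a forward-Euler integration of a damped harmonic oscillator, where watching the computed trajectory decay suggests (but does not prove) a stability result that theory would then be asked to explain. All function names, parameters, and values are hypothetical choices for this example.

```python
def simulate_oscillator(x0=1.0, v0=0.0, damping=0.5, stiffness=4.0,
                        dt=0.001, steps=20000):
    """Integrate x'' + damping*x' + stiffness*x = 0 with forward Euler.

    Returns the list of positions x(t) sampled at each time step.
    """
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        a = -damping * v - stiffness * x  # acceleration from the model
        x += v * dt                       # advance position with old velocity
        v += a * dt                       # advance velocity with old state
        trajectory.append(x)
    return trajectory

# Numerical experiment: does the oscillation die out?
traj = simulate_oscillator()
# Inspecting the trajectory shows the amplitude shrinking toward zero,
# the kind of computed observation that can guide subsequent theorizing.
decays = abs(traj[-1]) < abs(traj[0])
```

The design choice here mirrors von Neumann's point: the computer does not prove the system is stable; it produces behavior whose pattern invites a theoretical explanation, which analysis of the model's eigenvalues could then confirm.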