It is immensely satisfying to be able to discover fundamental relationships and structure within vast arrays of ill-understood data. The neural network method is a powerful tool in this respect: a mathematically transparent technique which is able to capture complex relationships without the need to fix the mathematical form at the outset. These capabilities are enhanced by a proper consideration of errors and uncertainties. I introduce the method here and then go on to show why large uncertainties in the data need not be depressing. Uncertainty helps define novel experiments and stimulates questions about the fundamental relationships in nature.

INTRODUCTION

The usual approach when dealing with difficult problems is to correlate the results against chosen variables using linear regression analysis; a more powerful method of empirical analysis involves the use of neural networks. Since the method has been described elsewhere [1, 2, 3, 4, 5], what follows emphasises its essential features. There is also a comprehensive World Wide Web resource at www.msm.cam.ac.uk/phasetrans/abstracts/neural.review.html

In conventional regression analysis the data are best-fitted to a specified relationship which is usually linear. The result is an equation in which each of the inputs x_j is multiplied by a weight w_j; the sum of all such products and a constant θ then gives an estimate of the output:

    y = ∑_j w_j x_j + θ

Equations like these are used widely in industry, for example, in the formulation of the carbon equivalents which assign
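As a minimal sketch of the conventional regression described above, the weights w_j and the constant θ can be recovered by ordinary least squares. The data below are synthetic and the chosen weights are purely illustrative; nothing here is taken from the paper itself.

```python
import numpy as np

# Linear regression of the form  y = sum_j w_j * x_j + theta.
# The "true" weights and constant are hypothetical, used only to
# generate synthetic data for this illustration.

rng = np.random.default_rng(0)

n_samples, n_inputs = 100, 3
true_w = np.array([2.0, -1.0, 0.5])   # hypothetical weights w_j
true_theta = 4.0                      # hypothetical constant theta

X = rng.normal(size=(n_samples, n_inputs))
y = X @ true_w + true_theta + rng.normal(scale=0.1, size=n_samples)

# Append a column of ones so the constant theta is fitted
# alongside the weights in a single least-squares solve.
A = np.hstack([X, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_fit, theta_fit = coef[:-1], coef[-1]

print("fitted weights:", np.round(w_fit, 2))
print("fitted constant:", round(float(theta_fit), 2))
```

With enough data and small noise, the fitted coefficients closely recover the generating values; the point of the paper, of course, is that real materials data rarely behave this cleanly, which motivates the move to neural networks with proper uncertainty estimates.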

@inproceedings{BhadeshiaNeuralNI,
  title={Neural Networks in Materials Science: The Importance of Uncertainty},
  author={H. K. D. H. Bhadeshia}
}