Norbert Jankowski

The choice of transfer functions may strongly influence the complexity and performance of neural networks. Although sigmoidal transfer functions are the most common, there is no a priori reason why models based on such functions should always provide optimal decision borders. A large number of alternative transfer functions have been described in the literature…
This paper is a continuation of the accompanying paper with the same main title. The first paper reviewed instance selection algorithms; here, the results of an empirical comparison are presented and discussed. Several tests were performed, mostly on benchmark data sets from the machine learning repository at UCI. Instance selection algorithms were tested with neural…
Incremental Net Pro (IncNet Pro), with local learning features and statistically controlled growing and pruning of the network, is introduced. The architecture of the net is based on RBF networks. The Extended Kalman Filter algorithm and a new fast version of it are proposed and used as the learning algorithm. IncNet Pro is similar to the Resource Allocating Network described…
Sigmoidal or radial transfer functions guarantee neither the best generalization nor fast learning of neural networks. Families of parameterized transfer functions provide flexible decision borders; networks based on such transfer functions should be small and accurate. Several possibilities of using transfer functions of different types in neural models are…
The choice of transfer functions in neural networks is of crucial importance to their performance. Although sigmoidal transfer functions are the most common, there is no a priori reason why they should be optimal in all cases. In this article the advantages of various neural transfer functions are discussed and several new types of functions are introduced…
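The contrast between sigmoidal and radial transfer functions mentioned above can be illustrated with a minimal sketch (the function names and parameters here are illustrative, not taken from the papers): a sigmoid is monotone and saturating, producing open hyperplane-like decision borders, while a Gaussian is localized, producing closed ones.

```python
import math

def sigmoid(x):
    """Classic logistic sigmoid: monotone, saturating at 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def gaussian(x, center=0.0, width=1.0):
    """Radial (Gaussian) transfer function: peaks at `center`, decays symmetrically."""
    return math.exp(-((x - center) / width) ** 2)

# Far from the origin the two behave very differently:
# the sigmoid saturates near 1, the Gaussian vanishes.
print(round(sigmoid(5.0), 4))   # close to 1
print(round(gaussian(5.0), 4))  # close to 0
```

This localized-versus-open behavior is the reason mixing or parameterizing transfer function families can yield more flexible decision borders than either type alone.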
Despite all the progress in the field, neural network technology remains brittle and sometimes difficult to apply. Good initialization of adaptive parameters and optimization of the architecture are key factors in creating robust neural networks. Methods of initialization of MLPs are reviewed and new methods based on clustering techniques are…
There are many knowledge-based data mining frameworks, and it is common to think that new ones cannot offer anything new. This article refutes such claims. We propose a sophisticated unification mechanism and a two-tier machine cache system aimed at saving time and memory. No machine is run twice; instead, machines are reused wherever they are…
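The "no machine is run twice" idea can be sketched with a toy two-tier cache. This is an illustrative assumption about the mechanism, not the framework's actual design: results are keyed by a canonical description of the machine and its configuration, and looked up in both tiers before any training runs.

```python
import json

class MachineCache:
    """Toy two-tier cache: a fast in-memory dict backed by a slower tier
    (here another dict standing in for persistent storage)."""

    def __init__(self):
        self._fast = {}   # first tier: in-memory results
        self._slow = {}   # second tier: stands in for persistent storage

    def _key(self, name, config):
        # Canonical, order-independent key for machine + configuration.
        return name + "|" + json.dumps(config, sort_keys=True)

    def run(self, name, config, train_fn):
        key = self._key(name, config)
        if key in self._fast:
            return self._fast[key]
        if key in self._slow:            # promote from the slow tier
            self._fast[key] = self._slow[key]
            return self._fast[key]
        result = train_fn()              # only reached on a true cache miss
        self._fast[key] = self._slow[key] = result
        return result

calls = []
cache = MachineCache()
cache.run("knn", {"k": 3}, lambda: calls.append(1) or "model-1")
cache.run("knn", {"k": 3}, lambda: calls.append(1) or "model-1")
# The training function ran only once; the second call was served from cache.
```

The canonical key is what makes reuse possible: two requests with the same machine and configuration, however their parameters are ordered, map to the same cached result.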
Several methods have been proposed to reduce the number of instances (vectors) in the learning set. Some of them extract only bad vectors, while others try to remove as many instances as possible without significantly degrading the usefulness of the reduced dataset for learning. Several strategies to shrink training sets are compared here using different neural and machine…
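One classic instance-selection strategy of this kind is Hart's Condensed Nearest Neighbor (CNN) rule, sketched minimally below (not necessarily among the exact algorithms compared in the paper): keep only instances that a 1-NN classifier built from the kept set misclassifies.

```python
def squared_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nn_label(kept, x):
    """Label of x's nearest neighbour among the kept (vector, label) pairs."""
    return min(kept, key=lambda vl: squared_dist(vl[0], x))[1]

def condense(instances):
    """instances: list of (vector, label) pairs. Returns a reduced list
    that still classifies the full set correctly with 1-NN."""
    kept = [instances[0]]
    changed = True
    while changed:
        changed = False
        for vec, lab in instances:
            if nn_label(kept, vec) != lab:
                kept.append((vec, lab))   # keep only misclassified instances
                changed = True
    return kept

# Two tight, well-separated classes condense to very few prototypes.
train = [((0.0, 0.0), "a"), ((0.1, 0.0), "a"), ((0.0, 0.1), "a"),
         ((5.0, 5.0), "b"), ((5.1, 5.0), "b"), ((5.0, 5.1), "b")]
reduced = condense(train)
```

On easy data like this the reduced set shrinks to roughly one prototype per class, while the whole training set is still classified correctly by 1-NN over the kept instances.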
Classification methods with linear computational complexity O(nd) in the number of samples n and their dimensionality d often give results that are better than, or at least statistically not significantly worse than, those of slower algorithms. This is demonstrated here for many benchmark datasets downloaded from the UCI Machine Learning Repository. Results provided in…
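A standard example of such an O(nd) method, sketched here as an illustration (the paper may study different algorithms), is the nearest-centroid classifier: training touches each of the n samples once to accumulate per-class means over d features, and prediction compares a sample with one centroid per class.

```python
from collections import defaultdict

def fit_centroids(X, y):
    """One pass over the data: O(nd) accumulation of per-class feature sums."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for x, label in zip(X, y):
        if sums[label] is None:
            sums[label] = list(x)
        else:
            for d in range(len(x)):
                sums[label][d] += x[d]
        counts[label] += 1
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

X = [(0.0, 0.0), (0.2, 0.1), (4.0, 4.0), (4.2, 3.9)]
y = ["a", "a", "b", "b"]
cent = fit_centroids(X, y)
print(predict(cent, (0.1, 0.1)))  # "a"
```

Despite its simplicity, this kind of linear-time baseline is exactly what slower algorithms must beat to justify their extra cost.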
Initialization of adaptive parameters in neural networks is of crucial importance to the speed of convergence of the learning procedure. Methods of initialization for density networks are reviewed and two new methods, based on decision trees and dendrograms, are presented. These two methods were applied in the Feature Space Mapping framework to artificial…