Isotonic Separation

@article{Chandrasekaran2005IsotonicS,
  title={Isotonic Separation},
  author={Ramaswamy Chandrasekaran and Young U. Ryu and Varghese S. Jacob and Sungchul Hong},
  journal={INFORMS J. Comput.},
  year={2005},
  volume={17},
  pages={462--474}
}
Data classification and prediction problems are prevalent in many domains. The need to predict to which class a particular data point belongs has been seen in areas such as medical diagnosis, credit rating, Web filtering, prediction, and stock rating. This has led to strong interest in developing systems that can accurately classify data and predict outcomes. The classification is typically based on the feature values of the objects being classified. Often, a form of ordering relation, defined by…
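
As a concrete illustration of the idea in the abstract, the sketch below gives one minimal reading of isotonic separation for binary classification, assuming the positive class is monotonically non-decreasing in every feature. The helper names (dominates, fit_isotonic_separation, classify), the SciPy solver, and the 0.5 relabeling threshold are illustrative assumptions rather than the paper's formulation; only the general recipe (penalize label disagreement while forcing scores to respect the componentwise dominance order, then classify new points by dominance) is taken from the isotonic-separation literature.

import numpy as np
from scipy.optimize import linprog

def dominates(a, b):
    # True if point a is >= point b in every feature (componentwise order).
    return np.all(a >= b)

def fit_isotonic_separation(X, y):
    # Assign each training point a score z_i in [0, 1] via a linear program:
    # a class-0 point pays penalty z_i, a class-1 point pays (1 - z_i), and
    # scores must be monotone along the dominance order. Dropping constants,
    # the objective is sum_i s_i * z_i with s_i = +1 (class 0), -1 (class 1).
    n = len(X)
    c = np.where(y == 0, 1.0, -1.0)

    # Monotonicity: z_i >= z_j whenever x_i dominates x_j, written as
    # z_j - z_i <= 0 to match linprog's A_ub @ z <= b_ub convention.
    rows = []
    for i in range(n):
        for j in range(n):
            if i != j and dominates(X[i], X[j]):
                row = np.zeros(n)
                row[j], row[i] = 1.0, -1.0
                rows.append(row)
    A_ub = np.array(rows) if rows else None
    b_ub = np.zeros(len(rows)) if rows else None

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * n)
    # Relabel: points whose optimal score reaches 0.5 become class 1.
    return (res.x >= 0.5).astype(int)

def classify(x, X, labels):
    # A new point is class 1 if it dominates a relabeled class-1 point,
    # class 0 if it is dominated by a relabeled class-0 point.
    if any(dominates(x, X[i]) for i in range(len(X)) if labels[i] == 1):
        return 1
    if any(dominates(X[i], x) for i in range(len(X)) if labels[i] == 0):
        return 0
    return 0  # ambiguous region; a real system needs a tie-breaking rule

# Toy usage: the labels 0, 1, 0, 1 along a dominance chain are inconsistent,
# so the LP resolves the conflict by relabeling exactly one point.
X = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
y = np.array([0, 1, 0, 1])
labels = fit_isotonic_separation(X, y)
print(labels, classify(np.array([5, 5]), X, labels))

Note that the naive constraint set grows as O(n^2) in the number of training points, which is why much of the follow-up work listed below (instance selection, online variants, hybrid training algorithms) focuses on shrinking or streaming the training set.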

Citations

Prognosis Using an Isotonic Prediction Technique
TLDR
The proposed technique is different from well-known statistical survival analysis methods, such as Kaplan-Meier product-limit estimation and Cox's regression, in that it predicts individual patients' survival time frame.
Isotonic Separation with an Instance Selection Algorithm Using Softset: Theory and Experiments
TLDR
Experimental and statistical results show that the condensed sets obtained by SOFIA are optimal, and that SOFIA-IS and SOFIA-based machine learning techniques are better in terms of classification accuracy, time, and space complexity.
Evolutionary isotonic separation for classification: theory and experiments
TLDR
Experimental and statistical results show that EIS outperforms its predecessors and state-of-the-art machine learning techniques in terms of accuracy.
Breast cancer prediction using the isotonic separation technique
Studies in Learning Monotonic Models from Data
TLDR
The experiments described in this thesis show that the predictive power of the new data mining algorithms is comparable to, or sometimes even better than, that of their non-monotonic counterparts, and this is obtained at a limited additional computational cost.
A hybrid isotonic separation training algorithm with correlation-based isotonic feature selection for binary classification
TLDR
Theoretical, empirical, and statistical analyses show that MeHeIS–CPSO is superior to its predecessors in terms of training time and predictive ability on large data sets and outperforms state-of-the-art machine learning and isotonic classification techniques in terms of predictive performance on small- and large-scale data sets.
On Nonparametric Ordinal Classification with Monotonicity Constraints
TLDR
This paper provides a statistical framework for classification with monotonicity constraints, and considers two approaches to classification in the nonparametric setting: the "plug-in" method (classification by first estimating the class-conditional distribution) and the direct method (classification by minimization of the empirical risk).
Trainable monotone combiner
Online learning isotonic separation for the study of large-scale data
TLDR
This paper focuses on adapting isotonic separation as an online learning process consisting of two components: the learning prototypes (LPs) and the learning isotonic separators (LISs).
...

References

Showing 1-10 of 56 references
Firm bankruptcy prediction: experimental comparison of isotonic separation and other classification approaches
Young U. Ryu, W. Yue. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 2005
TLDR
Experiments show that the isotonic separation method is a viable technique, performing generally better than other methods for short-term bankruptcy prediction.
An Inductive Learning Approach to Prognostic Prediction
The Multi-Purpose Incremental Learning System AQ15 and Its Testing Application to Three Medical Domains
TLDR
Applying the proposed method of cover truncation and analogical matching, called TRUNC, is shown to drastically decrease the complexity of the knowledge base without affecting its performance accuracy.
A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms
TLDR
Among decision tree algorithms with univariate splits, C4.5, IND-CART, and QUEST have the best combinations of error rate and speed, but C4.5 tends to produce trees with twice as many leaves as those from IND-CART and QUEST.
C4.5: Programs for Machine Learning
TLDR
A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.
Feature Selection via Mathematical Programming
TLDR
Computational tests of three approaches to feature selection via concave minimization have been carried out on publicly available real-world databases and compared with an adaptation of the optimal brain damage method for reducing neural network complexity.
Further Research on Feature Selection and Classification Using Genetic Algorithms
TLDR
This paper summarizes work on an approach that combines feature selection and data classification using genetic algorithms combined with a k-nearest neighbor algorithm to optimize classification by searching for an optimal feature weighting, essentially warping the feature space to coalesce individuals within groups and to separate groups from one another.
IMPROVED LINEAR PROGRAMMING MODELS FOR DISCRIMINANT ANALYSIS
TLDR
It is shown how to eliminate a previously undetected distortion and thereby increase the scope and flexibility of the LP discriminant analysis models, including the use of a successive goal method for establishing a series of conditional objectives to achieve improved discrimination.
Breast Cancer Diagnosis and Prognosis Via Linear Programming
Two medical applications of linear programming are described in this paper. Specifically, linear programming-based machine learning techniques are used to increase the accuracy and objectivity of breast cancer diagnosis and prognosis.
Feature Selection via Concave Minimization and Support Vector Machines
TLDR
Numerical tests on 6 public data sets show that classifiers trained by the concave minimization approach and those trained by a support vector machine have comparable 10-fold cross-validation correctness.
...