Generative Maximum Entropy Learning for Multiclass Classification

@inproceedings{Dukkipati2013GenerativeME,
  title={Generative Maximum Entropy Learning for Multiclass Classification},
  author={Ambedkar Dukkipati and Gaurav Pandey and Debarghya Ghoshdastidar and Paramita Koley and D. M. V. Satya Sriram},
  booktitle={2013 IEEE 13th International Conference on Data Mining},
  year={2013},
  pages={141-150}
}
The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for high-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ the conditional independence assumption (naive Bayes) and we perform… 
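The generative idea described above can be illustrated with a minimal sketch. Under the naive Bayes (conditional independence) assumption, the class-conditional maximum entropy distribution over binary features, constrained only to match each feature's empirical mean per class, is a product of Bernoullis, so training reduces to estimating per-class feature means and classifying by Bayes' rule. This is not the paper's exact algorithm (which also performs feature selection); the function names and the Laplace smoothing are illustrative assumptions.

```python
import numpy as np

def fit_generative_maxent(X, y, alpha=1.0):
    """Estimate class priors and smoothed per-class Bernoulli feature means.

    With only first-moment constraints per feature, the max-ent
    class-conditional distribution factorizes into Bernoullis whose
    parameters are the (smoothed) empirical means within each class.
    """
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    theta = np.array([(X[y == c].sum(axis=0) + alpha) /
                      ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, priors, theta

def predict(X, classes, priors, theta):
    """Pick the class maximizing log-likelihood plus log prior (Bayes' rule)."""
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    return classes[np.argmax(log_lik + np.log(priors), axis=1)]

# Toy example: two classes distinguished by which features tend to be active.
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([0, 0, 1, 1])
classes, priors, theta = fit_generative_maxent(X, y)
print(predict(X, classes, priors, theta))  # -> [0 0 1 1]
```

The same recipe extends beyond binary features: constraining higher moments or other feature expectations yields other exponential-family class-conditionals, which is the general setting the paper works in.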
1 Citation

Missing Values and Class Prediction Based on Mutual Information and Supervised Similarity

  • N. K.S. Suriya
  • Computer Science
    Proceedings of International Conference on Artificial Intelligence, Smart Grid and Smart City Applications
  • 2020
In this chapter, a novel prediction technique is proposed that can be used to predict the missing values of a given dataset or a dataset sample by calculating the mutual information, supervised similarity, and cosine similarity.

References

Showing 1-10 of 33 references

Using Maximum Entropy for Text Classification

This paper uses maximum entropy techniques for text classification, estimating the conditional distribution of the class variable given the document. Comparing accuracy with naive Bayes shows that maximum entropy is sometimes significantly better, but also sometimes worse.

Maximum Entropy Model Based Classification with Feature Selection

A classification algorithm based on the maximum entropy principle that finds the most appropriate class-conditional maximum entropy distributions for classification and incorporates a method to select relevant features for classification.

Maximum Entropy Density Estimation with Generalized Regularization and an Application to Species Distribution Modeling

This work proposes an algorithm solving a large and general subclass of generalized maximum entropy problems, including all discussed in the paper, and proves its convergence.

Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers

A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.

Divergence measures based on the Shannon entropy

A novel class of information-theoretic divergence measures based on the Shannon entropy is introduced. These measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved, and their relations to other divergence measures are established in terms of bounds.

Bayesian Network Classifiers

Tree Augmented Naive Bayes (TAN) is singled out, which outperforms naive Bayes, yet at the same time maintains the computational simplicity and robustness that characterize naive Bayes.

On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes

It is shown, contrary to the widely held belief that discriminative classifiers are almost always preferable, that there can often be two distinct regimes of performance as the training set size is increased, in each of which one of the algorithms does better.

On the Optimality of the Simple Bayesian Classifier under Zero-One Loss

The Bayesian classifier is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption, and will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain.

Minimax Entropy Principle and Its Application to Texture Modeling

The minimax entropy principle is applied to texture modeling, where a novel Markov random field model, called FRAME, is derived, and encouraging results are obtained in experiments on a variety of texture images.

Divergences and Risks for Multiclass Experiments

The extension of f-divergence to more than two distributions to measure their joint similarity is studied, and it is proved that the resulting divergence satisfies all the same properties as the traditional binary one.