
Statistical pattern recognition based on the Bayes formula implies the concept of mutually exclusive classes. This assumption does not apply when we have to identify non-exclusive properties, and it is therefore unnatural in biological neural networks. Within the framework of probabilistic neural networks we propose statistical identification of… (More)

When considering the probabilistic approach to neural networks in the framework of statistical pattern recognition, we assume approximation of class-conditional probability distributions by finite mixtures of product components. The mixture components can be interpreted as probabilistic neurons in neurophysiological terms and, in this respect, the fixed… (More)
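The approximation described above can be sketched in a few lines. This is a hedged illustration, not the papers' implementation: it evaluates a finite mixture of product components over binary features, where each component plays the role of a "probabilistic neuron"; the weights `w` and parameters `theta` are invented toy values.

```python
import numpy as np

def product_mixture_pdf(x, w, theta):
    """P(x) = sum_m w[m] * prod_d theta[m,d]^x[d] * (1-theta[m,d])^(1-x[d]).

    A finite mixture of product (Bernoulli) components approximating a
    class-conditional distribution over binary feature vectors.
    """
    # Per-component product over the d independent binary dimensions.
    comp = np.prod(theta ** x * (1.0 - theta) ** (1 - x), axis=1)
    # Weighted sum over the m components.
    return float(w @ comp)

# Toy example: two components over 3 binary features (illustrative values).
w = np.array([0.6, 0.4])
theta = np.array([[0.9, 0.8, 0.1],
                  [0.2, 0.3, 0.7]])
x = np.array([1, 1, 0])
p = product_mixture_pdf(x, w, theta)
```

For classification, one such mixture would be fitted per class and the Bayes rule applied to the resulting class-conditional densities.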

We discuss the problem of overfitting of probabilistic neural networks in the framework of statistical pattern recognition. The probabilistic approach to neural networks provides a statistically justified subspace method of classification. The underlying structural mixture model includes binary structural parameters and can be optimized by the EM algorithm in… (More)

Considering the probabilistic approach to neural networks in the framework of statistical pattern recognition, we assume approximation of class-conditional probability distributions by finite mixtures of product components. The mixture components can be interpreted as probabilistic neurons in neurophysiological terms and, in this respect, the fixed… (More)

- Jan Hora
- 2007

Given k-linear forms fᵢ : Vᵢᵏ → F, 1 ≤ i ≤ m, define a k-linear form f = f₁ ⊥ ⋯ ⊥ fₘ : (V₁ ⊕ ⋯ ⊕ Vₘ)ᵏ → F by f(u₁, …)… If a k-linear form f : Vᵏ → F can be expressed as above, call the system of subspaces V₁, …, Vₘ an orthogonal decomposition (with respect to f). We show that for k ≥ 3 such a decomposition is unique if m is maximal possible. Furthermore, we prove that a… (More)

- Somol P, Vácha P, Mikeš, Hora J, Pudil P, Žid, +6 others
- 2010

We introduce a new standalone, widely applicable software library for feature selection (also known as attribute or variable selection), capable of reducing problem dimensionality to maximize the accuracy of data models and the performance of automatic decision rules, as well as to reduce data acquisition cost. The library can be exploited by users in research as… (More)
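To illustrate the kind of dimensionality reduction such a library performs, here is a minimal sketch of sequential forward selection, a classic wrapper method; the scoring function and all names below are illustrative assumptions, not the library's actual API.

```python
# Hedged sketch: sequential forward selection (SFS). Greedily grows a
# feature subset, at each step adding the feature that most improves a
# user-supplied criterion (a stand-in for model accuracy or performance).

def sfs(score, n_features, k):
    """Return a list of k feature indices chosen greedily by `score`."""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(n_features) if f not in selected]
        # Evaluate the criterion with each candidate feature added.
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy criterion: rewards features 0 and 2, penalizes the rest
# (a placeholder for cross-validated classifier accuracy).
def toy_score(subset):
    return sum(1.0 if f in (0, 2) else -0.5 for f in subset)

chosen = sfs(toy_score, n_features=4, k=2)
```

Real feature-selection toolboxes typically also offer backward, floating, and search-based strategies; the greedy forward pass above is only the simplest member of that family.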

The EM algorithm has been used repeatedly to identify latent classes in categorical data by estimating finite distribution mixtures of product components. Unfortunately, the underlying mixtures are not uniquely identifiable and, moreover, the estimated mixture parameters are starting-point dependent. For this reason, we use the latent class model only to… (More)
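The estimation procedure referred to above can be sketched as follows. This is a hedged toy implementation, assuming binary categorical data and Bernoulli product components; as the abstract notes, the result depends on the starting point, so the seed and initialization here are arbitrary choices.

```python
import numpy as np

def em_latent_class(X, n_classes, n_iter=50, seed=0):
    """Fit a mixture of product (Bernoulli) components to binary data X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n_classes, 1.0 / n_classes)          # mixing weights
    theta = rng.uniform(0.3, 0.7, (n_classes, d))    # per-class parameters
    for _ in range(n_iter):
        # E-step: posterior responsibility of each latent class per sample.
        like = np.prod(theta[None] ** X[:, None] *
                       (1 - theta[None]) ** (1 - X[:, None]), axis=2)
        resp = like * w
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and component parameters.
        w = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)       # avoid degenerate 0/1
    return w, theta

# Toy data with two obvious latent classes.
X = np.array([[1, 1, 0]] * 20 + [[0, 0, 1]] * 20)
w, theta = em_latent_class(X, n_classes=2)
```

Because of the identifiability and starting-point issues the abstract describes, one would in practice run EM from several random initializations and compare the resulting likelihoods rather than trust a single fit.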
