Differential entropy

Known as: Continuous entropy

Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. (Source: Wikipedia)
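For a continuous random variable with density f, the differential entropy is h(X) = -∫ f(x) ln f(x) dx (in nats). As a minimal illustration of the definition (not tied to any paper below), the sketch compares the closed-form entropy of a Gaussian, ½ ln(2πeσ²), with a direct numerical integration:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def differential_entropy(pdf, lo, hi, n=200_000):
    """Approximate h(X) = -integral of f(x) ln f(x) dx with a midpoint Riemann sum."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        f = pdf(lo + (i + 0.5) * dx)
        if f > 0.0:
            total -= f * math.log(f) * dx
    return total

sigma = 2.0
# Closed form for a Gaussian: h = 0.5 * ln(2 * pi * e * sigma^2), in nats.
analytic = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
numeric = differential_entropy(lambda x: gaussian_pdf(x, 0.0, sigma), -40.0, 40.0)
print(analytic, numeric)  # both ≈ 2.112 nats
```

Note that, unlike discrete Shannon entropy, differential entropy can be negative (e.g. a Gaussian with σ < 1/√(2πe)).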

Topic mentions per year

[Chart: annual topic mentions, 1980–2018]

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
2013
This paper proposes a novel feature called differential entropy for EEG-based vigilance estimation. By mathematical derivation…
  • figure 1
  • table I
  • table II
2013
EEG-based emotion recognition has been studied for a long time. In this paper, a new effective EEG feature named differential…
  • figure 1
  • figure 2
  • figure 3
  • table IV
2010
Given iid samples drawn from a distribution with known parametric form, we propose the minimization of expected Bregman…
  • figure 1
2009
We describe a non-parametric estimator for the differential entropy of a multidimensional distribution, given a limited set of…
  • figure 2
  • figure 3
  • figure 4
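The 2009 entry describes a non-parametric, sample-based entropy estimator; the snippet does not show the paper's exact method. A standard estimator of that kind is the Kozachenko–Leonenko nearest-neighbor estimator, sketched here in 1-D with k = 1 purely as an illustration of the idea (not the paper's algorithm):

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def kl_entropy_1d(samples):
    """Kozachenko-Leonenko nearest-neighbor entropy estimate (nats), 1-D, k = 1:
    h_hat ≈ psi(n) + gamma + (1/n) * sum_i ln(2 * rho_i),
    where rho_i is the distance from sample i to its nearest neighbor."""
    xs = sorted(samples)
    n = len(xs)
    acc = 0.0
    for i, x in enumerate(xs):
        dists = []
        if i > 0:
            dists.append(x - xs[i - 1])
        if i < n - 1:
            dists.append(xs[i + 1] - x)
        rho = max(min(dists), 1e-12)  # guard against exact ties
        acc += math.log(2.0 * rho)
    psi_n = math.log(n) - 1.0 / (2 * n)  # large-n approximation of digamma(n)
    return psi_n + EULER_GAMMA + acc / n

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
estimate = kl_entropy_1d(data)
true_h = 0.5 * math.log(2 * math.pi * math.e)  # ≈ 1.4189 nats for N(0, 1)
print(estimate, true_h)
```

The estimator is consistent: with more samples the estimate converges to the true differential entropy without ever fitting a parametric density.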
Highly Cited
2008
For many practical probability density representations such as the widely used Gaussian mixture densities, an analytic…
  • table I
  • figure 1
  • figure 2
  • figure 3
  • figure 4
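The 2008 entry above concerns densities, such as Gaussian mixtures, whose differential entropy has no closed form. A common fallback (shown here only as a generic illustration, not that paper's method) is a Monte Carlo estimate, h ≈ -(1/N) Σ ln f(xᵢ) with the xᵢ drawn from f itself:

```python
import math
import random

def mixture_pdf(x, weights, mus, sigmas):
    """Density of a 1-D Gaussian mixture."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, mus, sigmas)
    )

def sample_mixture(weights, mus, sigmas):
    """Draw one sample: pick a component, then sample from it."""
    i = random.choices(range(len(weights)), weights=weights)[0]
    return random.gauss(mus[i], sigmas[i])

def mc_entropy(weights, mus, sigmas, n=50_000):
    """Monte Carlo estimate h ≈ -(1/n) * sum ln f(x_i), with x_i ~ f."""
    total = 0.0
    for _ in range(n):
        x = sample_mixture(weights, mus, sigmas)
        total -= math.log(mixture_pdf(x, weights, mus, sigmas))
    return total / n

random.seed(1)
# Two well-separated, equal-weight unit-variance components: theory predicts
# h ≈ ln 2 + 0.5 * ln(2 * pi * e) ≈ 2.112 nats (mixing adds ln K for K components).
h = mc_entropy([0.5, 0.5], [-3.0, 3.0], [1.0, 1.0])
print(h)
```

The estimate converges at the usual O(1/√N) Monte Carlo rate, which is why deterministic bounds and approximations like the one in the entry above are of practical interest.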
2008
A novel probabilistic upper bound on the entropy of an unknown one-dimensional distribution, given the support of the…
  • figure 1
  • figure 2
  • figure 3
  • figure 4
  • figure 5
2005
We give a detailed analysis of the Gibbs-type entropy notion and its dynamical behavior in the case of time-dependent continuous…
2004
Calculation of the differential entropy of the limiting density of a sequence of probability density functions (pdf) is an…
Highly Cited
2003
We propose a new clustering algorithm using Rényi's entropy as our similarity metric. The main idea is to assign a data pattern…
  • figure 1
  • figure 2
  • figure 3
  • figure 4
  • figure 5
Highly Cited
1997
We derive a first-order approximation of the density of maximum entropy for a continuous 1-D random variable, given a number of…
  • figure 1