This paper investigates the classification of emotional states using prosodic and voice quality information. We aim to exploit the use of different phonation types in the production of emotions. To this end, we use prosodic features, voice quality parameters, and combinations of both as features. We study how prosodic and voice quality features overlap or complement each other in emotion recognition. Classification is speaker-independent and uses a reduced subset of eight features with a Bayesian classifier.