In this paper, a novel method is proposed for separating underlying sources with both sub- and super-Gaussian distributions from underdetermined mixtures. The generalized Gaussian distribution (GGD) is used to model both sub- and super-Gaussian distributions simultaneously. Finding the most probable decomposition of the mixtures under the GGD model reduces to minimizing the L<sub>p</sub>-norm of the estimated sources. The switching condition that determines the decay rate of the GGD is given by the sign of the kurtosis of the inferred source. In our simulation, the proposed algorithm separated both sub- and super-Gaussian sources from underdetermined mixtures and achieved about 1 dB improvement in signal-to-interference ratio (SIR) over the L<sub>1</sub>-norm minimization algorithm when separating three speech sources from two mixtures.
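To illustrate the idea of L<sub>p</sub>-norm minimization under an underdetermined mixing constraint, the following is a minimal sketch, not the authors' algorithm: it recovers one sample vector of three sources from two mixtures by minimizing a smoothed ‖s‖<sub>p</sub><sup>p</sup> subject to As = x, with p < 2 standing in for the super-Gaussian (sparse) case that the kurtosis-sign switch would select. The function names and the fixed choice p = 1 are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kurtosis(s):
    """Excess kurtosis; its sign would drive the sub/super-Gaussian switch."""
    s = s - s.mean()
    return np.mean(s**4) / np.mean(s**2) ** 2 - 3.0

def lp_decompose(A, x, p, eps=1e-8):
    """Minimize a smoothed Lp-norm of s subject to the mixing constraint A s = x.

    A is m x n with m < n (underdetermined), so the constraint set is an
    affine subspace; the Lp objective picks one point on it.
    """
    s0 = np.linalg.pinv(A) @ x  # least-squares point on the constraint set
    res = minimize(
        lambda s: np.sum((s**2 + eps) ** (p / 2.0)),  # smooth surrogate for sum |s_i|^p
        s0,
        constraints={"type": "eq", "fun": lambda s: A @ s - x},
    )
    return res.x

# Toy setup: two mixtures of three sources (hypothetical mixing matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
s_true = np.array([1.0, 0.0, 0.5])  # sparse, i.e. super-Gaussian-like
x = A @ s_true

s_hat = lp_decompose(A, x, p=1.0)  # p < 2 favors sparse solutions
print("constraint residual:", np.linalg.norm(A @ s_hat - x))
```

In a full separation system this per-sample decomposition would be applied across all samples, with p switched according to the estimated kurtosis sign of each inferred source.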