We propose a novel framework that extends conventional deep neural network (DNN)-based feature enhancement approaches. In general, a conventional DNN-based feature enhancement framework aims to map a noisy input observation to clean speech or a binary/soft mask in a deterministic way, assuming a one-to-one mapping between the input and the output without any uncertainty. However, when we consider the general feature enhancement problem to be an ill-posed inverse problem, in which the mapping cannot be uniquely determined given an input signal, the assumption underlying the conventional approaches is not theoretically correct and potentially limits the performance of DNN-based feature enhancement. To overcome this problem, this paper proposes utilizing a mixture density network (MDN), a neural network that maps an input feature to a set of Gaussian mixture model (GMM) parameters representing the distribution of a target variable. By estimating the distribution of clean speech features with an MDN, we can explicitly consider the uncertainty in the parameter estimation. We then further utilize the estimated GMM to obtain a refined clean speech estimate within the framework of statistical model-based feature enhancement. In this paper, after detailing the proposed framework and the MDN, we show mathematically and experimentally how the MDN appropriately models the uncertainty information. We also show that the proposed method can outperform a conventional DNN-based feature enhancement method.
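To make the MDN output concrete, the following is a minimal NumPy sketch of how a network's raw output vector can be split into GMM parameters and collapsed into a point estimate; the function names (`mdn_params`, `gmm_mean`) and the layout of the output vector are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mdn_params(raw, K, D):
    """Split a raw MDN output vector of length K*(2*D + 1) into GMM
    parameters for K mixture components over a D-dimensional target.

    Returns mixture weights pi (K,), means mu (K, D), variances var (K, D).
    (Hypothetical layout: [pi logits | means | log-variances].)
    """
    pi_logits = raw[:K]
    mu = raw[K:K + K * D].reshape(K, D)
    log_var = raw[K + K * D:].reshape(K, D)
    # Softmax keeps the mixture weights positive and summing to one.
    e = np.exp(pi_logits - pi_logits.max())
    pi = e / e.sum()
    # Exponentiation keeps the variances strictly positive.
    var = np.exp(log_var)
    return pi, mu, var

def gmm_mean(pi, mu):
    """Mean of the estimated GMM: one simple point estimate of the
    clean speech feature (weighted average of the component means)."""
    return (pi[:, None] * mu).sum(axis=0)

# Example: 2 mixtures over a 3-dimensional clean speech feature.
raw = np.linspace(-1.0, 1.0, 2 * (2 * 3 + 1))
pi, mu, var = mdn_params(raw, K=2, D=3)
estimate = gmm_mean(pi, mu)
```

The per-component variances are what carry the uncertainty information that a deterministic DNN mapping discards; a statistical model-based enhancement stage can then weight the MDN's estimate against an observation model using those variances.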