Tejaswi Nanjundaswamy

Current scalable audio coders typically optimize performance at a particular layer without regard to impact on other layers, and are thus unable to provide a performance trade-off between different layers. In the particular case of MPEG Scalable Advanced Audio Coding (S-AAC) and Scalable-to-Lossless (SLS) coding, the base layer is optimized first, followed …
The perceived quality of a signal is degraded by the presence of additive noise, so we regard removal of this noise as an improvement in signal quality. Many works in the literature address this issue using adaptive filters; in particular, the Kalman filter and the extended Kalman filter have been applied successfully to such signals in earlier works. However, in these …
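As a rough illustration of the filtering idea referenced above, the sketch below denoises a signal with a scalar Kalman filter. The random-walk state model, the noise variances q and r, and the name kalman_denoise are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kalman_denoise(y, q=1e-4, r=1e-1):
    """Scalar Kalman filter under an assumed random-walk model:
    x[k] = x[k-1] + w[k] (Var q), observation y[k] = x[k] + v[k] (Var r)."""
    x_hat = 0.0   # state estimate
    p = 1.0       # estimate variance
    out = np.empty(len(y), dtype=float)
    for k, yk in enumerate(y):
        # Predict: the random-walk mean carries over, variance grows by q
        p = p + q
        # Update with the noisy observation
        gain = p / (p + r)
        x_hat = x_hat + gain * (yk - x_hat)
        p = (1.0 - gain) * p
        out[k] = x_hat
    return out

# Usage: clean a slowly varying signal buried in white noise
t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(t.size)
denoised = kalman_denoise(noisy)
```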
Conventional “pixel copying” prediction used in current video standards was shown in previous work to be sub-optimal compared to 2-D non-separable Markov model based recursive extrapolation approaches. The premise of this paper is that in order to achieve the full potential of these approaches it is necessary to account for several …
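For intuition only, here is a minimal sketch of recursive-extrapolation intra prediction under a first-order 2-D Markov assumption: each pixel of the block is predicted as a linear combination of its causal neighbors (left, top, top-left), and the recursion lets later pixels extrapolate from previously predicted ones. The block size, the coefficients a, b, c, and the function name are assumptions for illustration, not the model used in the paper.

```python
import numpy as np

def recursive_extrapolation_intra(top, left, corner, size=8, a=0.65, b=0.65, c=-0.3):
    """Recursively extrapolate a size x size block from its causal boundary with
    an assumed first-order 2-D Markov model:
        x[i, j] ~ a*x[i, j-1] + b*x[i-1, j] + c*x[i-1, j-1]
    `top` and `left` are reconstructed boundary samples; `corner` is the
    top-left boundary sample. The coefficients here are purely illustrative."""
    pad = np.zeros((size + 1, size + 1))
    pad[0, 0] = corner
    pad[0, 1:] = top[:size]
    pad[1:, 0] = left[:size]
    for i in range(1, size + 1):
        for j in range(1, size + 1):
            pad[i, j] = a * pad[i, j - 1] + b * pad[i - 1, j] + c * pad[i - 1, j - 1]
    return pad[1:, 1:]

# Usage: predict an 8x8 block from flat boundaries of value 128
pred = recursive_extrapolation_intra(np.full(8, 128.0), np.full(8, 128.0), 128.0)
```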
The long term prediction (LTP) tool is used in audio compression systems to exploit periodicity in signals. This tool capitalizes on the periodic component of the waveform by selecting a past segment as the basis for prediction of the current frame. However, most audio signals are polyphonic in nature, consisting of a mixture of periodic signals. This …
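A minimal sketch of the single-tap LTP search described above, assuming an exhaustive lag search and a least-squares gain; the lag range, framing, and function name are illustrative and not taken from any particular standard.

```python
import numpy as np

def ltp_predict(history, frame, min_lag=40, max_lag=1024):
    """Single-tap long term prediction: find the lag T and gain g such that the
    segment T samples in the past best predicts `frame`. Returns (lag, gain, prediction)."""
    x = np.concatenate([history, frame])
    n = frame.size
    best = (None, 0.0, np.zeros(n), np.inf)
    for lag in range(min_lag, min(max_lag, history.size) + 1):
        past = x[x.size - n - lag : x.size - lag]        # segment `lag` samples back
        denom = np.dot(past, past)
        g = np.dot(frame, past) / denom if denom > 0 else 0.0
        err = np.sum((frame - g * past) ** 2)            # residual energy for this lag
        if err < best[3]:
            best = (lag, g, g * past, err)
    lag, g, pred, _ = best
    return lag, g, pred
```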
A novel filtering approach is proposed that naturally combines information from intra-frame and motion-compensated referencing for efficient prediction, so as to fully exploit the spatio-temporal correlations of video signals and thereby achieve superior compression performance. Inspiration was drawn from our recent work on extrapolation filter based intra …
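As a toy illustration of blending the two sources of information (not the filter design proposed in the paper), the sketch below predicts a pixel from both its causal spatial neighbors and a reference-frame sample; the neighborhood, the co-located reference sample standing in for a motion-compensated one, and the blending weight are all assumptions.

```python
import numpy as np

def joint_spatio_temporal_predict(cur, ref, i, j, w_spatial=0.5):
    """Toy joint predictor for pixel (i, j): blend a causal spatial estimate
    (average of left, top, top-left reconstructed neighbors in `cur`) with the
    co-located sample from the reference frame `ref`."""
    spatial = (cur[i, j - 1] + cur[i - 1, j] + cur[i - 1, j - 1]) / 3.0
    temporal = ref[i, j]   # substitute ref[i + dy, j + dx] when a motion vector is available
    return w_spatial * spatial + (1.0 - w_spatial) * temporal
```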
This paper focuses on a new framework for scalable coding of information based on principles derived from the common information of two dependent random variables. In the conventional successive refinement setting, the encoder generates two layers of information called the base layer and the enhancement layer. The first decoder, which receives only the base …
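The excerpt does not specify which notion of common information the framework builds on; for reference, the two standard definitions for dependent random variables X and Y are Wyner's and Gács–Körner's:

```latex
% Two standard notions of the common information of dependent X, Y
% (given for reference; which one the paper builds on is not stated above).
\begin{align}
  C_{\mathrm{Wyner}}(X;Y) &= \min_{W:\; X - W - Y} I(X,Y;W), \\
  K_{\mathrm{GK}}(X;Y)    &= \max_{f,g:\; f(X)=g(Y)\ \text{a.s.}} H\bigl(f(X)\bigr).
\end{align}
```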
Current video coders exploit temporal dependencies via prediction that consists of motion-compensated pixel copying operations. Such per-pixel temporal prediction ignores important underlying spatial correlations, as well as considerable variations in temporal correlation across frequency components. In the transform domain, however, spatial decorrelation …
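To make the frequency-domain idea concrete, here is a toy sketch (not the paper's scheme) of temporal prediction applied per transform coefficient: the motion-compensated reference block is transformed with a 2-D DCT, each coefficient is scaled by an assumed frequency-dependent temporal correlation factor, and the result is inverted to form the prediction. The correlation profile rho and the function name are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_domain_temporal_predict(ref_block, rho):
    """Per-frequency temporal prediction: DCT the (motion-compensated) reference
    block, scale each coefficient by an assumed temporal correlation rho[u, v],
    and invert to obtain the spatial-domain prediction."""
    coeffs = dctn(ref_block, norm='ortho')
    return idctn(rho * coeffs, norm='ortho')

# Usage: attenuate poorly correlated high frequencies in the prediction
u = np.arange(8)
rho = np.exp(-0.05 * (u[:, None] + u[None, :]))   # assumed correlation profile
pred = transform_domain_temporal_predict(np.random.rand(8, 8), rho)
```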
Linear prediction is widely used in speech, audio and video coding systems. Predictive coders often operate over unreliable channels or networks prone to packet loss, wherein errors propagate through the prediction loop and may catastrophically degrade the reconstructed signal at the decoder. To mitigate this problem, end-to-end distortion (EED) estimation, …
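For background on how such estimation can work, the sketch below follows a ROPE-style moment recursion for a first-order linear predictive coder with i.i.d. packet loss and a "repeat the previous sample" concealment rule: the encoder tracks the first and second moments of the decoder's reconstruction and derives the expected distortion per sample. The signal model, concealment rule, and all names here are assumptions for illustration, not the estimator studied in the paper.

```python
import numpy as np

def estimate_eed(x, a, e_q, p):
    """Per-sample end-to-end distortion estimate E[(x[n] - x_hat_dec[n])^2] for
    an assumed decoder model: if the packet arrives (prob 1-p) the decoder forms
    x_hat[n] = a * x_hat[n-1] + e_q[n] (e_q = quantized residuals); if it is
    lost (prob p) the decoder repeats its previous reconstruction."""
    m1 = 0.0  # E[x_hat_dec]
    m2 = 0.0  # E[x_hat_dec^2]
    eed = np.empty(len(x))
    for n in range(len(x)):
        m1_rx = a * m1 + e_q[n]                                   # moments if received
        m2_rx = a * a * m2 + 2.0 * a * e_q[n] * m1 + e_q[n] ** 2
        m1, m2 = (1 - p) * m1_rx + p * m1, (1 - p) * m2_rx + p * m2
        eed[n] = x[n] ** 2 - 2.0 * x[n] * m1 + m2
    return eed
```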
MPEG-4 High-Definition Advanced Audio Coding (HD-AAC) enables scalable-to-lossless (SLS) audio coding with an Advanced Audio Coding (AAC) base layer, and fine-grained enhancements based on the MPEG SLS standard. While the AAC core offers better perceptual quality at lossy bit-rates, its inclusion has been observed to compromise the ultimate lossless …
This paper proposes a frame loss concealment technique for audio signals, which is designed to overcome the main challenge due to the polyphonic nature of most music signals and is inspired by our recent research on compression of such signals. The underlying idea is to employ a cascade of long term prediction filters (tailored to the periodic components) …
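A rough sketch of the cascade idea, under simplifying assumptions (single-tap stages, an exhaustive lag search, and a received history longer than the sum of the estimated lags); the stage count, lag range, and function names are illustrative rather than the paper's design. Each stage is fit to the residual of the previous one, and the lost frame is extrapolated by running the cascaded predictor forward:

```python
import numpy as np

def estimate_cascade_ltp(history, num_stages=2, min_lag=40, max_lag=1024):
    """Fit a cascade of single-tap LTP filters (1 - g_i z^{-T_i}) to the received
    history: each stage picks the lag and gain minimizing residual energy, and
    the next stage is fit on that residual. Returns a list of (lag, gain) pairs."""
    res = history.astype(float)
    filters = []
    for _ in range(num_stages):
        best_lag, best_gain, best_err = None, 0.0, np.inf
        for lag in range(min_lag, min(max_lag, res.size - 1) + 1):
            cur, past = res[lag:], res[:-lag]
            denom = np.dot(past, past)
            g = np.dot(cur, past) / denom if denom > 0 else 0.0
            err = np.sum((cur - g * past) ** 2)
            if err < best_err:
                best_lag, best_gain, best_err = lag, g, err
        if best_lag is None:
            break
        filters.append((best_lag, best_gain))
        res[best_lag:] -= best_gain * res[:-best_lag]   # remove this periodic component
    return filters

def conceal_frame(history, filters, frame_len):
    """Extrapolate the lost frame with the cascaded predictor: each new sample is
    predicted from already available (received or previously concealed) samples.
    Assumes the history is longer than the sum of the estimated lags."""
    # Expand A(z) = prod_i (1 - g_i z^{-T_i}) into {delay: coefficient} taps.
    taps = {0: 1.0}
    for lag, gain in filters:
        expanded = {}
        for d, c in taps.items():
            expanded[d] = expanded.get(d, 0.0) + c
            expanded[d + lag] = expanded.get(d + lag, 0.0) - c * gain
        taps = expanded
    x = list(history.astype(float))
    for _ in range(frame_len):
        n = len(x)
        # Ideal extrapolation sets the prediction error to zero: x[n] = -sum_{d>0} c_d * x[n-d]
        x.append(-sum(c * x[n - d] for d, c in taps.items() if d > 0))
    return np.array(x[len(history):])
```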