A high-throughput, sort-free VLSI architecture for wireless applications
Detection of MIMO signals presents a formidable computational challenge due to the inherent complexity of matrix-type wireless fading channels. When MIMO is combined with OFDM, detection over a frequency-selective fading channel decomposes into detection over each flat-fading subchannel. Since detection is usually performed on a per-subchannel basis, this implies significant hardware cost. This work proposes linear interpolation to exploit the correlation among adjacent subchannels: because the weight matrices on non-pilot subchannels are linearly interpolated from those on pilot subchannels, the computational overhead is greatly reduced. The approach is particularly effective when the subchannel bandwidth is much smaller than the coherence bandwidth of the channel. Simulations using MMSE MIMO detection show that, by keeping the product of the interpolation factor and the normalized RMS delay spread within a certain limit, linear interpolation results in minimal degradation of system performance up to a practical SNR level of 20 dB to 25 dB.
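The interpolation idea described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual architecture: it assumes the standard MMSE weight matrix W = (H^H H + (1/SNR) I)^-1 H^H, computes it exactly only on pilot subchannels spaced by an interpolation factor L, and fills in the intermediate subchannels by element-wise linear interpolation of the weight matrices. The function names, the pilot placement, and the handling of the final subcarrier are all assumptions made for this sketch.

```python
import numpy as np

def mmse_weights(H, snr):
    """Exact MMSE weight matrix for one subchannel:
    W = (H^H H + (1/snr) I)^-1 H^H."""
    nt = H.shape[1]
    return np.linalg.inv(H.conj().T @ H + (1.0 / snr) * np.eye(nt)) @ H.conj().T

def interpolated_weights(H_per_subcarrier, snr, L):
    """Compute exact MMSE weights only on pilot subcarriers (every L-th one),
    then linearly interpolate the weight matrices on the subcarriers between
    pilots. Assumes the last subcarrier is also treated as a pilot so every
    non-pilot subcarrier lies between two pilots."""
    N = len(H_per_subcarrier)
    pilots = list(range(0, N, L))
    if pilots[-1] != N - 1:
        pilots.append(N - 1)          # assumed: force a pilot at the band edge
    W = [None] * N
    for p in pilots:                  # exact (expensive) computation, pilots only
        W[p] = mmse_weights(H_per_subcarrier[p], snr)
    for p0, p1 in zip(pilots[:-1], pilots[1:]):
        for k in range(p0 + 1, p1):   # cheap element-wise interpolation
            a = (k - p0) / (p1 - p0)
            W[k] = (1 - a) * W[p0] + a * W[p1]
    return W
```

With interpolation factor L, only about N/L matrix inversions are needed instead of N, which is the hardware saving the abstract refers to; the interpolation error stays small as long as the channel varies slowly across L subcarriers (i.e., the subchannel spacing times L remains well below the coherence bandwidth).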