Besides haptics and vision, mobile robotic platforms are equipped with audition in order to autonomously navigate and interact with their environment. Speaker and speech recognition, as well as the recognition of different kinds of sounds, are vital tasks for human-robot interaction. In situations where more than one sound source is active, the mixture has to be separated before being passed to the reasoning unit. Independent Component Analysis (ICA) has been proposed to solve this blind source separation problem. For audio signals, however, ICA cannot be applied directly: because the time-domain mixtures are convolutive rather than instantaneous, the problem is usually transferred to the frequency domain, where a separate ICA is performed in each frequency bin. This gives rise to the well-known permutation problem. For robotic sound separation, in this paper we propose a method called Independent Vector Analysis (IVA) that separates audio mixtures while avoiding the permutation problem. The performance of the method is evaluated on synthetic data as well as on anechoic and echoic recordings. Furthermore, a new method for evaluating the separation results of real recordings is introduced.
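The permutation problem mentioned above can be made concrete with a small numerical sketch. The following toy example (not from the paper; all shapes, seeds, and the emulated per-bin permutation are illustrative assumptions) builds an instantaneous mixture in each frequency bin, unmixes each bin perfectly, and then applies a random per-bin reordering of the outputs to mimic ICA's inherent ordering ambiguity. Reassembling the outputs across bins then scrambles the two sources, which is exactly what bin-wise frequency-domain ICA suffers from and what IVA, by modeling dependencies across bins, is designed to prevent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames = 64, 200

# Two independent toy sources in the STFT domain: S[source, bin, frame].
S = rng.standard_normal((2, n_bins, n_frames))

# A distinct instantaneous 2x2 mixing matrix per frequency bin.
A = rng.standard_normal((n_bins, 2, 2))
X = np.einsum('fij,jft->ift', A, S)  # X[mic, bin, frame]

def corr(a, b):
    """Absolute correlation between two flattened signals."""
    return abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Ideal per-bin unmixing with the true inverse, but with a random
# permutation of the outputs in each bin -- emulating the fact that
# bin-wise ICA recovers the sources only up to an arbitrary ordering.
Y = np.empty_like(S)
for f in range(n_bins):
    W = np.linalg.inv(A[f])
    perm = rng.permutation(2)          # per-bin ordering ambiguity
    Y[:, f, :] = (W @ X[:, f, :])[perm]

# With a consistent ordering across all bins (what IVA enforces),
# the same unmixing recovers each source essentially perfectly.
Y_aligned = np.empty_like(S)
for f in range(n_bins):
    Y_aligned[:, f, :] = np.linalg.inv(A[f]) @ X[:, f, :]

# Misaligned bins mix both sources into each output channel,
# so the correlation with either true source degrades badly.
print("per-bin permuted:", corr(Y[0], S[0]))
print("aligned:         ", corr(Y_aligned[0], S[0]))
```

Each output channel of the permuted separation contains roughly half the bins of each source, so its correlation with either true source hovers near 0.5, while the consistently ordered separation correlates almost perfectly. This is a simulation of the ambiguity only; no actual ICA algorithm is run here.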