Themos Stafylakis

The duration of speech segments has traditionally been controlled in the NIST speaker recognition evaluations, so researchers working in this framework have been spared the need to deal with the duration variability that arises in practical applications. The fixed-dimensional i-vector representation of speech utterances is ideal for …
We tackle the problem of text-dependent speaker verification using a version of Joint Factor Analysis (JFA) in which speaker-phrase variability is modeled with a factorial prior and channel variability with a subspace prior. We implemented this using Zhao and Dong's variational Bayes algorithm, an extension of Vogt's Gauss-Seidel method that supports UBM …
We examine the use of Deep Neural Networks (DNNs) in extracting Baum-Welch statistics for i-vector-based text-independent speaker recognition. Instead of training the universal background model with the standard EM algorithm, the components are predefined and correspond to the set of triphone states, whose posterior occupancy probabilities are modeled …
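
As a minimal sketch of the statistics computation described above, the snippet below accumulates zeroth- and first-order Baum-Welch statistics from per-frame posteriors, assuming those posteriors come from a DNN whose outputs correspond to tied triphone (senone) states rather than GMM components; the function name, variable names, and toy dimensions are illustrative and not taken from the paper.

```python
import numpy as np

def baum_welch_stats(features, posteriors):
    """Zeroth- and first-order Baum-Welch statistics.

    features   : (T, D) array of acoustic frames (e.g. MFCCs).
    posteriors : (T, C) array of per-frame occupancy probabilities, here
                 assumed to come from a DNN whose C outputs are tied
                 triphone (senone) states instead of GMM components.
    Returns N (C,) and F (C, D), the sufficient statistics that an
    i-vector extractor consumes.
    """
    N = posteriors.sum(axis=0)    # zeroth order: soft frame counts per state
    F = posteriors.T @ features   # first order: soft sums of frames per state
    return N, F

# Toy usage: 300 frames of 20-dim features, 50 hypothetical "senone" outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
logits = rng.normal(size=(300, 50))
gamma = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax posteriors
N, F = baum_welch_stats(X, gamma)
print(N.shape, F.shape)  # (50,) (50, 20)
```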
In this paper, we apply and enhance the i-vector-PLDA paradigm to text-dependent speaker recognition. Due to its origin in text-independent speaker recognition, this paradigm does not make use of the phonetic content of each utterance. Moreover, the uncertainty in the i-vector estimates should be taken into account in the PLDA model, due to the short …
Speaker clustering is a crucial step in speaker diarization. The short duration of speech segments in telephone dialogues and the absence of prior information on the number of clusters make this problem especially difficult when diarizing spontaneous telephone conversations. We propose a simple iterative Mean Shift algorithm based …
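
The abstract is truncated before the algorithmic details, so the exact distance measure and segment representation are not visible; purely as a rough illustration, the following is a generic flat-kernel mean-shift clustering sketch over fixed-length segment embeddings, with Euclidean distance and the bandwidth value as assumptions rather than details from the paper.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, tol=1e-6):
    """Flat-kernel mean shift: each point is iteratively moved to the mean
    of its neighbours within `bandwidth`; points that converge to the same
    mode form one cluster (here, one speaker)."""
    modes = points.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            d = np.linalg.norm(points - m, axis=1)
            neighbours = points[d <= bandwidth]
            shifted[i] = neighbours.mean(axis=0) if len(neighbours) else m
        converged = np.max(np.linalg.norm(shifted - modes, axis=1)) < tol
        modes = shifted
        if converged:
            break
    # Merge modes that ended up closer than half the bandwidth into one label.
    labels, centres = np.full(len(points), -1, dtype=int), []
    for i, m in enumerate(modes):
        for k, c in enumerate(centres):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = k
                break
        else:
            centres.append(m)
            labels[i] = len(centres) - 1
    return labels

# Toy usage: two well-separated "speakers" in a 2-D embedding space.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
print(mean_shift(pts, bandwidth=1.0))  # two distinct labels
```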
State-of-the-art speaker recognition systems are based on the i-vector representation of speech segments. In this paper we show how this representation can be used to perform blind speaker adaptation of a hybrid DNN-HMM speech recognition system, and we report excellent results on a French-language audio transcription task. The implementation is very simple. An …
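
The snippet is cut off before the implementation details, but one common way to use an utterance-level i-vector for blind speaker adaptation of a DNN-HMM acoustic model is to append the i-vector to every acoustic frame of the utterance; the sketch below illustrates that idea under this assumption and is not necessarily the exact scheme used in the paper.

```python
import numpy as np

def append_ivector(frames, ivector):
    """Speaker-adapt the DNN input by concatenating the utterance i-vector
    to every acoustic frame.

    frames  : (T, D) acoustic features for one utterance.
    ivector : (K,)   fixed-length i-vector for the same utterance.
    Returns (T, D + K) features; the DNN-HMM is then trained and decoded on
    the augmented input, so no transcripts of the test speaker are needed
    (hence "blind" adaptation).
    """
    tiled = np.tile(ivector, (frames.shape[0], 1))  # repeat i-vector per frame
    return np.hstack([frames, tiled])

# Toy usage: 500 frames of 40-dim filterbanks plus a 100-dim i-vector.
feats = np.random.randn(500, 40)
ivec = np.random.randn(100)
print(append_ivector(feats, ivec).shape)  # (500, 140)
```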
This paper describes data collection efforts conducted as part of the RedDots project, which is dedicated to the study of speaker recognition under conditions where test utterances are of short duration and variable phonetic content. At the current stage, we focus on English speakers, both native and non-native, recruited worldwide. This is made possible …
The Automatic Speaker Verification Spoofing and Countermeasures Challenge 2015 provides a common framework for the evaluation of spoofing countermeasures, or anti-spoofing techniques, in the presence of various seen and unseen spoofing attacks. This contribution proposes a system consisting of amplitude, phase, linear prediction residual, and combined …
In this paper, we present tempo estimation and beat tracking algorithms that utilize percussive/harmonic separation of the audio signal to extract filterbank energies and chroma features from the respective components. Periodicity analysis is carried out by convolving the feature sequences with a bank of resonators. Target tempo is estimated …
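
As a hedged illustration of resonator-based periodicity analysis, the sketch below convolves a novelty-like feature sequence with simple comb-filter resonators tuned to candidate tempi and picks the tempo whose resonator output carries the most energy; the comb-filter design, the BPM grid, and the synthetic novelty curve are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def estimate_tempo(novelty, frame_rate, bpms=np.arange(60, 200)):
    """Pick the tempo whose comb-filter resonator responds most strongly
    to the novelty (onset-strength) sequence.

    novelty    : 1-D feature sequence, one value per analysis frame.
    frame_rate : frames per second of the novelty sequence.
    bpms       : candidate tempi in beats per minute.
    """
    energies = []
    for bpm in bpms:
        period = int(round(frame_rate * 60.0 / bpm))  # beat period in frames
        # Simple resonator: impulses at multiples of the beat period
        # with exponentially decaying weights.
        taps = np.zeros(4 * period + 1)
        taps[::period] = 0.9 ** np.arange(5)
        response = np.convolve(novelty, taps, mode="valid")
        energies.append(np.sum(response ** 2))
    return int(bpms[int(np.argmax(energies))])

# Toy usage: a synthetic novelty curve with an onset every 0.5 s
# (i.e. 120 BPM), sampled at 100 frames per second.
fr = 100
nov = np.zeros(10 * fr)
nov[::fr * 60 // 120] = 1.0
print(estimate_tempo(nov, fr))  # close to 120 (the BPM grid limits exactness)
```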