This article addresses the construction of a Biometric Authentication System (BAS) based on the fusion of two user-friendly biometric modalities: signature and speech. All biometric data used in this work were extracted from the BIOMET multimodal database. The Signature Verification system relies on Hidden Markov Models (HMMs), and we use two kinds of Speaker Verification systems: the first is text-dependent and uses Dynamic Time Warping (DTW) to compute a decision score; the second is text-independent and based on Gaussian Mixture Models (GMMs). We first present the BIOMET database and describe the two modalities of interest in detail, before presenting each monomodal BAS together with its performance evaluation. We then compare the performance of two classical learning-based fusion techniques: an additive classifier of CART trees built with boosting, and Support Vector Machines (SVMs). In particular, the signature modality was fused with clean and noisy speech at two different levels of degradation. The impact of noise on fusion performance is studied relative to its impact on each of the speech experts alone.
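The text-dependent speaker verifier mentioned above scores a test utterance by DTW alignment against an enrollment template. A minimal sketch of the standard DTW recurrence on 1-D sequences follows; a real system would align multidimensional cepstral feature vectors, and the function name here is illustrative only:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    cost[i][j] holds the minimal accumulated cost of aligning
    a[:i] with b[:j]; the final cell is the DTW distance.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because the warping path may stretch one sequence against the other, a repeated sample incurs no extra cost: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is `0.0`, whereas plain pointwise comparison would fail on the length mismatch.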