Much of what we know about speech perception comes from laboratory studies with clean, canonical speech, ideal listeners and artificial tasks. But how do interlocutors manage to communicate effectively in the seemingly less-than-ideal conditions of everyday listening, which frequently involve trying to make sense of speech while listening in a non-native language, or in the presence of competing sound sources, or while multitasking? In this talk I'll examine the effect of real-world conditions on speech perception and quantify the contributions made by factors such as binaural hearing, visual information and prior knowledge to speech communication in noise. I'll present a computational model which trades stimulus-related cues against information from learnt speech models, and examine how well it handles both energetic and informational masking in a two-sentence separation task. Speech communication also involves listening-while-talking. In the final part of the talk I'll describe some ways in which speakers might be making communication easier for their interlocutors, and demonstrate the application of these principles to improving the intelligibility of natural and synthetic speech in adverse conditions.