Starting from a novel audio analysis and editing paradigm, we develop a set of new, adaptive audio analysis and editing algorithms that operate on the spectrogram and integrate them into a smart visual audio editing tool in a "what you see is what you hear" style. At the core of our algorithms and methods is a very flexible audio spectrogram based on Gabor analysis and synthesis, which goes beyond the FFT and wavelets and supports manipulating a signal at any chosen time-frequency resolution. It gives maximum accuracy of the representation, is fully invertible, and enables resolution zooming. Simple audio objects are localized in time and frequency: they can easily be identified visually and selected with simple geometric selection masks such as rectangles, combs, and polygons. For many audio objects, however, the structures in the spectrogram are rather complex. We therefore present several intelligent, adaptive mask selection approaches based on audio fingerprinting and visual pattern matching algorithms. Spectrograms of sounds recorded individually under controlled conditions, or selected interactively in the current spectrogram, serve as sophisticated visual templates. We discuss how to generate templates, how to find the best match in a database, and how to adapt that match to the sound we want to edit.
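The invertible analysis–synthesis and geometric mask selection described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it uses SciPy's STFT (a fixed-resolution special case of the Gabor transform), and the test signal, frequencies, and mask bounds are invented for the example.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000                                  # sample rate (Hz), chosen for the example
t = np.arange(fs) / fs                     # one second of samples
# toy signal: a 440 Hz tone to keep plus a 2 kHz tone to remove
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

# analysis: complex spectrogram (frequency bins f, frames tt)
f, tt, Z = stft(x, fs=fs, nperseg=512)

# invertibility: synthesis without any editing recovers the signal
_, x_rec = istft(Z, fs=fs, nperseg=512)

# rectangular selection mask in the time-frequency plane:
# zero every bin in a band around 2 kHz, across all frames
Z_masked = Z.copy()
band = (f > 1800) & (f < 2200)
Z_masked[band, :] = 0

# synthesis of the edited spectrogram: the 2 kHz tone is suppressed
_, x_edit = istft(Z_masked, fs=fs, nperseg=512)
```

A full Gabor-frame implementation would additionally let the window length (and hence the time-frequency resolution) be re-chosen per edit, which plain STFT code like this fixes once via `nperseg`.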