We present a framework for quantitatively representing aspects of musical sound associated with expressiveness and emotion. After a brief introduction to the background of expressive features in music, we introduce a score-to-audio mapping algorithm based on dynamic time warping, which segments the audio by aligning it with a music score. We then introduce expressive feature extraction algorithms, which extract from the segmented audio a feature set comprising pitch deviation, loudness, timbre, timing, articulation, and modulation, and use it to construct an expressive feature database. We demonstrate these tools in the context of solo Western classical music, specifically for the solo oboe. We also discuss potential applications to music performance education and music “language” processing.
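To illustrate the alignment step, the following is a minimal sketch of dynamic time warping between a score-derived feature sequence and an audio-derived one. It uses scalar features and absolute difference as the local cost purely for illustration; the feature representation, cost function, and any path constraints in the actual system may differ.

```python
def dtw_align(score_feats, audio_feats):
    """Align two feature sequences with dynamic time warping.

    Returns the optimal alignment path as (score_index, audio_index)
    pairs. Frames are compared with absolute difference here; a real
    score-to-audio system would compare chroma or spectral vectors.
    """
    n, m = len(score_feats), len(audio_feats)
    INF = float("inf")
    # Accumulated-cost matrix with a padded first row and column.
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(score_feats[i - 1] - audio_feats[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    # Backtrack from the corner to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return path[::-1]

# Toy example: the "audio" stretches two notes of the "score" in time.
score = [1.0, 2.0, 3.0, 2.0]
audio = [1.0, 2.0, 2.0, 3.0, 3.0, 2.0]
path = dtw_align(score, audio)
# Each score frame is mapped to the audio frames it spans, which is
# what segments the audio into note-level regions for feature extraction.
```

The recovered path pairs each score frame with one or more audio frames, so stretched or compressed notes in the performance are still matched to the correct score events.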