In this paper, a generic motion-based approach to semantic video analysis is presented. The examined video is initially segmented into shots, and for every resulting shot appropriate motion features are extracted at fixed time intervals. Hidden Markov Models (HMMs) are then employed to associate each shot with one of the semantic classes of interest in the given domain. Regarding the motion feature extraction procedure, higher-order statistics of the motion estimates are calculated, and a new representation for providing local-level motion information to HMMs is introduced, based on the combination of energy distribution-related information and spatial attributes of the motion signal. Experimental results, along with a comparative evaluation, are presented for the application of the proposed approach to the domain of news broadcast video.
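The shot-classification step described above can be illustrated with a minimal sketch: one HMM is trained per semantic class, and a shot is assigned to the class whose model yields the highest likelihood for the shot's observation sequence. The sketch below assumes discrete (quantized) motion-feature symbols and hand-picked toy parameters; the class names, state counts, and probability values are hypothetical, not taken from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probabilities, A: state transition matrix,
    B: emission matrix), via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def classify_shot(obs, class_models):
    """Assign the shot to the class whose HMM scores it highest."""
    return max(class_models,
               key=lambda c: forward_log_likelihood(obs, *class_models[c]))

# Hypothetical two-class example: each class has its own 2-state HMM
# over a small discrete motion-feature alphabet {0, 1, 2}.
models = {
    "anchor": (np.array([0.9, 0.1]),                       # mostly static shots
               np.array([[0.9, 0.1], [0.2, 0.8]]),
               np.array([[0.8, 0.15, 0.05], [0.1, 0.3, 0.6]])),
    "report": (np.array([0.5, 0.5]),                       # higher-motion shots
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.1, 0.3, 0.6], [0.2, 0.6, 0.2]])),
}
shot_obs = [0, 0, 1, 0, 0]  # mostly low-motion symbols
print(classify_shot(shot_obs, models))  # → anchor
```

In practice the per-class HMM parameters would be estimated from labeled training shots (e.g. via Baum-Welch) rather than set by hand, and the observations would be the motion-feature vectors described in the paper rather than a toy discrete alphabet.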