Most current foveation strategies foveate video sequences based on a direct measurement, or an implicit assumption, of the viewer's gaze direction. Such approaches often fail in unconstrained environments or when the necessary gaze-tracking equipment is absent. Alternatively, a computational model of visual attention may be used to predict visually salient locations. We describe such a neurobiological model of attention and its specific application to foveated video compression. The algorithm is demonstrated to foveate successfully onto regions of human interest in a variety of video segments, spanning synthetic as well as natural scenes, and also yields good compression ratios.
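As a rough illustration of the foveation step only (the attention model itself is not detailed here), the following sketch blends a sharp and a blurred copy of a frame so that spatial resolution falls off with distance from a predicted salient point. The Gaussian falloff, the box-blur periphery, and all parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def foveate(image, fovea, sigma=30.0, kernel=5):
    """Blend a sharp and a box-blurred copy of `image` so that
    resolution falls off with distance from `fovea` = (row, col).
    Illustrative sketch only; not the compression scheme of the paper."""
    h, w = image.shape
    # Crude peripheral (low-resolution) copy via a box blur.
    pad = kernel // 2
    padded = np.pad(image, pad, mode='edge')
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    # Gaussian falloff: weight 1 at the fovea, approaching 0 in the periphery.
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - fovea[0]) ** 2 + (xs - fovea[1]) ** 2
    weight = np.exp(-d2 / (2.0 * sigma ** 2))
    return weight * image + (1.0 - weight) * blurred
```

In a saliency-driven pipeline, `fovea` would come from the attention model's most salient location for each frame; the smoothed periphery then compresses far better than the full-resolution original under a standard codec.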