Most predictive coding schemes for videoconferencing and multimedia employ block matching motion compensation. The algorithm is easily parallelised and thus readily implemented in VLSI using SIMD structures. The image frame is divided into a fixed number of square blocks, and for each block a search is conducted within a predefined window of an adjacent frame to find the best match. The acceptance criterion is usually based on minimising the mean square error or mean absolute error between the two sets of pixels, and the relative displacement between the two blocks is taken to be the motion vector. The prediction quality depends on each block representing an area of uniform translational motion. This is rare in real image sequences, but the assumption becomes more valid as the block size is reduced. However, if the block size is decreased, the overhead of computing and transmitting displacement information increases. This problem can be alleviated by allowing the dimensions of blocks to adapt to local activity within the image: larger blocks are used in large areas of stationary background or uniform motion, and smaller blocks where the movement is localised or complex.

We describe a variable-size block matching motion estimation algorithm which is as computationally efficient as fixed-size block matching and yet provides a better quality prediction. The total number of blocks in any frame can be varied while still representing true motion fairly accurately. This allows variable bit allocation between the representation of displacement and the residual (error) data. It also permits the frame-by-frame bit rate to adapt to the available buffer capacity of a low bit-rate coder. The technique starts by matching small square blocks. Displacement information is stored as simple bit vectors for those positions which, when the blocks are matched, yield a mean absolute error below a predefined threshold.
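The first stage, generating a per-block bit vector of candidate displacements, can be sketched as follows. This is a minimal illustration, assuming 8-bit greyscale frames held as NumPy arrays; the function name, block size, search range, and threshold value are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def candidate_vectors(cur, ref, bx, by, block=4, search=2, threshold=8.0):
    """For the block-by-block square at (by, bx) in the current frame `cur`,
    test every displacement (dy, dx) in [-search, search] against the
    reference frame `ref`. Return an integer bit vector with one bit per
    displacement, set when the mean absolute error (MAE) of the match falls
    below `threshold`."""
    h, w = cur.shape
    cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
    bits = 0
    idx = 0  # bit index: row-major over the (2*search+1)^2 displacements
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            # Skip displacements that fall outside the reference frame,
            # but keep the bit index consistent across all blocks.
            if 0 <= y and 0 <= x and y + block <= h and x + block <= w:
                ref_blk = ref[y:y + block, x:x + block].astype(np.int32)
                mae = np.abs(cur_blk - ref_blk).mean()
                if mae < threshold:
                    bits |= 1 << idx
            idx += 1
    return bits
```

Because every block's bit vector indexes the same displacement grid, two blocks share a candidate motion vector exactly when the bitwise AND of their vectors is non-zero, which is what makes the later merging step cheap.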
Blocks are then merged in a quadtree manner depending on whether they have candidate motion vectors in common. The merging process is trivial, using bit-wise logical AND operations. The threshold, which is found to be proportional to the minimum mean absolute matching error of the entire frame, can be adjusted to vary the frame block count. The threshold also ensures that poorly matched blocks are treated as special cases and not merged into inappropriate areas. Good quality motion estimation is achieved regardless of the number of blocks. After merging, the quadtree structure, the displacement information and the residual error data are encoded for subsequent transmission or storage. Further compression may be possible by differentially encoding the quadtree and transmitting only changed motion vectors.

The adaptive motion compensation technique has been tested on a number of MPEG-4 test sequences. The prediction performance is significantly better than that of fixed-size block matching, and yet the computational requirement is comparable. The technique can be applied to variable-shaped video object planes as well as fixed-size rectangular image frames. Consequently it has application in object-based video coding systems.
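One level of the quadtree merging pass can be sketched as below: four sibling blocks are merged when the bitwise AND of their candidate bit vectors is non-zero. The grid layout, the tuple return format, and the function name are illustrative assumptions, not the paper's data structures, and the special-case handling of poorly matched blocks is omitted for brevity.

```python
def merge_quadtree(bitvecs):
    """One merging pass over a 2-D grid of per-block candidate bit vectors
    (plain Python ints). Each 2x2 group of siblings is merged into a single
    larger block when the bitwise AND of their candidate vectors is
    non-zero, i.e. when they share at least one admissible motion vector.
    Returns a list of (row, col, size_factor, bits) tuples, where
    size_factor 2 marks a merged block and 1 an unmerged child."""
    n = len(bitvecs)  # grid dimension, assumed even
    result = []
    for r in range(0, n, 2):
        for c in range(0, n, 2):
            common = (bitvecs[r][c] & bitvecs[r][c + 1]
                      & bitvecs[r + 1][c] & bitvecs[r + 1][c + 1])
            if common:
                # Siblings share a motion vector: emit one merged block
                # whose candidate set is the intersection.
                result.append((r, c, 2, common))
            else:
                # No common vector: keep the four children as they are.
                for dr in (0, 1):
                    for dc in (0, 1):
                        result.append((r + dr, c + dc, 1,
                                       bitvecs[r + dr][c + dc]))
    return result
```

Applying the same pass to the grid of merged blocks gives the next quadtree level, so each level costs only a handful of AND operations per group.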