Distributed Shared Memory (DSM) systems have been proposed as a way of combining the programmability of traditional shared memory with the scalability of message-passing systems. Eager DSM systems can greatly reduce access latencies for remote data by keeping copies of shared values in local memory and updating them immediately after every change. However, for fast execution, unnecessary interprocessor message traffic must be limited. It is usually possible to transform a program into an equivalent form that generates much less traffic, and therefore executes much more efficiently. This paper describes a compile-time analysis model for transforming simple shared memory programs with parallelized loop structures into programs that are optimized for efficient execution on eager DSM systems.