This thesis addresses the question of how diagrams relate to mental representations in diagrammatic reasoning, approached through eye movement research. It proposes model-based representations of attention for improved human-computer cooperation. Specifically, the thesis proposes in theory, and details in practice, a computational framework for the live capture and analysis of eye movement data in diagrammatic reasoning scenarios. The analysis includes the on-line generation of hypotheses about the spatial mental representations currently held by human reasoners. These hypotheses can be employed to guide reactive and proactive behavior in semi-automated reasoning systems in a more cognitively adequate way than was previously possible, for example in live human-computer collaboration or tutoring settings. Among other fields of application, the framework may be used to selectively influence mental visuo-spatial reasoning from the outside by administering specific patterns of sensory cues. In addition, the theoretical and practical approaches presented here may contribute significantly to the development of novel techniques and research methodologies for better understanding human visuo-spatial reasoning and problem solving.