Twenty-four pilots flew simulated missions in an unmanned air vehicle (UAV) simulator under both single- and dual-UAV control, and in three conditions: a baseline condition; a condition in which certain information was displayed auditorily, to offload the heavy visual demands of the mission; and a condition in which flight-path tracking was automated. Three tasks were performed within each UAV workstation: (1) meeting the mission goals by flying to 10 command target waypoints and reporting intelligence information at each of these command targets, (2) monitoring a 3D image display for targets of opportunity on the ground below the flight path, and (3) monitoring the health of on-board system parameters. Upon reaching a command target or spotting a target of opportunity, pilots were required to enter a loiter pattern, zoom in, and inspect the image. Pilots could also retrieve command target coordinates and report information at any time. The data were evaluated in the context of three models of concurrent task performance: strict single-channel theory, single-resource theory, and multiple-resource theory. The results indicated a cost to dual-UAV control in all three tasks, although the magnitude of this cost varied. Both the auditory and the automation assistance improved performance and reduced the dual-task decrement relative to the baseline condition; in particular, the auditory display of system-parameter failures enabled a large degree of parallel processing. Various analyses examined the extent to which models based on each of the three attention theories adequately predicted the data. Some aspects of the data were consistent with each model; thus, a valid model accounting for all aspects of the task would need to incorporate mechanisms from each. A separate section of the results applies the Army's IMPRINT model to predicting the workload imposed by the various conditions.
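The contrasting predictions of the three attention models can be illustrated with a toy numerical sketch. This is purely illustrative and not taken from the study: the demand values, the unit capacity, and the resource-overlap parameter are all hypothetical assumptions chosen to show the qualitative difference in how each model predicts dual-task interference.

```python
def interference_single_channel(demand_a, demand_b):
    """Strict single-channel theory: concurrent tasks must queue, so the
    smaller task's entire demand is lost to waiting (hypothetical metric)."""
    return min(demand_a, demand_b)

def interference_single_resource(demand_a, demand_b, capacity=1.0):
    """Single-resource theory: one undifferentiated pool; interference is
    whatever combined demand exceeds the shared capacity, regardless of
    the modalities involved."""
    return max(0.0, demand_a + demand_b - capacity)

def interference_multiple_resource(demand_a, demand_b, overlap, capacity=1.0):
    """Multiple-resource theory: interference scales with the degree of
    resource overlap between tasks (e.g., visual-visual ~ 1.0 overlap,
    visual-auditory much lower), so offloading to audition helps."""
    return overlap * max(0.0, demand_a + demand_b - capacity)

# Hypothetical demands for two concurrent tasks:
d_a, d_b = 0.6, 0.7

# Under multiple-resource theory, presenting one task auditorily
# (lower overlap) predicts less interference than a visual-visual
# pairing -- mirroring the benefit of the auditory display condition.
visual_visual = interference_multiple_resource(d_a, d_b, overlap=1.0)
visual_auditory = interference_multiple_resource(d_a, d_b, overlap=0.3)
```

Note that single-channel and single-resource theories predict the same interference whether the second task is visual or auditory; only the multiple-resource sketch is sensitive to the modality manipulation, which is why the auditory-offloading result in the data favors that mechanism.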