We present an approach to learning to recognize concurrent activities from multiple data streams. One example is the recognition of concurrent activities in hospital operating rooms based on multiple wearable and embedded sensors. This problem differs from standard time series classification in that there is no single natural target dimension, as multiple activities are performed at the same time; hence, most existing approaches fail. The key innovations that allow us to tackle this problem are (1) learning to recognize base activities from raw sensor data, (2) creating artificial joint activities from base activities using frequent pattern mining, and (3) handling temporal dependencies using virtual evidence boosting.
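To illustrate step (2), the following is a minimal sketch of frequent pattern mining over per-time-step sets of active base activities: pairs of base activities that co-occur in a sufficient fraction of frames are kept as candidate joint activities. The activity labels, function name, and support threshold are hypothetical and are not taken from the paper.

```python
from collections import Counter
from itertools import combinations

def frequent_joint_activities(frames, min_support):
    """Count co-occurring base-activity pairs across time frames and keep
    those whose support (fraction of frames containing the pair) reaches
    min_support. Each frame is the set of base activities active then."""
    counts = Counter()
    for frame in frames:
        for pair in combinations(sorted(frame), 2):
            counts[pair] += 1
    n = len(frames)
    return {pair for pair, c in counts.items() if c / n >= min_support}

# Hypothetical operating-room base activities, one set per time step.
frames = [
    {"suction", "cutting"},
    {"suction", "cutting", "coagulation"},
    {"suction", "cutting"},
    {"irrigation"},
]
# "cutting" and "suction" co-occur in 3 of 4 frames (support 0.75).
print(frequent_joint_activities(frames, min_support=0.5))
```

Each frequent pair can then be treated as one artificial joint-activity label, reducing concurrent recognition to a set of single-label problems.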