In this paper we propose a new multisensor-based activity recognition approach that combines video cameras and environmental sensors to recognize activities of daily living of elderly people at home. This approach aims to improve the accuracy and robustness of the activity recognition system. In the proposed approach, we choose to perform fusion at the high level (event level) by combining video events with environmental sensor events. To measure the accuracy of the proposed approach, we tested a set of human activities in an experimental laboratory. The experiment consists of a scenario of daily activities performed by fourteen volunteers (aged 60 to 85 years). Each volunteer was observed for 4 hours, and 14 video scenes were acquired by 4 video cameras (at about ten frames per second). The fourteen volunteers were asked to perform a set of household activities, such as preparing a meal, taking a meal, washing dishes, cleaning the kitchen, and watching TV. Each volunteer was alone in the laboratory during the experiment.
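The paragraph above does not detail how event-level fusion works. As an illustration only, a minimal sketch of combining a video event with an environmental sensor event under a temporal rule could look like the following; all event names, the rule table, and the time window are hypothetical assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str    # e.g. "person_in_kitchen" (video) or "stove_on" (sensor)
    source: str  # "video" or "sensor"
    time: float  # timestamp in seconds

def fuse_events(video_events, sensor_events, rules, window=30.0):
    """High-level (event-level) fusion sketch: a composite activity is
    recognized when the video event and the sensor event named in a rule
    occur within `window` seconds of each other."""
    activities = []
    for activity, (v_name, s_name) in rules.items():
        for v in video_events:
            if v.name != v_name:
                continue
            for s in sensor_events:
                if s.name == s_name and abs(v.time - s.time) <= window:
                    # Report the activity with its earliest supporting event time.
                    activities.append((activity, min(v.time, s.time)))
    return activities

# Hypothetical rule: "preparing a meal" requires presence in the kitchen
# (from video) plus the stove being switched on (from an environmental sensor).
rules = {"preparing a meal": ("person_in_kitchen", "stove_on")}
video = [Event("person_in_kitchen", "video", 100.0)]
sensors = [Event("stove_on", "sensor", 110.0)]

print(fuse_events(video, sensors, rules))  # [('preparing a meal', 100.0)]
```

Fusing at the event level, as in this sketch, keeps each modality's processing independent and lets a simple temporal rule decide when their outputs support the same activity.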