Alexandros Iosifidis

In this paper, a novel view-invariant action recognition method based on neural network representation and recognition is proposed. The novel representation of action videos is based on learning spatially related human body posture prototypes using self-organizing maps. Fuzzy distances from human body posture prototypes are used to produce a time-invariant …
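A minimal sketch of the fuzzy-distance step described above, assuming per-frame posture vectors are already available and the posture prototypes are given (e.g., codebook vectors of a trained self-organizing map); the membership formula follows the standard fuzzy c-means form, the temporal averaging is one possible way to obtain a duration-invariant video vector, and the fuzzification parameter `m` is an illustrative choice rather than the paper's setting.

```python
import numpy as np

def fuzzy_memberships(postures, prototypes, m=1.1):
    """Fuzzy distances of per-frame posture vectors to body posture prototypes.

    postures   : (T, D) array, one posture vector per video frame
    prototypes : (K, D) array, e.g. SOM codebook vectors
    m          : fuzzification parameter (> 1), illustrative default
    Returns a (T, K) membership matrix following the fuzzy c-means form.
    """
    # Squared Euclidean distances between every frame and every prototype
    d2 = ((postures[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    d2 = np.maximum(d2, 1e-12)                      # avoid division by zero
    u = d2 ** (-1.0 / (m - 1.0))
    return u / u.sum(axis=1, keepdims=True)         # rows sum to 1

def video_representation(postures, prototypes, m=1.1):
    """Average memberships over time -> fixed-length, duration-invariant vector."""
    return fuzzy_memberships(postures, prototypes, m).mean(axis=0)
```

A classifier (e.g., the neural network mentioned in the abstract) can then operate on these fixed-length vectors regardless of the original video length.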
In this paper, a novel view-invariant movement recognition method is presented. A multi-camera setup is used to capture the movement from different observation angles. Identification of the position of each camera with respect to the subject's body is achieved by a procedure based on morphological operations and the proportions of the human body. Binary body …
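The camera-identification step mentioned above relies on morphological operations and human body proportions; the sketch below is an assumed illustration of that kind of processing, using scipy's binary morphology on a silhouette mask and a height-to-width ratio with a purely illustrative threshold, not the paper's actual rule.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def silhouette_proportions(mask):
    """Clean a binary body mask and measure simple body proportions.

    mask : 2-D boolean array (foreground = person), e.g. from background subtraction
    Returns (height, width, height/width ratio) of the cleaned silhouette.
    """
    # Morphological opening/closing to remove speckle noise and fill small holes
    clean = binary_closing(binary_opening(mask, np.ones((3, 3))), np.ones((3, 3)))
    rows = np.flatnonzero(clean.any(axis=1))
    cols = np.flatnonzero(clean.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return 0, 0, 0.0
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return height, width, height / width

# A view could then be categorised by whether the silhouette's aspect ratio
# matches expected human-body proportions (the range below is illustrative only).
def looks_like_frontal_view(mask, ratio_range=(2.5, 4.0)):
    _, _, ratio = silhouette_proportions(mask)
    return ratio_range[0] <= ratio <= ratio_range[1]
```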
In this paper, a novel view-invariant person identification method based on human activity information is proposed. Unlike most methods proposed in the literature, in which “walk” (i.e., gait) is assumed to be the only activity exploited for person identification, we incorporate several activities in order to identify a person. A multicamera …
In this paper, we present a novel method aiming at multidimensional sequence classification. We propose a novel sequence representation, based on its fuzzy distances from optimal representative signal instances, called statemes. We also propose a novel modified clustering discriminant analysis algorithm minimizing the adopted criterion with respect to both …
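A rough sketch of the stateme-based representation, under the simplifying assumption that statemes are approximated by k-means centroids (the paper instead optimizes them jointly with a modified clustering discriminant analysis, which is not reproduced here); the fuzzy-distance mapping mirrors the membership form used in the posture-prototype sketch above.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_statemes(training_sequences, n_statemes=32, seed=0):
    """Approximate statemes by clustering all frame vectors of the training set.

    training_sequences : list of (T_i, D) arrays (multidimensional sequences)
    Returns an (n_statemes, D) array of representative signal instances.
    """
    frames = np.vstack(training_sequences)
    km = KMeans(n_clusters=n_statemes, n_init=10, random_state=seed).fit(frames)
    return km.cluster_centers_

def sequence_to_vector(sequence, statemes, m=1.1):
    """Map a variable-length sequence to a fixed-length fuzzy-distance vector."""
    d2 = ((sequence[:, None, :] - statemes[None, :, :]) ** 2).sum(axis=2)
    d2 = np.maximum(d2, 1e-12)
    u = d2 ** (-1.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)
    return u.mean(axis=0)    # one vector per sequence, independent of its length
```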
Eating and drinking activity recognition can be considered a standalone research field within the activity recognition area. The development of an application capable of identifying human eating and drinking activity can be very useful in a smart home environment aiming to extend the independent living of older persons in the early stages of dementia. In this paper, a …
In this paper, we present a view-independent action recognition method exploiting a volumetric action representation with low computational cost. Binary images depicting the human body during action execution are accumulated in order to produce the so-called action volumes. A novel time-invariant action representation is obtained by exploiting the circular shift …
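A hedged illustration of the action-volume idea: binary frames are stacked into a volume, and a circular-shift-invariant temporal descriptor is computed via the DFT magnitude, which is one standard realisation of circular-shift invariance and only an assumption about how the paper's representation might look.

```python
import numpy as np

def action_volume(binary_frames):
    """Stack per-frame binary body masks into a (T, H, W) action volume."""
    return np.stack([f.astype(np.float32) for f in binary_frames], axis=0)

def shift_invariant_descriptor(volume):
    """Circular-shift-invariant descriptor along the temporal axis.

    The DFT magnitude is invariant to circular shifts of its input, so taking it
    over the time axis removes the dependence on where the action starts within
    the clip (an assumed realisation of the 'circular shift' idea, not the
    paper's exact formulation).
    """
    spectrum = np.abs(np.fft.fft(volume, axis=0))   # |DFT| over time, per pixel
    descriptor = spectrum.reshape(spectrum.shape[0], -1).mean(axis=1)
    return descriptor / (np.linalg.norm(descriptor) + 1e-12)
```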