Ahmed Ben Said

Cluster validity indexes are important tools designed for two purposes: comparing the performance of clustering algorithms and determining the number of clusters that best fits the data. These indexes are generally constructed by combining a measure of compactness with a measure of separation. A classical measure of compactness is the variance. As for …
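The compactness/separation construction described above can be sketched as a simple ratio-style index. This is a minimal illustration, not the paper's index: the function names are hypothetical, variance is used for compactness as in the abstract, and minimum pairwise squared center distance is an assumed choice for separation.

```python
import numpy as np

def compactness(X, labels, centers):
    # Within-cluster variance: mean squared distance of points to their center.
    return np.mean([np.sum((X[labels == k] - c) ** 2) / max((labels == k).sum(), 1)
                    for k, c in enumerate(centers)])

def separation(centers):
    # Minimum squared distance between any pair of cluster centers.
    return min(np.sum((a - b) ** 2)
               for i, a in enumerate(centers) for b in centers[i + 1:])

def validity_ratio(X, labels, centers):
    # Lower is better: compact clusters that are far apart.
    return compactness(X, labels, centers) / separation(centers)
```

To pick the number of clusters, one would run the clustering algorithm for each candidate k and keep the k that minimizes the ratio.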
In this paper, we propose a new cluster validity index (CVI) based on geometrical shape. Classic CVIs are based on a combination of separation and compactness measures and may include a measure of overlap between clusters. The proposed CVI combines measures of compactness and overlap using an n-sphere shape. We conducted experiments on several real data sets …
In this paper, we present a novel denoising algorithm based on the Rudin-Osher-Fatemi (ROF) model. The goal is to ensure maximum noise removal while preserving image details. To achieve this goal, we developed a new edge detector based on the structure tensor, Non-Local Means filtering and the fuzzy complement. This edge detector is incorporated in the objective …
In this paper, we propose a denoising algorithm based on the Total Variation (TV) model. Specifically, we assign a small weight to the regularization term of the Rudin-Osher-Fatemi (ROF) functional whenever denoising is performed in edge and texture regions, which means less regularization and better detail preservation. Conversely, a large weight …
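The idea of a spatially varying regularization weight in the ROF functional can be sketched with a plain gradient-descent scheme. This is an illustrative sketch only: the per-pixel weight map is supplied by the caller (the paper's edge/texture-driven weighting is not reproduced), `weighted_tv_denoise` is a hypothetical name, and the weight is applied to the curvature term as an approximation.

```python
import numpy as np

def weighted_tv_denoise(img, weight, n_iter=100, step=0.1):
    # Gradient descent on a weighted ROF energy: small `weight` near edges
    # and texture means less smoothing; large `weight` in flat regions
    # means more smoothing. `weight` has the same shape as `img`.
    u = img.astype(float).copy()
    eps = 1e-8
    for _ in range(n_iter):
        # Forward differences (periodic boundary via np.roll).
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # Divergence of the normalized gradient (curvature term of TV).
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Weighted TV term plus data-fidelity pull toward the noisy image.
        u += step * (weight * div - (u - img))
    return u
```

A practical scheme would place the weight inside the divergence and use a dual or primal-dual solver; the explicit scheme above is only meant to show where the per-pixel weight enters.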
This paper presents a novel clustering approach based on the classic Fuzzy c-means algorithm. The approach is inspired by the concept of interaction between objects in physics. Each data point is regarded as a particle, and a specific weight is associated with each data particle depending on its interaction with the other particles. This interaction is induced by …
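A weighted variant of Fuzzy c-means along these lines can be sketched as follows. Here the per-point weights are simply supplied by the caller, whereas the paper derives them from a physics-style interaction between particles; `weighted_fcm` is a hypothetical name and the update rules are the standard FCM ones with a weight factor in the center update.

```python
import numpy as np

def weighted_fcm(X, k, w, m=2.0, n_iter=100, seed=0):
    # Fuzzy c-means with a per-point weight vector w.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = (U ** m) * w[:, None]               # weighted, fuzzified memberships
        centers = Um.T @ X / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Points with larger weights pull the cluster centers more strongly toward themselves, which is the mechanism the interaction-based weighting exploits.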
Nowadays, many applications rely on high-quality images to perform their tasks well. However, noise works against this objective, as it is an unavoidable issue in most applications. Therefore, it is essential to develop techniques that attenuate the impact of noise while maintaining the integrity of the relevant information in images. …
The emergence of mobile health (mHealth) systems has raised challenges and concerns due to the sensitivity of the data involved in such systems. It is essential to ensure that these data are reliably delivered to the health monitoring center for accurate diagnosis and follow-up. Due to wireless network constraints, these requirements become …
In this paper, we present a joint compression and classification approach for EEG and EMG signals using deep learning. Specifically, we build our system on the deep autoencoder architecture, which is designed not only to extract discriminant features from the multimodal data representation but also to reconstruct the data from the latent …