Omar Bencharef

Most reported work in the field of character recognition achieves modest results by using a single method to compute the parameters of the character image and a single approach in the classification phase. To improve the recognition rate, this paper therefore proposes an automatic system for recognizing isolated printed …
To improve the recognition rate, this paper proposes an automatic system for recognizing isolated printed Tifinagh characters using a fusion of three classifiers and a combination of feature extraction methods. Legendre moments, Zernike moments, and Hu moments are used as descriptors in the feature extraction phase because of their invariance …
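A minimal sketch of the general idea, assuming Hu moments as the only descriptor and an off-the-shelf majority-vote fusion of three classifiers; the Legendre and Zernike descriptors, the real Tifinagh dataset, and the authors' actual classifiers are not reproduced here, and the random images and labels below are placeholders.

```python
import cv2
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def hu_features(gray):
    """7 Hu moment invariants, log-scaled for numerical stability."""
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Placeholder data: random 64x64 "character" images with 3 dummy classes.
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
labels = np.random.randint(0, 3, size=30)

X = np.vstack([hu_features(im) for im in images])

# Fusion of three classifiers by majority vote.
fusion = VotingClassifier(
    estimators=[("svm", SVC()),
                ("knn", KNeighborsClassifier(n_neighbors=3)),
                ("mlp", MLPClassifier(max_iter=500))],
    voting="hard",
)
fusion.fit(X, labels)
print(fusion.predict(X[:5]))
```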
In this paper, we propose a hybrid approach based on neural networks and a combination of the classic Hu and Zernike moments with geodesic descriptors. To retain as much of the information carried by the image's color as possible, we compute the Zernike and Hu moments for each color channel. Geodesic descriptors, on the other hand, are …
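A small sketch of the per-channel moment computation only, assuming Hu moments and an OpenCV BGR image; the Zernike moments, the geodesic descriptors, and the neural-network classifier are omitted, and the random image is a placeholder.

```python
import cv2
import numpy as np

def per_channel_hu(bgr):
    """Concatenate the 7 Hu invariants computed separately on each color channel."""
    feats = []
    for channel in cv2.split(bgr):          # B, G, R planes
        hu = cv2.HuMoments(cv2.moments(channel)).flatten()
        feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-12))
    return np.concatenate(feats)            # 3 x 7 = 21-dimensional descriptor

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # placeholder image
print(per_channel_hu(image).shape)  # (21,)
```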
The Tifinagh-IRCAM alphabet is the official alphabet of the Amazigh language, widely used in North Africa [1]. It includes thirty-one basic letters and two letters each composed of a base letter followed by the labialization sign. Normalized only in 2003 (Unicode) [2], IRCAM-Tifinagh is a young character repertoire that still needs work at all levels. In …
Recently, shape-based matching and retrieval of 3D polygonal models has become one of the fundamental problems in computer vision. Dealing with families of objects rather than a single one imposes further challenges on standard geometric algorithms. In this paper we focus on the classification of 3D objects based on their geodesic distances and paths …
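A sketch of one common way to approximate geodesic distances on a polygonal model, assuming shortest paths along the mesh edge graph (Dijkstra) rather than exact surface geodesics; the tiny tetrahedron and the pairwise-distance descriptor are illustrative placeholders, not the paper's method.

```python
import numpy as np
import networkx as nx

# Placeholder mesh: a tetrahedron given as vertices and triangular faces.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Build the edge graph, weighting each edge by its Euclidean length.
graph = nx.Graph()
for a, b, c in faces:
    for u, v in ((a, b), (b, c), (a, c)):
        graph.add_edge(u, v, weight=np.linalg.norm(vertices[u] - vertices[v]))

# Approximate geodesic distance between two vertices (shortest edge path).
print(nx.dijkstra_path_length(graph, 0, 3, weight="weight"))

# A simple global descriptor: the sorted distribution of pairwise geodesic distances.
all_pairs = dict(nx.all_pairs_dijkstra_path_length(graph, weight="weight"))
descriptor = sorted(d for row in all_pairs.values() for d in row.values() if d > 0)
```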
Abstract— Developing a search engine for similar images remains a scientific challenge. Since every image is represented by one or more vectors, the search stage becomes very costly. To address these difficulties, we propose a search engine based on multiple representations of images, where every image is represented by three vectors …
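A sketch of a search engine that indexes each image under three separate vectors and fuses the rankings at query time; the three feature extractors here (color histogram, Hu moments, a flattened thumbnail) and the rank-sum fusion are stand-in assumptions, not the descriptors used in the paper.

```python
import cv2
import numpy as np
from sklearn.neighbors import NearestNeighbors

def color_hist(img):
    return cv2.calcHist([img], [0, 1, 2], None, [4, 4, 4], [0, 256] * 3).flatten()

def hu(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.HuMoments(cv2.moments(gray)).flatten()

def thumb(img):
    return cv2.resize(img, (8, 8)).flatten().astype(float)

# Placeholder collection of random color images.
images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(50)]
extractors = [color_hist, hu, thumb]
indexes = [NearestNeighbors(n_neighbors=10).fit(np.array([f(im) for im in images]))
           for f in extractors]

def search(query, k=5):
    """Fuse the three rankings by summing per-index ranks (lower is better)."""
    scores = np.zeros(len(images))
    for f, index in zip(extractors, indexes):
        _, neighbors = index.kneighbors([f(query)])
        ranks = np.full(len(images), float(len(images)))  # penalty if not retrieved
        ranks[neighbors[0]] = np.arange(len(neighbors[0]))
        scores += ranks
    return np.argsort(scores)[:k]

print(search(images[0]))
```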
In this paper we propose a face recognition system. It does not directly reproduce human vision on a machine; rather, it seeks algorithms that achieve similar results by identifying a person from a 2D image of their face. The descriptors used for feature extraction combine two algorithms: Principal Component Analysis (PCA) and a double Linear …
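A simplified sketch of a PCA-then-LDA face identification pipeline, assuming a single LDA stage and a nearest-neighbor matcher; the "double" LDA variant mentioned above is not reproduced, and the faces and identities below are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))           # 100 flattened 32x32 face images
identities = np.repeat(np.arange(10), 10)    # 10 subjects, 10 images each

model = make_pipeline(
    PCA(n_components=40),                        # eigenface-style projection
    LinearDiscriminantAnalysis(n_components=9),  # class-discriminative projection
    KNeighborsClassifier(n_neighbors=1),         # nearest-neighbor identification
)
model.fit(faces, identities)
print(model.predict(faces[:5]))
```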
To perform semantic search on a large image dataset, we need to transform the visual content of images (colors, textures, shapes) into semantic information. This transformation, called image annotation, assigns a caption or keywords to the visual content of a digital image. In this paper we try to partially resolve the region homogeneity …
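A toy sketch of a region-homogeneity step, assuming k-means clustering of pixel colors to form homogeneous regions that could then be mapped to keywords; the cluster count, the keyword table, and the random image are illustrative assumptions, not the paper's annotation method.

```python
import numpy as np
from sklearn.cluster import KMeans

image = np.random.randint(0, 256, (64, 64, 3)).astype(float)  # placeholder image
pixels = image.reshape(-1, 3)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
regions = kmeans.labels_.reshape(64, 64)   # per-pixel region map

# Toy keyword assignment: one label per region (would come from a trained model).
keywords = {0: "sky", 1: "vegetation", 2: "sand", 3: "water"}
annotation = {keywords[r] for r in np.unique(regions)}
print(annotation)
```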