This study proposes an Intelligent Tutor System for assessing slide presentations by novice undergraduate students. To develop such a system, two learner models (a rule-based model and a clustering model) were built using 80 presentations graded by three human experts. An experiment to determine the best learner model and to gauge students' perceptions was carried out using 51 presentations uploaded by students. The findings show that the clustering model classified presentations similarly to a human evaluator only when a holistic evaluation criterion was used, whereas the rule-based model was more precise when the evaluation rules were easier for a human evaluator to follow. Furthermore, students agreed on the usefulness of the system as well as on its grading model, although the latter to a lesser extent. These results encourage further exploration of this area and the adaptation of the proposed Intelligent Tutor System to other existing automated grading systems.