Marco Túlio Ribeiro

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which …
Recommender systems are quickly becoming ubiquitous in applications such as e-commerce, social media channels, and content providers, acting as an enabling mechanism designed to overcome the information overload problem by improving the browsing and consumption experience. A typical task in many recommender systems is to output a ranked list of …
Making accurate suggestions is an objective of paramount importance for effective recommender systems. Other important and increasingly evident objectives are novelty and diversity, which are achieved by recommender systems able to suggest diversified items not easily discovered by the users. Different recommendation algorithms have particular …
The authors describe a series of sections of adipose autografts in humans, focusing on histological viability and the alterations observed in post-graft follow-up. Five female patients aged 29 to 43 years underwent seven grafting sessions prior to a classic abdominoplasty. The autologous adipose tissue was grafted in the infraumbilical region. …
An adipose tissue graft's ability to obtain nutrition through plasmatic imbibition extends approximately 1.5 mm from the vascularized edge. This, and the observation that only 40% of this peripheral margin is viable, led the authors to create spherical and cylindroid models correlating the volume and percentage of graft viability with the initial injected …
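The spherical model described in this abstract can be sketched numerically. The following is a hypothetical illustration only: the 1.5 mm nourished shell and the 40% viable fraction are taken from the abstract, but the function itself is our assumption, not the authors' exact formula.

```python
# Hypothetical sketch of the spherical graft-viability model.
# Assumptions (from the abstract): only a ~1.5 mm peripheral shell
# receives nutrition by plasmatic imbibition, and only ~40% of that
# shell is viable. The formula below is illustrative.
SHELL_MM = 1.5
VIABLE_FRACTION_OF_SHELL = 0.40

def spherical_viability(radius_mm):
    """Estimated viable percentage of a spherical graft of a given radius."""
    if radius_mm <= SHELL_MM:
        # The whole graft lies within the nourished shell.
        return VIABLE_FRACTION_OF_SHELL * 100
    # Fraction of the volume occupied by the unnourished core.
    core = ((radius_mm - SHELL_MM) / radius_mm) ** 3
    return VIABLE_FRACTION_OF_SHELL * (1 - core) * 100

spherical_viability(1.0)  # → 40.0 (entirely within the shell)
spherical_viability(3.0)  # → 35.0
```

Under these assumptions the viable percentage falls quickly with radius, which matches the abstract's motivation for relating injected volume to viability.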
Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in trusting and acting upon predictions, and in building more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of …
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Implicit in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate …
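Coverage and precision can be made concrete for a single if-then rule. A minimal sketch, assuming a rule is a boolean predicate over instances and a model is a callable returning a label (the names `rule` and `model` are illustrative, not the paper's API):

```python
def coverage(rule, X):
    # Fraction of instances the rule applies to.
    return sum(rule(x) for x in X) / len(X)

def precision(rule, X, model, predicted_label):
    # Among covered instances, how often the model agrees with the
    # label the rule is meant to explain.
    covered = [x for x in X if rule(x)]
    if not covered:
        return 0.0
    return sum(model(x) == predicted_label for x in covered) / len(covered)

# Toy 1-D setup: the model predicts 1 for positive inputs.
model = lambda x: int(x > 0)
rule = lambda x: x > 0.5          # candidate explanation rule
X = [-1.0, -0.2, 0.3, 0.6, 2.0]

coverage(rule, X)                       # → 0.4 (2 of 5 instances)
precision(rule, X, model, 1)            # → 1.0
```

Effort, the third property, is harder to quantify; it concerns how much work a human must do to apply the rule, and this sketch leaves it out.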
Traditional content-based e-mail spam filtering considers the content of e-mail messages and applies machine learning techniques to infer patterns that discriminate spam from ham. The use of content-based spam filtering has unleashed an unending arms race between spammers and filter developers, given the spammers' ability to continuously …