Laura Antanas

While relational representations were popular in early work on syntactic and structural pattern recognition, they are rarely used in contemporary approaches to computer vision due to their purely symbolic nature. The recent progress and successes in combining statistical learning principles with relational representations motivate us to reinvestigate …
Real-world scenes involve many objects that interact with each other in complex semantic patterns. For example, a bar scene can be naturally described as having a variable number of chairs of similar size, close to each other and aligned horizontally. This high-level interpretation of a scene relies on semantically meaningful entities and is most generally …
Histological image analysis plays a key role in understanding the effects of disease and treatment responses at the cellular level. However, evaluating histology images by hand is time-consuming and subjective. While semi-automatic and automatic approaches for image segmentation give acceptable results in some branches of histological image analysis, until …
Understanding images in terms of logical and hierarchical structures is crucial for many semantic tasks, including image retrieval, scene understanding and robotic vision. This paper combines robust feature extraction, qualitative spatial relations, relational instance-based learning and compositional hierarchies in one framework. For each layer in the …
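To make the notion of qualitative spatial relations concrete, the following is a minimal illustrative sketch (not the paper's implementation): it derives symbolic relations such as "left of" or "overlaps" between two image regions from their axis-aligned bounding boxes. The box format and relation names are assumptions chosen for illustration.

```python
def spatial_relations(a, b):
    """Return qualitative spatial relations between boxes a and b.

    Each box is (x_min, y_min, x_max, y_max) in image coordinates,
    with y growing downward as in most image formats.
    """
    rels = []
    if a[2] < b[0]:          # a ends before b starts on the x-axis
        rels.append("left_of")
    if a[0] > b[2]:          # a starts after b ends on the x-axis
        rels.append("right_of")
    if a[3] < b[1]:          # a ends before b starts on the y-axis
        rels.append("above")
    if a[1] > b[3]:          # a starts after b ends on the y-axis
        rels.append("below")
    # Boxes overlap when neither is strictly to one side of the other.
    overlaps = not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
    if overlaps:
        rels.append("overlaps")
    return rels

# Hypothetical example: a window region to the upper-left of a door region.
window = (10, 10, 30, 40)
door = (50, 60, 70, 120)
print(spatial_relations(window, door))  # ['left_of', 'above']
```

Relations like these turn raw detections into symbolic facts that a relational learner can reason over.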
Augmenting vision systems with high-level knowledge and reasoning can improve lower-level vision processes, such as object detection, with richer and more structured information. In this paper we tackle the problem of delimiting conceptual elements of street views based on spatial relations between lower-level components, e.g. the element 'house' is …
Object grasping is a key task in robot manipulation. Performing a grasp largely depends on the object properties and grasp constraints. This paper proposes a new statistical relational learning approach to recognize graspable points in object point clouds. We characterize each point with numerical shape features and represent each cloud as a (hyper-)graph …
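As a hedged sketch of the graph-over-point-cloud idea (an illustrative construction, not the paper's exact one), a point cloud can be turned into a graph by connecting each point to its k nearest neighbours; node features would then attach to the graph's vertices. The k-NN rule and the toy cloud below are assumptions.

```python
import math

def knn_graph(points, k=2):
    """Build a k-nearest-neighbour adjacency list over 3-D points.

    Returns a dict mapping each point index to the indices of its
    k closest other points (ties broken by index order).
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    edges = {}
    for i, p in enumerate(points):
        others = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: dist(p, points[j]),
        )
        edges[i] = others[:k]
    return edges

# A tiny synthetic "cloud": three close points and one far away.
points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (1.0, 1.0, 1.0)]
graph = knn_graph(points, k=2)
print(graph[0])  # indices of the two points nearest to point 0
```

In a relational learning setting, each node would additionally carry numerical shape features (e.g. local curvature), and the learner would classify nodes as graspable or not given their neighbourhood structure.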
While grasps must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. In this paper, we consider such information for robot grasping by leveraging manifolds and symbolic object parts. Specifically, we introduce a new …