SpCoMapGAN: Spatial Concept Formation-based Semantic Mapping with Generative Adversarial Networks

Yuki Katsumata, Akira Taniguchi, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
In semantic mapping, which connects semantic information to an environment map, dealing with both local and global information about an environment is a challenging task for robots. It is also important to estimate the semantic information of unobserved areas from the partial observations already acquired in a newly visited environment. On the other hand, previous studies on spatial concept formation enabled a robot to relate multiple words to places from bottom-up observations even when the…


Map completion from partial observation using the global structure of multiple environmental maps
A novel SLAM method, map completion network-based SLAM (MCN-SLAM), is proposed based on a probabilistic generative model incorporating deep neural networks for map completion; it estimates the environment map 1.3 times more accurately than previous SLAM methods under partial observation.
Learning to Map for Active Semantic Goal Navigation
This work proposes a novel framework that actively learns to generate semantic maps outside the field of view of the agent and leverages the uncertainty over the semantic classes in the unobserved areas to decide on long-term goals.
Hierarchical Bayesian model for the transfer of knowledge on spatial concepts based on multimodal information
Experimental results demonstrated that the proposed hierarchical Bayesian model, which enables a robot to transfer knowledge of places from experienced environments to a new environment, predicts location names and positions more accurately than the conventional method owing to this knowledge transfer.
Hippocampal formation-inspired probabilistic generative model
Uncertainty-driven Planner for Exploration and Navigation
A novel planning framework is presented that first learns to generate occupancy maps beyond the field-of-view of the agent, and second leverages the model uncertainty over the generated areas to formulate path selection policies for each task of interest.


Semantic Mapping Based on Spatial Concepts for Grounding Words Related to Places in Daily Environments
A novel statistical semantic mapping method called SpCoMapping is proposed, which integrates probabilistic spatial concept acquisition based on multimodal sensor information and a Markov random field applied for learning the arbitrary shape of a place on a map.
Automatic semantic maps generation from lexical annotations
This work proposes the use of information provided by lexical annotations to generate general-purpose semantic maps from RGB-D images, and exploits the availability of deep learning models suitable for describing any input image by means of lexical labels.
Learning semantic place labels from occupancy grids using CNNs
  Robert Goeddel, E. Olson. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
An ontology of space is defined, and a convolutional neural network is proposed that allows the robot to classify LIDAR sensor data accordingly; the method performs comparably to or better than existing methods based on engineered features.
HouseExpo: A Large-scale 2D Indoor Layout Dataset for Learning-based Algorithms on Mobile Robots
HouseExpo is built, a large-scale indoor layout dataset containing 35,126 2D floor plans with 252,550 rooms in total, together with PseudoSLAM, a lightweight and efficient simulation platform that accelerates the data generation procedure and thereby speeds up training.
Semantic Scene Completion from a Single Depth Image
The semantic scene completion network (SSCNet) is introduced, an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum.
Spatial Concept Acquisition for a Mobile Robot That Integrates Self-Localization and Unsupervised Word Discovery From Spoken Sentences
The experimental results showed that SpCoA enabled the robot to acquire the names of places from speech sentences and revealed that the robot could effectively utilize the acquired spatial concepts and reduce the uncertainty in self-localization.
Semantic Labeling of Indoor Environments from 3D RGB Maps
An approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
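As a rough illustration of the adversarial process described above (a toy sketch, not the paper's implementation), the following trains a two-parameter affine generator G against a logistic discriminator D on a 1-D Gaussian; the target distribution, learning rates, and step counts are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data distribution the generator should imitate (assumed target).
TARGET_MEAN, TARGET_STD = 3.0, 1.0

def sample_real(n):
    return rng.normal(TARGET_MEAN, TARGET_STD, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = s*z + b and discriminator D(x) = sigmoid(w*x + c),
# each with two scalar parameters, trained by alternating gradient ascent.
s, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch, steps = 0.05, 64, 2000

initial_gap = abs(b - TARGET_MEAN)  # E[G(z)] = b, since E[z] = 0

for _ in range(steps):
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = s * z + b

    # Discriminator step: ascend log D(x_real) + log(1 - D(x_fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1.0 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1.0 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(G(z)) (the non-saturating generator loss).
    d_fake = sigmoid(w * x_fake + c)
    g_x = (1.0 - d_fake) * w          # d log D(x) / dx at the fake samples
    s += lr * np.mean(g_x * z)
    b += lr * np.mean(g_x)

final_gap = abs(b - TARGET_MEAN)
print(f"generator mean moved from 0.0 to {b:.2f} (target {TARGET_MEAN})")
```

The discriminator pushes its decision boundary toward the real data, and the generator follows the discriminator's gradient toward regions D scores as real, which is the minimax dynamic the abstract describes.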
Improved Techniques for Training GANs
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.