Roominoes: Generating Novel 3D Floor Plans From Existing 3D Rooms

@article{Wang2021RoominoesGN,
  title={Roominoes: Generating Novel 3D Floor Plans From Existing 3D Rooms},
  author={Kai Wang and Xianghao Xu and Leon Lei and Selena Ling and Natalie Lindsay and Angel X. Chang and Manolis Savva and Daniel Ritchie},
  journal={Computer Graphics Forum},
  year={2021},
  volume={40}
}
  • Kai Wang, Xianghao Xu, Leon Lei, Selena Ling, Natalie Lindsay, Angel X. Chang, Manolis Savva, Daniel Ritchie
  • Published 1 August 2021
  • Computer Science
  • Computer Graphics Forum
Realistic 3D indoor scene datasets have enabled significant recent progress in computer vision, scene understanding, autonomous navigation, and 3D reconstruction. But the scale, diversity, and customizability of existing datasets are limited, and it is time-consuming and expensive to scan and annotate more. Fortunately, combinatorics is on our side: there are enough individual rooms in existing 3D scene datasets, if there were but a way to recombine them into new layouts. In this paper, we… 
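A minimal sketch of the combinatorial idea described in the abstract, not the paper's actual algorithm: pick, for each room slot of a target 2D layout, an existing room from a dataset pool whose footprint matches best. All room names, dimensions, and the greedy matching cost are illustrative assumptions.

```python
# Hypothetical illustration: recombining existing rooms into a new floor plan
# by greedily matching 2D footprints. Not the Roominoes method itself.
from dataclasses import dataclass

@dataclass
class Room:
    name: str     # room identifier (hypothetical)
    width: float  # axis-aligned footprint width, metres
    depth: float  # axis-aligned footprint depth, metres

def footprint_cost(room: Room, slot: Room) -> float:
    """Mismatch score between a candidate room and a target layout slot."""
    area_diff = abs(room.width * room.depth - slot.width * slot.depth)
    aspect_diff = abs(room.width / room.depth - slot.width / slot.depth)
    return area_diff + aspect_diff

def assemble(pool: list[Room], layout: list[Room]) -> dict[str, str]:
    """Greedily assign one pooled room to each slot of the target layout."""
    assignment, used = {}, set()
    for slot in layout:
        candidates = [r for r in pool if r.name not in used]
        best = min(candidates, key=lambda r: footprint_cost(r, slot))
        assignment[slot.name] = best.name
        used.add(best.name)
    return assignment

if __name__ == "__main__":
    pool = [Room("scan_bedroom_01", 3.2, 4.0), Room("scan_kitchen_07", 2.8, 3.1),
            Room("scan_living_12", 5.1, 4.4), Room("scan_bath_03", 2.0, 2.2)]
    layout = [Room("bedroom", 3.0, 4.2), Room("kitchen", 3.0, 3.0), Room("living", 5.0, 4.5)]
    print(assemble(pool, layout))
```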
1 Citation

WallPlan: Synthesizing Floorplans by Learning to Generate Wall Graphs

Intensive experiments demonstrate that the proposed wall-oriented method, WallPlan, requires no post-processing and produces higher-quality floorplans than state-of-the-art techniques.

References

Showing 1-10 of 50 references

3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics

  • Huan Fu, Bowen Cai, H. Zhang
  • Computer Science
    2021 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2021
Two applications that are especially tailored to the strengths of the new 3D-FRONT dataset, interior scene synthesis and texture synthesis, are demonstrated.

Human-Centric Indoor Scene Synthesis Using Stochastic Grammar

We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, obtaining large-scale 2D/3D image data with perfect per-pixel ground truth.

The Replica Dataset: A Digital Replica of Indoor Spaces

Replica, a dataset of 18 highly photo-realistic 3D indoor scene reconstructions at room and building scale, is introduced to enable machine learning (ML) research that relies on visually, geometrically, and semantically realistic generative models of the world.

Data-driven interior plan generation for residential buildings

By comparing the plausibility of different floor plans, it is observed that the novel data-driven technique substantially outperforms existing methods, and in many cases the authors' floor plans are comparable to human-created ones.

SceneCAD: Predicting Object Alignments and Layouts in RGB-D Scans

A message-passing graph neural network is proposed to model the inter-relationships between objects and layout, guiding generation of a globally consistent object alignment in a scene by considering the global scene layout.

SceneGraphNet: Neural Message Passing for 3D Indoor Scene Augmentation

A neural message-passing approach to augment an input 3D indoor scene with new objects matching their surroundings, weighting messages through an attention mechanism; it significantly outperforms state-of-the-art approaches at correctly predicting objects missing from a scene.

Example-based synthesis of 3D object arrangements

This work introduces a probabilistic model for scenes based on Bayesian networks and Gaussian mixtures that can be trained from a small number of input examples, and develops a clustering algorithm that groups objects occurring in a database of scenes according to their local scene neighborhoods.

Matterport3D: Learning from RGB-D Data in Indoor Environments

Matterport3D is introduced, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes that enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.

Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks

This work introduces a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes and shows that pretraining with this new synthetic dataset can improve results beyond the current state of the art on all three computer vision tasks.

Graph2Plan: Learning Floorplan Generation from Layout Graphs

A learning framework for automated floorplan generation that combines generative modeling using deep neural networks with user-in-the-loop design, enabling human users to provide sparse design constraints, and converts a layout graph into a floorplan that fulfills both the layout and boundary constraints.