In-game Residential Home Planning via Visual Context-aware Global Relation Learning

@article{Liu2021IngameRH,
  title={In-game Residential Home Planning via Visual Context-aware Global Relation Learning},
  author={Lijuan Liu and Yin Yang and Yi Yuan and Tianjia Shao and He Wang and Kun Zhou},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.04035}
}
In this paper, we propose an effective global relation learning algorithm to recommend an appropriate location for a building unit during in-game customization of a residential home complex. Given a construction layout, we propose a visual context-aware graph generation network that learns the implicit global relations among the scene components and infers the location of a new building unit. The proposed network takes as input the scene graph and the corresponding top-view depth image. It provides…
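
The abstract sketches a pipeline that fuses a scene graph with a top-view depth image to score candidate locations for a new unit. As a rough, hypothetical illustration only (the module shapes, the fusion-by-concatenation, and the heatmap head below are assumptions, not the authors' architecture), a minimal PyTorch version might look like:

```python
import torch
import torch.nn as nn

class RelationBlock(nn.Module):
    """One round of message passing over a dense scene graph (assumed design)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (N, dim) node features; adj: (N, N) relation weights
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        m = (adj.unsqueeze(-1) * self.msg(pairs)).sum(dim=1)  # aggregate messages
        return self.upd(m, h)

class LocationPredictor(nn.Module):
    """Fuse depth-image context with graph features; output a placement heatmap."""
    def __init__(self, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # top-view depth encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU())
        self.gnn = RelationBlock(dim)
        self.head = nn.Conv2d(2 * dim, 1, 1)           # per-cell placement score

    def forward(self, depth, node_feats, adj):
        ctx = self.cnn(depth)                          # (1, dim, H/4, W/4)
        h = self.gnn(node_feats, adj)                  # (N, dim)
        g = h.mean(dim=0)[None, :, None, None].expand(-1, -1, *ctx.shape[2:])
        return self.head(torch.cat([ctx, g], dim=1))   # (1, 1, H/4, W/4) logits

# Toy usage: a 64x64 depth map and a 5-node scene graph.
net = LocationPredictor()
scores = net(torch.randn(1, 1, 64, 64), torch.randn(5, 64), torch.rand(5, 5))
```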

References

Data-driven interior plan generation for residential buildings

TLDR
By comparing the plausibility of different floor plans, it is observed that the data-driven technique substantially outperforms existing methods, and in many cases the generated floor plans are comparable to human-created ones.

Graph2Plan: Learning Floorplan Generation from Layout Graphs

TLDR
A learning framework for automated floorplan generation that combines deep generative modeling with user-in-the-loop design: human users provide sparse design constraints, and the network converts a layout graph into a floorplan that fulfills both the layout and the boundary constraints.
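
For concreteness, a layout graph of the kind such methods consume can be represented very simply; the field names below are illustrative, not Graph2Plan's actual input format:

```python
from dataclasses import dataclass, field

@dataclass
class LayoutGraph:
    """Minimal stand-in for a room layout graph (illustrative only)."""
    rooms: list[str] = field(default_factory=list)                    # room type per node
    adjacency: list[tuple[int, int]] = field(default_factory=list)    # room pairs sharing a wall

g = LayoutGraph(rooms=["living", "kitchen", "bedroom"],
                adjacency=[(0, 1), (0, 2)])   # living adjoins kitchen and bedroom
```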

House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation

TLDR
A novel graph-constrained generative adversarial network whose generator and discriminator are built upon a relational architecture, encoding the layout constraint into the graph structure of its relational networks.
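
A hedged sketch of what a "relational" layer of this kind could look like, letting rooms exchange features along graph edges, with separate weights for adjacent and non-adjacent pairs (this split is an assumption for illustration, not House-GAN's exact layer):

```python
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    """Graph-constrained feature update over room nodes (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)
        self.w_pos = nn.Linear(dim, dim)   # messages along graph edges
        self.w_neg = nn.Linear(dim, dim)   # messages between non-adjacent rooms

    def forward(self, h, adj):
        # h: (N, dim) room features; adj: (N, N) binary adjacency, zero diagonal
        comp = (1.0 - adj) - torch.eye(adj.size(0))   # non-edges, excluding self
        return torch.relu(self.w_self(h)
                          + adj @ self.w_pos(h)
                          + comp @ self.w_neg(h))
```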

GRAINS: Generative Recursive Autoencoders for INdoor Scenes

TLDR
A generative neural network that produces plausible 3D indoor scenes in large quantities and varieties, easily and highly efficiently, with applications including 3D scene modeling from 2D layouts, scene editing, and semantic scene segmentation via PointNet.
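
As a toy illustration of the recursive-autoencoder idea (the fixed two-child merge and the sizes are simplifications, not GRAINS' grouping scheme): child codes are merged bottom-up until a single root code summarizes the scene.

```python
import torch
import torch.nn as nn

class MergeEncoder(nn.Module):
    """Recursive-autoencoder step: merge two child codes into a parent code."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=-1))

enc = MergeEncoder()
# Apply bottom-up over a tiny three-object hierarchy to get one root code.
root = enc(enc(torch.randn(1, 128), torch.randn(1, 128)), torch.randn(1, 128))
```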

Deep convolutional priors for indoor scene synthesis

TLDR
This work presents a convolutional neural network based approach to indoor scene synthesis that generates scenes which are preferred over the baselines and, in some cases, are preferred as often as human-created scenes.

Stylistic scene enhancement GAN: mixed stylistic enhancement generation for 3D indoor scenes

TLDR
This approach is the first to apply a Gumbel-Softmax module in conditional Wasserstein GANs, as well as the first to explore the application of GAN-based models to the scene enhancement field.
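
PyTorch ships a gumbel_softmax utility, so the core trick, sampling a discrete choice (e.g. an object category) while keeping gradients flowing to the generator, can be shown in a few lines. The category count below is made up:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 10, requires_grad=True)        # scores over 10 hypothetical classes
onehot = F.gumbel_softmax(logits, tau=0.5, hard=True)  # one-hot on the forward pass,
# soft (straight-through) gradients on the backward pass, so a GAN generator
# containing this sampling step can still be trained end-to-end.
onehot.sum().backward()                                # gradients reach the logits
print(onehot.argmax(dim=-1))
```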

Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models

TLDR
A new, fast, and flexible pipeline for indoor scene synthesis based on deep convolutional generative models; it generates results that outperform other state-of-the-art deep generative scene models in terms of faithfulness to training data and perceived visual quality.

FiLM: Visual Reasoning with a General Conditioning Layer

TLDR
It is shown that FiLM layers are highly effective for visual reasoning, i.e., answering image-related questions that require a multi-step, high-level process, a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning.
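
FiLM itself is compact: a conditioning vector predicts a per-channel scale and shift, out = gamma * x + beta, applied feature-wise. A minimal sketch with placeholder sizes:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation of feature maps by a conditioning vector."""
    def __init__(self, cond_dim, channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):
        # x: (B, C, H, W) feature maps; cond: (B, cond_dim), e.g. a question code
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma[..., None, None] * x + beta[..., None, None]

film = FiLM(cond_dim=32, channels=16)
out = film(torch.randn(2, 16, 8, 8), torch.randn(2, 32))
```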

Efficient Graph Generation with Graph Recurrent Attention Networks

TLDR
A new family of efficient and expressive deep generative models of graphs, called Graph Recurrent Attention Networks (GRANs), which better captures the auto-regressive conditioning between the already-generated and to-be-generated parts of the graph using Graph Neural Networks (GNNs) with attention.
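
A hedged sketch of one such autoregressive step, scoring edges from the next node to the already-generated ones; the single attention pooling below is a simplification of the paper's GNN, not its exact architecture:

```python
import torch
import torch.nn as nn

class EdgeStep(nn.Module):
    """One autoregressive step: propose edges from a new node to existing nodes."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.edge = nn.Linear(2 * dim, 1)

    def forward(self, h):
        # h: (N, dim) embeddings of the already-generated nodes
        w = torch.softmax(self.attn(h), dim=0)             # attention over the graph
        new = (w * h).sum(dim=0, keepdim=True)             # init code for the new node
        pair = torch.cat([h, new.expand_as(h)], dim=-1)
        return torch.sigmoid(self.edge(pair)).squeeze(-1)  # P(edge to node i)

probs = EdgeStep(dim=64)(torch.randn(7, 64))   # Bernoulli edge probabilities
```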

AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

TLDR
An Attentional Generative Adversarial Network that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation, showing for the first time that the layered attentional GAN can automatically select word-level conditions for generating different parts of the image.
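
That word-level conditioning amounts to attention between image sub-region features and word embeddings, so different regions can attend to different words. A minimal sketch with illustrative dimensions:

```python
import torch

regions = torch.randn(17 * 17, 48)   # sub-region features of a generated image
words = torch.randn(12, 48)          # encoder features for 12 caption words

scores = regions @ words.t() / 48 ** 0.5   # region-word affinities
attn = torch.softmax(scores, dim=-1)       # per-region distribution over words
word_context = attn @ words                # (289, 48) word-aware context per region
```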