Corpus ID: 240288691

Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning

@article{Chung2021BrickbyBrickCC,
  title={Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning},
  author={H. Chung and Jungtaek Kim and Boris Knyazev and Jinhwi Lee and Graham W. Taylor and Jaesik Park and Minsu Cho},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.15481}
}
Discovering a solution in a combinatorial space is prevalent in many real-world problems, but it is also challenging due to diverse complex constraints and the vast number of possible combinations. To address such a problem, we introduce a novel formulation, combinatorial construction, which requires a building agent to assemble unit primitives (i.e., LEGO bricks) sequentially: every connection between two bricks must follow a fixed rule, while no bricks may mutually overlap. To construct a target…
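The two constraints named in the abstract can be made concrete with a minimal sketch. This is an illustration under my own assumptions, not the paper's implementation: unit bricks live on an integer grid, and a placement is valid only if it does not overlap an existing brick and connects to one directly above or below (a stand-in for the LEGO stud rule).

```python
# Illustrative sketch (assumptions mine, not the paper's code):
# sequential placement of unit bricks on an integer grid, enforcing
# (1) no overlap and (2) every new brick connects to the structure.

def is_valid(placed, pos):
    """pos = (x, y, z) cell; placed = set of occupied cells."""
    if pos in placed:                       # overlap constraint
        return False
    if not placed:                          # first brick may go anywhere
        return True
    x, y, z = pos
    # connection constraint: a brick directly below or above
    return (x, y, z - 1) in placed or (x, y, z + 1) in placed

def assemble(actions):
    """Apply a sequence of placements, keeping only valid ones."""
    placed = set()
    for pos in actions:
        if is_valid(placed, pos):
            placed.add(pos)
    return placed
```

For example, `assemble([(0, 0, 0), (0, 0, 1), (5, 5, 5), (0, 0, 2)])` builds a three-brick tower and rejects the disconnected placement at (5, 5, 5).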

Citations

Sequential Brick Assembly with Efficient Constraint Satisfaction

This work addresses the problem of generating a sequence of LEGO brick assembly steps for high-fidelity structures while satisfying physical constraints between bricks, employing a U-shaped sparse 3D convolutional network and devising a sampling strategy that determines the next brick position from the attachable positions allowed under those constraints.

Blocks Assemble! Learning to Assemble with Large-Scale Structured Reinforcement Learning

It is found that the combination of large-scale reinforcement learning and graph-based policies – surprisingly without any additional complexity – is an effective recipe for training agents that not only generalize to complex unseen blueprints in a zero-shot manner, but even operate in a reset-free setting without being trained to do so.

Learning to Assemble Geometric Shapes

This work introduces the more challenging problem of shape assembly, which involves textureless fragments of arbitrary shapes with indistinctive junctions, proposes a learning-based approach to solving it, and demonstrates its effectiveness on shape assembly tasks in various scenarios.

IKEA-Manual: Seeing Shape Assembly Step by Step

Human-designed visual manuals are crucial components in shape assembly activities. They provide step-by-step guidance on how we should move and connect different parts in a convenient and…

References

Showing 1-10 of 46 references

Learning 3D Part Assembly from a Single Image

This work introduces a novel problem, single-image-guided 3D part assembly, along with a learning-based solution: a two-module pipeline that leverages strong 2D-3D correspondences and assembly-oriented graph message passing to infer part relationships.

Structured agents for physical construction

A suite of challenging physical construction tasks inspired by how children play with blocks is introduced, such as matching a target configuration, stacking blocks to connect objects together, and creating shelter-like structures over target objects.

Visual Reinforcement Learning with Imagined Goals

An algorithm is proposed that acquires general-purpose skills by combining unsupervised representation learning with reinforcement learning of goal-conditioned policies. It is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and it substantially outperforms prior techniques.

Relational inductive biases, deep learning, and graph networks

It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.

Relational inductive bias for physical construction in humans and machines

This work introduces a deep reinforcement learning agent that uses object- and relation-centric scene and policy representations. These structured representations allow the agent to outperform both humans and more naive approaches, suggesting that relational inductive bias is an important component in solving structured reasoning problems and in building more intelligent, flexible machines.

Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning

This work proposes the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions.
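The core mechanism this summary describes can be sketched in a few lines. This is a hedged illustration, not the AE-DQN implementation: I assume the elimination network's output is already available as a set of flagged action indices, and the agent simply masks those actions out before acting greedily on its Q-values.

```python
# Hedged sketch of the action-elimination idea (not the paper's code):
# actions flagged by an auxiliary elimination network are excluded
# before the agent takes the greedy (highest-Q) action.

import math

def greedy_action(q_values, eliminated):
    """q_values: per-action value estimates; eliminated: set of
    action indices flagged as invalid/sub-optimal (assumed given)."""
    best, best_q = None, -math.inf
    for a, q in enumerate(q_values):
        if a in eliminated:
            continue                     # eliminated action is never chosen
        if q > best_q:
            best, best_q = a, q
    return best
```

With `q_values = [1.0, 5.0, 3.0]` and action 1 eliminated, the agent picks action 2 rather than the globally highest-valued but flagged action.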

Synthesizing Programs for Images using Reinforced Adversarial Learning

SPIRAL is an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. A surprising finding is that using the discriminator's output as a reward signal is the key to allowing the agent to make meaningful progress at matching the desired output rendering.

Reinforcement Learning for Molecular Design Guided by Quantum Mechanics

MolGym is introduced, an RL environment comprising several challenging molecular design tasks along with baselines, and it is shown that the agent can efficiently learn to solve these tasks from scratch by working in a translation and rotation invariant state-action space.

Combinatorial 3D Shape Generation via Sequential Assembly

This work proposes a new 3D shape generation algorithm that aims to create a combinatorial configuration from a set of volumetric primitives, and adopts sequential model-based optimization to tackle the exponential growth of feasible combinations in terms of the number of primitives.
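The sequential strategy described here, committing to one primitive at a time rather than searching all combinations at once, can be sketched abstractly. This is my own simplified illustration, not the paper's optimizer: `candidates_fn` and `score_fn` are hypothetical placeholders for the feasible-placement generator and the learned evaluation function.

```python
# Minimal sketch of greedy sequential assembly (assumptions mine):
# at each step, enumerate feasible placements for the current partial
# structure, score them, and commit to the best one. This sidesteps
# the exponential blow-up of scoring full configurations.

def sequential_assembly(candidates_fn, score_fn, steps):
    """candidates_fn(structure) -> feasible next placements;
    score_fn(placement) -> evaluation score (higher is better)."""
    structure = []
    for _ in range(steps):
        feasible = candidates_fn(structure)
        if not feasible:                 # no valid placement remains
            break
        structure.append(max(feasible, key=score_fn))
    return structure
```

In the actual method, the greedy choice is replaced by sequential model-based optimization over the feasible set, but the step-by-step construction loop has this shape.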

SPARE3D: A Dataset for SPAtial REasoning on Three-View Line Drawings

The SPARE3D dataset contains three types of 2D-3D reasoning tasks on view consistency, camera pose, and shape generation, with increasing difficulty. It is shown that although convolutional networks have achieved superhuman performance in many visual learning tasks, their spatial reasoning performance on SPARE3D is close to random guessing.