Dataset#
Furniture assembly is a complex, long-horizon manipulation task that is very challenging to solve with reinforcement learning. To make our benchmark tractable, we provide 5,100 successful demonstrations (219.6 hours in total) collected using an Oculus Quest 2 controller and a keyboard.
Each furniture assembly task comes with three levels of randomness in task initialization: low, medium, and high.
Download Dataset#
The FurnitureBench dataset is available on our Google Drive:
low               # Low randomness
  |- cabinet      # Demonstration files for cabinet
    |- 0.pkl
    |- 1.pkl
    |- ...
  |- chair        # Demonstration files for chair
  |- ...
med               # Medium randomness
  |- cabinet
  |- ...
high              # High randomness
  |- cabinet
  |- ...
For easy downloading, we also provide compressed archives (*.tar.gz) for each furniture model and randomness level under the low_compressed, med_compressed, and high_compressed directories. You can download our data with the commands below.
Download with gdown#
pip install gdown
python furniture_bench/scripts/download_dataset.py --untar --randomness [low|med|high] --furniture <furniture> --out_dir <path/to/data>
# E.g., download lamp data with low randomness
python furniture_bench/scripts/download_dataset.py --untar --randomness low --furniture lamp --out_dir ./furniture_dataset
# E.g., download all furniture data with medium randomness
python furniture_bench/scripts/download_dataset.py --untar --randomness med --furniture all --out_dir ./furniture_dataset
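If you download a compressed archive manually (for example, through the Google Drive web interface) rather than through the script, you can unpack it with Python's tarfile module. The snippet below is only a sketch, not part of the benchmark scripts; the archive name lamp.tar.gz and the output directory are placeholders for whatever you actually downloaded.
import tarfile
from pathlib import Path
# Placeholder paths: point these at the archive you downloaded and your data directory.
archive = Path("furniture_dataset/lamp.tar.gz")
out_dir = Path("furniture_dataset/low")
out_dir.mkdir(parents=True, exist_ok=True)
# Extract the demonstration .pkl files from the compressed archive.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)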
Note: If you encounter download errors such as Too many users have viewed or downloaded..., please refer to the following Download with rclone section.
Download with rclone#
Sometimes gdown might be slow or reject your download request due to access quota limitations. In this case, you can utilize rclone to download the dataset.
How to use rclone#
Dataset Size#
The size (in GB) of the raw .pkl demonstration files for each furniture model at each randomness level is summarized below:
| Furniture | low | med | high |
|---|---|---|---|
| lamp | 26 | 27 | 11 |
| square_table | 76 | 75 | 25 |
| desk | 46 | 57 | 25 |
| drawer | 43 | 39 | 11 |
| cabinet | 38 | 36 | 17 |
| round_table | 25 | 26 | 15 |
| stool | 37 | 42 | 19 |
| chair | 54 | 68 | 31 |
| one_leg | 112 | 129 | 69 |
| Total | 457 | 499 | 223 |
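To sanity-check a download against the table above, you can sum the sizes of the .pkl files on disk. The following is a small standalone sketch (not part of furniture_bench); the data directory path is a placeholder.
from pathlib import Path
data_dir = Path("furniture_dataset/low")  # Placeholder: one randomness level of the extracted dataset
sizes_gb = {}
for furniture_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    # Sum the sizes of the raw .pkl demonstration files for this furniture model.
    total_bytes = sum(f.stat().st_size for f in furniture_dir.rglob("*.pkl"))
    sizes_gb[furniture_dir.name] = total_bytes / 1e9
for name, size_gb in sizes_gb.items():
    print(f"{name}: {size_gb:.1f} GB")
print(f"total: {sum(sizes_gb.values()):.1f} GB")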
Demonstration File Format#
Each demonstration is stored in a .pkl file, containing a sequence of sensory inputs, actions, rewards, and other metadata:
'furniture': Furniture name, e.g., 'lamp'
'observations': List of observation dicts
  {
    'color_image1': Wrist camera image (224, 224, 3)
    'color_image2': Front camera image (224, 224, 3)
    'robot_state': {
      'ee_pos': EEF position (3,)
      'ee_quat': EEF orientation (4,)
      'ee_pos_vel': EEF linear velocity (3,)
      'ee_ori_vel': EEF angular velocity (3,)
      'joint_positions': Joint positions (7,)
      'joint_velocities': Joint velocities (7,)
      'joint_torques': Joint torques (7,)
      'gripper_width': Gripper width (1,)
    }
  }
'actions': List of 8-D actions
'rewards': List of rewards (1 if a furniture part is assembled; otherwise, 0)
'skills': List of skill completion flags (1 if a skill is completed; otherwise, 0)
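For example, a single demonstration can be inspected with Python's pickle module. The sketch below assumes each file holds a plain Python dict with the keys listed above and that the images are stored as NumPy arrays with the shapes listed above; the file path is a placeholder.
import pickle
# Placeholder path: any demonstration file from the extracted dataset.
with open("furniture_dataset/low/lamp/0.pkl", "rb") as f:
    demo = pickle.load(f)
print(demo["furniture"])                    # e.g., 'lamp'
print(len(demo["observations"]), len(demo["actions"]), len(demo["rewards"]))
obs = demo["observations"][0]
print(obs["color_image1"].shape)            # wrist camera image, (224, 224, 3)
print(obs["robot_state"]["ee_pos"])         # 3-D end-effector position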
