How to Use FurnitureBench#
FurnitureBench Environments#
Environment List#
The following environments are available in FurnitureBench (a quick inspection sketch follows the list):
- FurnitureBench-v0: mainly used for data collection; provides all available observations, including robot states, high-resolution RGB images, and depth inputs from the wrist, front, and rear cameras.
- FurnitureBenchImage-v0: used for pixel-based RL and IL; provides 224x224 wrist and front RGB images and robot states as observations.
- FurnitureBenchImageFeature-v0: provides pre-trained image features (R3M or VIP) instead of visual observations.
- FurnitureDummy-v0: dummy environment for pixel-based policies.
- FurnitureImageFeatureDummy-v0: dummy environment for policies with pre-trained visual encoders.
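To quickly check which observations and actions a given variant exposes, you can instantiate it and print its spaces. The snippet below is a minimal sketch (see also the full configuration example in the next section); it assumes the image-based variants accept the same furniture keyword as FurnitureBench-v0.

import furniture_bench  # registers the FurnitureBench environments with gym
import gym

env = gym.make("FurnitureBenchImage-v0", furniture="one_leg")  # furniture kwarg assumed, as in FurnitureBench-v0
print(env.observation_space)  # which observations this variant provides
print(env.action_space)       # 8D end-effector action space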
FurnitureBench Configuration#
FurnitureBench can be configured with the following arguments:
import furniture_bench
import gym
env = gym.make(
    "FurnitureBench-v0",
    furniture=...,  # Specifies the name of furniture [lamp | square_table | desk | drawer | cabinet | round_table | stool | chair | one_leg].
    resize_img=True,  # If true, images are resized to 224 x 224.
    manual_done=False,  # If true, the episode ends only when the user presses the 'done' button.
    with_display=True,  # If true, camera inputs are rendered on environment steps.
    draw_marker=False,  # If true and with_display is also true, the AprilTag marker is rendered on display.
    manual_label=False,  # If true, manual labeling of the reward is allowed.
    from_skill=0,  # Skill index to start from (range: [0-5)). Index `i` denotes the completion of the ith skill and commencement of the (i + 1)th skill.
    to_skill=-1,  # Skill index to end at (range: [1-5]). Should be larger than `from_skill`. Default -1 expects the full task from `from_skill` onwards.
    randomness="low",  # Level of randomness in the environment [low | med | high].
    high_random_idx=-1,  # Index of the high randomness level (range: [0-2]). Default -1 will randomly select the index within the range.
    visualize_init_pose=True,  # If true, the initial pose of furniture parts is visualized.
    record=False,  # If true, the video of the agent's observation is recorded.
    manual_reset=True,  # If true, a manual reset of the environment is allowed.
)
FurnitureBench env.step#
The input and output of env.step are as follows:
"""
# Input
action: np.ndarray (shape: (8,)) # 3D EE delta position, 4D EE delta rotation (quaternion), and 1D gripper action. Each value is in the range [-1, 1].
# Output
obs: Observation in dictionary format.
reward: float # Reward of the action.
done: boolean # True if the episode is terminated.
info: Dictionary of additional information.
"""
env = gym.make(
    "FurnitureBench-v0",
    furniture='one_leg',
)
ac = env.action_space.sample() # np.ndarray shape (8,)
ob, rew, done, _ = env.step(ac)
print(ob.keys()) # ['color_image1', 'color_image2', 'color_image3', 'depth_image1', 'depth_image2', 'depth_image3', 'robot_state', 'parts_poses']
print(ob['robot_state'].keys()) # ['ee_pos', 'ee_quat', 'ee_pos_vel', 'ee_ori_vel', 'gripper_width', 'joint_positions', 'joint_velocities', 'joint_torques']
print(ob['color_image1'].shape) # Wrist camera of shape (224, 224, 3)
print(ob['depth_image1'].shape) # Wrist depth image of shape (224, 224)
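As a concrete illustration of the action format above, the sketch below sends a "do nothing" action for a few steps: zero positional delta, an identity rotation delta, and the gripper held open. The quaternion order (x, y, z, w) and the gripper sign convention (-1 for open) are assumptions here; check the environment implementation if your setup behaves differently.

import numpy as np

# 3D EE delta position + 4D EE delta rotation (quaternion) + 1D gripper.
# Assumed conventions: quaternion in (x, y, z, w) order, gripper -1 = open.
noop = np.array([0.0, 0.0, 0.0,        # no positional change
                 0.0, 0.0, 0.0, 1.0,   # identity rotation
                 -1.0], dtype=np.float32)

ob = env.reset()
for _ in range(10):
    ob, rew, done, info = env.step(noop)  # the robot should stay roughly in place
    if done:
        break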
FurnitureBench Arguments#
- furniture can be one of [lamp | square_table | desk | drawer | cabinet | round_table | stool | chair | one_leg].
- randomness controls the level of randomness in the initial furniture and robot configurations:
  - For med and high, the end-effector pose is perturbed from the pre-defined target pose with noise (±5 cm positional, ±15° rotational).
  - For low in the full assembly task, the end-effector pose is fixed to the pre-defined target pose.
  - For low in the skill benchmark, noise is applied to the pre-defined target pose (±0.5 cm positional, ±5° rotational).
- from_skill and to_skill control the skill range of the environment (see the usage sketch after this list). During initialization, you should match the initial pose of the furniture with the pre-defined pose using the GUI tool (see Start Teleoperation, list item 3). The script will then move the end-effector to the pre-defined pose (plus noise depending on the randomness level) for each skill. The figures below show the initialization process of the script when from_skill is set to 1 through 4, from left to right.
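As a usage sketch, the configuration below restricts the environment to a single skill segment of one_leg with medium randomness (so the initial end-effector pose is perturbed by ±5 cm / ±15°). The specific indices are only an example; valid values depend on how many skills the chosen furniture defines.

import furniture_bench  # registers the environments
import gym

# Start after skill 2 is completed and end once skill 3 is done.
env = gym.make(
    "FurnitureBench-v0",
    furniture="one_leg",
    from_skill=2,
    to_skill=3,
    randomness="med",
)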
Utilities#
The following sections explain the utilities of FurnitureBench.
Visualize Camera Inputs#
This script allows you to visualize AprilTag detection and the camera inputs from three different views (front, wrist, and rear):
python furniture_bench/scripts/run_cam_april.py
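run_cam_april.py is the supported tool for this. If you only want a quick look at the raw RGB streams without AprilTag overlays, a rough alternative is to display the environment's own image observations with OpenCV, as sketched below; the key-to-camera mapping and the RGB channel order are assumptions based on the env.step example above.

import cv2
import gym
import furniture_bench  # registers the environments

env = gym.make("FurnitureBench-v0", furniture="one_leg")
ob = env.reset()
for key in ["color_image1", "color_image2", "color_image3"]:  # assumed: wrist, front, rear
    cv2.imshow(key, cv2.cvtColor(ob[key], cv2.COLOR_RGB2BGR))  # assumes RGB observations
cv2.waitKey(0)
cv2.destroyAllWindows()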
Visualize Robot Trajectory#
This script shows a robot trajectory saved in a .pkl file.
The wrist and front camera views are shown in the left and right panels, respectively.
If you want to try it out with pre-recorded trajectories, you can download the .pkl files from Download Dataset.
We run the following command with a cabinet trajectory:
python furniture_bench/scripts/show_trajectory.py --data-path 00149.pkl
Similarly, you can visualize a trajectory you have collected yourself by setting the --data-path or --data-dir argument.
--data-path specifies a single pickle file, while --data-dir specifies a demonstration directory (i.e., a directory containing the .pkl and .mp4 files of one trajectory):
python -m furniture_bench.scripts.show_trajectory --data-path <path/to/pkl/file>
python -m furniture_bench.scripts.show_trajectory --data-dir <path/to/data>
# E.g., show a sequence of three camera inputs with metadata
python -m furniture_bench.scripts.show_trajectory --data-dir /hdd/demo_path/one_leg/2022-12-22-03:19:48
python furniture_bench/scripts/show_trajectory.py --data-dir <your_data_dir>
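If you would rather inspect a trajectory programmatically than through the viewer, you can open the pickle file directly. The snippet below is a sketch; the field names ('actions', etc.) are assumptions about the file layout, so print the top-level keys first to see what your files actually contain.

import pickle

with open("00149.pkl", "rb") as f:  # path from the cabinet example above
    traj = pickle.load(f)

print(traj.keys() if isinstance(traj, dict) else type(traj))  # inspect the actual layout
if isinstance(traj, dict) and "actions" in traj:  # 'actions' is a hypothetical key
    print("episode length:", len(traj["actions"]))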
Camera Calibration#
Our demonstrations contain a randomly perturbed front camera pose in each episode. To determine the camera pose from the front-view image, we calculate the average camera pose for each furniture type.
Run the following command to calibrate the front camera pose for a given furniture type:
python furniture_bench/scripts/calibration.py --target <furniture>
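Conceptually, the "average camera pose" is computed by averaging the front-camera poses estimated from the AprilTag across episodes. The function below is a rough sketch of one way to do this (mean position and a sign-aligned chordal mean of quaternions); it is not the implementation used by calibration.py.

import numpy as np

def average_pose(positions, quaternions):
    """Average camera poses.

    positions: (N, 3) array of camera positions.
    quaternions: (N, 4) array of unit quaternions (consistent ordering assumed).
    """
    mean_pos = np.mean(positions, axis=0)
    quats = np.asarray(quaternions, dtype=np.float64)
    # Flip signs so q and -q (the same rotation) do not cancel each other out.
    signs = np.sign(quats @ quats[0])
    signs[signs == 0] = 1.0
    mean_quat = (quats * signs[:, None]).sum(axis=0)
    mean_quat /= np.linalg.norm(mean_quat)
    return mean_pos, mean_quat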