---
pipeline_tag: image-to-3d
library_name: pytorch
license: apache-2.0
---

FLARE: Feed-forward Geometry, Appearance and Camera Estimation from Uncalibrated Sparse Views


This repository contains the FLARE model, as presented in FLARE: Feed-forward Geometry, Appearance and Camera Estimation from Uncalibrated Sparse Views. FLARE is a feed-forward model that estimates high-quality camera poses, 3D geometry, and appearance from sparse, uncalibrated views (as few as 2-8 images).

Project Page: https://zhanghe3z.github.io/FLARE/

Run a Demo (Point Cloud and Camera Pose Estimation)

To run a demo, follow these steps:

  1. Install Dependencies: Ensure you have PyTorch and other necessary libraries installed as detailed in the installation instructions.
  2. Download Checkpoint: Download the checkpoint from Hugging Face and place it at ./checkpoints/geometry_pose.pth (a scripted download sketch follows the command below).
  3. Run the Script: Execute the following command, replacing "Your/Data/Path" and "Your/Checkpoint/Path" with the appropriate paths:
```bash
torchrun --nproc_per_node=1 run_pose_pointcloud.py \
    --test_dataset "1 @ CustomDataset(split='train', ROOT='Your/Data/Path', resolution=(512,384), seed=1, num_views=8, gt_num_image=0, aug_portrait_or_landscape=False, sequential_input=False)" \
    --model "AsymmetricMASt3R(pos_embed='RoPE100', patch_embed_cls='ManyAR_PatchEmbed', img_size=(512, 512), head_type='catmlp+dpt', output_mode='pts3d+desc24', depth_mode=('exp', -inf, inf), conf_mode=('exp', 1, inf), enc_embed_dim=1024, enc_depth=24, enc_num_heads=16, dec_embed_dim=768, dec_depth=12, dec_num_heads=12, two_confs=True, desc_conf_mode=('exp', 0, inf))" \
    --pretrained "Your/Checkpoint/Path" \
    --test_criterion "MeshOutput(sam=False)" --output_dir "log/" --amp 1 --seed 1 --num_workers 0
```
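As an optional convenience for step 2, here is a minimal sketch of fetching the checkpoint with huggingface_hub. The repo_id and filename below are assumptions, not confirmed by this card, so check the repository's file listing for the actual names:

```python
# Minimal sketch (repo_id and filename are assumptions -- verify on Hugging Face):
# download the FLARE checkpoint and place it where the demo command expects it.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="zhang3z/FLARE",         # assumed repository id
    filename="geometry_pose.pth",    # assumed checkpoint filename
    local_dir="checkpoints",         # lands at ./checkpoints/geometry_pose.pth
)
print("Checkpoint saved to:", ckpt_path)
```

The resulting path (./checkpoints/geometry_pose.pth) can then be passed to the command above via --pretrained.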

Visualization

After running the demo, you can visualize the results using the following command:

```bash
sh ./visualizer/vis.sh
```

This runs the visualization script. Refer to the GitHub README for more details on visualization options.
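If you prefer to inspect the exported geometry directly in Python, here is a minimal sketch that assumes the demo writes .ply files under log/; the exact output filenames are not specified in this card:

```python
# Minimal sketch (assumption: the demo exports .ply geometry under log/).
import glob
import trimesh

ply_files = sorted(glob.glob("log/**/*.ply", recursive=True))
if ply_files:
    geom = trimesh.load(ply_files[0])  # point cloud or mesh, depending on the output
    print(ply_files[0], "->", len(geom.vertices), "vertices")
    geom.show()                        # interactive viewer (requires a display)
else:
    print("No .ply outputs found under log/ -- check the demo's --output_dir")
```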

Citation

```bibtex
@misc{zhang2025flarefeedforwardgeometryappearance,
      title={FLARE: Feed-forward Geometry, Appearance and Camera Estimation from Uncalibrated Sparse Views},
      author={Shangzhan Zhang and Jianyuan Wang and Yinghao Xu and Nan Xue and Christian Rupprecht and Xiaowei Zhou and Yujun Shen and Gordon Wetzstein},
      year={2025},
      eprint={2502.12138},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.12138},
}
```