BlenderGym

Benchmarking Foundational Model Systems for Graphics Editing

Stanford University
CVPR 2025 Highlight🌟

TLDR: BlenderGym benchmarks your VLM system's ability to edit 3D graphics.

Abstract

3D graphics editing is a crucial component in applications like movie production and game design, yet it remains a time-consuming process that demands highly specialized domain expertise. Automating the process is challenging because graphics editing spans a variety of tasks, each demanding a distinct skill set. Recently, vision-language models (VLMs) have emerged as a powerful framework for automating the editing process, but their development and evaluation are bottlenecked by the lack of a comprehensive benchmark that requires human-level perception and presents real-world editing complexity.

In this work, we present BlenderGym, a comprehensive VLM system benchmark for 3D graphics editing. BlenderGym evaluates VLM systems through code-based 3D reconstruction tasks. We evaluate closed- and open-source VLM systems and observe that even state-of-the-art VLM systems struggle with tasks that are relatively easy for human Blender users.

Enabled by BlenderGym, we study how inference scaling on verification impacts VLMs' performance on graphics editing tasks. Notably, our findings reveal that the verifier used to guide the scaling of generation can itself be improved through inference scaling, complementing recent insights on inference scaling of LLM generation in coding and math tasks. We further show that inference compute is not uniformly effective and can be optimized by strategically distributing it between generation and verification.

What is BlenderGym?

BlenderGym consists of 245 hand-crafted Blender scenes across 5 key graphics editing tasks: procedural geometry editing, lighting adjustments, procedural material design, blend shape manipulation, and object placement.

1. Procedural Geometry

Handling variations in geometric attributes such as shape, scale, and spatial distribution.

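A single edit in this category might tweak an object's scale or a Geometry Nodes input through Blender's Python API. A minimal sketch, with hypothetical object and socket names:

    import bpy

    obj = bpy.data.objects["Tree"]      # hypothetical object name
    obj.scale = (1.0, 1.0, 1.4)         # stretch the shape vertically

    # Procedural scenes typically expose tunable inputs on a Geometry Nodes
    # modifier, addressed by socket identifier.
    mod = obj.modifiers["GeometryNodes"]
    mod["Input_2"] = 12                 # e.g., branch count (hypothetical socket)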

2. Lighting Adjustments

Manipulating the color, intensity, location, and orientation of the light sources.

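In Blender's Python API, a lighting edit touches the light object's transform and its light data block. A minimal sketch using Blender's default light name:

    import bpy

    light_obj = bpy.data.objects["Light"]        # default light name in a new scene
    light_obj.location = (4.0, -2.0, 6.0)        # move the light source
    light_obj.rotation_euler = (0.6, 0.0, 1.2)   # re-orient it

    light = light_obj.data                       # the underlying bpy.types.Light
    light.color = (1.0, 0.8, 0.6)                # warm tint
    light.energy = 800.0                         # intensity in watts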

3. Procedural Material

Editing color, texture, displacement, and patterns of a surface material.

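Procedural materials in Blender are shader node trees, so a material edit amounts to changing node inputs. A minimal sketch ("Principled BSDF" is Blender's default shader node; the material and noise-node names are hypothetical):

    import bpy

    mat = bpy.data.materials["Wood"]    # hypothetical material name
    nodes = mat.node_tree.nodes

    # Color: edit the Principled BSDF base color (RGBA).
    nodes["Principled BSDF"].inputs["Base Color"].default_value = (0.45, 0.30, 0.15, 1.0)

    # Pattern: procedural textures expose inputs such as the noise scale.
    nodes["Noise Texture"].inputs["Scale"].default_value = 8.0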

4. Blend Shape Manipulation

Adjusting blend shapes: continuous variables that control features of an object, such as facial expressions.

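Blender exposes blend shapes as "shape keys", so an edit sets the value of one or more keys on a mesh. A minimal sketch with hypothetical object and key names:

    import bpy

    obj = bpy.data.objects["Face"]           # hypothetical object name
    keys = obj.data.shape_keys.key_blocks    # the mesh's shape keys

    keys["Smile"].value = 0.8                # continuous control, typically in [0, 1]
    keys["BrowRaise"].value = 0.3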

5. Object Placement

Perceiving and adjusting the locations of objects in the scene.

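A placement edit is a transform change on the relevant objects; a minimal sketch with hypothetical names and values:

    import bpy

    chair = bpy.data.objects["Chair"]   # hypothetical object name
    chair.location = (1.5, -0.75, 0.0)  # move it to where the goal scene has it
    chair.rotation_euler[2] = 1.5708    # rotate ~90 degrees about the Z axis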

Each instance in BlenderGym presents a reconstruction task from a start scene to a goal scene. Each start-goal instance includes:

  • A base Blender file of the scene setup
  • A pair of Python scripts that generate the start and goal scenes
  • Rendered images for both scenes
  • Language description of the differences between the two scenes
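One way to picture an instance programmatically is as a small record; the field names below are illustrative, not BlenderGym's official schema:

    from dataclasses import dataclass

    @dataclass
    class BlenderGymInstance:
        """One start-goal reconstruction task (illustrative field names)."""
        blend_file: str            # base Blender file of the scene setup
        start_script: str          # Python script that generates the start scene
        goal_script: str           # Python script that generates the goal scene
        start_renders: list[str]   # rendered images of the start scene
        goal_renders: list[str]    # rendered images of the goal scene
        description: str           # language description of the differences

The VLM system's job is to edit the start script until its renders match the goal.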

What can we do with BlenderGym?

Compare the performance of VLM systems for 3D graphics editing. Check out the performance of VLMs in our Leaderboard!

Inference Scaling for Verification. We explore inference scaling of the VLM verifier that guides generation by selecting desirable edits and pruning suboptimal ones.

Figure: Performance of inference scaling for VLM verifiers, with InternVL2-8B, Claude 3.5 Sonnet, and GPT-4o. With more verification queries, the output gets closer to the goal.

We show in our paper that VLM verifiers used for guiding generation also benefit from inference scaling, and that scaled open-source VLM verifiers can exceed the performance of closed-source VLM verifiers, as the figure above shows.
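As a rough sketch of such a generate-and-verify loop (not BlenderGym's exact pipeline), verifier-side scaling means spending more queries per comparison, for example via a majority vote:

    import random

    def propose_edit(start_script: str) -> str:
        # Stand-in for a VLM generator call that produces a candidate edit.
        return start_script + f"\n# candidate edit {random.randint(0, 9999)}"

    def verifier_prefers(candidate: str, incumbent: str) -> bool:
        # Stand-in for one VLM verifier query comparing two candidates
        # against the goal renders; here it just flips a coin.
        return random.random() < 0.5

    def select_edit(start_script: str, n_candidates: int = 8, n_queries: int = 5) -> str:
        # Keep a running winner; each pairwise comparison is decided by a
        # majority vote over n_queries verifier calls (verifier-side scaling).
        candidates = [propose_edit(start_script) for _ in range(n_candidates)]
        best = candidates[0]
        for cand in candidates[1:]:
            votes = sum(verifier_prefers(cand, best) for _ in range(n_queries))
            if votes * 2 > n_queries:
                best = cand
        return best

Raising n_queries scales verification compute, while raising n_candidates scales generation compute.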

How should we allocate total compute between generation and verification? The figure below shows that there exists a Goldilocks ratio of allocation.

Figure: The impact of compute allocation on VLM system performance. We set VeriRatio (verification compute over total compute) to 0.33, 0.62, and 0.73.
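Concretely, a fixed query budget can be split according to VeriRatio; a tiny illustrative helper:

    def split_budget(total_queries: int, veri_ratio: float) -> tuple[int, int]:
        # VeriRatio = verification compute / total compute.
        n_verify = round(total_queries * veri_ratio)
        return total_queries - n_verify, n_verify

    # A 100-query budget at the three ratios from the figure:
    for r in (0.33, 0.62, 0.73):
        n_gen, n_ver = split_budget(100, r)
        print(f"VeriRatio={r}: {n_gen} generation / {n_ver} verification queries")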

Leaderboard

Metrics: PL = Photometric Loss (↓ lower is better); N-CLIP = 1 − CLIP similarity (↓); CD = Chamfer Distance (↓). Cells list PL / N-CLIP / CD where Chamfer Distance applies, and PL / N-CLIP otherwise; "--" marks entries with no reported result.

| Model | Date | Blend Shape (PL / N-CLIP / CD) | Placement (PL / N-CLIP / CD) | Geometry (PL / N-CLIP / CD) | Lighting (PL / N-CLIP) | Material (PL / N-CLIP) |
|---|---|---|---|---|---|---|
| GPT-4o | 2024-08-06 | 9.140 / 20.47 / 0.904 | 11.89 / 30.38 / 11.22 | 6.747 / 8.561 / 1.192 | 2.410 / 2.398 | 3.653 / 8.942 |
| Claude-3.5-Sonnet | 2024-10-22 | 12.79 / 27.96 / 1.962 | 13.19 / 51.76 / 11.29 | 10.81 / 13.04 / 1.452 | 2.897 / 4.049 | 5.769 / 11.44 |
| GPT-4-Turbo | 2024-04-09 | 15.21 / 26.15 / 1.927 | 12.21 / 37.57 / 12.80 | 8.160 / 10.92 / 1.120 | 2.723 / 3.912 | 5.424 / 8.812 |
| Claude-3-Haiku | 2024-03-07 | 13.62 / 29.72 / 2.563 | 14.78 / 44.10 / 12.13 | 10.15 / 12.51 / 1.362 | 3.712 / 4.824 | 5.960 / 11.61 |
| Gemini-1.5-flash | 2024-09 | 23.18 / 30.47 / 2.412 | 10.94 / 45.34 / 8.324 | 9.443 / 10.49 / 1.323 | 3.514 / 5.688 | 6.364 / 10.42 |
| Qwen2-VL-7b-Instruct | 2024-09-25 | 16.78 / 29.22 / 2.123 | 15.31 / 41.12 / 14.21 | -- / -- / -- | 2.985 / 2.225 | -- / -- |
| Qwen-Llama | N/A | 14.32 / 28.23 / 2.012 | 14.65 / 34.93 / 12.41 | 13.97 / 14.13 / 1.673 | 3.173 / 3.998 | -- / -- |
| Phi-3.5-vision | 2024-08-20 | 12.51 / 24.14 / 2.012 | -- / -- / -- | -- / -- / -- | 3.127 / 6.012 | -- / -- |
| Phi-Llama | N/A | 12.13 / 24.77 / 1.826 | 14.61 / 35.61 / 12.61 | 9.818 / 11.92 / 1.471 | 3.621 / 6.895 | -- / -- |
| MiniCPM-V-2.6 | 2024-08-06 | 13.86 / 29.92 / 1.997 | 11.99 / 31.69 / 12.62 | 7.127 / 8.542 / 1.229 | 3.829 / 6.124 | -- / -- |
| MiniCPM-Llama | N/A | 13.76 / 27.21 / 1.882 | 12.74 / 31.72 / 15.81 | 9.561 / 11.47 / 1.569 | 3.725 / 6.090 | 7.152 / 12.14 |
| InternVL2-8b | 2024-07-04 | 12.69 / 29.09 / 1.920 | 14.71 / 35.92 / 17.22 | -- / -- / -- | 3.920 / 6.825 | -- / -- |
| Intern-Llama | N/A | 11.80 / 23.83 / 1.861 | 16.15 / 37.23 / 18.22 | 13.70 / 14.44 / 1.578 | 3.825 / 6.152 | -- / -- |
| Human | 2025-01-15 | 0.934 / 9.12 / 0.399 | 0.423 / 13.34 / 1.532 | 1.269 / 2.434 / 0.334 | 1.239 / 1.632 | 0.629 / 3.043 |

BibTeX


    @misc{gu2025blendergymbenchmarkingfoundationalmodel,
      title={BlenderGym: Benchmarking Foundational Model Systems for Graphics Editing}, 
      author={Yunqi Gu and Ian Huang and Jihyeon Je and Guandao Yang and Leonidas Guibas},
      year={2025},
      eprint={2504.01786},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2504.01786}, 
    }