ReCo

🖥️ GitHub    |    🌐 Project Page    |   🤗 ReCo-Data   |    📈 ReCo-Bench   |    🤗 ReCo-Models(TBD)    |    📖 Paper   

ReCo: Region-Constraint In-Context Generation for Instructional Video Editing

🔆 If you find ReCo useful, please give this repo a ⭐; it means a lot to open-source projects. Thanks!

We will gradually release the following resources:

  • ReCo training dataset: ReCo-Data
  • Evaluation code: ReCo-Bench
  • Model weights, inference code, and training code

Video Demos

video_edit_demo.mp4

Examples of different video editing tasks performed by ReCo.

🔥 Updates

  • [2025.12.22] Uploaded our arXiv paper.
  • [2025.12.23] Released ReCo-Data and usage code.
  • [2025.12.23] Released ReCo-Bench and evaluation code.
  • ⬜ Release model weights and inference code in 2–3 weeks.
  • ⬜ Release training code.

📊 ReCo-Data Preparation

ReCo-Data is a large-scale, high-quality video editing dataset consisting of 500K+ instruction–video pairs, covering four video editing tasks: object addition (add), object removal (remove), object replacement (replace), and video stylization (style).

Downloading ReCo-Data

Please download each task of ReCo-Data into the ./ReCo-Data directory by running:

bash ./tools/download_dataset.sh

Before downloading the full dataset, you may first browse the visualization examples.

These examples are generated by randomly sampling 50 instances from each task (add, remove, replace, and style), without any manual curation or cherry-picking, and are intended to help users quickly inspect and assess the overall data quality.

Note: The examples are formatted for visualization convenience and do not strictly follow the dataset format.

Directory Structure

After downloading, please ensure that the dataset follows the directory structure below:

ReCo-Data directory structure
ReCo-Data/
├── add/
│   ├── add_data_configs.json
│   ├── src_videos/
│   │   ├── video1.mp4
│   │   ├── video2.mp4
│   │   └── ...
│   └── tar_videos/
│       ├── video1.mp4
│       ├── video2.mp4
│       └── ...
├── remove/
│   ├── remove_data_configs.json
│   ├── src_videos/
│   └── tar_videos/
├── replace/
│   ├── replace_data_configs.json
│   ├── src_videos/
│   └── tar_videos/
└── style/
    ├── style_data_configs.json
    ├── src_videos/
    │   ├── video1.mp4
    │   └── ...
    └── tar_videos/
        ├── video1-a_Van_Gogh_style.mp4
        └── ...
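Given this layout, a loader only needs the task's config JSON to pair each source video with its edited target. The sketch below shows one way to do this; note that the field names inside `*_data_configs.json` (`src_video`, `tar_video`, `instruction`) are assumptions for illustration — check the actual config file and adjust them accordingly.

```python
import json
import os

def load_task_pairs(data_root, task):
    """Pair source/target videos for one ReCo-Data task.

    NOTE: the keys "src_video", "tar_video", and "instruction" are
    assumed field names, not confirmed schema; inspect the real
    *_data_configs.json and rename as needed.
    """
    config_path = os.path.join(data_root, task, f"{task}_data_configs.json")
    with open(config_path) as f:
        entries = json.load(f)

    pairs = []
    for entry in entries:
        src = os.path.join(data_root, task, "src_videos", entry["src_video"])
        tar = os.path.join(data_root, task, "tar_videos", entry["tar_video"])
        pairs.append((src, tar, entry["instruction"]))
    return pairs
```

Each returned tuple is one (source video, target video, instruction) training triplet.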

Testing and Visualization

After downloading the dataset, you can directly test and visualize samples from any single task using the following script (taking the replace task as an example):

python reco_data_test_single.py \
  --json_path ./ReCo-Data/replace/replace_data_configs.json \
  --video_folder ./ReCo-Data \
  --debug

Mixed Task Loading

You can also load a mixed dataset composed of the four tasks (add, remove, replace, and style) with arbitrary ratios by running:

python reco_data_test_mix_data.py \
  --json_folder ./ReCo-Data \
  --video_folder ./ReCo-Data \
  --debug
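The idea behind mixed-task loading can be sketched as weighted sampling over the per-task sample lists. This is only an illustrative sketch of the mechanism; the actual ratio handling inside `reco_data_test_mix_data.py` may differ.

```python
import random

def mix_tasks(task_samples, ratios, num_samples, seed=0):
    """Draw a mixed sample list from per-task sample lists.

    task_samples: dict mapping task name -> list of samples
    ratios:       dict mapping task name -> sampling weight
    Illustrative only; not the repo's actual implementation.
    """
    rng = random.Random(seed)
    tasks = list(task_samples)
    weights = [ratios[t] for t in tasks]
    mixed = []
    for _ in range(num_samples):
        # Pick a task according to the given ratios, then a sample from it.
        task = rng.choices(tasks, weights=weights, k=1)[0]
        mixed.append((task, rng.choice(task_samples[task])))
    return mixed
```

With ratios like `{"add": 3, "remove": 1}`, roughly three quarters of the drawn samples come from the add task.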

Notes

  • src_videos/ contains the original source videos.
  • tar_videos/ contains the edited target videos corresponding to each instruction.
  • *_data_configs.json stores the instruction–video mappings and metadata for each task.

📈 Evaluation

VLLM-based Evaluation Benchmark

ReCo-Bench details

Traditional video generation metrics often struggle to accurately assess the fidelity and quality of video editing results. Inspired by recent image editing evaluation protocols, we propose a VLLM-based evaluation benchmark to comprehensively and effectively evaluate video editing quality.

We collect 480 video–instruction pairs as the evaluation set, evenly distributed across four tasks: object addition, object removal, object replacement, and video stylization (120 pairs per task). All source videos are collected from the Pexels video platform.

For local editing tasks (add, remove, and replace), we utilize Gemini-2.5-Flash-Thinking to automatically generate diverse editing instructions conditioned on video content. For video stylization, we randomly select 10 source videos and apply 12 distinct styles to each, resulting in 120 stylization evaluation pairs.
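As a quick sanity check, the benchmark composition described above is internally consistent:

```python
# Composition of ReCo-Bench as described above.
tasks = ["add", "remove", "replace", "style"]
pairs_per_task = 120
total_pairs = pairs_per_task * len(tasks)  # 480 video-instruction pairs
style_pairs = 10 * 12                      # 10 source videos x 12 styles
assert total_pairs == 480 and style_pairs == pairs_per_task
```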


Downloading ReCo-Bench

Please download ReCo-Bench into the ./ReCo-Bench directory by running:

bash ./tools/download_ReCo-Bench.sh

Usage

After downloading the benchmark, you can directly start the evaluation using:

bash run_eval_via_gemini.sh

This script performs the evaluation in two stages:

Step 1: Per-dimension Evaluation with Gemini

In the first stage, Gemini-2.5-Flash-Thinking is used as a VLLM evaluator to score each edited video across multiple evaluation dimensions.

Key arguments used in this step include:

  • --edited_video_folder: Path to the folder containing the edited (target) videos generated by the model.

  • --src_video_folder: Path to the folder containing the original source videos.

  • --base_txt_folder: Path to the folder containing task-specific instruction configuration files.

  • --task_name: Name of the evaluation task, one of {add, remove, replace, style}.

This step outputs per-video, per-dimension evaluation results in JSON format.

Step 2: Final Score Aggregation

After all four tasks have been fully evaluated, the second stage aggregates the evaluation results and computes the final scores.

  • --json_folder: Path to the JSON output folder generated in Step 1 (default: all_results/gemini_results).

  • --base_txt_folder: Path to the instruction configuration folder.

This step produces the final benchmark scores for each task as well as the overall performance.
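The aggregation step above can be sketched as averaging per-dimension scores into per-task means and an overall mean. The file-naming convention (`<task>_*.json`) and the per-video `{dimension: score}` schema are assumptions for illustration; the actual Step-1 output format may differ.

```python
import json
import os
from collections import defaultdict

def aggregate_scores(json_folder):
    """Average per-dimension scores into per-task and overall results.

    ASSUMPTION: each Step-1 output file is named <task>_*.json and maps
    video ids to {dimension: score} dicts; adapt to the real schema.
    """
    task_scores = defaultdict(list)
    for name in os.listdir(json_folder):
        if not name.endswith(".json"):
            continue
        task = name.split("_")[0]
        with open(os.path.join(json_folder, name)) as f:
            results = json.load(f)
        for dims in results.values():
            # Mean over evaluation dimensions for one edited video.
            task_scores[task].append(sum(dims.values()) / len(dims))

    per_task = {t: sum(s) / len(s) for t, s in task_scores.items()}
    overall = sum(per_task.values()) / len(per_task)
    return per_task, overall
```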

🏃🏼 Inference

Stay tuned: we expect to open-source the model weights and inference code within 2–3 weeks.

🚀 Training

Will be released soon.

🌟 Star and Citation

If you find our work helpful for your research, please consider giving this repository a ⭐ and citing our work.

@article{reco,
	title={{Region-Constraint In-Context Generation for Instructional Video Editing}},
	author={Zhongwei Zhang and Fuchen Long and Wei Li and Zhaofan Qiu and Wu Liu and Ting Yao and Tao Mei},
	journal={arXiv preprint arXiv:2512.17650},
	year={2025}
}

💖 Acknowledgement

Our code is inspired by several works, including WAN, ObjectClear (a strong object remover), VACE, and Flux-Kontext-dev. Thanks to all the contributors!
