🖥️ GitHub | 🌐 Project Page | 🤗 ReCo-Data | 📈 ReCo-Bench | 🤗 ReCo-Models(TBD) | 📖 Paper
ReCo: Region-Constraint In-Context Generation for Instructional Video Editing
🔆 If you find ReCo useful, please give this repo a ⭐; stars are important to open-source projects. Thanks!
Here, we will gradually release the following resources:
- ReCo training dataset: ReCo-Data
- Evaluation code: ReCo-Bench
- Model weights, inference code, and training code
Demo video (`video_edit_demo.mp4`): examples of different video editing tasks by our ReCo.
- ✅ [2025.12.22] Upload our arXiv paper.
- ✅ [2025.12.23] Release ReCo-Data and usage code.
- ✅ [2025.12.23] Release ReCo-Bench and evaluation code.
- ⬜ Release model weights and inference code in 2–3 weeks.
- ⬜ Release training code.
ReCo-Data is a large-scale, high-quality video editing dataset consisting of 500K+ instruction–video pairs, covering four video editing tasks: object addition (add), object removal (remove), object replacement (replace), and video stylization (style).
Please download each task of ReCo-Data into the ./ReCo-Data directory by running:
```bash
bash ./tools/download_dataset.sh
```

Before downloading the full dataset, you may first browse the visualization examples.
These examples are generated by randomly sampling 50 instances from each task (add, remove, replace, and style), without any manual curation or cherry-picking, and are intended to help users quickly inspect and assess the overall data quality.
Note: The examples are formatted for visualization convenience and do not strictly follow the dataset format.
After downloading, please ensure that the dataset follows the directory structure below:
ReCo-Data directory structure
```
ReCo-Data/
├── add/
│   ├── add_data_configs.json
│   ├── src_videos/
│   │   ├── video1.mp4
│   │   ├── video2.mp4
│   │   └── ...
│   └── tar_videos/
│       ├── video1.mp4
│       ├── video2.mp4
│       └── ...
├── remove/
│   ├── remove_data_configs.json
│   ├── src_videos/
│   └── tar_videos/
├── replace/
│   ├── replace_data_configs.json
│   ├── src_videos/
│   └── tar_videos/
└── style/
    ├── style_data_configs.json
    ├── src_videos/
    │   ├── video1.mp4
    │   └── ...
    └── tar_videos/
        ├── video1-a_Van_Gogh_style.mp4
        └── ...
```
After downloading the dataset, you can directly test and visualize samples from any single task using the following script (taking the replace task as an example):
```bash
python reco_data_test_single.py \
    --json_path ./ReCo-Data/replace/replace_data_configs.json \
    --video_folder ./ReCo-Data \
    --debug
```

You can also load a mixed dataset composed of the four tasks (add, remove, replace, and style) with arbitrary ratios by running:
```bash
python reco_data_test_mix_data.py \
    --json_folder ./ReCo-Data \
    --video_folder ./ReCo-Data \
    --debug
```

- `src_videos/` contains the original source videos.
- `tar_videos/` contains the edited target videos corresponding to each instruction.
- `*_data_configs.json` stores the instruction–video mappings and metadata for each task.
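If you prefer to inspect the data without the provided scripts, a minimal sketch for pairing source and target videos from one task's config file is shown below. The field names (`src_video`, `tar_video`, `instruction`) are illustrative assumptions; please check the downloaded `*_data_configs.json` for the actual schema.

```python
import json
from pathlib import Path

# Minimal sketch for inspecting one task of ReCo-Data.
# NOTE: the field names below (src_video, tar_video, instruction) are
# illustrative assumptions; refer to the downloaded *_data_configs.json
# for the actual schema.
data_root = Path("./ReCo-Data/replace")
with open(data_root / "replace_data_configs.json", "r", encoding="utf-8") as f:
    configs = json.load(f)

for item in configs[:5]:  # look at the first few instruction-video pairs
    src_path = data_root / "src_videos" / item["src_video"]
    tar_path = data_root / "tar_videos" / item["tar_video"]
    print(f"instruction: {item['instruction']}")
    print(f"  source: {src_path}  exists={src_path.exists()}")
    print(f"  target: {tar_path}  exists={tar_path.exists()}")
```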
ReCo-Bench details
Traditional video generation metrics often struggle to accurately assess the fidelity and quality of video editing results. Inspired by recent image editing evaluation protocols, we propose a VLLM-based evaluation benchmark to comprehensively and effectively evaluate video editing quality.
We collect 480 video–instruction pairs as the evaluation set, evenly distributed across four tasks: object addition, object removal, object replacement, and video stylization (120 pairs per task). All source videos are collected from the Pexels video platform.
For local editing tasks (add, remove, and replace), we utilize Gemini-2.5-Flash-Thinking to automatically generate diverse editing instructions conditioned on video content. For video stylization, we randomly select 10 source videos and apply 12 distinct styles to each, resulting in 120 stylization evaluation pairs.
Please download ReCo-Bench into the ./ReCo-Bench directory by running:
```bash
bash ./tools/download_ReCo-Bench.sh
```

After downloading the benchmark, you can directly start the evaluation using:
```bash
bash run_eval_via_gemini.sh
```

This script performs the evaluation in two stages.
In the first stage, Gemini-2.5-Flash-Thinking is used as a VLLM evaluator to score each edited video across multiple evaluation dimensions.
Key arguments used in this step include:
- `--edited_video_folder`: Path to the folder containing the edited (target) videos generated by the model.
- `--src_video_folder`: Path to the folder containing the original source videos.
- `--base_txt_folder`: Path to the folder containing task-specific instruction configuration files.
- `--task_name`: Name of the evaluation task, one of `{add, remove, replace, style}`.
This step outputs per-video, per-dimension evaluation results in JSON format.
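For reference, the sketch below shows how such a VLLM-based scoring call could look with the `google-genai` Python SDK. The model name, prompt wording, and score dimensions here are assumptions for illustration only; the actual prompts and parsing live in the released evaluation script.

```python
import time
from google import genai

# Sketch of scoring ONE edited video with a Gemini model.
# ASSUMPTIONS: model name, prompt wording, and score dimensions are
# illustrative; the released evaluation code defines the real ones.
client = genai.Client(api_key="YOUR_API_KEY")

video = client.files.upload(file="edited_videos/video1.mp4")
while video.state.name == "PROCESSING":   # wait until the uploaded file is ready
    time.sleep(5)
    video = client.files.get(name=video.name)

prompt = (
    "The editing instruction was: 'replace the dog with a cat'. "
    "Rate the edited video from 1 to 10 on instruction following, "
    "background preservation, and temporal consistency. "
    "Answer as JSON with keys: instruction_following, "
    "background_preservation, temporal_consistency."
)
response = client.models.generate_content(
    model="gemini-2.5-flash",             # stand-in for the evaluator model
    contents=[video, prompt],
)
print(response.text)                       # parse this JSON downstream
```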
After all four tasks have been fully evaluated, the second stage aggregates the evaluation results and computes the final scores.
- `--json_folder`: Path to the JSON output folder generated in Step 1 (default: `all_results/gemini_results`).
- `--base_txt_folder`: Path to the instruction configuration folder.
This step produces the final benchmark scores for each task as well as the overall performance.
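Conceptually, this aggregation averages the per-dimension scores over all videos of a task and then averages the task scores into an overall result. The sketch below illustrates that computation on hypothetical Step-1 JSON files; the file layout and key names are assumptions, not the released format.

```python
import json
from pathlib import Path
from collections import defaultdict

# Aggregation sketch: average per-dimension scores per task,
# then average task scores into an overall score.
# ASSUMPTION: each Step-1 JSON file holds a dict of {dimension: score}
# for one edited video; the released code defines the real layout.
json_root = Path("all_results/gemini_results")
task_scores = {}

for task in ["add", "remove", "replace", "style"]:
    dim_totals, count = defaultdict(float), 0
    for json_file in (json_root / task).glob("*.json"):
        with open(json_file, "r", encoding="utf-8") as f:
            scores = json.load(f)
        for dim, value in scores.items():
            dim_totals[dim] += float(value)
        count += 1
    if count:
        per_dim = {dim: total / count for dim, total in dim_totals.items()}
        task_scores[task] = sum(per_dim.values()) / len(per_dim)
        print(task, per_dim)

overall = sum(task_scores.values()) / len(task_scores)
print("overall:", round(overall, 3))
```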
Stay tuned! We expect to open-source the model weights and inference code within 2–3 weeks.
Will be released soon.
If you find our work helpful for your research, please consider giving a star ⭐ on this repository and citing our work.
```bibtex
@article{reco,
  title={{ReCo: Region-Constraint In-Context Generation for Instructional Video Editing}},
  author={Zhongwei Zhang and Fuchen Long and Wei Li and Zhaofan Qiu and Wu Liu and Ting Yao and Tao Mei},
  journal={arXiv preprint arXiv:2512.17650},
  year={2025}
}
```
Our code is inspired by several works, including WAN, ObjectClear (a strong object remover), VACE, and Flux-Kontext-dev. Thanks to all the contributors!

