This is an unofficial implementation of Event3DGS: Event-based 3D Gaussian Splatting for High-Speed Robot Egomotion (CoRL 2024). This work was completed as a course project in "CMSC 848B - Selected Topics in Information Processing: Computational Imaging" at the University of Maryland.
We built this repo on top of nerfstudio and EventNeRF; the idea is to train 3D Gaussian Splatting on event camera data within the nerfstudio pipeline.
ESplat follows the integration guidelines described here for custom methods within Nerfstudio; update to `nerfstudio==1.0.3` first.
Please give this repo a star before you clone it!
1. Clone this repo:
   ```shell
   git clone https://github.com/jayhsu0627/Event3DGS
   ```
2. Create the conda environment:
   ```shell
   conda env create -f environment.yml
   ```
3. Navigate to this folder and run `python -m pip install -e .`
4. Reinstall gsplat to avoid this issue. Please check your installed gsplat version with `conda list` before you do this step:
   ```shell
   pip install git+https://github.com/nerfstudio-project/[email protected]
   ```
Run `ns-train -h`: you should see a list of subcommands, with `esplatfacto`, `esplatfacto-big`, and `esplatfacto-lite` included among them.

Now that ESplat is installed, you can play with it!
- Prepare datasets: If you are interested in the full datasets from EventNeRF, please follow here to download the desired archive `data.tar.gz`. We also prepared a sample dataset extracted from EventNeRF.
  ```
  data
  │   ex2_events_viz.ipynb
  │
  ├───drums
  │   │   events.npz
  │   └───pose
  ├───lego
  │   │   test_lego1_color_init159_1e-6eps.npz
  │   └───pose
  └───sewing
      │   b10_cal1_45rpm_gfox_eonly-2022_05_12_01_17_04_shift_ts1.npz
      └───pose
  ```
- We also provide a Jupyter notebook, `ex2_events_viz.ipynb`, for visualizing the event camera data.
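The core of such a visualization can be sketched as follows. This is a minimal example in the spirit of the notebook, not its actual code; the event field names (`x`, `y`, `p`) and the sensor resolution are assumptions, so check the real archive's keys with `np.load(...).files` first.

```python
# Sketch: render events as a red/blue polarity image (positive = red,
# negative = blue) on a white background. Synthetic events stand in for
# the contents of e.g. data/drums/events.npz.
import numpy as np

def events_to_frame(x, y, p, height, width):
    """Accumulate events into an RGB frame colored by polarity."""
    frame = np.full((height, width, 3), 255, dtype=np.uint8)  # white background
    frame[y[p > 0], x[p > 0]] = (255, 0, 0)    # positive polarity -> red
    frame[y[p <= 0], x[p <= 0]] = (0, 0, 255)  # negative polarity -> blue
    return frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a real event stream (resolution is an assumption).
    x = rng.integers(0, 346, 5000)
    y = rng.integers(0, 260, 5000)
    p = rng.choice([-1, 1], 5000)
    frame = events_to_frame(x, y, p, 260, 346)
    print(frame.shape)  # (260, 346, 3)
```

Displaying `frame` with `matplotlib.pyplot.imshow` gives the familiar red/blue event picture.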
- Preprocess: We designed an `EventImageDatamanager` to handle the preprocessing of event data. The idea is to sample the event stream and perform debayering to reconstruct images in nerfstudio format. After running the command below, you'll find a new output folder created within the same scene folder; it contains our nerfstudio-format data, which we will use for training in the next stage.

  ```shell
  python dataloader.py -p data/ -s lego
  ```
  Frame 0 | Frame 1
  ---|---
  ![]() | ![]()
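The event-integration step behind this reconstruction can be sketched as below. It is an illustrative model only, not the `EventImageDatamanager` code: events between two timestamps are summed per pixel and exponentiated under the standard log-intensity event model I = I0 · exp(C · E). The contrast threshold `C` and the uniform grey initial intensity `I0` are assumed values, and the real pipeline additionally debayers the result.

```python
# Sketch: integrate signed event counts per pixel into a relative intensity
# image, assuming the log-intensity event model I = I0 * exp(C * E).
import numpy as np

def integrate_events(x, y, t, p, t0, t1, height, width, C=0.25, I0=0.5):
    """Sum polarities of events in [t0, t1) per pixel, then exponentiate."""
    mask = (t >= t0) & (t < t1)
    E = np.zeros((height, width), dtype=np.float64)
    np.add.at(E, (y[mask], x[mask]), p[mask])   # signed event count per pixel
    return np.clip(I0 * np.exp(C * E), 0.0, 1.0)
```

Pixels that received only positive events brighten relative to `I0`, pixels with only negative events darken, and event-free pixels stay at the initial grey.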
- Launch training with `ns-train esplatfacto --data <data_folder>`. This specifies the data folder to use. For more details, see the Nerfstudio documentation.

  ```shell
  ns-train esplatfacto --data data/lego/output
  ```

  or

  ```shell
  ns-train esplatfacto-big --data data/lego/output --pipeline.model.use_scale_regularization True --pipeline.model.cull_alpha_thresh=0.005 --pipeline.model.continue_cull_post_densification=False
  ```
- Connect to the viewer by forwarding the viewer port (we use VSCode to do this), then click the `viewer.nerf.studio` link printed by the training script, or use the viewer running locally at http://localhost:7007.
- During training, you can adjust the crop scale in the crop viewport to isolate the noisy 3DGS model.
  Lego (Synthetic) | Sew (Real)
  ---|---
  ![]() | ![]()
- Nerfstudio supports point-cloud export once you have trained a 3DGS. However, export currently fails here, because nerfstudio verifies that the model is a `SplatfactoModel` instance, while ours is an `ESplatfactoModel`. Idea: edit `class ExportGaussianSplat(Exporter)` in `exporter.py`.
  ```shell
  ns-export gaussian-splat --load-config outputs\plane\esplatfacto\2024-04-22_201709\config.yml --output-dir exports/ply
  ```

  ```
  File "C:\Users\sjxu\AppData\Local\miniconda3\envs\event3dgs\lib\site-packages\nerfstudio\scripts\exporter.py", line 614, in entrypoint
    tyro.cli(Commands).main()
  File "C:\Users\sjxu\AppData\Local\miniconda3\envs\event3dgs\lib\site-packages\nerfstudio\scripts\exporter.py", line 536, in main
    assert isinstance(pipeline.model, SplatfactoModel)
  assertionError
  ```
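The fix amounts to relaxing that `isinstance` check. The stub classes below only illustrate the change; in the real `exporter.py` you would import your `ESplatfactoModel` and edit the assertion shown in the traceback above.

```python
# Sketch of the workaround for the export assertion. SplatfactoModel and
# ESplatfactoModel are stand-in stubs here; the real classes live in
# nerfstudio and in this repo, respectively.
class SplatfactoModel: ...
class ESplatfactoModel: ...   # our model is not a SplatfactoModel subclass

def check_model(model):
    # Original (exporter.py, line 536):
    #   assert isinstance(pipeline.model, SplatfactoModel)
    # Patched version accepts either class:
    assert isinstance(model, (SplatfactoModel, ESplatfactoModel))
    return True
```

Alternatively, making `ESplatfactoModel` subclass `SplatfactoModel` would satisfy the check without touching nerfstudio's code.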
- You may notice that our results contain more floaters than those in the papers. This repo contains only a pure 3DGS implementation; it is not identical to the CoRL 2024 paper or the EventNeRF paper. The omitted assumptions include: (a) no density clipping, as described in A.2 of the EventNeRF paper; (b) missing or incorrect negative sampling.
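To make point (b) concrete, here is a rough sketch of the negative-sampling idea: at pixels where no events fired during a window, the rendered log-intensity change should stay below the contrast threshold. The hinge-style penalty below is an illustrative formulation under that assumption, not the exact loss from either paper.

```python
# Sketch: penalize predicted brightness change at event-free pixels.
# event_mask marks pixels that fired at least one event in [t0, t1];
# C is an assumed contrast threshold.
import numpy as np

def negative_sampling_loss(logI_t0, logI_t1, event_mask, C=0.25):
    """Hinge penalty on |delta log I| at pixels with no events."""
    delta = np.abs(logI_t1 - logI_t0)
    no_event = ~event_mask
    if not no_event.any():
        return 0.0
    return float(np.mean(np.maximum(delta[no_event] - C, 0.0)))
```

Without such a term, the optimizer is free to hallucinate intensity changes (floaters) in regions that the event stream says are static.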
- We could split the RGB channels into independent 3DGS models.
- Add t0 and t estimation in preprocessing.
- Fix the unstable background (grayscale) coloring in preprocessing.
- The README template is borrowed from Lerf.
- Thanks to the work from EventNeRF.
- The event camera visualizer notebook is borrowed from events_viz.
If you find this useful, please cite their paper!
```
@inproceedings{xiong2024event3dgs,
  title={Event3dgs: Event-based 3d gaussian splatting for high-speed robot egomotion},
  author={Xiong, Tianyi and Wu, Jiayi and He, Botao and Fermuller, Cornelia and Aloimonos, Yiannis and Huang, Heng and Metzler, Christopher},
  booktitle={8th Annual Conference on Robot Learning},
  year={2024}
}
```