Tio Magic Animation Toolkit is designed to simplify the use of open and closed-source video AI models for animation. The Animation Toolkit empowers animators, developers, and AI enthusiasts to easily generate animated videos without the pain of complex technical setup, local hardware limitations, and haphazard documentation.
This toolkit leverages Modal for cloud computing and runs various open/closed-source video generation models.
Prompt: Woman smiling at the camera, waving her right hand as if she was saying hi and greeting someone.
Example videos (one per model): Wan 2.1 Vace 14b | Framepack I2V HY | Wan 2.1 I2V FusionX (LoRA) | LTX Video | Pusa V1 | Veo 2
Prompt: An anime-style young man in a blue t-shirt starts in a standing position. He lifts his right hand and waves to the camera.
Example videos (one per model): Framepack I2V HY | Wan FLFV 14b | Wan 2.1 Vace 14b
Prompt: Anime-style cartoon animation of a man waving, empty white background. Skin tone, shading, lighting should be the same as the reference image.
Example: Starting Image | Pose Video | Wan 2.1 Vace 14b (result)
Prompt: A playful cartoon-style penguin with a round belly and flappy wings waddles up to a pair of green sunglasses lying on the ground. The penguin leans forward, carefully picks up the sunglasses with its flipper, and smoothly lifts them up to its face. It tilts its head with a confident smile as the green sunglasses rest perfectly on its beak. The animation is smooth and expressive, with exaggerated, bouncy cartoon motion.
Example videos (one per model): Wan 2.1 PhantomX (LoRA) | Pusa V1 | Wan 2.1 14b
Go to Tio Magic Animation Toolkit Docs for detailed information on usage.
First, create a virtual environment and activate it.
python3 -m venv venv
source venv/bin/activate # on MacOS/Linux
venv\Scripts\activate # on Windows Command Prompt
Then, install TioMagic package:
pip install tiomagic
Then, create a .env file. Depending on which provider(s) you are using, copy/paste the appropriate access keys into the .env file. To start, we recommend registering for a Modal account and creating an access token:
MODAL_TOKEN_ID=...
MODAL_TOKEN_SECRET=...
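The toolkit (or Modal's own CLI) may pick these variables up for you, but if you want to verify your .env file is well-formed, a minimal stdlib-only loader looks like the sketch below. The helper name `load_env` is illustrative, not part of the tiomagic API; the file format assumed is plain KEY=VALUE lines.

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines into os.environ.

    Blank lines and '#' comments are skipped; existing environment
    variables are not overwritten (setdefault).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Only load if the file exists, so the script also runs before setup.
if os.path.exists(".env"):
    load_env()
```

If `MODAL_TOKEN_ID` and `MODAL_TOKEN_SECRET` show up in `os.environ` after this runs, your .env file is readable.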
Create a Hugging Face account and add its token to your Modal account (this is needed to access open-source models).
Copy/paste modal_demo.py from the repository to run a Modal example of this toolkit.
Run python3 modal_demo.py
Please note that this demo runs on Modal credits. The number of credits used per run depends on the model you are running and the GPU. The first time you run a model, loading it onto Modal will cost more than subsequent generations. The approximate cost of running modal_demo.py for the first time is about $0.93. For more information on Modal pricing, refer to the Modal Pricing page.
A locally hosted Gradio GUI is provided for your convenience:
- Clone the Tio Magic Animation Toolkit
- Follow Usage instructions above
- Run python3 gradio_wrapper.py
- Cogvideox 5b I2V
- Framepack I2V HY
- LTX Video
- Pusa V1
- Wan 2.1 I2V 14b 720p
- Wan 2.1 Vace 14b
- Wan 2.1 I2V FusionX (LoRA)
- Cogvideox 5b
- Pusa V1
- Wan 2.1 T2V PhantomX (LoRA)
- Wan 2.1 14b
- Wan 2.1 Vace 14b
- Wan 2.1 PhantomX (LoRA)
- Release of Tio Magic Animation Toolkit.
TL;DR: We don't make the videos - the AI models do. We just make it easier to use them.
TioMagic Animation is an interface toolkit that sits between you and various video generation AI models. Think of it as a universal remote control for AI video models.
✅ Provide a simple Python API to access multiple video models
✅ Handle the complexity of deploying models on Modal/cloud infrastructure
✅ Eliminate the need for expensive local GPUs
✅ Manage job queuing, status tracking, and result retrieval
✅ Abstract away provider-specific implementation details
❌ Create or train the AI models
❌ Modify or enhance model outputs
❌ Own any rights to the generated content
❌ Control what the models can or cannot generate
- All generated content comes directly from the underlying models (e.g., Wan2.1-VACE, CogVideoX)
- You must comply with each model's individual license terms
- Model availability and capabilities depend on the model creators, not us
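The job management described above (queuing, status tracking, result retrieval) follows a familiar submit-then-poll pattern. The sketch below illustrates that pattern only; `FakeJob` and `wait_for_result` are stand-in names for illustration, not the toolkit's actual API.

```python
import time

class FakeJob:
    """Stand-in for a remote generation job; real jobs run on the provider."""
    def __init__(self, ticks_until_done=3):
        self._ticks = ticks_until_done

    def status(self):
        # Each poll moves the fake job closer to completion.
        self._ticks -= 1
        return "completed" if self._ticks <= 0 else "running"

    def result(self):
        return "video.mp4"

def wait_for_result(job, poll_interval=0.01, timeout=5.0):
    """Poll a job until it completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = job.status()
        if state == "completed":
            return job.result()
        if state == "failed":
            raise RuntimeError("generation failed")
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

print(wait_for_result(FakeJob()))  # → video.mp4
```

The same loop shape applies whether the job lives on Modal or a closed-source provider; only the status and result calls differ.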