Let's say we have multiple diffusion models, as in the Cascaded Diffusion Models paper.
Is there an easy way to set up training so that each conditional model is trained simultaneously on a different GPU? What about a shared replay buffer that each conditional model can access?
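For context, here is a minimal sketch of the pattern being asked about: one worker process per conditional model, each pinned to its own device, all sharing a replay buffer through a `multiprocessing.Manager` proxy. The model and training loop are placeholders (`build_conditional_model`, the `cuda:{i}` device assignment, and the per-step "sample" are assumptions, not part of any specific library API):

```python
import multiprocessing as mp

def train_model(model_idx, device, shared_buffer, lock, steps=5):
    # Placeholder for real work, e.g.:
    #   model = build_conditional_model(model_idx).to(device)  # hypothetical
    for step in range(steps):
        sample = (model_idx, step)  # stand-in for a generated sample
        with lock:
            shared_buffer.append(sample)  # push to the shared replay buffer
        # A worker could also read from the buffer, e.g.:
        #   with lock:
        #       batch = list(shared_buffer)[-4:]

def launch(num_models=2):
    manager = mp.Manager()
    buffer = manager.list()  # process-safe shared replay buffer (proxy object)
    lock = manager.Lock()
    procs = []
    for i in range(num_models):
        device = f"cuda:{i}"  # assumes one GPU per conditional model
        p = mp.Process(target=train_model, args=(i, device, buffer, lock))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
    return list(buffer)

if __name__ == "__main__":
    samples = launch()
    print(f"collected {len(samples)} samples in the shared buffer")
```

A `Manager` list is simple but slow for large tensors; a real implementation would more likely use shared-memory tensors or a dedicated replay-buffer server process.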