Update txt2img.py for colab #1
add import path
memory optimizations from https://github.com/multimodalart/latent-diffusion-notebook
set eta_ddim to 0 for plms
We probably want to keep compatibility for CPU users; could you make the model-loading function take the device name (and then only call .half() if it is on the GPU)? Thank you :)
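A minimal sketch of what that could look like. The function name and signature here are hypothetical (the actual script uses a load_model_from_config-style helper); the point is only that .half() is applied on CUDA and skipped on CPU, where half precision is slow or unsupported:

```python
import torch
from torch import nn

def load_model(model: nn.Module, device: str = "cuda") -> nn.Module:
    """Move the model to `device`; use half precision only on GPU."""
    model = model.to(device)
    if device == "cuda":
        # fp16 halves VRAM usage, but many CPU ops lack fp16 kernels
        model = model.half()
    model.eval()
    return model
```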
make the cpu/cuda choice dependent on cuda availability, also for the added optimizations
use torch.autocast
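One hedged sketch of the pattern being suggested: pick the device from torch.cuda.is_available(), and only wrap inference in torch.autocast on CUDA, falling back to a no-op context on CPU. The helper name run_inference is illustrative, not the script's actual code:

```python
import contextlib
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# autocast on CUDA; a do-nothing context manager on CPU
precision_scope = (
    torch.autocast(device_type="cuda")
    if device == "cuda"
    else contextlib.nullcontext()
)

def run_inference(model_fn, x):
    # no_grad + (possibly) autocast around the sampling call
    with torch.no_grad(), precision_scope:
        return model_fn(x)
```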
I committed changes to implement this, and CUDA works, but I didn't get the CPU path to work. It seems there is some CUDA hardcoding in ddpm.py, which gives:
Oh :/ I may have to fix that at some point, because these models are probably fast enough to run on CPU with PLMS sampling.
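For illustration only, this is the kind of change that would remove such hardcoding: tensors that are moved with an unconditional .cuda() call get moved with .to(device) instead. The function and buffer names here are hypothetical, not the actual ddpm.py code:

```python
import torch

def register_buffer_portable(module, name, tensor, device="cpu"):
    # instead of e.g. `tensor.cuda()`, move to whichever device is in use
    module.register_buffer(name, tensor.to(device))

lin = torch.nn.Linear(2, 2)
register_buffer_portable(lin, "alphas", torch.ones(3), device="cpu")
```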
/sub, I ran into the same issue over at CompVis#118 |