
Error when using Nvidia GPU. #50

@Toolfolks

Description


I have this command working okay:

python inference_for_demo_video.py --wav_path data/audio/acknowledgement_english.m4a --style_clip_path data/style_clip/3DMM/M030_front_neutral_level1_001.mat --pose_path data/pose/RichardShelby_front_neutral_level1_001.mat --image_path data/src_img/uncropped/male_face.png --cfg_scale 1.0 --max_gen_len 30 --output_name acknowledgement_english@M030_front_neutral_level1_001@male_face --device cuda

Here is the output of testGpu.py in the new_dreamtalk environment:
(new_dreamtalk) D:\techy\talkingHeads\dreamtalk>python testGpu.py
WAV Path: D:\techy\talkingHeads\dreamtalk\data\audio\acknowledgement_english.m4a
Output Path: D:\techy\talkingHeads\dreamtalk\tmp\acknowledgement_english@M030_front_neutral_level1_001@male_face\acknowledgement_english@M030_front_neutral_level1_001@male_face_16K.wav
PyTorch Version: 2.3.1+cpu
CUDA Available: False
CUDA Version: None
No CUDA device found.
NumPy Version: 1.22.4
SciPy Version: 1.13.1
Torchaudio Version: 2.3.1+cpu
OpenCV Version: 4.4.0
Available backends after updating PATH: ['soundfile']
However, when I switch to an environment with GPU support:

(dreamtalk) D:\techy\talkingHeads\dreamtalk>python testGpu.py
WAV Path: D:\techy\talkingHeads\dreamtalk\data\audio\acknowledgement_english.m4a
Output Path: D:\techy\talkingHeads\dreamtalk\tmp\acknowledgement_english@M030_front_neutral_level1_001@male_face\acknowledgement_english@M030_front_neutral_level1_001@male_face_16K.wav
PyTorch Version: 2.3.1+cu121
CUDA Available: True
CUDA Version: 12.1
Device Name: NVIDIA GeForce RTX 3060
NumPy Version: 1.22.4
SciPy Version: 1.10.0
Torchaudio Version: 2.3.1+cu121
OpenCV Version: 4.10.0
Available backends after updating PATH: ['soundfile']
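For context, a testGpu.py-style diagnostic that produces output like the above can be sketched as follows. This is a hypothetical reconstruction (the issue does not include the actual script); it only reports installed versions and CUDA availability:

```python
# Hypothetical sketch of a testGpu.py-style diagnostic; the real script
# from the issue is not shown, so this is a reconstruction from its output.
import importlib


def version_of(module_name):
    """Return a module's __version__, or a note if it is missing."""
    try:
        mod = importlib.import_module(module_name)
        return getattr(mod, "__version__", "unknown")
    except ImportError:
        return "not installed"


if __name__ == "__main__":
    # Report the same libraries shown in the issue's output.
    for name, label in [("torch", "PyTorch"), ("numpy", "NumPy"),
                        ("scipy", "SciPy"), ("torchaudio", "Torchaudio"),
                        ("cv2", "OpenCV")]:
        print(f"{label} Version:", version_of(name))
    try:
        import torch
        print("CUDA Available:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("CUDA Version:", torch.version.cuda)
            print("Device Name:", torch.cuda.get_device_name(0))
        else:
            print("No CUDA device found.")
    except ImportError:
        pass
```

Running it in each conda environment makes the difference visible: the new_dreamtalk environment has a CPU-only PyTorch build (2.3.1+cpu), while the dreamtalk environment has the CUDA build (2.3.1+cu121).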

I get this error:
Traceback (most recent call last):
File "inference_for_demo_video.py", line 187, in
inference_one_video(
File "C:\Users\User.conda\envs\dreamtalk\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "inference_for_demo_video.py", line 88, in inference_one_video
gen_exp_stack = diff_net.sample(
File "D:\techy\talkingHeads\dreamtalk\core\networks\diffusion_net.py", line 216, in sample
return self.ddim_sample(
File "D:\techy\talkingHeads\dreamtalk\core\networks\diffusion_net.py", line 144, in ddim_sample
"style_clip": torch.cat([style_clip, uncond_style_clip], dim=0),
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)
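The traceback suggests that at the torch.cat call in ddim_sample, one of style_clip and uncond_style_clip is on cuda:0 while the other is still on the CPU. A minimal sketch of the usual fix (assuming both are ordinary tensors, and that cat_on_same_device is a hypothetical helper, not part of dreamtalk) is to move them onto one device before concatenating:

```python
# Hypothetical sketch, not the actual dreamtalk code: torch.cat raises the
# "Expected all tensors to be on the same device" RuntimeError when its inputs
# live on different devices, so move them to a common device first.
import torch


def cat_on_same_device(style_clip, uncond_style_clip):
    """Concatenate two tensors along dim 0 on style_clip's device."""
    device = style_clip.device
    # .to() is a no-op if the tensor is already on the target device.
    return torch.cat([style_clip, uncond_style_clip.to(device)], dim=0)
```

In diffusion_net.py this would correspond to ensuring uncond_style_clip is created on (or moved to) the same device as style_clip before the torch.cat in ddim_sample.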

I'm new to programming, and I have spent hours going round in circles with ChatGPT.

Does anyone have a solution?
