System Info
I am using a Tesla T4 (16 GB).
Reproduction
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"
access_token = os.environ["HF_TOKEN"]  # Hugging Face access token

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=access_token,
)
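As an aside (unrelated to the AttributeError itself): the Tesla T4 is a Turing GPU (compute capability 7.5) without native bfloat16 support, so float16 is the usual compute dtype on that card. A hedged variant of the config above:

```python
import torch
from transformers import BitsAndBytesConfig

# Same 4-bit NF4 setup as in the reproduction, but with float16 as the
# compute dtype, which Turing-class GPUs such as the T4 support natively.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
```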
Expected behavior
Hello, I am trying to fine-tune Mistral 7B using QLoRA, but I am facing this error. Does anyone know how to solve it?

AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'

These are the versions of the packages I am using:
bitsandbytes==0.43.2
transformers==4.34.0
torch==2.3.0
accelerate==0.29.3
I am using Python 3.9.
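For context, this AttributeError typically means bitsandbytes could not load its native CUDA library (for example, with a CPU-only torch build or a broken CUDA setup), so its internal library handle stays None. A minimal sketch to gather the environment details listed above and probe whether bitsandbytes imports at all (`report_versions` and `bitsandbytes_status` are hypothetical helper names, not part of any library):

```python
import platform
import importlib.util
import importlib.metadata as md

# Hypothetical helper: collect the versions listed in this report so they
# can be checked against the bitsandbytes compatibility notes.
def report_versions(packages):
    lines = [f"python=={platform.python_version()}"]
    for name in packages:
        try:
            lines.append(f"{name}=={md.version(name)}")
        except md.PackageNotFoundError:
            lines.append(f"{name}: not installed")
    return lines

# Hypothetical helper: importing bitsandbytes prints its own CUDA-setup
# warnings when the native library cannot be found, which is the usual
# path to this AttributeError.
def bitsandbytes_status():
    if importlib.util.find_spec("bitsandbytes") is None:
        return "not installed"
    import bitsandbytes
    return getattr(bitsandbytes, "__version__", "unknown")

for line in report_versions(["bitsandbytes", "transformers", "torch", "accelerate"]):
    print(line)
print("bitsandbytes import:", bitsandbytes_status())
```

Recent bitsandbytes releases also ship a built-in self-check (`python -m bitsandbytes`) that reports whether the CUDA binary was found; running that on the T4 machine should narrow this down.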