fix: fallback to CPU when CUDA is not available
Previously, the code unconditionally moved the model to the CUDA device (`self.model.to("cuda")`), which crashed at runtime on systems without CUDA support (e.g., Apple M1/M2 or CPU-only environments):

    AssertionError: Torch not compiled with CUDA enabled

The fix selects the device dynamically:

    device = "cuda" if torch.cuda.is_available() else "cpu"
    self.model.to(device)

This ensures compatibility across platforms and prevents crashes when no CUDA device is available.
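For illustration, a minimal standalone sketch of the same fallback pattern (the `Linear` model below is a placeholder, not this repository's DOLPHIN model):

```python
import torch

# Prefer CUDA when the build supports it and a GPU is present;
# otherwise fall back to CPU so the script still runs.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(4, 2)  # placeholder standing in for self.model
model.to(device)
model.eval()

# Inputs must live on the same device as the model.
x = torch.randn(1, 4, device=device)
with torch.no_grad():
    y = model(x)
print(f"ran on {device}, output shape {tuple(y.shape)}")
```

Note that any tensors passed to the model must also be created on, or moved to, the selected device, or PyTorch will raise a device-mismatch error.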
parent 8ac1495de8
commit 3b86dc6254
chat.py
@@ -93,7 +93,8 @@ class DOLPHIN:
         ckpt = try_rename_lagacy_weights(ckpt)
         self.model.load_state_dict(ckpt, strict=True)
 
-        self.model.to("cuda")
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+        self.model.to(device)
         self.model.eval()
         transform_args = {
             "input_size": self.swin_args["img_size"],