fix: fallback to CPU when CUDA is not available
Previously, the code unconditionally attempted to move the model to the CUDA device (`self.model.to("cuda")`), which caused a runtime crash on systems where CUDA is not available (e.g., Apple M1/M2 or CPU-only environments). This resulted in the error:
AssertionError: Torch not compiled with CUDA enabled
The fix introduces dynamic device selection:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.model.to(device)
This change ensures compatibility across platforms and prevents crashes due to unavailable CUDA devices.
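For context, a minimal self-contained sketch of the same pattern (the `select_device` helper and the toy `torch.nn.Linear` model are illustrative stand-ins, not part of this codebase; the MPS remark is an aside, not part of the patch):

import torch

def select_device() -> str:
    # Prefer CUDA when available; otherwise fall back to CPU.
    # (On Apple Silicon, torch.backends.mps.is_available() could be
    # checked as well, but this patch only distinguishes CUDA vs. CPU.)
    return "cuda" if torch.cuda.is_available() else "cpu"

device = select_device()
model = torch.nn.Linear(4, 2)  # illustrative stand-in for the real model
model.to(device)
model.eval()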
--- a/chat.py
+++ b/chat.py
@@ -93,7 +93,8 @@ class DOLPHIN:
         ckpt = try_rename_lagacy_weights(ckpt)
         self.model.load_state_dict(ckpt, strict=True)
 
-        self.model.to("cuda")
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+        self.model.to(device)
         self.model.eval()
         transform_args = {
             "input_size": self.swin_args["img_size"],
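To check which backend a given environment actually provides before running, the standard PyTorch query can be run from the shell (not part of this commit):

python -c "import torch; print(torch.cuda.is_available())"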