fix: fallback to CPU when CUDA is not available

Previously, the code unconditionally attempted to move the model to the CUDA device (`self.model.to("cuda")`), which caused a runtime crash on systems where CUDA is not available (e.g., Apple M1/M2 or CPU-only environments). This resulted in the error:

AssertionError: Torch not compiled with CUDA enabled

The fix introduces a dynamic device selection:

    device = "cuda" if torch.cuda.is_available() else "cpu"
    self.model.to(device)

This change ensures compatibility across platforms and prevents crashes due to unavailable CUDA devices.
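The selection logic can be sketched as a small helper with the availability check factored out as a parameter, so it runs on any machine; `select_device` is a hypothetical name for illustration, and in the real code the flag comes from `torch.cuda.is_available()`:

```python
def select_device(cuda_available: bool) -> str:
    """Return the torch device string: prefer CUDA, fall back to CPU.

    The actual code passes torch.cuda.is_available(); taking it as a
    parameter keeps this sketch runnable without a GPU or torch install.
    """
    return "cuda" if cuda_available else "cpu"


# On a CUDA build with a visible GPU:
print(select_device(True))   # cuda
# On CPU-only or Apple M1/M2 builds:
print(select_device(False))  # cpu
```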
Ivan 2025-06-15 13:52:42 +04:00 committed by GitHub
parent 8ac1495de8
commit 3b86dc6254

    @@ -93,7 +93,8 @@ class DOLPHIN:
             ckpt = try_rename_lagacy_weights(ckpt)
             self.model.load_state_dict(ckpt, strict=True)
    -        self.model.to("cuda")
    +        device = "cuda" if torch.cuda.is_available() else "cpu"
    +        self.model.to(device)
             self.model.eval()
             transform_args = {
                 "input_size": self.swin_args["img_size"],