Previously, the code unconditionally attempted to move the model to the CUDA device (`self.model.to("cuda")`), which caused a runtime crash on systems where CUDA is not available (e.g., Apple M1/M2 or CPU-only environments). This resulted in the error:
```
AssertionError: Torch not compiled with CUDA enabled
```
The fix introduces dynamic device selection:

```python
device = "cuda" if torch.cuda.is_available() else "cpu"
self.model.to(device)
```
This change ensures compatibility across platforms and prevents crashes when CUDA is unavailable.
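The selection logic can also be factored into a small helper so it is unit-testable without a GPU. This is a minimal sketch, not the project's actual code; the `select_device` helper name is hypothetical, and the `torch` call is shown only in a comment:

```python
def select_device(cuda_available: bool) -> str:
    """Return the device string used by the fix: CUDA when available, else CPU."""
    return "cuda" if cuda_available else "cpu"

# In the real code the flag comes from the runtime:
#   device = select_device(torch.cuda.is_available())
#   self.model.to(device)
print(select_device(False))  # cpu
print(select_device(True))   # cuda
```

Keeping the check in one place makes it easy to extend later (for example, to other backends) without touching every call site.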