Commit Graph

28 Commits

Author SHA1 Message Date
Hao Feng
f03da2de6b Update README.md 2025-06-30 21:15:42 +08:00
Hao Feng
55102128d5 Update README_CN.md 2025-06-30 20:15:02 +08:00
Hao Feng
9e73b7ab77 Update README.md 2025-06-30 20:11:07 +08:00
Hao Feng
e35e241325 Merge pull request #92 from hanyd2010/feat/support_inference_by_tensorrt_llm (Support inference by TensorRT-LLM) 2025-06-30 20:02:19 +08:00
yingdong.han
f171021615 update doc 2025-06-30 19:56:25 +08:00
yingdong.han
cab9b3f952 update 2025-06-30 19:51:26 +08:00
yingdong.han
0705bc12ce update doc 2025-06-30 19:47:10 +08:00
yingdong.han
f4a6c495a6 fix link 2025-06-30 19:43:59 +08:00
yingdong.han
c247e5e1f3 add dolphin inference by tensorrt-llm 2025-06-30 19:41:03 +08:00
Hao Feng
ce591d9136 Added vLLM support 2025-06-27 15:41:18 +08:00
Hao Feng
a02ed32b3f Added vLLM support 2025-06-27 15:39:45 +08:00
Hao Feng
25616d71ea Added vLLM support 2025-06-27 15:38:57 +08:00
Hao Feng
1a59ba5bc0 Merge pull request #91 from hanyd2010/feat/support_inference_by_vllm (Support inference by vllm) 2025-06-27 15:18:47 +08:00
yingdong.han
6177c2686b support inference by vllm 2025-06-27 15:01:22 +08:00
Hao Feng
eb1737ae95 Merge pull request #85 from xiaolonggee/master (Chinese version of the md) 2025-06-26 20:10:31 +08:00
Hao Feng
0e4ead6717 Merge pull request #90 from hanyd2010/feat/remove_albumentations (remove 'albumentations') 2025-06-26 20:01:35 +08:00
yingdong.han
4edac82fc3 remove 'albumentations' 2025-06-26 19:45:12 +08:00
Hao Feng
98b8ccc38d Update demo_page_hf.py 2025-06-23 20:20:57 +08:00
Hao Feng
675dceb08e Update demo_page.py 2025-06-23 20:20:23 +08:00
xuezhilong
620477be95 Chinese version of the md 2025-06-23 10:03:14 +08:00
Hao Feng
aac3aceb45 Update README.md 2025-06-23 00:54:30 +08:00
Hao Feng
0c99adc7dc Update README.md 2025-06-23 00:36:16 +08:00
Hao Feng
cb1c409cea Merge pull request #66 from Ivan-Inby/bugfix/fallback-to-cpu-if-no-cuda (fix: fallback to CPU when CUDA is not available) 2025-06-15 22:05:53 +08:00
Ivan
3b86dc6254 fix: fallback to CPU when CUDA is not available 2025-06-15 13:52:42 +04:00

Previously, the code unconditionally attempted to move the model to the CUDA device (`self.model.to("cuda")`), which caused a runtime crash on systems where CUDA is not available (e.g. Apple M1/M2 or CPU-only environments), with the error:

    AssertionError: Torch not compiled with CUDA enabled

The fix introduces dynamic device selection:

    device = "cuda" if torch.cuda.is_available() else "cpu"
    self.model.to(device)

This change ensures compatibility across platforms and prevents crashes when no CUDA device is available.
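The fallback described in this commit can be sketched as a small standalone helper (a minimal illustration only; the `select_device` function name is hypothetical and not part of the repository's code):

```python
def select_device() -> str:
    """Pick a torch device string, falling back to CPU when CUDA is unavailable."""
    try:
        # torch is an optional dependency here; CPU-only or torch-less
        # environments simply take the fallback path.
        import torch
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"
```

In the spirit of the commit, a model would then be moved with `model.to(select_device())` instead of the unconditional `model.to("cuda")` that triggered the assertion on CPU-only builds.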
Hao Feng
8ac1495de8 Update README.md 2025-06-13 19:26:22 +08:00
Hao Feng
2a6fcb51c8 Update README.md 2025-06-13 17:01:01 +08:00
fenghao.2019
10b017a62b add pdf parsing 2025-06-13 16:45:28 +08:00
fenghao.2019
49f51871c6 [init] initial commit 2025-05-26 23:20:51 +08:00