Docling
Docling bundles PDF document conversion to JSON and Markdown in an easy, self-contained package.
Features
- ⚡ Converts any PDF document to JSON or Markdown format, stable and lightning fast
- 📑 Understands detailed page layout, reading order and recovers table structures
- 📝 Extracts metadata from the document, such as title, authors, references and language
- 🔍 Optionally applies OCR (use with scanned PDFs)
Installation
To use Docling, simply install docling from your package manager, e.g. pip:
pip install docling
Note
Works on macOS and Linux environments. Windows platforms are currently not tested.
Use alternative PyTorch distributions
The Docling models depend on the PyTorch library.
Depending on your architecture, you might want to use a different distribution of torch, for example if you need support for a different accelerator or a CPU-only version.
All the different ways of installing torch are listed on its website: https://pytorch.org/.
One common situation is installation on Linux systems with CPU-only support. In this case, we suggest installing Docling with the following options:
# Example for installing on the Linux cpu-only version
pip install docling --extra-index-url https://download.pytorch.org/whl/cpu
Development setup
To develop for Docling, you need Python 3.10 / 3.11 / 3.12 and Poetry. You can then install from your local clone's root dir:
poetry install --all-extras
Usage
Convert a single document
To convert individual PDF documents, use convert_single(), for example:
from docling.document_converter import DocumentConverter
source = "https://arxiv.org/pdf/2408.09869" # PDF path or URL
converter = DocumentConverter()
result = converter.convert_single(source)
print(result.render_as_markdown()) # output: "## Docling Technical Report[...]"
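The result can also be exported as JSON, matching the JSON output advertised above. A minimal sketch, assuming the result object exposes a render_as_dict() method; the exact method name may differ between Docling versions, and the output file name is just an example:

import json

# Assumption: render_as_dict() returns the converted document as a plain dict.
with open("2408.09869.json", "w", encoding="utf-8") as fp:
    json.dump(result.render_as_dict(), fp)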
Convert a batch of documents
For an example of batch-converting documents, see batch_convert.py.
From a local repo clone, you can run it with:
python examples/batch_convert.py
The output of the above command will be written to ./scratch.
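If you prefer to drive batch conversion directly from Python instead of the example script, the following is a minimal sketch. It reuses the DocumentConversionInput API shown further below; the import path for DocumentConversionInput is an assumption and may differ between Docling versions.

from pathlib import Path

from docling.document_converter import DocumentConverter
from docling.datamodel.document import DocumentConversionInput  # assumed import path

# Collect the PDFs to convert (paths are examples).
input_paths = [Path("./test/data/2206.01062.pdf")]
conv_input = DocumentConversionInput.from_paths(paths=input_paths)

doc_converter = DocumentConverter()
for result in doc_converter.convert(conv_input):
    # Each result can be rendered as Markdown, as in the single-document example above.
    print(result.render_as_markdown()[:200])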
Adjust pipeline features
The example file custom_convert.py contains multiple ways one can adjust the conversion pipeline and features.
Control pipeline options
You can control if table structure recognition or OCR should be performed by arguments passed to DocumentConverter:
doc_converter = DocumentConverter(
artifacts_path=artifacts_path,
pipeline_options=PipelineOptions(
do_table_structure=False, # controls if table structure is recovered
do_ocr=True, # controls if OCR is applied (ignores programmatic content)
),
)
Control table extraction options
You can control if table structure recognition should map the recognized structure back to PDF cells (default) or use text cells from the structure prediction itself. This can improve output quality if you find that multiple columns in extracted tables are erroneously merged into one.
pipeline_options = PipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.do_cell_matching = False # uses text cells predicted from table structure model
doc_converter = DocumentConverter(
artifacts_path=artifacts_path,
pipeline_options=pipeline_options,
)
Impose limits on the document size
You can limit the file size and the number of pages that are allowed to be processed per document:
conv_input = DocumentConversionInput.from_paths(
paths=[Path("./test/data/2206.01062.pdf")],
limits=DocumentLimits(max_num_pages=100, max_file_size=20971520)
)
Convert from binary PDF streams
You can convert PDFs from a binary stream instead of from the filesystem as follows:
buf = BytesIO(your_binary_stream)
docs = [DocumentStream(filename="my_doc.pdf", stream=buf)]
conv_input = DocumentConversionInput.from_streams(docs)
results = doc_converter.convert(conv_input)
Limit resource usage
You can limit the number of CPU threads used by Docling by setting the environment variable OMP_NUM_THREADS accordingly. The default is 4 CPU threads.
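For example, to restrict Docling to two threads you can export the variable in your shell, or set it from Python before Docling and its dependencies are imported (a minimal sketch; OpenMP reads the variable when the underlying libraries initialize):

import os

# Must be set before importing docling (and therefore PyTorch/OpenMP).
os.environ["OMP_NUM_THREADS"] = "2"

from docling.document_converter import DocumentConverter

converter = DocumentConverter()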
Contributing
Please read Contributing to Docling for details.
References
If you use Docling in your projects, please consider citing the following:
@techreport{Docling,
  author = {Deep Search Team},
  month = {8},
  title = {{Docling Technical Report}},
  url = {https://arxiv.org/abs/2408.09869},
  eprint = {2408.09869},
  doi = {10.48550/arXiv.2408.09869},
  version = {1.0.0},
  year = {2024}
}
License
The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.