
Docling


Docling simplifies document processing: it parses diverse formats, offers advanced PDF understanding, and provides seamless integrations with the gen AI ecosystem.

Features

  • 🗂️ Parsing of multiple document formats incl. PDF, DOCX, XLSX, HTML, images, and more
  • 📑 Advanced PDF understanding incl. page layout, reading order, table structure, code, formulas, image classification, and more
  • 🧬 Unified, expressive DoclingDocument representation format
  • ↪️ Various export formats and options, including Markdown, HTML, and lossless JSON (see the sketch after this list)
  • 🔒 Local execution capabilities for sensitive data and air-gapped environments
  • 🤖 Plug-and-play integrations incl. LangChain, LlamaIndex, Crew AI & Haystack for agentic AI
  • 🔍 Extensive OCR support for scanned PDFs and images
  • 🥚 Support for several Visual Language Models (SmolDocling)
  • 💻 Simple and convenient CLI
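
For example, the sketch below converts a document once and exports the resulting DoclingDocument to several of the formats listed above. It is a minimal illustration, assuming the export helpers export_to_markdown, export_to_html, and export_to_dict exposed by docling-core; check the docs for the exact API of your installed version.

import json

from docling.document_converter import DocumentConverter

# Convert once, then reuse the unified DoclingDocument representation
doc = DocumentConverter().convert("https://arxiv.org/pdf/2408.09869").document

markdown = doc.export_to_markdown()  # Markdown string
html = doc.export_to_html()          # HTML string (assumed helper, see note above)
data = doc.export_to_dict()          # lossless, JSON-serializable dict

with open("report.json", "w", encoding="utf-8") as fp:
    json.dump(data, fp)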

Coming soon

  • 📝 Metadata extraction, including title, authors, references & language
  • 📝 Chart understanding (Barchart, Piechart, LinePlot, etc.)
  • 📝 Complex chemistry understanding (Molecular structures)

Installation

To use Docling, simply install docling from your package manager, e.g. pip:

pip install docling

Docling works on macOS, Linux, and Windows, on both x86_64 and arm64 architectures.

More detailed installation instructions are available in the docs.

Getting started

To convert individual documents with Python, use convert(), for example:

from docling.document_converter import DocumentConverter

source = "https://arxiv.org/pdf/2408.09869"  # document per local path or URL
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown())  # output: "## Docling Technical Report[...]"

More advanced usage options are available in the docs.
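
As a hedged illustration of such options, the sketch below enables OCR and table-structure recovery through PdfPipelineOptions. Option names can change between releases, so treat this as a starting point rather than a definitive recipe.

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption

pipeline_options = PdfPipelineOptions()
pipeline_options.do_ocr = True              # run OCR on scanned pages
pipeline_options.do_table_structure = True  # recover table structure

converter = DocumentConverter(
    format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)
result = converter.convert("https://arxiv.org/pdf/2206.01062")
print(result.document.export_to_markdown())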

CLI

Docling has a built-in CLI to run conversions.

docling https://arxiv.org/pdf/2206.01062

You can also use 🥚 SmolDocling and other VLMs via the Docling CLI:

docling --pipeline vlm --vlm-model smoldocling https://arxiv.org/pdf/2206.01062

This will use MLX acceleration on supported Apple Silicon hardware.
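
The same VLM pipeline can be used from Python. The sketch below is a minimal, hedged example assuming the VlmPipeline and VlmPipelineOptions classes that ship with the VLM support; by default the pipeline uses SmolDocling, and other models can be selected through the options (see the docs for the current model specs).

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import VlmPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.pipeline.vlm_pipeline import VlmPipeline

converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(
            pipeline_cls=VlmPipeline,               # run the VLM-based pipeline
            pipeline_options=VlmPipelineOptions(),  # defaults to SmolDocling
        )
    }
)
result = converter.convert("https://arxiv.org/pdf/2206.01062")
print(result.document.export_to_markdown())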

Read more here

Documentation

Check out Docling's documentation for details on installation, usage, concepts, recipes, extensions, and more.

Examples

Go hands-on with our examples, demonstrating how to address different application use cases with Docling.

Integrations

To further accelerate your AI application development, check out Docling's native integrations with popular frameworks and tools.

Get help and support

Please feel free to connect with us using the discussion section.

Technical report

For more details on Docling's inner workings, check out the Docling Technical Report.

Contributing

Please read Contributing to Docling for details.

References

If you use Docling in your projects, please consider citing the following:

@techreport{Docling,
  author = {Deep Search Team},
  month = {8},
  title = {Docling Technical Report},
  url = {https://arxiv.org/abs/2408.09869},
  eprint = {2408.09869},
  doi = {10.48550/arXiv.2408.09869},
  version = {1.0.0},
  year = {2024}
}

License

The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

LF AI & Data

Docling is hosted as a project in the LF AI & Data Foundation.

IBM ❤️ Open Source AI

The project was started by the AI for knowledge team at IBM Research Zurich.