
* build: Add ollama sdk dependency
* feat: Add option plumbing for OllamaVlmOptions in pipeline_options
* feat: Full implementation of OllamaVlmModel
* feat: Connect "granite_vision_ollama" pipeline option to CLI
* Revert "build: Add ollama sdk dependency"
  After consideration, we're going to use the generic OpenAI API instead of the Ollama-specific API to avoid duplicate work.
  This reverts commit bc6b366468cdd66b52540aac9c7d8b584ab48ad0.
* refactor: Move OpenAI API call logic into utils.utils
  This will allow reuse of this logic in a generic VLM model.
  NOTE: There is a subtle change here in the ordering of the text prompt and the image in the call to the OpenAI API. When run against Ollama, this ordering makes a big difference: if the prompt comes before the image, the result is terse and not usable, whereas the prompt coming after the image works as expected and matches the non-OpenAI chat API.
* refactor: Refactor from Ollama SDK to generic OpenAI API
* fix: Linting, formatting, and bug fixes
  The one bug fix was in the timeout arg to openai_image_request. Otherwise, this is all style changes to get MyPy and black passing cleanly.
* remove model from download enum
* generalize input args for other API providers
* rename and refactor
* add example
* require flag for remote services
* disable example from CI
* add examples to docs

---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
Co-authored-by: Michele Dolfi <dol@zurich.ibm.com>
62 lines
1.4 KiB
Python
import base64
import logging
from io import BytesIO
from typing import Dict, Optional

import requests
from PIL import Image
from pydantic import AnyUrl

from docling.datamodel.base_models import OpenAiApiResponse

_log = logging.getLogger(__name__)


def api_image_request(
    image: Image.Image,
    prompt: str,
    url: AnyUrl,
    timeout: float = 20,
    headers: Optional[Dict[str, str]] = None,
    **params,
) -> str:
    """Send an image and a text prompt to an OpenAI-compatible chat
    completions endpoint and return the generated text."""
    # Encode the page image as an inline base64 PNG data URL.
    img_io = BytesIO()
    image.save(img_io, "PNG")
    image_base64 = base64.b64encode(img_io.getvalue()).decode("utf-8")
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_base64}"},
                },
                {
                    "type": "text",
                    "text": prompt,
                },
            ],
        }
    ]

    payload = {
        "messages": messages,
        **params,
    }

    headers = headers or {}

    r = requests.post(
        str(url),
        headers=headers,
        json=payload,
        timeout=timeout,
    )
    if not r.ok:
        _log.error(f"Error calling the API. Response was {r.text}")
    r.raise_for_status()

    api_resp = OpenAiApiResponse.model_validate_json(r.text)
    generated_text = api_resp.choices[0].message.content.strip()
    return generated_text
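The payload shape the function builds can be sketched in isolation, without the PIL, requests, or docling dependencies. `build_payload` below is a hypothetical helper (not part of docling) that takes raw PNG bytes instead of a PIL image; it exists only to show the message structure and the image-before-prompt ordering the commit notes call out.

```python
# Minimal, self-contained sketch of the OpenAI-style chat payload
# assembled by api_image_request. `build_payload` is a hypothetical
# name; the real function encodes a PIL image and posts the request.
import base64


def build_payload(image_bytes: bytes, prompt: str, **params) -> dict:
    """Build a chat-completions payload with an inline base64 PNG image."""
    image_base64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    # Image first, then the text prompt: against Ollama,
                    # putting the prompt before the image yields terse,
                    # unusable output (per the commit notes above).
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/png;base64,{image_base64}"
                        },
                    },
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        # Extra keyword args (e.g. model, temperature) pass straight
        # through to the request body, mirroring **params above.
        **params,
    }


payload = build_payload(b"\x89PNG\r\n", "Describe this page.", temperature=0)
```

Any OpenAI-compatible parameter (such as `model` or `max_tokens`) can be threaded through `**params`, which is what lets the same call target Ollama, vLLM, or a hosted API without provider-specific code.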