Explain Image functionality #148

Open
@leventmolla

Description

Discussed in #147

Originally posted by leventmolla January 25, 2024
OpenAI has a new model, gpt-4-1106-vision-preview, which can explain a collection of images. I believe this should work through the general chat completions endpoint, but the documentation is not very clear about the required message structure. My understanding is that there should be an initial text prompt describing the task, followed by messages containing the images. I tried this with base64-encoded images and got errors (for some reason the number of tokens requested becomes extremely large and the query fails). I then tried passing the URLs of the image files instead, which failed as well. So I am at a loss about how to use this functionality.
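
For reference, a minimal sketch of the message structure the vision endpoint expects, written against the documented OpenAI HTTP API rather than any particular client library: a single user message whose `content` is a list of parts, one `text` part plus one `image_url` part per image. The file name, prompt text, and second image URL below are placeholders. Note that inline base64 images must be wrapped in a `data:image/...;base64,` URL; passing the raw base64 string as text is a common cause of the huge token counts described above.

```python
import base64
import os

import requests

API_KEY = os.environ["OPENAI_API_KEY"]

# Encode a local image as a base64 data URL (the form required for inline images).
with open("photo.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "gpt-4-1106-vision-preview",
    "max_tokens": 300,  # set explicitly; the vision preview defaults to a small value
    "messages": [
        {
            "role": "user",
            # One user message mixing a text part with one or more image parts.
            "content": [
                {"type": "text", "text": "Describe what these images have in common."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"},
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/second-image.jpg"},
                },
            ],
        }
    ],
}

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same structure should translate to whichever client library is in use, assuming it exposes the list-of-parts message content; the key point is that images are content parts inside one user message, not separate follow-up messages.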
