Support for connecting to models running via Llamafile #57

@phildougherty

Description

Hey Steven!

Very cool project! I have been having some fun trying to get a multi-modal agent going, using hosted models alongside open-source models running locally on my 3090. I have been surprised by how much is already possible!

I am not sure if you have heard of the Mozilla Foundation's llamafile project, but it builds on llama.cpp and bundles model weights into a single executable, making it easier to share and run models locally. I was curious to get your thoughts on integrating support for these. Each llamafile serves an OpenAI-compatible completions API endpoint by default, so I thought this might be useful to avoid having to convert models to support the API, as you describe in the README. Would love to pick your brain about this!
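To illustrate the point above, here is a minimal sketch of talking to a llamafile's built-in server. This assumes a llamafile is already running locally on its default port (8080) and exposes the OpenAI-style `/v1/chat/completions` route; the `model` name in the payload is a placeholder, since a local llamafile serves whatever weights it was built with.

```python
import json
import urllib.request

# Assumption: llamafile's built-in server is listening on its default port.
LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request in the OpenAI wire format."""
    payload = {
        "model": "local-model",  # placeholder name; the local server runs its bundled weights
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        LLAMAFILE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Hello from a local model!")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

Because the wire format matches OpenAI's, any client that lets you override the base URL should work against a running llamafile without model conversion.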
