chrishart0/ollama-langchain-llama_index-samples

Relevant learning materials

Basic LangChain Agent

Setup

1) Prepare Ollama

Check the .env file to see which model is specified; you will need to ensure that model has been pulled down.

ollama pull llama3:8b-text-fp16
# ollama pull llama3

Configure your .env file as needed

  • Ensure the MODEL defined is one you have downloaded with ollama
    • NOTE: The mistral model provided by ollama is actually 7b-instruct-v0.2-q4_0. This model is blazing fast, but it isn't very smart and will fail on anything but simple questions.
      • This may change in the future as ollama updates its defaults
    • For higher-quality outputs, try updating the model to mixtral; note that this will be slower
  • Make sure the URL specified is correct for your setup of Ollama
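To make the two settings above concrete, a minimal .env might look like the following. The variable names here are assumptions for illustration; check the repository's actual .env for the real keys.

```
MODEL=llama3:8b-text-fp16
OLLAMA_BASE_URL=http://localhost:11434
```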

2) Install prereqs

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt --upgrade

Spin up a Qdrant vector DB

This is needed for some of the more advanced examples, such as example 3. Example 2 uses Qdrant in memory, so it does not require this container.

docker run -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_storage:/qdrant/storage:z \
  qdrant/qdrant

Try out the example scripts

LangChain Chain

python langchain-chain-ollama.py

Llama_Index RAG chatbot over Ben Franklin's writings

python llama_index-rag-ollama.py

LangChain Agent with the DuckDuckGo search API. Note: DuckDuckGo was chosen because it is the only one of the big search APIs that does not require an API key.

python langchain-agent-ollama.py

Want to contribute?

I would love to accept contributions or requests for other examples you'd like to see. I am running all of this on my personal hardware and trying to come up with fun and useful examples for myself.

About

Samples of LangChain and Llama_Index using Ollama to run local LLMs.
