This tool helps you intercept, log, and visualize requests and responses sent to a local Ollama server. It's useful for debugging, auditing, or understanding how prompts are being handled by your models.
Install Python dependencies (once):
```sh
pip install streamlit
```
Then start the interceptor:

```sh
sh run-interceptor.sh
```

Defaults:
- Target URL: `http://localhost:11434`
- Log Dir: `./logs`
- Listen Addr: `:11435`
or
```sh
sh run-interceptor.sh -target "http://ai-test-3:11434" -log "./newlogs" -listen ":7800"
```
- This starts a reverse proxy that logs requests/responses intended for the Ollama server.
- It creates a directory where logs are stored in a format the viewer understands.
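The actual interceptor is the Go script launched by `run-interceptor.sh`; the sketch below only illustrates the idea behind it and is not the real implementation. The flag names mirror the options shown above, but the log file naming scheme is an assumption, and for brevity only the request side is written to disk.

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"path/filepath"
	"time"
)

func main() {
	target := flag.String("target", "http://localhost:11434", "Ollama server to forward to")
	logDir := flag.String("log", "./logs", "directory for request/response logs")
	listen := flag.String("listen", ":11435", "address the proxy listens on")
	flag.Parse()

	upstream, err := url.Parse(*target)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.MkdirAll(*logDir, 0o755); err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Dump the incoming request (headers + body) before forwarding it.
		// DumpRequest replaces r.Body so the proxy can still read it.
		dump, _ := httputil.DumpRequest(r, true)
		name := fmt.Sprintf("%d-request.log", time.Now().UnixNano()) // hypothetical naming scheme
		_ = os.WriteFile(filepath.Join(*logDir, name), dump, 0o644)

		proxy.ServeHTTP(w, r) // forward transparently to Ollama
	})

	log.Printf("proxying %s on %s, logging to %s", *target, *listen, *logDir)
	log.Fatal(http.ListenAndServe(*listen, nil))
}
```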
Once you have logs:
```sh
streamlit run olla.py
```
- Select the default `logs/` folder or provide a custom path to view logs.
- Only logs written in the expected format (by the Go script) will be displayed correctly.
- Make sure the client app is configured to point to the proxy (e.g., `http://localhost:11435`) instead of directly to Ollama, as in the example below.
- The proxy will forward the request and respond transparently, while logging everything in between.
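For example, a client that would normally call Ollama directly only needs its base URL changed to the proxy address. A minimal sketch, assuming the default `:11435` listen address and a model you have already pulled (`llama3` here is just a placeholder):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Point at the interceptor instead of http://localhost:11434.
	body := []byte(`{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}`)

	resp, err := http.Post("http://localhost:11435/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // same response Ollama would have returned directly
}
```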