Ollama Request Interceptor & Log Viewer

This tool helps you intercept, log, and visualize requests and responses sent to a local Ollama server. It's useful for debugging, auditing, or understanding how prompts are being handled by your models.

🧰 Dependencies

Install Python dependencies (once):

pip install streamlit

⚙️ Getting Started

1. Clone the Repository

git clone https://github.com/SKM162/Ollama-Local-Log-Analyser.git
cd Ollama-Local-Log-Analyser

2. Run the Interceptor

sh run-interceptor.sh

Defaults:

  • Target URL: http://localhost:11434
  • Log Dir: ./logs
  • Listen Addr: :11435

Or override them with flags:

sh run-interceptor.sh -target "http://ai-test-3:11434" -log "./newlogs" -listen ":7800"
  • This starts a reverse proxy that logs every request and response on its way to and from the Ollama server.
  • It creates the log directory and stores logs there in a format the viewer understands (see the sketch below).
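
The heavy lifting is done by the repository's Go script. Purely as an illustration of the approach (not the actual source), a minimal logging reverse proxy can be built on Go's net/http/httputil. The flag names below mirror the script's -target, -log, and -listen options; the file naming and log layout are assumptions, not the format the viewer actually expects:

package main

import (
	"bytes"
	"flag"
	"fmt"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"path/filepath"
	"time"
)

func main() {
	target := flag.String("target", "http://localhost:11434", "upstream Ollama URL")
	logDir := flag.String("log", "./logs", "directory to write logs into")
	listen := flag.String("listen", ":11435", "address the proxy listens on")
	flag.Parse()

	upstream, err := url.Parse(*target)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.MkdirAll(*logDir, 0o755); err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Log the response body after the upstream replies, then restore it so
	// the client still receives the full payload. This buffers the whole
	// response, so streamed replies are written once complete.
	proxy.ModifyResponse = func(resp *http.Response) error {
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		resp.Body = io.NopCloser(bytes.NewReader(body))
		name := fmt.Sprintf("%d-response.log", time.Now().UnixNano())
		return os.WriteFile(filepath.Join(*logDir, name), body, 0o644)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Read and restore the request body so it can be both logged
		// and forwarded to the upstream server.
		body, _ := io.ReadAll(r.Body)
		r.Body = io.NopCloser(bytes.NewReader(body))
		name := fmt.Sprintf("%d-request.log", time.Now().UnixNano())
		_ = os.WriteFile(filepath.Join(*logDir, name), body, 0o644)
		proxy.ServeHTTP(w, r)
	})

	log.Printf("proxying %s -> %s, logging to %s", *listen, *target, *logDir)
	log.Fatal(http.ListenAndServe(*listen, nil))
}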

📊 Viewing the Logs

Once you have logs:

3. Launch the Viewer

streamlit run olla.py
  • Select the default logs/ folder or provide a custom path to view logs.
  • Only logs written in the expected format (by the Go script) will be displayed correctly.

📌 Notes

  • Make sure the client app is configured to point to the proxy (e.g., http://localhost:11435) instead of directly to Ollama.
  • The proxy forwards each request and relays the response transparently, logging everything that passes through (see the example below).
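
For example, a client that would normally POST to Ollama's /api/generate endpoint should target the proxy instead (the model name here is only a placeholder):

curl http://localhost:11435/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'

The proxy forwards the call to http://localhost:11434/api/generate, relays the reply back, and writes both sides of the exchange to the log directory.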
