update /api/delete to use POST method instead of DELETE #178

Draft · wants to merge 2 commits into main
10 changes: 8 additions & 2 deletions README.md
@@ -21,7 +21,9 @@ console.log(response.message.content)
```

### Browser Usage

To use the library without node, import the browser module.

```javascript
import ollama from 'ollama/browser'
```
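
For example, a chat request with the browser client might look like the following (a minimal sketch, assuming a reachable Ollama server and a pulled `llama3.1` model):

```javascript
import ollama from 'ollama/browser'

// Send a single chat message and print the reply
const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```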
@@ -34,7 +36,11 @@ Response streaming can be enabled by setting `stream: true`, modifying function
import ollama from 'ollama'

const message = { role: 'user', content: 'Why is the sky blue?' }
const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
const response = await ollama.chat({
model: 'llama3.1',
messages: [message],
stream: true,
})
for await (const part of response) {
process.stdout.write(part.message.content)
}
@@ -207,7 +213,7 @@ ollama.abort()
This method will abort **all** streamed generations currently running with the client instance.
If there is a need to manage streams with timeouts, it is recommended to have one Ollama client per stream.

All asynchronous threads listening to streams (typically the ```for await (const part of response)```) will throw an ```AbortError``` exception. See [examples/abort/abort-all-requests.ts](examples/abort/abort-all-requests.ts) for an example.
All asynchronous threads listening to streams (typically the `for await (const part of response)`) will throw an `AbortError` exception. See [examples/abort/abort-all-requests.ts](examples/abort/abort-all-requests.ts) for an example.
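
A minimal sketch of the one-client-per-stream pattern (it assumes a local Ollama server and a pulled `llama3.2` model): giving the long-running request its own client lets a timeout abort it without touching streams owned by other client instances.

```javascript
import { Ollama } from 'ollama'

// Dedicate a client to this request so aborting it does not
// interrupt streams started on other client instances.
const dragonClient = new Ollama()

// Abort the dragons story if it runs longer than five seconds.
setTimeout(() => dragonClient.abort(), 5000)

try {
  const stream = await dragonClient.generate({
    model: 'llama3.2',
    prompt: 'Write a long story about dragons',
    stream: true,
  })
  for await (const part of stream) {
    process.stdout.write(part.response)
  }
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Dragons story request has been aborted')
  } else {
    throw error
  }
}
```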

## Custom client

50 changes: 25 additions & 25 deletions examples/abort/abort-all-requests.ts
@@ -8,45 +8,45 @@ setTimeout(() => {

// Start multiple concurrent streaming requests
Promise.all([
ollama.generate({
model: 'llama3.2',
prompt: 'Write a long story about dragons',
stream: true,
}).then(
async (stream) => {
ollama
.generate({
model: 'llama3.2',
prompt: 'Write a long story about dragons',
stream: true,
})
.then(async (stream) => {
console.log(' Starting stream for dragons story...')
for await (const chunk of stream) {
process.stdout.write(' 1> ' + chunk.response)
}
}
),
}),

ollama.generate({
model: 'llama3.2',
prompt: 'Write a long story about wizards',
stream: true,
}).then(
async (stream) => {
ollama
.generate({
model: 'llama3.2',
prompt: 'Write a long story about wizards',
stream: true,
})
.then(async (stream) => {
console.log(' Starting stream for wizards story...')
for await (const chunk of stream) {
process.stdout.write(' 2> ' + chunk.response)
}
}
),
}),

ollama.generate({
model: 'llama3.2',
prompt: 'Write a long story about knights',
stream: true,
}).then(
async (stream) => {
ollama
.generate({
model: 'llama3.2',
prompt: 'Write a long story about knights',
stream: true,
})
.then(async (stream) => {
console.log(' Starting stream for knights story...')
for await (const chunk of stream) {
process.stdout.write(' 3>' + chunk.response)
}
}
)
]).catch(error => {
}),
]).catch((error) => {
if (error.name === 'AbortError') {
console.log('All requests have been aborted')
} else {
37 changes: 17 additions & 20 deletions examples/abort/abort-single-request.ts
@@ -13,38 +13,35 @@ setTimeout(() => {

// Start multiple concurrent streaming requests with different clients
Promise.all([
client1.generate({
model: 'llama3.2',
prompt: 'Write a long story about dragons',
stream: true,
}).then(
async (stream) => {
client1
.generate({
model: 'llama3.2',
prompt: 'Write a long story about dragons',
stream: true,
})
.then(async (stream) => {
console.log(' Starting stream for dragons story...')
for await (const chunk of stream) {
process.stdout.write(' 1> ' + chunk.response)
}
}
),
}),

client2.generate({
model: 'llama3.2',
prompt: 'Write a short story about wizards',
stream: true,
}).then(
async (stream) => {
client2
.generate({
model: 'llama3.2',
prompt: 'Write a short story about wizards',
stream: true,
})
.then(async (stream) => {
console.log(' Starting stream for wizards story...')
for await (const chunk of stream) {
process.stdout.write(' 2> ' + chunk.response)
}
}
),

]).catch(error => {
}),
]).catch((error) => {
if (error.name === 'AbortError') {
console.log('Dragons story request has been aborted')
} else {
console.error('An error occurred:', error)
}
})


129 changes: 71 additions & 58 deletions examples/structured_outputs/structured-outputs-image.ts
@@ -1,10 +1,10 @@
import ollama from 'ollama';
import ollama from 'ollama'

import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
import { readFileSync } from 'fs';
import { resolve } from 'path';
import { createInterface } from 'readline';
import { z } from 'zod'
import { zodToJsonSchema } from 'zod-to-json-schema'
import { readFileSync } from 'fs'
import { resolve } from 'path'
import { createInterface } from 'readline'

/*
Ollama vision capabilities with structured outputs
@@ -14,70 +14,83 @@ import { createInterface } from 'readline';

// Schema for individual objects detected in the image
const ObjectSchema = z.object({
name: z.string().describe('The name of the object'),
confidence: z.number().min(0).max(1).describe('The confidence score of the object detection'),
attributes: z.record(z.any()).optional().describe('Additional attributes of the object')
});
name: z.string().describe('The name of the object'),
confidence: z
.number()
.min(0)
.max(1)
.describe('The confidence score of the object detection'),
attributes: z
.record(z.any())
.optional()
.describe('Additional attributes of the object'),
})

// Schema for individual objects detected in the image
const ImageDescriptionSchema = z.object({
summary: z.string().describe('A concise summary of the image'),
objects: z.array(ObjectSchema).describe('An array of objects detected in the image'),
scene: z.string().describe('The scene of the image'),
colors: z.array(z.string()).describe('An array of colors detected in the image'),
time_of_day: z.enum(['Morning', 'Afternoon', 'Evening', 'Night']).describe('The time of day the image was taken'),
setting: z.enum(['Indoor', 'Outdoor', 'Unknown']).describe('The setting of the image'),
text_content: z.string().describe('Any text detected in the image')
});
summary: z.string().describe('A concise summary of the image'),
objects: z.array(ObjectSchema).describe('An array of objects detected in the image'),
scene: z.string().describe('The scene of the image'),
colors: z.array(z.string()).describe('An array of colors detected in the image'),
time_of_day: z
.enum(['Morning', 'Afternoon', 'Evening', 'Night'])
.describe('The time of day the image was taken'),
setting: z.enum(['Indoor', 'Outdoor', 'Unknown']).describe('The setting of the image'),
text_content: z.string().describe('Any text detected in the image'),
})

async function run(model: string) {
// Create readline interface for user input
const rl = createInterface({
input: process.stdin,
output: process.stdout
});
// Create readline interface for user input
const rl = createInterface({
input: process.stdin,
output: process.stdout,
})

// Get path from user input
const path = await new Promise<string>(resolve => {
rl.question('Enter the path to your image: ', resolve);
});
rl.close();
// Get path from user input
const path = await new Promise<string>((resolve) => {
rl.question('Enter the path to your image: ', resolve)
})
rl.close()

// Verify the file exists and read it
try {
const imagePath = resolve(path);
const imageBuffer = readFileSync(imagePath);
const base64Image = imageBuffer.toString('base64');

// Convert the Zod schema to JSON Schema format
const jsonSchema = zodToJsonSchema(ImageDescriptionSchema);
// Verify the file exists and read it
try {
const imagePath = resolve(path)
const imageBuffer = readFileSync(imagePath)
const base64Image = imageBuffer.toString('base64')

const messages = [{
role: 'user',
content: 'Analyze this image and return a detailed JSON description including objects, scene, colors and any text detected. If you cannot determine certain details, leave those fields empty.',
images: [base64Image]
}];
// Convert the Zod schema to JSON Schema format
const jsonSchema = zodToJsonSchema(ImageDescriptionSchema)

const response = await ollama.chat({
model: model,
messages: messages,
format: jsonSchema,
options: {
temperature: 0 // Make responses more deterministic
}
});
const messages = [
{
role: 'user',
content:
'Analyze this image and return a detailed JSON description including objects, scene, colors and any text detected. If you cannot determine certain details, leave those fields empty.',
images: [base64Image],
},
]

// Parse and validate the response
try {
const imageAnalysis = ImageDescriptionSchema.parse(JSON.parse(response.message.content));
console.log('Image Analysis:', imageAnalysis);
} catch (error) {
console.error("Generated invalid response:", error);
}
const response = await ollama.chat({
model: model,
messages: messages,
format: jsonSchema,
options: {
temperature: 0, // Make responses more deterministic
},
})

// Parse and validate the response
try {
const imageAnalysis = ImageDescriptionSchema.parse(
JSON.parse(response.message.content),
)
console.log('Image Analysis:', imageAnalysis)
} catch (error) {
console.error("Error reading image file:", error);
console.error('Generated invalid response:', error)
}
} catch (error) {
console.error('Error reading image file:', error)
}
}

run('llama3.2-vision').catch(console.error);
run('llama3.2-vision').catch(console.error)