// Define table schema (must match the insert API schema above)
let table_template = Table::builder()
    .name("sensor_readings")
    .build()

// ...

let responses = bulk_writer.wait_for_all_pending().await?;

bulk_writer.finish().await?;
```
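The submit-then-wait control flow above can be illustrated without the ingester crate at all. Below is a std-only sketch that emulates "submit many requests, then wait for all pending" with threads and a channel; the function name and shape are invented for illustration, and the real client does this asynchronously via `write_rows_async()` and `wait_for_all_pending()`:

```rust
use std::sync::mpsc;
use std::thread;

/// Simulate submitting `n` independent write requests and then waiting for
/// all of them to complete, mirroring the submit-then-wait pattern.
/// (Illustrative only; not part of the ingester API.)
fn submit_and_wait_all(n: usize) -> Vec<usize> {
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for id in 0..n {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            // A real request would perform network I/O here.
            tx.send(id).unwrap();
        }));
    }
    drop(tx); // close the channel so the receiver can finish
    for h in handles {
        h.join().unwrap(); // "wait for all pending"
    }
    let mut done: Vec<usize> = rx.iter().collect();
    done.sort();
    done
}
```

The point of the pattern is that submission does not block on completion, so many requests can be in flight at once; only the final wait synchronizes.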

> **Important**:
> 1. **Manual Table Creation Required**: The bulk API does **not** create tables automatically. You must create the table beforehand using either:
>    - the insert API (which supports auto table creation), or
>    - SQL DDL statements (`CREATE TABLE`)
> 2. **Schema Matching**: The table template in the bulk API must exactly match the existing table schema.
> 3. **Column Types**: For bulk operations, currently use `add_field()` instead of `add_tag()`. Tag columns are part of the primary key in GreptimeDB, but bulk operations don't yet support tables with tag columns. This limitation will be addressed in future versions.
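The schema-matching requirement can be checked client-side before any bulk write. Below is a minimal std-only sketch; the `ColumnDef` type and `check_schema_match` helper are hypothetical stand-ins for this illustration, not part of the ingester API:

```rust
/// Hypothetical, simplified column description used only for this sketch;
/// the real ingester crate has its own schema types.
#[derive(Debug, PartialEq, Clone)]
struct ColumnDef {
    name: String,
    data_type: String, // e.g. "TIMESTAMP", "STRING", "FLOAT64"
}

/// Returns an error naming the first mismatch between the bulk-write
/// template and the schema of the pre-created table.
fn check_schema_match(template: &[ColumnDef], table: &[ColumnDef]) -> Result<(), String> {
    if template.len() != table.len() {
        return Err(format!(
            "column count mismatch: template has {}, table has {}",
            template.len(),
            table.len()
        ));
    }
    for (t, s) in template.iter().zip(table) {
        if t != s {
            return Err(format!("column mismatch: template {:?} vs table {:?}", t, s));
        }
    }
    Ok(())
}
```

Failing fast with a descriptive error at startup is cheaper than diagnosing a rejected bulk write mid-ingest.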

## When to Choose Which API
- Monitor and optimize network round-trip times

### For High-Throughput Applications

- **Create tables manually first** - bulk API requires existing tables
- Use parallelism=8-16 for network-bound workloads
- Batch 2000-100000 rows per request for optimal performance
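The batch-size advice can be sketched independently of the client library: chunk the pending rows into fixed-size requests before submitting them. The `Row` alias and helper below are placeholders invented for this illustration:

```rust
/// Placeholder row type for the sketch; the real crate defines its own rows.
type Row = Vec<String>;

/// Split `rows` into batches of at most `batch_size` rows, where each batch
/// would become one bulk write request (the guidance above suggests
/// 2000-100000 rows per request).
fn into_batches(rows: Vec<Row>, batch_size: usize) -> Vec<Vec<Row>> {
    assert!(batch_size > 0, "batch_size must be positive");
    let mut batches = Vec::new();
    let mut iter = rows.into_iter().peekable();
    while iter.peek().is_some() {
        batches.push(iter.by_ref().take(batch_size).collect());
    }
    batches
}
```

With batches formed this way, the parallelism setting then bounds how many of them are in flight concurrently.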

---

**`examples/README.md`**

```shell
cargo run --example bulk_stream_writer_example
```

- Async submission patterns with `write_rows_async()`
- Optimal configuration for high-volume scenarios
- Performance metrics and best practices
- **Important**: Bulk API requires manual table creation (does not auto-create tables)
- Current limitation: bulk operations work only with field columns (tag support coming)
## Choosing the Right Example
Use these metrics to:

3. Choose the right approach for your use case
4. Monitor production performance
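A simple way to monitor production performance is sustained write throughput. The helper below is illustrative only (not part of the client library):

```rust
use std::time::Duration;

/// Rows per second over a measured interval; returns None for a zero-length
/// interval to avoid dividing by zero.
fn rows_per_second(rows_written: u64, elapsed: Duration) -> Option<f64> {
    let secs = elapsed.as_secs_f64();
    if secs == 0.0 {
        None
    } else {
        Some(rows_written as f64 / secs)
    }
}
```

Comparing this figure across batch sizes and parallelism settings is a quick way to find the configuration sweet spot for a given network.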
## Important Notes for Bulk Operations

**Manual Table Creation Required**: Unlike the insert API, which can automatically create tables, the bulk API requires tables to exist beforehand. In production, you should:
1. **Create tables manually using SQL DDL**:

   ```sql
   CREATE TABLE sensor_readings (
       ts TIMESTAMP TIME INDEX,
       sensor_id STRING,
       temperature DOUBLE,
       sensor_status BIGINT
   );
   ```
2. **Or use the insert API first** (as shown in the examples):

   ```rust
   // Insert one row to create the table
   database.insert(initial_request).await?;
   // Then use the bulk API for high-throughput operations
   ```
## Column Types in Bulk vs Insert Operations
**Important Difference**: The two examples use different column types due to current limitations: