Commit 6516e40

feat: integration test

1 parent 9f27b32 commit 6516e40

File tree

2 files changed: +68 −3 lines changed

README.md

Lines changed: 46 additions & 3 deletions
````diff
@@ -45,11 +45,48 @@ let affected_rows = database.insert(insert_request).await?;
 
 ```rust
 use greptimedb_ingester::{BulkInserter, BulkWriteOptions, ColumnDataType, Row, Table, Value};
+use greptimedb_ingester::api::v1::*;
+use greptimedb_ingester::helpers::schema::*;
+use greptimedb_ingester::helpers::values::*;
+
+// Step 1: Create table manually (bulk API requires table to exist beforehand)
+// Option A: Use insert API to create table
+let database = Database::new_with_dbname("public", client.clone());
+let init_schema = vec![
+    timestamp("ts", ColumnDataType::TimestampMillisecond),
+    field("device_id", ColumnDataType::String),
+    field("temperature", ColumnDataType::Float64),
+];
+
+let init_request = RowInsertRequests {
+    inserts: vec![RowInsertRequest {
+        table_name: "sensor_readings".to_string(),
+        rows: Some(Rows {
+            schema: init_schema,
+            rows: vec![Row {
+                values: vec![
+                    timestamp_millisecond_value(current_timestamp()),
+                    string_value("init_device".to_string()),
+                    f64_value(0.0),
+                ],
+            }],
+        }),
+    }],
+};
+
+database.insert(init_request).await?; // Table is now created
+
+// Option B: Create table using SQL (if you have SQL access)
+// CREATE TABLE sensor_readings (
+//     ts TIMESTAMP TIME INDEX,
+//     device_id STRING,
+//     temperature DOUBLE
+// );
 
-// Create bulk inserter
+// Step 2: Now use bulk API for high-throughput operations
 let bulk_inserter = BulkInserter::new(client, "public");
 
-// Define table schema
+// Define table schema (must match the insert API schema above)
 let table_template = Table::builder()
     .name("sensor_readings")
     .build()
@@ -81,7 +118,12 @@ let responses = bulk_writer.wait_for_all_pending().await?;
 bulk_writer.finish().await?;
 ```
 
-> **Important**: For bulk operations, currently use `add_field()` instead of `add_tag()`. Tag columns are part of the primary key in GreptimeDB, but bulk operations don't yet support tables with tag columns. This limitation will be addressed in future versions.
+> **Important**:
+> 1. **Manual Table Creation Required**: Bulk API does **not** create tables automatically. You must create the table beforehand using either:
+>    - Insert API (which supports auto table creation), or
+>    - SQL DDL statements (CREATE TABLE)
+> 2. **Schema Matching**: The table template in bulk API must exactly match the existing table schema.
+> 3. **Column Types**: For bulk operations, currently use `add_field()` instead of `add_tag()`. Tag columns are part of the primary key in GreptimeDB, but bulk operations don't yet support tables with tag columns. This limitation will be addressed in future versions.
 
 ## When to Choose Which API
 
@@ -192,6 +234,7 @@ if let Some(binary_data) = row.get_binary(5) {
 - Monitor and optimize network round-trip times
 
 ### For High-Throughput Applications
+- **Create tables manually first** - bulk API requires existing tables
 - Use parallelism=8-16 for network-bound workloads
 - Batch 2000-100000 rows per request for optimal performance
 - Enable compression to reduce network overhead
````
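The "schema matching" rule in the note above can be illustrated with a small, self-contained sketch. Everything below is hypothetical plain Rust with no GreptimeDB dependency; `ColumnSpec` and `check_schema_match` are illustrative names, not part of the `greptimedb_ingester` API:

```rust
// Sketch of the rule: the bulk-API table template must line up
// column-for-column (same order, names, and types) with the table
// that already exists. Hypothetical types, no GreptimeDB dependency.

#[derive(Debug, Clone, PartialEq)]
struct ColumnSpec {
    name: String,
    data_type: &'static str, // e.g. "TimestampMillisecond", "String", "Float64"
}

fn column(name: &str, data_type: &'static str) -> ColumnSpec {
    ColumnSpec { name: name.to_string(), data_type }
}

/// Ok(()) if the template matches the existing table exactly;
/// Err describing the first mismatch otherwise.
fn check_schema_match(existing: &[ColumnSpec], template: &[ColumnSpec]) -> Result<(), String> {
    if existing.len() != template.len() {
        return Err(format!(
            "column count differs: table has {}, template has {}",
            existing.len(),
            template.len()
        ));
    }
    for (i, (have, want)) in existing.iter().zip(template).enumerate() {
        if have != want {
            return Err(format!("column {i} differs: table {have:?}, template {want:?}"));
        }
    }
    Ok(())
}

fn main() {
    // Schema the insert API created in step 1 of the example above.
    let existing = vec![
        column("ts", "TimestampMillisecond"),
        column("device_id", "String"),
        column("temperature", "Float64"),
    ];

    // A matching bulk template passes...
    assert!(check_schema_match(&existing, &existing).is_ok());

    // ...while a template with a renamed column is rejected.
    let renamed = vec![
        column("ts", "TimestampMillisecond"),
        column("device", "String"),
        column("temperature", "Float64"),
    ];
    assert!(check_schema_match(&existing, &renamed).is_err());
    println!("schema check behaves as expected");
}
```

The exact comparison GreptimeDB performs may differ (it could, for instance, be order-insensitive); the sketch only conveys the "must exactly match" constraint, not the server's implementation.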

examples/README.md

Lines changed: 22 additions & 0 deletions
````diff
@@ -40,6 +40,7 @@ cargo run --example bulk_stream_writer_example
 - Async submission patterns with `write_rows_async()`
 - Optimal configuration for high-volume scenarios
 - Performance metrics and best practices
+- **Important**: Bulk API requires manual table creation (does not auto-create tables)
 - Current limitation: bulk operations work only with field columns (tag support coming)
 
 ## Choosing the Right Example
@@ -182,6 +183,27 @@ Use these metrics to:
 3. Choose the right approach for your use case
 4. Monitor production performance
 
+## Important Notes for Bulk Operations
+
+**Manual Table Creation Required**: Unlike the insert API, which can automatically create tables, the bulk API requires tables to exist beforehand. In production, you should:
+
+1. **Create tables manually using SQL DDL**:
+```sql
+CREATE TABLE sensor_readings (
+    ts TIMESTAMP TIME INDEX,
+    sensor_id STRING,
+    temperature DOUBLE,
+    sensor_status BIGINT
+);
+```
+
+2. **Or use insert API first** (as shown in examples):
+```rust
+// Insert one row to create the table
+database.insert(initial_request).await?;
+// Then use bulk API for high-throughput operations
+```
+
 ## Column Types in Bulk vs Insert Operations
 
 **Important Difference**: The two examples use different column types due to current limitations:
````
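The high-throughput guidance in the README diff (batch 2000-100000 rows per request, parallelism=8-16) can be sanity-checked with a back-of-the-envelope sketch. This is hypothetical plain Rust with no GreptimeDB dependency; `plan_batches` is an illustrative helper, not a crate API:

```rust
// Given a total row count, a per-request batch size, and a writer
// parallelism, compute how many bulk requests would be issued and how
// many "waves" of concurrent requests that implies. The recommended
// ranges below come from the README guidance; the helper is hypothetical.

fn plan_batches(total_rows: usize, batch_size: usize, parallelism: usize) -> (usize, usize) {
    assert!((2_000..=100_000).contains(&batch_size), "batch size outside recommended range");
    assert!((8..=16).contains(&parallelism), "parallelism outside recommended range");
    let requests = (total_rows + batch_size - 1) / batch_size; // ceil division
    let rounds = (requests + parallelism - 1) / parallelism;   // in-flight waves
    (requests, rounds)
}

fn main() {
    // 1M rows in 10k-row requests over 8 concurrent writers:
    let (requests, rounds) = plan_batches(1_000_000, 10_000, 8);
    assert_eq!(requests, 100);
    assert_eq!(rounds, 13);
    println!("{requests} requests in {rounds} rounds");
}
```

Actual throughput also depends on compression, row width, and network round-trip times, so treat the numbers as a starting point for benchmarking rather than a performance model.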
