| Name | Description | Type | Default |
| ---- | ----------- | ---- | ------- |
| model | defines the Redis model used to persist the DataFrame, see [Persistence model](#persistence-model) | `enum [binary, hash]` | `hash` |
| filter.keys.by.type | make sure the underlying data structures match the persistence model | `Boolean` | `false` |
| partitions.number | number of partitions (applies only when reading a DataFrame) | `Int` | `3` |
| key.column | when writing, specifies the unique column used as the Redis key; by default a key is auto-generated. <br/> When reading, specifies the column name in which to store the hash key | `String` | - |
| ttl | data time to live in seconds; data does not expire if `ttl` is less than `1` | `Int` | `0` |
| infer.schema | infer the schema from a random row; all columns will have `String` type | `Boolean` | `false` |
| max.pipeline.size | maximum number of commands per pipeline (used to batch commands) | `Int` | `100` |
| scan.count | count option of the SCAN command (used to iterate over keys) | `Int` | `100` |
| iterator.grouping.size | the number of items to be grouped when iterating over the underlying RDD partition | `Int` | `1000` |
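
These options are passed through the standard DataFrame reader/writer `option(...)` calls. The Scala sketch below is a minimal illustration of that syntax, not a complete application: it assumes spark-redis is on the classpath, that Redis is reachable at the host/port shown, and that the `table` option and the `org.apache.spark.sql.redis` format name from the wider spark-redis API apply. The `person` table name and the sample data are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object RedisOptionsExample {
  def main(args: Array[String]): Unit = {
    // Assumed connection settings; adjust to your Redis deployment.
    val spark = SparkSession.builder()
      .appName("redis-options-example")
      .master("local[*]")
      .config("spark.redis.host", "localhost")
      .config("spark.redis.port", "6379")
      .getOrCreate()

    import spark.implicits._

    val df = Seq(("john", 30), ("mary", 25)).toDF("name", "age")

    // Write: use the "name" column as the Redis key and expire rows after one hour.
    df.write
      .format("org.apache.spark.sql.redis")
      .option("table", "person")        // "table" is from the broader spark-redis API
      .option("key.column", "name")
      .option("ttl", "3600")
      .mode("overwrite")
      .save()

    // Read back: infer the schema from a random row (all columns come back as String)
    // and split the scan across 8 partitions.
    val loaded = spark.read
      .format("org.apache.spark.sql.redis")
      .option("table", "person")
      .option("key.column", "name")
      .option("infer.schema", "true")
      .option("partitions.number", "8")
      .load()

    loaded.show()
    spark.stop()
  }
}
```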