Commit c1d7f43
Added max_item_size to Memcached client (#2304)

* Added max_item_size to Memcached client
* Changed imports order and splitted tests
* Fixed type casting
* Changed imports grouping
* Changed memcached max_item_size default from 0 to 1MB
* Increased e2e tests timeout
* Fixed typo in CHANGELOG
* Reverted Makefile changes

Signed-off-by: Marco Pracucci <marco@pracucci.com>
1 parent c040657 commit c1d7f43

8 files changed: +71 −8 lines changed

CHANGELOG.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -23,6 +23,7 @@ We use *breaking* word for marking changes that are not backward compatible (rel
 - [#2265](https://github.com/thanos-io/thanos/pull/2265) Compactor: Add `--wait-interval` to specify compaction wait interval between consecutive compact runs when `--wait` enabled.
 - [#2250](https://github.com/thanos-io/thanos/pull/2250) Compactor: Enable vertical compaction for offline deduplication (Experimental). Uses `--deduplication.replica-label` flag to specify the replica label to deduplicate on (Hidden). Please note that this uses a NAIVE algorithm for merging (no smart replica deduplication, just chaining samples together). This works well for deduplication of blocks with **precisely the same samples** like produced by Receiver replication. We plan to add a smarter algorithm in the following weeks.
 - [#1714](https://github.com/thanos-io/thanos/pull/1714) Run the bucket web UI in the compact component when it is run as a long-lived process.
+- [#2304](https://github.com/thanos-io/thanos/pull/2304) Store: Added `max_item_size` config option to memcached-based index cache. This should be set to the max item size configured in memcached (`-I` flag) in order to not waste network round-trips to cache items larger than the limit configured in memcached.
 
 ### Changed
```

cmd/thanos/store.go

Lines changed: 1 addition & 1 deletion

```diff
@@ -226,7 +226,7 @@ func runStore(
 		indexCache, err = storecache.NewIndexCache(logger, indexCacheContentYaml, reg)
 	} else {
 		indexCache, err = storecache.NewInMemoryIndexCacheWithConfig(logger, reg, storecache.InMemoryIndexCacheConfig{
-			MaxSize:     storecache.Bytes(indexCacheSizeBytes),
+			MaxSize:     model.Bytes(indexCacheSizeBytes),
 			MaxItemSize: storecache.DefaultInMemoryIndexCacheConfig.MaxItemSize,
 		})
 	}
```

docs/components/store.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -212,6 +212,7 @@ config:
   max_idle_connections: 0
   max_async_concurrency: 0
   max_async_buffer_size: 0
+  max_item_size: 1MiB
   max_get_multi_concurrency: 0
   max_get_multi_batch_size: 0
   dns_provider_update_interval: 0s
@@ -229,6 +230,7 @@ While the remaining settings are **optional**:
 - `max_async_buffer_size`: maximum number of enqueued asynchronous operations allowed.
 - `max_get_multi_concurrency`: maximum number of concurrent connections when fetching keys. If set to `0`, the concurrency is unlimited.
 - `max_get_multi_batch_size`: maximum number of keys a single underlying operation should fetch. If more keys are specified, internally keys are splitted into multiple batches and fetched concurrently, honoring `max_get_multi_concurrency`. If set to `0`, the batch size is unlimited.
+- `max_item_size`: maximum size of an item to be stored in memcached. This option should be set to the same value of memcached `-I` flag (defaults to 1MB) in order to avoid wasting network round trips to store items larger than the max item size allowed in memcached. If set to `0`, the item size is unlimited.
 - `dns_provider_update_interval`: the DNS discovery update interval.
 
 ## Index Header
```
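The `max_item_size` option accepts human-readable base-2 sizes such as `1MiB`. The moved `model` package delegates the actual parsing to the `alecthomas/units` library (visible in its import below), but the resolution it performs can be sketched with a stdlib-only stand-in; `parseBase2Bytes` here is illustrative, not the Thanos implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBase2Bytes is a minimal sketch of resolving a size string like
// "1MiB" into a byte count, as a model.Bytes-style YAML value would be.
// Suffixes are checked longest-first so "MiB" is not mistaken for "B".
func parseBase2Bytes(s string) (uint64, error) {
	suffixes := []struct {
		unit string
		mult uint64
	}{
		{"GiB", 1 << 30},
		{"MiB", 1 << 20},
		{"KiB", 1 << 10},
		{"B", 1},
	}
	for _, suf := range suffixes {
		if strings.HasSuffix(s, suf.unit) {
			n, err := strconv.ParseUint(strings.TrimSuffix(s, suf.unit), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * suf.mult, nil
		}
	}
	return 0, fmt.Errorf("unrecognized size %q", s)
}

func main() {
	v, err := parseBase2Bytes("1MiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // 1048576 — matches the 1MB (-I) memcached default
}
```

This mirrors why the docs recommend `1MiB`: it resolves to 1048576 bytes, the default item size limit enforced by memcached's `-I` flag.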

pkg/cacheutil/memcached_client.go

Lines changed: 23 additions & 2 deletions

```diff
@@ -17,13 +17,15 @@ import (
 	"github.com/thanos-io/thanos/pkg/discovery/dns"
 	"github.com/thanos-io/thanos/pkg/extprom"
 	"github.com/thanos-io/thanos/pkg/gate"
+	"github.com/thanos-io/thanos/pkg/model"
 	"github.com/thanos-io/thanos/pkg/tracing"
 	yaml "gopkg.in/yaml.v2"
 )
 
 const (
-	opSet      = "set"
-	opGetMulti = "getmulti"
+	opSet             = "set"
+	opGetMulti        = "getmulti"
+	reasonMaxItemSize = "max-item-size"
 )
 
 var (
@@ -35,6 +37,7 @@ var (
 		MaxIdleConnections:        100,
 		MaxAsyncConcurrency:       20,
 		MaxAsyncBufferSize:        10000,
+		MaxItemSize:               model.Bytes(1024 * 1024),
 		MaxGetMultiConcurrency:    100,
 		MaxGetMultiBatchSize:      0,
 		DNSProviderUpdateInterval: 10 * time.Second,
@@ -88,6 +91,11 @@ type MemcachedClientConfig struct {
 	// running GetMulti() operations. If set to 0, concurrency is unlimited.
 	MaxGetMultiConcurrency int `yaml:"max_get_multi_concurrency"`
 
+	// MaxItemSize specifies the maximum size of an item stored in memcached. Bigger
+	// items are skipped to be stored by the client. If set to 0, no maximum size is
+	// enforced.
+	MaxItemSize model.Bytes `yaml:"max_item_size"`
+
 	// MaxGetMultiBatchSize specifies the maximum number of keys a single underlying
 	// GetMulti() should run. If more keys are specified, internally keys are splitted
 	// into multiple batches and fetched concurrently, honoring MaxGetMultiConcurrency
@@ -140,6 +148,7 @@ type memcachedClient struct {
 	// Tracked metrics.
 	operations *prometheus.CounterVec
 	failures   *prometheus.CounterVec
+	skipped    *prometheus.CounterVec
 	duration   *prometheus.HistogramVec
 }
 
@@ -215,6 +224,12 @@ func newMemcachedClient(
 		ConstLabels: prometheus.Labels{"name": name},
 	}, []string{"operation"})
 
+	c.skipped = promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
+		Name:        "thanos_memcached_operation_skipped_total",
+		Help:        "Total number of operations against memcached that have been skipped.",
+		ConstLabels: prometheus.Labels{"name": name},
+	}, []string{"operation", "reason"})
+
 	c.duration = promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
 		Name: "thanos_memcached_operation_duration_seconds",
 		Help: "Duration of operations against memcached.",
@@ -250,6 +265,12 @@ func (c *memcachedClient) Stop() {
 }
 
 func (c *memcachedClient) SetAsync(ctx context.Context, key string, value []byte, ttl time.Duration) (err error) {
+	// Skip hitting memcached at all if the item is bigger than the max allowed size.
+	if c.config.MaxItemSize > 0 && uint64(len(value)) > uint64(c.config.MaxItemSize) {
+		c.skipped.WithLabelValues(opSet, reasonMaxItemSize).Inc()
+		return nil
+	}
+
 	return c.enqueueAsync(func() {
 		start := time.Now()
 		c.operations.WithLabelValues(opSet).Inc()
```
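The core of the change is the early-return guard in `SetAsync`: oversized items are dropped client-side (counted, not errored) before an async set is ever enqueued, since memcached would reject them anyway. The pattern can be sketched in isolation; `client`, `setAsync`, and the plain `skipped` counter below are illustrative stand-ins for the Thanos types and their Prometheus metric:

```go
package main

import "fmt"

// client stands in for memcachedClient: it carries the configured limit and
// a counter that mirrors thanos_memcached_operation_skipped_total.
type client struct {
	maxItemSize uint64 // bytes; 0 means unlimited, matching the patch semantics
	skipped     int
}

// setAsync sketches the guard added in this commit: check the value length
// before enqueuing the async set. Returns true if the item would be stored.
// Note the skip is silent (the real SetAsync returns nil), so callers are
// not burdened with handling an "item too big" error.
func (c *client) setAsync(key string, value []byte) bool {
	if c.maxItemSize > 0 && uint64(len(value)) > c.maxItemSize {
		c.skipped++
		return false
	}
	// ... the real client enqueues the memcached set operation here ...
	return true
}

func main() {
	c := &client{maxItemSize: 10}
	fmt.Println(c.setAsync("key-1", []byte("value-1")))                       // fits the limit
	fmt.Println(c.setAsync("key-2", []byte("value-2-too-long-to-be-stored"))) // skipped
	fmt.Println(c.skipped)
}
```

This is why the default was bumped from 0 to 1MB in this commit: with the limit matching memcached's own default, the client saves a guaranteed-to-fail network round trip per oversized item.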

pkg/cacheutil/memcached_client_test.go

Lines changed: 39 additions & 1 deletion

```diff
@@ -14,6 +14,7 @@ import (
 	"github.com/go-kit/kit/log"
 	"github.com/pkg/errors"
 	prom_testutil "github.com/prometheus/client_golang/prometheus/testutil"
+	"github.com/thanos-io/thanos/pkg/model"
 	"github.com/thanos-io/thanos/pkg/testutil"
 )
 
@@ -76,6 +77,7 @@ addresses:
 	testutil.Equals(t, defaultMemcachedClientConfig.DNSProviderUpdateInterval, cache.config.DNSProviderUpdateInterval)
 	testutil.Equals(t, defaultMemcachedClientConfig.MaxGetMultiConcurrency, cache.config.MaxGetMultiConcurrency)
 	testutil.Equals(t, defaultMemcachedClientConfig.MaxGetMultiBatchSize, cache.config.MaxGetMultiBatchSize)
+	testutil.Equals(t, defaultMemcachedClientConfig.MaxItemSize, cache.config.MaxItemSize)
 
 	// Should instance a memcached client with configured YAML config.
 	conf = []byte(`
@@ -87,6 +89,7 @@ max_idle_connections: 1
 max_async_concurrency: 1
 max_async_buffer_size: 1
 max_get_multi_concurrency: 1
+max_item_size: 1MiB
 max_get_multi_batch_size: 1
 dns_provider_update_interval: 1s
 `)
@@ -102,6 +105,7 @@ dns_provider_update_interval: 1s
 	testutil.Equals(t, 1*time.Second, cache.config.DNSProviderUpdateInterval)
 	testutil.Equals(t, 1, cache.config.MaxGetMultiConcurrency)
 	testutil.Equals(t, 1, cache.config.MaxGetMultiBatchSize)
+	testutil.Equals(t, model.Bytes(1024*1024), cache.config.MaxItemSize)
 }
 
 func TestMemcachedClient_SetAsync(t *testing.T) {
@@ -120,9 +124,43 @@ func TestMemcachedClient_SetAsync(t *testing.T) {
 	testutil.Ok(t, client.SetAsync(ctx, "key-2", []byte("value-2"), time.Second))
 	testutil.Ok(t, backendMock.waitItems(2))
 
+	actual, err := client.getMultiSingle(ctx, []string{"key-1", "key-2"})
+	testutil.Ok(t, err)
+	testutil.Equals(t, []byte("value-1"), actual["key-1"].Value)
+	testutil.Equals(t, []byte("value-2"), actual["key-2"].Value)
+
 	testutil.Equals(t, 2.0, prom_testutil.ToFloat64(client.operations.WithLabelValues(opSet)))
-	testutil.Equals(t, 0.0, prom_testutil.ToFloat64(client.operations.WithLabelValues(opGetMulti)))
+	testutil.Equals(t, 1.0, prom_testutil.ToFloat64(client.operations.WithLabelValues(opGetMulti)))
+	testutil.Equals(t, 0.0, prom_testutil.ToFloat64(client.failures.WithLabelValues(opSet)))
+	testutil.Equals(t, 0.0, prom_testutil.ToFloat64(client.skipped.WithLabelValues(opSet, reasonMaxItemSize)))
+}
+
+func TestMemcachedClient_SetAsyncWithCustomMaxItemSize(t *testing.T) {
+	defer leaktest.CheckTimeout(t, 10*time.Second)()
+
+	ctx := context.Background()
+	config := defaultMemcachedClientConfig
+	config.Addresses = []string{"127.0.0.1:11211"}
+	config.MaxItemSize = model.Bytes(10)
+	backendMock := newMemcachedClientBackendMock()
+
+	client, err := prepare(config, backendMock)
+	testutil.Ok(t, err)
+	defer client.Stop()
+
+	testutil.Ok(t, client.SetAsync(ctx, "key-1", []byte("value-1"), time.Second))
+	testutil.Ok(t, client.SetAsync(ctx, "key-2", []byte("value-2-too-long-to-be-stored"), time.Second))
+	testutil.Ok(t, backendMock.waitItems(1))
+
+	actual, err := client.getMultiSingle(ctx, []string{"key-1", "key-2"})
+	testutil.Ok(t, err)
+	testutil.Equals(t, []byte("value-1"), actual["key-1"].Value)
+	testutil.Equals(t, (*memcache.Item)(nil), actual["key-2"])
+
+	testutil.Equals(t, 1.0, prom_testutil.ToFloat64(client.operations.WithLabelValues(opSet)))
+	testutil.Equals(t, 1.0, prom_testutil.ToFloat64(client.operations.WithLabelValues(opGetMulti)))
 	testutil.Equals(t, 0.0, prom_testutil.ToFloat64(client.failures.WithLabelValues(opSet)))
+	testutil.Equals(t, 1.0, prom_testutil.ToFloat64(client.skipped.WithLabelValues(opSet, reasonMaxItemSize)))
 }
 
 func TestMemcachedClient_GetMulti(t *testing.T) {
```
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 // Copyright (c) The Thanos Authors.
 // Licensed under the Apache License 2.0.
 
-package storecache
+package model
 
 import (
 	"github.com/alecthomas/units"
```
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 // Copyright (c) The Thanos Authors.
 // Licensed under the Apache License 2.0.
 
-package storecache
+package model
 
 import (
 	"testing"
```

pkg/store/cache/inmemory.go

Lines changed: 3 additions & 2 deletions

```diff
@@ -17,6 +17,7 @@ import (
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/promauto"
 	"github.com/prometheus/prometheus/pkg/labels"
+	"github.com/thanos-io/thanos/pkg/model"
 	"gopkg.in/yaml.v2"
 )
 
@@ -52,9 +53,9 @@ type InMemoryIndexCache struct {
 // InMemoryIndexCacheConfig holds the in-memory index cache config.
 type InMemoryIndexCacheConfig struct {
 	// MaxSize represents overall maximum number of bytes cache can contain.
-	MaxSize Bytes `yaml:"max_size"`
+	MaxSize model.Bytes `yaml:"max_size"`
 	// MaxItemSize represents maximum size of single item.
-	MaxItemSize Bytes `yaml:"max_item_size"`
+	MaxItemSize model.Bytes `yaml:"max_item_size"`
 }
 
 // parseInMemoryIndexCacheConfig unmarshals a buffer into a InMemoryIndexCacheConfig with default values.
```
