Conversation
@ktock ktock commented Mar 29, 2021

We currently use bytes.Buffers in many places around the cache, but some of these uses can be eliminated. This commit reduces the use of buffers around the cache, which in turn reduces the memory usage of containerd-stargz-grpc.

We currently use a buffer during cache.Add(), but we can eliminate it by giving the cache client direct access to the *os.File backing each cache file.

// Cache the passed data to disk.
b2 := dc.bufPool.Get().(*bytes.Buffer)
b2.Reset()
b2.Write(p)

This commit modifies the interface to provide the writer of that file directly to the client when adding contents to the cache.

type BlobCache interface {
	Add(key string, opts ...Option) (Writer, error)

// omit...
}
type Writer interface {
	io.WriteCloser
	Commit() error
	Abort() error
}

Another buffer we can eliminate is the one in blob.ReadAt(). We use it for trimming the data fetched from the registry: the downloaded data is first buffered, then the unnecessary range is trimmed off.

// Use temporary buffer for aligning this chunk
bf := b.resolver.bufPool.Get().(*bytes.Buffer)
putBufs = append(putBufs, bf)
bf.Reset()
bf.Grow(int(chunk.size()))
allData[chunk] = bf

Instead, we can remove this buffer by simply discarding the data outside the necessary range as it streams in.

Both of these buffers sit on the code path of prefetch and background fetch, where the entire layer blob is cached and later loaded from the cache. The fixes above therefore reduce memory usage, especially during rpull.

Memory usage during ctr-remote i rpull ghcr.io/stargz-containers/python:3.7-esgz

[Graph: memory usage comparison between master and this PR]

@ktock ktock marked this pull request as ready for review March 29, 2021 13:27
@AkihiroSuda AkihiroSuda merged commit 63627bd into containerd:master Mar 29, 2021
@ktock ktock mentioned this pull request May 12, 2021
@ktock ktock deleted the cachebuf branch September 3, 2021 09:56