This repository was archived by the owner on Apr 22, 2023. It is now read-only.

tls: dynamic record size to optimize latency & throughput #6889

Closed

Description

@igrigorik

Reducing the size of TLS records can yield significant latency wins on the client: faster time to first paint, faster time to first frame for video, etc. The issue is that by default TLS packs up to 16KB of data into each record, which effectively guarantees that when the CWND is low (e.g. on a new TCP connection) the record will span multiple RTTs. As a result, the time to first (application-consumable) byte is pushed out by an extra RTT.
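As a rough back-of-the-envelope illustration (the segment sizes here are assumptions for the example, not from this issue): with an initial congestion window of 10 segments (RFC 6928) and a ~1460-byte MSS, the first flight carries roughly 10 × 1460 B ≈ 14.6KB. A full 16KB record does not fit, so its final segments arrive an RTT later, and the client cannot decrypt any of the record until the whole thing is in.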

Node uses the default 16KB limit, which forces an extra RTT on new connections (wpt):

[screenshot: WebPageTest waterfall showing the extra RTT]

That said, exposing a config flag to set a smaller, static record size is also suboptimal, as it introduces an inherent tradeoff between latency and throughput: smaller records are good for latency, but hurt server throughput by adding byte and CPU overhead. It would be great if we could implement a smarter strategy in Node. Some background on how Google servers handle this:

  • new connections default to a small record size
    • each record fits into a TCP packet
    • packets are flushed at record boundaries
  • the server tracks the number of bytes written since the last reset and
    the timestamp of the last write
    • if bytes written > {configurable byte threshold}, boost the record
      size to 16KB
    • if the time since the last write exceeds {configurable time
      threshold}, reset the sent byte count

In other words, start with a small record size to optimize delivery of
small/interactive objects (the bulk of HTTP traffic). Then, if a large file
is being transferred, bump the record size to 16KB and keep using it until
the connection goes idle; when communication resumes, start with a small
record size again and repeat. Overall, this optimizes delivery of small
files where incremental delivery is the priority, as well as large
downloads where overall throughput is the priority.

Both the byte and time thresholds are exposed as configurable flags; the
current defaults in GFE are 1MB and 1s.
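For illustration, here is a minimal userland sketch of that heuristic, assuming the tlsSocket.setMaxSendFragment(size) API available in current Node releases. The wrapper function, the 1400-byte small-record size, and the monkey-patched write() are assumptions for the sketch, not part of this proposal; the thresholds mirror the GFE defaults quoted above:

    const SMALL_RECORD = 1400;            // aims to fit one TCP packet (assumed value)
    const LARGE_RECORD = 16384;           // TLS maximum record size
    const BYTE_THRESHOLD = 1024 * 1024;   // GFE default quoted above: 1MB
    const IDLE_THRESHOLD = 1000;          // GFE default quoted above: 1s

    function dynamicRecordSizing(tlsSocket) {
      let bytesSinceReset = 0;
      let lastWrite = 0;

      // New connections start with small records.
      tlsSocket.setMaxSendFragment(SMALL_RECORD);

      const write = tlsSocket.write.bind(tlsSocket);
      tlsSocket.write = (chunk, ...args) => {
        const now = Date.now();
        // Connection was idle longer than the time threshold: reset.
        if (lastWrite !== 0 && now - lastWrite > IDLE_THRESHOLD) {
          bytesSinceReset = 0;
          tlsSocket.setMaxSendFragment(SMALL_RECORD);
        }
        lastWrite = now;
        bytesSinceReset += Buffer.byteLength(chunk);
        // Bulk transfer in progress: boost to full-size records.
        if (bytesSinceReset > BYTE_THRESHOLD) {
          tlsSocket.setMaxSendFragment(LARGE_RECORD);
        }
        return write(chunk, ...args);
      };
    }

In practice this logic would belong below the JS layer (in the TLS wrap), so record boundaries line up with actual socket flushes, but the bookkeeping is just the two counters described above.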

A dynamic strategy would provide the best out-of-the-box experience and work well regardless of the mix and type of traffic being served (interactive, bulk, etc.).

/cc @indutny
