Performance issue when sending large responses for concurrent requests #37680

Open
@xue-cai

Description

  • Version: 12.20.1
  • Platform: Linux xxx 5.4.0-1038-aws #40~18.04.1-Ubuntu SMP Sat Feb 6 01:56:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux, or Darwin xxx 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.161/RELEASE_X86_64 x86_64
  • Subsystem:

What steps will reproduce the bug?

const express = require("express");

function makeBigObject(leaves, depth) {
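  // Recursively builds a nested object with `leaves` keys per level at the
  // given depth; with (2000, 2) its JSON serialization is roughly 55MB.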
  if (depth === 0) {
    return "howdy";
  } else {
    const ret = {};
    for (let i = 0; i < leaves; ++i) {
      ret[i] = makeBigObject(leaves, depth - 1);
    }
    return ret;
  }
}

function getTimeMs() {
  return Date.now() - firstRequestStartTime;
}

const app = express();

const bigObject = makeBigObject(2000, 2);
let requestCount = 0;
let firstRequestStartTime;

async function requestHandler({ requestIndex, req, res }) {
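  // Serializes the shared bigObject, sends it as the response, and logs how
  // long it takes until the 'finish' event (i.e. until the response is flushed).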
  if (requestIndex === 1) {
    firstRequestStartTime = Date.now();
  }

  console.log(
    `[${getTimeMs()}] Serializing response for request ${requestIndex}...`
  );
  const serializedBigObject = JSON.stringify(bigObject);

  const flushStartTimeMs = Date.now();
  res.on("finish", () => {
    const flushDurationMs = Date.now() - flushStartTimeMs;
    console.log(
      `[${getTimeMs()}] -- Took ${flushDurationMs}ms to flush response for request ${requestIndex} --`
    );
  });

  console.log(
    `[${getTimeMs()}] Sending ${Math.round(
      serializedBigObject.length / 1024 / 1024
    )}MB response for request ${requestIndex}...`
  );
  res.send(serializedBigObject);
  // res.send("ok");

  console.log(`[${getTimeMs()}] - Handler done for request ${requestIndex} -`);
}

app.get("/", async (req, res) => {
  const requestIndex = ++requestCount;
  requestHandler({ requestIndex, req, res });
});

app.listen("/tmp/sock", () =>
  console.log(`Example app listening on Unix domain socket /tmp/sock!`)
);

How often does it reproduce? Is there a required condition?

Send multiple concurrent requests to the app. For example,

(for x in $(seq 5); do time curl --unix-socket /tmp/sock http://localhost/ > /dev/null & done)

What is the expected behavior?

I expect responses to be flushed fairly quickly, and that earlier requests don't "wait" for later requests so that all responses end up flushing together at the end.

[0] Processing request 1...
[23] Serializing response for request 1...
[390] Sending 55MB response for request 1...
[536] - Handler done for request 1 -
[537] Processing request 2...
[547] -- Took 158ms to flush response for request 1 --
[558] Serializing response for request 2...
[925] Sending 55MB response for request 2...
[1065] - Handler done for request 2 -
[1065] Processing request 3...
[1077] -- Took 152ms to flush response for request 2 --
[1086] Serializing response for request 3...
[1444] Sending 55MB response for request 3...
[1568] - Handler done for request 3 -
[1568] Processing request 4...
[1579] -- Took 135ms to flush response for request 3 --
[1589] Serializing response for request 4...
[1948] Sending 55MB response for request 4...
[2075] - Handler done for request 4 -
[2075] Processing request 5...
[2087] -- Took 139ms to flush response for request 4 --
[2096] Serializing response for request 5...
[2455] Sending 55MB response for request 5...
[2581] - Handler done for request 5 -
[2591] -- Took 136ms to flush response for request 5 --

What do you see instead?

Earlier requests are blocked and make no progress until the later requests have finished processing/serializing; all of the flushes then complete together at the end. The more concurrent requests there are, the longer the delay and the worse the problem becomes. (A chunked-write workaround is sketched after the log below.)

[0] Serializing response for request 1...
[334] Sending 55MB response for request 1...
[491] - Handler done for request 1 -
[492] Serializing response for request 2...
[820] Sending 55MB response for request 2...
[957] - Handler done for request 2 -
[958] Serializing response for request 3...
[1292] Sending 55MB response for request 3...
[1430] - Handler done for request 3 -
[1432] Serializing response for request 4...
[1763] Sending 55MB response for request 4...
[1902] - Handler done for request 4 -
[1910] Serializing response for request 5...
[2229] Sending 55MB response for request 5...
[2371] - Handler done for request 5 -
[2536] -- Took 773ms to flush response for request 4 --
[2538] -- Took 309ms to flush response for request 5 --
[2539] -- Took 1719ms to flush response for request 2 --
[2540] -- Took 2206ms to flush response for request 1 --
[2540] -- Took 1248ms to flush response for request 3 --
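
For what it's worth, a possible workaround sketch (not verified here, and not part of the original report) is to avoid handing the entire 55MB string to res.send() in one call, and instead write it in smaller chunks while yielding to the event loop between writes and honoring backpressure via the 'drain' event, so that other pending responses get a chance to flush. The sendInChunks helper name and the 1MB chunk size below are illustrative assumptions only.

// Hypothetical helper: stream a pre-serialized body in chunks instead of one
// res.send() call, yielding between writes so other sockets can make progress.
function sendInChunks(res, body, chunkSize = 1024 * 1024) {
  let offset = 0;

  function writeNext() {
    if (offset >= body.length) {
      res.end();
      return;
    }
    const chunk = body.slice(offset, offset + chunkSize);
    offset += chunkSize;
    if (res.write(chunk)) {
      // The socket buffer accepted the chunk; yield before writing the next one.
      setImmediate(writeNext);
    } else {
      // Backpressure: wait until the socket has drained before continuing.
      res.once("drain", writeNext);
    }
  }

  res.setHeader("Content-Type", "application/json");
  writeNext();
}

// Usage in the handler above (instead of res.send(serializedBigObject)):
// sendInChunks(res, serializedBigObject);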

Additional information

If res.send(serializedBigObject) is replaced with res.send("ok"), the expected behavior is restored, so the response size matters. (The "Sending 55MB response" lines below still report the size of the serialized object, since the log prints serializedBigObject.length even though only "ok" is sent.)

[0] Serializing response for request 1...
[341] Sending 55MB response for request 1...
[345] - Handler done for request 1 -
[346] -- Took 5ms to flush response for request 1 --
[348] Serializing response for request 2...
[660] Sending 55MB response for request 2...
[661] - Handler done for request 2 -
[661] -- Took 1ms to flush response for request 2 --
[661] Serializing response for request 3...
[976] Sending 55MB response for request 3...
[977] - Handler done for request 3 -
[977] -- Took 1ms to flush response for request 3 --
[977] Serializing response for request 4...
[1296] Sending 55MB response for request 4...
[1297] - Handler done for request 4 -
[1297] -- Took 1ms to flush response for request 4 --
[1297] Serializing response for request 5...
[1633] Sending 55MB response for request 5...
[1633] - Handler done for request 5 -
[1634] -- Took 1ms to flush response for request 5 --
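
Since res.send() ultimately writes through Node's HTTP response stream, it may be worth checking whether the same delayed flushing reproduces without Express. Below is a minimal sketch of such a check using only the core http module; the payload construction, socket path, and log format are assumptions for illustration and were not run as part of this report.

// Hypothetical core-http reproduction: serve a ~55MB body per request and log
// how long each response takes to flush, without Express in the picture.
const http = require("http");

const bigBody = "x".repeat(55 * 1024 * 1024);
let requestCount = 0;

const server = http.createServer((req, res) => {
  const requestIndex = ++requestCount;
  const start = Date.now();
  res.on("finish", () => {
    console.log(`Request ${requestIndex}: flushed after ${Date.now() - start}ms`);
  });
  res.setHeader("Content-Type", "text/plain");
  res.end(bigBody);
});

// Same Unix-socket setup as the Express reproduction, on a different path.
server.listen("/tmp/sock-core", () =>
  console.log("Core http server listening on /tmp/sock-core")
);

The same curl loop from above, pointed at /tmp/sock-core, would allow comparing the flush timings with and without Express.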

Metadata

Labels: question (Issues that look for answers.)
