Description
Hi,
I am using "@smithy/node-http-handler" version 4.0.2.
When sending a message to Amazon SQS, I encountered the following warning:
WARN @smithy/node-http-handler:WARN - socket usage at capacity=50 and 788 additional requests are enqueued. See https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler config.
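The message points at two mitigations: raising maxSockets (covered by the linked guide) or raising socketAcquisitionWarningTimeout. For context, here is a minimal sketch of the first option; the limit of 200 is an arbitrary illustrative value, and the httpsAgent options are merged into the handler's default https agent (as the resolveDefaultConfig code quoted later in this report shows). The rest of this report is about the second option.

```ts
import { SQSClient } from '@aws-sdk/client-sqs';
import { NodeHttpHandler } from '@smithy/node-http-handler';

// Raise the socket ceiling instead of the warning delay.
// 200 is arbitrary; the default shown in the warning is 50.
const client = new SQSClient({
  region: 'ap-northeast-1',
  requestHandler: new NodeHttpHandler({
    httpsAgent: { keepAlive: true, maxSockets: 200 },
  }),
});
```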
This warning is logged at the following location:
smithy-typescript/packages/node-http-handler/src/node-http-handler.ts, lines 97–101 (commit 9b26765)
I tried setting the socketAcquisitionWarningTimeout option in my client configuration, but it did not seem to have any effect:
const client = new SQSClient({
  region: 'ap-northeast-1',
  requestHandler: new NodeHttpHandler({
    socketAcquisitionWarningTimeout: 60_000, // Added
  }),
});
It appears that this issue is caused by socketAcquisitionWarningTimeout not being handled in the configuration setup at the following location:
smithy-typescript/packages/node-http-handler/src/node-http-handler.ts, lines 125–146 (commit 9b26765)
I have confirmed that the issue can be resolved by making the following modification:
@@ -122,7 +122,7 @@ or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler conf
   }
 
   private resolveDefaultConfig(options?: NodeHttpHandlerOptions | void): ResolvedNodeHttpHandlerConfig {
-    const { requestTimeout, connectionTimeout, socketTimeout, httpAgent, httpsAgent } = options || {};
+    const { requestTimeout, connectionTimeout, socketTimeout, httpAgent, httpsAgent, socketAcquisitionWarningTimeout } = options || {};
     const keepAlive = true;
     const maxSockets = 50;
@@ -142,6 +142,7 @@ or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler conf
         return new hsAgent({ keepAlive, maxSockets, ...httpsAgent });
       })(),
       logger: console,
+      socketAcquisitionWarningTimeout,
     };
   }
With this fix, the socketAcquisitionWarningTimeout option works correctly in my environment.
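For anyone who wants to reproduce the behaviour, here is a rough sketch of one way to trigger the capacity warning and observe the option (the local HTTP server, port 8080, response delay, and request count are arbitrary illustrative choices, not my exact setup):

```ts
import { createServer } from 'http';
import { NodeHttpHandler } from '@smithy/node-http-handler';
import { HttpRequest } from '@smithy/protocol-http';

async function main() {
  // Local server that holds each response open for a while, so the default
  // 50 keep-alive sockets stay busy and further requests queue on the agent.
  const server = createServer((req, res) => {
    setTimeout(() => res.end('ok'), 2000);
  });
  await new Promise<void>((resolve) => server.listen(8080, resolve));

  const handler = new NodeHttpHandler({
    // Without the fix this value is dropped and the capacity warning can still
    // appear after the handler's built-in delay; with the fix it should stay
    // quiet for the full 60 seconds.
    socketAcquisitionWarningTimeout: 60_000,
  });

  const request = new HttpRequest({
    protocol: 'http:',
    hostname: 'localhost',
    port: 8080,
    method: 'GET',
    path: '/',
    headers: {},
  });

  // Fire far more requests than the default maxSockets=50 can serve at once.
  await Promise.all(
    Array.from({ length: 200 }, async () => {
      const { response } = await handler.handle(request);
      // Drain the body so the keep-alive socket is released promptly.
      response.body?.resume?.();
    })
  );

  handler.destroy();
  server.close();
}

main().catch(console.error);
```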
I am currently working on a pull request to address this issue.