feat(sdk-core): optimize bulk accept share with adaptive batching #6707


Closed · wants to merge 1 commit

Conversation

aarvay
Contributor

@aarvay aarvay commented Aug 14, 2025

Summary

This PR optimizes the bulk accept share request implementation by replacing the inefficient payload size calculation approach with adaptive batching that automatically discovers the optimal batch size per coin type.

Context

The original PR (#6688) addressed the issue CSI-889 where users were getting 413 "Request Entity Too Large" errors when bulk accepting many wallet shares (78 in the reported case). However, the implementation had several inefficiencies:

  • Built batches one item at a time with repeated payload size calculations
  • Reduced the max payload size globally after failures, making all future batches unnecessarily small
  • Didn't account for coin-specific key sizes (ETH keys are much larger than BTC keys)
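To make the first inefficiency concrete, here is a hedged sketch (hypothetical names, not the SDK's actual code) of the one-item-at-a-time approach: every append re-serializes the whole growing batch, so total serialization work grows quadratically with batch size.

```typescript
// Illustrative sketch of the inefficiency described above; `payloadLimit`
// and the return shape are assumptions, not the SDK's real API.
function buildBatchesNaively<T>(
  items: T[],
  payloadLimit: number
): { batches: T[][]; bytesSerialized: number } {
  const batches: T[][] = [];
  let current: T[] = [];
  let bytesSerialized = 0;
  for (const item of items) {
    current.push(item);
    // Re-serializes everything accumulated so far, on every single append.
    const json = JSON.stringify(current);
    bytesSerialized += json.length;
    if (json.length > payloadLimit && current.length > 1) {
      current.pop(); // item didn't fit: close the batch and start a new one
      batches.push(current);
      current = [item];
    }
  }
  if (current.length > 0) {
    batches.push(current);
  }
  return { batches, bytesSerialized };
}
```

For n items that all fit in one batch, this serializes an ever-growing array n times and touches O(n²) bytes in total, which the optimized approach avoids entirely.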

Solution

This optimization implements an intelligent adaptive batching algorithm:

  1. Optimistic First Attempt: Tries to send all shares at once (most requests will succeed)
  2. Coin-Type Grouping: On 413 error, groups shares by coin type for efficient batching
  3. Binary Search: Uses binary search to find the optimal batch size for each coin type
  4. Memory: Remembers and reuses the optimal batch size for remaining items of the same coin
  5. No Magic Numbers: No hardcoded batch sizes - automatically adapts to actual server limits
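The steps above can be sketched as follows. This is a hedged illustration, not the SDK's implementation: `sendBatch` stands in for the real accept-share API call, a 413-style error is simulated when a batch's total key size exceeds the server's payload limit, and all names (`Share`, `PayloadTooLargeError`, `bulkAccept`) are assumptions.

```typescript
type Share = { coin: string; size: number };

class PayloadTooLargeError extends Error {
  status = 413;
}

// Mock server: rejects any batch whose total size exceeds maxPayload.
function makeSendBatch(maxPayload: number) {
  const sent: Share[][] = [];
  const sendBatch = (batch: Share[]): void => {
    const total = batch.reduce((sum, s) => sum + s.size, 0);
    if (total > maxPayload) throw new PayloadTooLargeError();
    sent.push(batch);
  };
  return { sendBatch, sent };
}

function bulkAccept(shares: Share[], sendBatch: (b: Share[]) => void): void {
  // 1. Optimistic first attempt: most requests fit in a single call.
  try {
    sendBatch(shares);
    return;
  } catch (e) {
    if ((e as { status?: number }).status !== 413) throw e;
  }

  // 2. Group shares by coin type, since key sizes differ per coin.
  const groups = new Map<string, Share[]>();
  for (const s of shares) {
    const g = groups.get(s.coin);
    if (g) g.push(s);
    else groups.set(s.coin, [s]);
  }

  for (const group of Array.from(groups.values())) {
    let optimal: number | null = null; // 4. remembered per-coin batch size
    let i = 0;
    while (i < group.length) {
      // 3. Binary-search-style halving toward a size the server accepts.
      let size = Math.min(optimal ?? group.length, group.length - i);
      for (;;) {
        try {
          sendBatch(group.slice(i, i + size));
          i += size;
          optimal = size; // reuse this size for the rest of the coin's shares
          break;
        } catch (e) {
          if ((e as { status?: number }).status !== 413 || size === 1) throw e;
          size = Math.max(1, Math.floor(size / 2));
        }
      }
    }
  }
}
```

With a payload limit of 10 units, three "large-key" shares of size 6 end up being sent one at a time, while eight "small-key" shares of size 1 go out in a single batch, matching the mixed-coin behavior described above.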

Key Benefits

  • Efficient for mixed coins: ETH shares (large keys) can be sent 1-2 at a time while BTC shares (small keys) can be sent 20+ at a time
  • Minimal API calls: Binary search quickly finds optimal batch sizes
  • No unnecessary calculations: Removes all payload size calculations
  • Self-adjusting: Automatically adapts to server limits without configuration

Testing

The existing tests should continue to pass, since the external behavior is unchanged; only the internal batching strategy is more efficient.

Check warning (Code scanning / CodeQL): Useless assignment to local variable

The value assigned to lastFailedSize here is unused.

    // If optimal size fails, reset and try binary search again
    if ((error as { status?: number }).status === 413) {
      optimalBatchSize = null;
      lastFailedSize = batch.length;

Copilot Autofix


To fix the problem, simply remove the assignment lastFailedSize = batch.length; on line 764, as it is never used and does not affect the control flow or logic of the function. No other changes are needed, and no imports or definitions are required. The fix is limited to the catch block inside the outer loop in the bulkAcceptShareRequestWithRetry method in modules/sdk-core/src/bitgo/wallet/wallets.ts.

Suggested changeset 1
modules/sdk-core/src/bitgo/wallet/wallets.ts

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/modules/sdk-core/src/bitgo/wallet/wallets.ts b/modules/sdk-core/src/bitgo/wallet/wallets.ts
--- a/modules/sdk-core/src/bitgo/wallet/wallets.ts
+++ b/modules/sdk-core/src/bitgo/wallet/wallets.ts
@@ -761,7 +761,6 @@
             // If optimal size fails, reset and try binary search again
             if ((error as { status?: number }).status === 413) {
               optimalBatchSize = null;
-              lastFailedSize = batch.length;
               lastSuccessSize = 0;
             } else {
               throw error;
EOF
@aarvay aarvay closed this Aug 14, 2025