Ignore RateLimitedOutputStreamSuite for now. #54

Closed

Conversation

@rxin (Contributor) commented on Mar 2, 2014

This test has been flaky. We can re-enable it after @tdas has a chance to look at it.
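For reference, ignoring a flaky suite in ScalaTest usually just means switching `test(...)` to `ignore(...)`. A minimal sketch of that pattern (illustrative only, not necessarily the exact diff in this PR; the test name is an assumption):

```scala
import org.scalatest.funsuite.AnyFunSuite

class RateLimitedOutputStreamSuite extends AnyFunSuite {
  // "ignore" keeps the test compiled but skips it at run time,
  // so it can be re-enabled later by switching back to "test".
  ignore("write") {
    // timing-sensitive assertions that have been flaky would go here
  }
}
```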

@AmplabJenkins: Merged build triggered.

@AmplabJenkins: Merged build started.

@AmplabJenkins: Merged build finished.

@AmplabJenkins: All automated tests passed. Build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12953/

@pwendell (Contributor) commented on Mar 2, 2014

Sounds good, merged.

asfgit closed this in 353ac6b on Mar 2, 2014
jhartlaub referenced this pull request in jhartlaub/spark May 27, 2014
Remove unnecessary mutable imports

It appears that the imports aren't necessary here.

(cherry picked from commit dca8009)
Signed-off-by: Reynold Xin <[email protected]>
rxin deleted the ratelimit branch on August 13, 2014, 08:01
JasonMWhite pushed a commit to JasonMWhite/spark that referenced this pull request Dec 2, 2015
tnachen pushed a commit to tnachen/spark that referenced this pull request Jan 27, 2017
lins05 pushed a commit to lins05/spark that referenced this pull request Apr 23, 2017
erikerlandson pushed a commit to erikerlandson/spark that referenced this pull request Jul 28, 2017
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
Add enable service and plugin for LBaaS v2
XinDongSh pushed a commit to XinDongSh/spark that referenced this pull request Jan 12, 2021
asfgit pushed a commit that referenced this pull request Nov 30, 2022
…the lock is unlocked gracefully

### What changes were proposed in this pull request?
`BlockManager#removeBlockInternal` should ensure the lock is unlocked gracefully.
`removeBlockInternal` now calls `removeBlock` in a `finally` block, so the lock is released even when block cleanup fails.
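A minimal, self-contained sketch of this unlock-in-`finally` pattern (generic Scala with a plain `ReentrantLock`, not Spark's actual `BlockManager` code):

```scala
import java.util.concurrent.locks.ReentrantLock

// Illustration of the fix described above: whatever happens while cleaning
// up a block, the lock must be released, otherwise a later attempt to lock
// the same block (e.g. from removeBroadcast) waits forever.
object LockReleaseSketch {
  private val lock = new ReentrantLock()

  def removeBlock(cleanupOnDisk: () => Unit): Unit = {
    lock.lock()
    try {
      cleanupOnDisk() // may throw, e.g. an IOException from a failing disk
    } finally {
      lock.unlock()   // released even when cleanup throws
    }
  }

  def main(args: Array[String]): Unit = {
    try removeBlock(() => throw new java.io.IOException("disk issue"))
    catch { case _: java.io.IOException => () }
    // The lock is free again, so the second call does not deadlock.
    removeBlock(() => println("second removal proceeds"))
  }
}
```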

### Why are the changes needed?
When the driver submits a job, `DAGScheduler` calls `sc.broadcast(taskBinaryBytes)`.
`TorrentBroadcast#writeBlocks` may fail due to disk problems during `blockManager#putBytes`.
`BlockManager#doPut` calls `BlockManager#removeBlockInternal` to clean up the block.
`BlockManager#removeBlockInternal` calls `DiskStore#remove` to clean up blocks on disk.
`DiskStore#remove` tries to create the directory because it does not exist, and an exception is thrown at that point.
The block info and its lock in `BlockInfoManager#blockInfoWrappers` are therefore never removed.
The catch block in `TorrentBroadcast#writeBlocks` then calls `blockManager.removeBroadcast` to clean up the broadcast.
Because the block lock in `BlockInfoManager#blockInfoWrappers` was never released, `DAGScheduler`'s `dag-scheduler-event-loop` thread waits forever.

```
22/11/01 18:27:48 WARN BlockManager: Putting block broadcast_0_piece0 failed due to exception java.io.IOException: XXXXX.
22/11/01 18:27:48 ERROR TorrentBroadcast: Store broadcast broadcast_0 fail, remove all pieces of the broadcast
```

```
"dag-scheduler-event-loop" #54 daemon prio=5 os_prio=31 tid=0x00007fc98e3fa800 nid=0x7203 waiting on condition [0x0000700008c1e000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00000007add3d8c8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at org.apache.spark.storage.BlockInfoManager.$anonfun$acquireLock$1(BlockInfoManager.scala:221)
    at org.apache.spark.storage.BlockInfoManager.$anonfun$acquireLock$1$adapted(BlockInfoManager.scala:214)
    at org.apache.spark.storage.BlockInfoManager$$Lambda$3038/1307533457.apply(Unknown Source)
    at org.apache.spark.storage.BlockInfoWrapper.withLock(BlockInfoManager.scala:105)
    at org.apache.spark.storage.BlockInfoManager.acquireLock(BlockInfoManager.scala:214)
    at org.apache.spark.storage.BlockInfoManager.lockForWriting(BlockInfoManager.scala:293)
    at org.apache.spark.storage.BlockManager.removeBlock(BlockManager.scala:1979)
    at org.apache.spark.storage.BlockManager.$anonfun$removeBroadcast$3(BlockManager.scala:1970)
    at org.apache.spark.storage.BlockManager.$anonfun$removeBroadcast$3$adapted(BlockManager.scala:1970)
    at org.apache.spark.storage.BlockManager$$Lambda$3092/1241801156.apply(Unknown Source)
    at scala.collection.Iterator.foreach(Iterator.scala:943)
    at scala.collection.Iterator.foreach$(Iterator.scala:943)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
    at org.apache.spark.storage.BlockManager.removeBroadcast(BlockManager.scala:1970)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:179)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:99)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:38)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:78)
    at org.apache.spark.SparkContext.broadcastInternal(SparkContext.scala:1538)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1520)
    at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1539)
    at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1355)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:1297)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2929)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2921)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2910)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Throw an exception before `Files.createDirectory` to simulate disk problems.

In `DiskBlockManager#getFile`:
```scala
if (filename.contains("piece")) {
  throw new java.io.IOException("disk issue")
}
Files.createDirectory(path)
```

```
./bin/spark-shell
```
```scala
spark.sql("select 1").collect()
```

```
22/11/24 19:29:58 WARN BlockManager: Putting block broadcast_0_piece0 failed due to exception java.io.IOException: disk issue.
22/11/24 19:29:58 ERROR TorrentBroadcast: Store broadcast broadcast_0 fail, remove all pieces of the broadcast
org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.io.IOException: disk issue
java.io.IOException: disk issue
	at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:109)
	at org.apache.spark.storage.DiskBlockManager.containsBlock(DiskBlockManager.scala:160)
	at org.apache.spark.storage.DiskStore.contains(DiskStore.scala:153)
	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$getCurrentBlockStatus(BlockManager.scala:879)
	at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1998)
	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1484)
	at org.apache.spark.storage.BlockManager$BlockStoreUpdater.save(BlockManager.scala:378)
	at org.apache.spark.storage.BlockManager.putBytes(BlockManager.scala:1419)
	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$writeBlocks$1(TorrentBroadcast.scala:170)
	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$writeBlocks$1$adapted(TorrentBroadcast.scala:164)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:164)
	at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:99)
	at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:38)
	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:78)
	at org.apache.spark.SparkContext.broadcastInternal(SparkContext.scala:1538)
	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1520)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1539)
	at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1355)
	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:1297)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2929)
```

Closes #38467 from cxzl25/SPARK-40987.

Authored-by: sychen <[email protected]>
Signed-off-by: Mridul <mridul<at>gmail.com>
asfgit pushed a commit that referenced this pull request Nov 30, 2022
…the lock is unlocked gracefully
(cherry picked from commit bbab0af)
asfgit pushed a commit that referenced this pull request Nov 30, 2022
…the lock is unlocked gracefully
(cherry picked from commit bbab0af)
beliefer pushed a commit to beliefer/spark that referenced this pull request Dec 18, 2022
…the lock is unlocked gracefully
sunchao pushed a commit to sunchao/spark that referenced this pull request Jun 2, 2023
…the lock is unlocked gracefully
(cherry picked from commit bbab0af)
panbingkun pushed a commit that referenced this pull request Nov 22, 2024
…ead pool

### What changes were proposed in this pull request?

This PR aims to use a meaningful class name prefix for the REST Submission API thread pool instead of the Jetty `QueuedThreadPool` default, `"qtp" + super.hashCode()`.

https://github.com/dekellum/jetty/blob/3dc0120d573816de7d6a83e2d6a97035288bdd4a/jetty-util/src/main/java/org/eclipse/jetty/util/thread/QueuedThreadPool.java#L64

### Why are the changes needed?

This is helpful during JVM investigation.
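A minimal sketch of how a Jetty `QueuedThreadPool` can be given a readable name prefix (illustrative only; the pool size and the `"StandaloneRestServer"` name here are assumptions based on the jstack output below, not Spark's actual wiring):

```scala
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.QueuedThreadPool

object NamedThreadPoolSketch {
  def main(args: Array[String]): Unit = {
    val pool = new QueuedThreadPool(200)
    // Threads then show up in jstack as "StandaloneRestServer-<n>"
    // instead of the default "qtp<hashCode>-<n>".
    pool.setName("StandaloneRestServer")

    val server = new Server(pool)
    // connectors/handlers omitted; only the pool naming matters here
    server.start()
    server.join()
  }
}
```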

**BEFORE (4.0.0-preview2)**

```
$ SPARK_MASTER_OPTS='-Dspark.master.rest.enabled=true' sbin/start-master.sh
$ jstack 28217 | grep qtp
"qtp1925630411-52" #52 daemon prio=5 os_prio=31 cpu=0.07ms elapsed=19.06s tid=0x0000000134906c10 nid=0xde03 runnable  [0x0000000314592000]
"qtp1925630411-53" #53 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=19.06s tid=0x0000000134ac6810 nid=0xc603 runnable  [0x000000031479e000]
"qtp1925630411-54" #54 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=19.06s tid=0x000000013491ae10 nid=0xdc03 runnable  [0x00000003149aa000]
"qtp1925630411-55" #55 daemon prio=5 os_prio=31 cpu=0.08ms elapsed=19.06s tid=0x0000000134ac9810 nid=0xc803 runnable  [0x0000000314bb6000]
"qtp1925630411-56" #56 daemon prio=5 os_prio=31 cpu=0.04ms elapsed=19.06s tid=0x0000000134ac9e10 nid=0xda03 runnable  [0x0000000314dc2000]
"qtp1925630411-57" #57 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=19.06s tid=0x0000000134aca410 nid=0xca03 runnable  [0x0000000314fce000]
"qtp1925630411-58" #58 daemon prio=5 os_prio=31 cpu=0.04ms elapsed=19.06s tid=0x0000000134acaa10 nid=0xcb03 runnable  [0x00000003151da000]
"qtp1925630411-59" #59 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=19.06s tid=0x0000000134acb010 nid=0xcc03 runnable  [0x00000003153e6000]
"qtp1925630411-60-acceptor-0108e9815-ServerConnector1e497474{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #60 daemon prio=3 os_prio=31 cpu=0.11ms elapsed=19.06s tid=0x00000001317ffa10 nid=0xcd03 runnable  [0x00000003155f2000]
"qtp1925630411-61-acceptor-11d90f2aa-ServerConnector1e497474{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #61 daemon prio=3 os_prio=31 cpu=0.10ms elapsed=19.06s tid=0x00000001314ed610 nid=0xcf03 waiting on condition  [0x00000003157fe000]
```

**AFTER**
```
$ SPARK_MASTER_OPTS='-Dspark.master.rest.enabled=true' sbin/start-master.sh
$ jstack 28317 | grep StandaloneRestServer
"StandaloneRestServer-52" #52 daemon prio=5 os_prio=31 cpu=0.09ms elapsed=60.06s tid=0x00000001284a8e10 nid=0xdb03 runnable  [0x000000032cfce000]
"StandaloneRestServer-53" #53 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284acc10 nid=0xda03 runnable  [0x000000032d1da000]
"StandaloneRestServer-54" #54 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284ae610 nid=0xd803 runnable  [0x000000032d3e6000]
"StandaloneRestServer-55" #55 daemon prio=5 os_prio=31 cpu=0.09ms elapsed=60.06s tid=0x00000001284aec10 nid=0xd703 runnable  [0x000000032d5f2000]
"StandaloneRestServer-56" #56 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284af210 nid=0xc803 runnable  [0x000000032d7fe000]
"StandaloneRestServer-57" #57 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284af810 nid=0xc903 runnable  [0x000000032da0a000]
"StandaloneRestServer-58" #58 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284afe10 nid=0xcb03 runnable  [0x000000032dc16000]
"StandaloneRestServer-59" #59 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284b0410 nid=0xcc03 runnable  [0x000000032de22000]
"StandaloneRestServer-60-acceptor-04aefbaa8-ServerConnector44284d85{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #60 daemon prio=3 os_prio=31 cpu=0.13ms elapsed=60.05s tid=0x000000015cda1a10 nid=0xcd03 runnable  [0x000000032e02e000]
"StandaloneRestServer-61-acceptor-148976251-ServerConnector44284d85{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #61 daemon prio=3 os_prio=31 cpu=0.12ms elapsed=60.05s tid=0x000000015cd1c810 nid=0xce03 waiting on condition  [0x000000032e23a000]
```

### Does this PR introduce _any_ user-facing change?

No. The thread names are only seen during debugging.

### How was this patch tested?

Manual review.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #48924 from dongjoon-hyun/SPARK-50385.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: panbingkun <[email protected]>