
Commit 27af5ee

drm/i915: Move execlists irq handler to a bottom half
Doing a lot of work in the interrupt handler introduces huge latencies to
the system as a whole.

Most dramatic effect can be seen by running an all engine stress test like
igt/gem_exec_nop/all where, when the kernel config is lean enough, the
whole system can be brought into multi-second periods of complete
non-interactivity. That can look for example like this:

 NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/u8:3:143]
 Modules linked in: [redacted for brevity]
 CPU: 0 PID: 143 Comm: kworker/u8:3 Tainted: G     U  L  4.5.0-160321+ #183
 Hardware name: Intel Corporation Broadwell Client platform/WhiteTip Mountain 1
 Workqueue: i915 gen6_pm_rps_work [i915]
 task: ffff8800aae88000 ti: ffff8800aae90000 task.ti: ffff8800aae90000
 RIP: 0010:[<ffffffff8104a3c2>]  [<ffffffff8104a3c2>] __do_softirq+0x72/0x1d0
 RSP: 0000:ffff88014f403f38  EFLAGS: 00000206
 RAX: ffff8800aae94000 RBX: 0000000000000000 RCX: 00000000000006e0
 RDX: 0000000000000020 RSI: 0000000004208060 RDI: 0000000000215d80
 RBP: ffff88014f403f80 R08: 0000000b1b42c180 R09: 0000000000000022
 R10: 0000000000000004 R11: 00000000ffffffff R12: 000000000000a030
 R13: 0000000000000082 R14: ffff8800aa4d0080 R15: 0000000000000082
 FS:  0000000000000000(0000) GS:ffff88014f400000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00007fa53b90c000 CR3: 0000000001a0a000 CR4: 00000000001406f0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Stack:
  042080601b33869f ffff8800aae94000 00000000fffc2678 ffff88010000000a
  0000000000000000 000000000000a030 0000000000005302 ffff8800aa4d0080
  0000000000000206 ffff88014f403f90 ffffffff8104a716 ffff88014f403fa8
 Call Trace:
  <IRQ>
  [<ffffffff8104a716>] irq_exit+0x86/0x90
  [<ffffffff81031e7d>] smp_apic_timer_interrupt+0x3d/0x50
  [<ffffffff814f3eac>] apic_timer_interrupt+0x7c/0x90
  <EOI>
  [<ffffffffa01c5b40>] ? gen8_write64+0x1a0/0x1a0 [i915]
  [<ffffffff814f2b39>] ? _raw_spin_unlock_irqrestore+0x9/0x20
  [<ffffffffa01c5c44>] gen8_write32+0x104/0x1a0 [i915]
  [<ffffffff8132c6a2>] ? n_tty_receive_buf_common+0x372/0xae0
  [<ffffffffa017cc9e>] gen6_set_rps_thresholds+0x1be/0x330 [i915]
  [<ffffffffa017eaf0>] gen6_set_rps+0x70/0x200 [i915]
  [<ffffffffa0185375>] intel_set_rps+0x25/0x30 [i915]
  [<ffffffffa01768fd>] gen6_pm_rps_work+0x10d/0x2e0 [i915]
  [<ffffffff81063852>] ? finish_task_switch+0x72/0x1c0
  [<ffffffff8105ab29>] process_one_work+0x139/0x350
  [<ffffffff8105b186>] worker_thread+0x126/0x490
  [<ffffffff8105b060>] ? rescuer_thread+0x320/0x320
  [<ffffffff8105fa64>] kthread+0xc4/0xe0
  [<ffffffff8105f9a0>] ? kthread_create_on_node+0x170/0x170
  [<ffffffff814f351f>] ret_from_fork+0x3f/0x70
  [<ffffffff8105f9a0>] ? kthread_create_on_node+0x170/0x170

I could not explain, or find a code path, which would explain a +20 second
lockup, but from some instrumentation it was apparent the interrupts-off
proportion of time was between 10-25% under heavy load, which is quite bad.

When an interrupt "cliff" is reached, which was >~320k irq/s on my machine,
the whole system goes into a terrible state of the above described
multi-second lockups.

By moving the GT interrupt handling to a tasklet in a most simple way, the
problem above disappears completely.
Testing the effect on system-wide latencies using igt/gem_syslatency shows
the following before this patch:

 gem_syslatency: cycles=1532739, latency mean=416531.829us max=2499237us
 gem_syslatency: cycles=1839434, latency mean=1458099.157us max=4998944us
 gem_syslatency: cycles=1432570, latency mean=2688.451us max=1201185us
 gem_syslatency: cycles=1533543, latency mean=416520.499us max=2498886us

This shows that the unrelated process is experiencing huge delays in its
wake-up latency. After the patch the results look like this:

 gem_syslatency: cycles=808907, latency mean=53.133us max=1640us
 gem_syslatency: cycles=862154, latency mean=62.778us max=2117us
 gem_syslatency: cycles=856039, latency mean=58.079us max=2123us
 gem_syslatency: cycles=841683, latency mean=56.914us max=1667us

Showing a huge improvement in the unrelated process wake-up latency. It
also shows an approximate halving in the number of total empty batches
submitted during the test. This may not be worrying since the test puts
the driver under a very unrealistic load with ncpu threads doing empty
batch submission to all GPU engines each.

Another benefit compared to the hard-irq handling is that now work on all
engines can be dispatched in parallel, since we can have up to number of
CPUs active tasklets. (While previously a single hard-irq would serially
dispatch on one engine after another.)

A more interesting scenario with regards to throughput is "gem_latency -n
100", which shows 25% better throughput and CPU usage, and 14% better
dispatch latencies.

I did not find any gains or regressions with Synmark2 or GLbench under
light testing. More benchmarking is certainly required.

v2:
 * execlists_lock should be taken as spin_lock_bh when queuing work from
   userspace now. (Chris Wilson)
 * uncore.lock must be taken with spin_lock_irq when submitting requests
   since that now runs from either softirq or process context.

v3:
 * Expanded commit message with more testing data;
 * converted missed locking sites to _bh;
 * added execlist_lock comment. (Chris Wilson)

v4:
 * Mention dispatch parallelism in commit. (Chris Wilson)
 * Do not hold uncore.lock over MMIO reads since the block is already
   serialised per-engine via the tasklet itself. (Chris Wilson)
 * intel_lrc_irq_handler should be static. (Chris Wilson)
 * Cancel/sync the tasklet on GPU reset. (Chris Wilson)
 * Document and WARN that tasklet cannot be active/pending on engine
   cleanup. (Chris Wilson/Imre Deak)

Signed-off-by: Tvrtko Ursulin <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Imre Deak <[email protected]>
Testcase: igt/gem_exec_nop/all
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94350
Reviewed-by: Chris Wilson <[email protected]>
Link: http://patchwork.freedesktop.org/patch/msgid/[email protected]
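The core of the change is the classic top-half/bottom-half split: the hard
interrupt handler only schedules a tasklet, the heavy context-status
processing and ELSP submission run later in softirq context, and process
context submitters serialise against the bottom half with spin_lock_bh.
Below is a minimal, self-contained sketch of that pattern using the classic
tasklet API this kernel generation provides; the names (my_engine,
my_irq_handler, my_engine_tasklet, and so on) are hypothetical and only
illustrate the approach, the actual i915 code is in the diffs that follow.

/*
 * Sketch of the hard-irq -> tasklet deferral pattern (hypothetical names).
 * The real implementation lives in intel_lrc.c / i915_irq.c below.
 */
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/list.h>

struct my_engine {
	struct tasklet_struct irq_tasklet;	/* bottom half */
	spinlock_t queue_lock;			/* take with spin_lock_bh from process context */
	struct list_head queue;
};

/* Bottom half: runs in softirq context, so a plain spin_lock() suffices here. */
static void my_engine_tasklet(unsigned long data)
{
	struct my_engine *engine = (struct my_engine *)data;

	spin_lock(&engine->queue_lock);
	/* ... drain the queue and submit work to the hardware ... */
	spin_unlock(&engine->queue_lock);
}

/* Top half: do the bare minimum and defer the rest. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	struct my_engine *engine = dev_id;

	tasklet_schedule(&engine->irq_tasklet);
	return IRQ_HANDLED;
}

/* Process context submitters block the bottom half, not hard irqs. */
static void my_engine_queue_work(struct my_engine *engine, struct list_head *item)
{
	spin_lock_bh(&engine->queue_lock);
	list_add_tail(item, &engine->queue);
	spin_unlock_bh(&engine->queue_lock);
}

static void my_engine_init(struct my_engine *engine)
{
	spin_lock_init(&engine->queue_lock);
	INIT_LIST_HEAD(&engine->queue);
	tasklet_init(&engine->irq_tasklet, my_engine_tasklet,
		     (unsigned long)engine);
}

static void my_engine_fini(struct my_engine *engine)
{
	/* Ensure the bottom half has finished before tearing down. */
	tasklet_kill(&engine->irq_tasklet);
}

Because each engine gets its own tasklet, work for different engines can run
on different CPUs concurrently, which is where the dispatch parallelism
mentioned above comes from.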
1 parent 183aec1 commit 27af5ee

6 files changed: 33 additions & 24 deletions

drivers/gpu/drm/i915/i915_debugfs.c

Lines changed: 2 additions & 3 deletions
@@ -2100,7 +2100,6 @@ static int i915_execlists(struct seq_file *m, void *data)
 	for_each_engine(engine, dev_priv) {
 		struct drm_i915_gem_request *head_req = NULL;
 		int count = 0;
-		unsigned long flags;

 		seq_printf(m, "%s\n", engine->name);

@@ -2127,13 +2126,13 @@ static int i915_execlists(struct seq_file *m, void *data)
 				   i, status, ctx_id);
 		}

-		spin_lock_irqsave(&engine->execlist_lock, flags);
+		spin_lock_bh(&engine->execlist_lock);
 		list_for_each(cursor, &engine->execlist_queue)
 			count++;
 		head_req = list_first_entry_or_null(&engine->execlist_queue,
 						    struct drm_i915_gem_request,
 						    execlist_link);
-		spin_unlock_irqrestore(&engine->execlist_lock, flags);
+		spin_unlock_bh(&engine->execlist_lock);

 		seq_printf(m, "\t%d requests in queue\n", count);
 		if (head_req) {

drivers/gpu/drm/i915/i915_gem.c

Lines changed: 6 additions & 4 deletions
@@ -2842,13 +2842,15 @@ static void i915_gem_reset_engine_cleanup(struct drm_i915_private *dev_priv,
 	 */

 	if (i915.enable_execlists) {
-		spin_lock_irq(&engine->execlist_lock);
+		/* Ensure irq handler finishes or is cancelled. */
+		tasklet_kill(&engine->irq_tasklet);

+		spin_lock_bh(&engine->execlist_lock);
 		/* list_splice_tail_init checks for empty lists */
 		list_splice_tail_init(&engine->execlist_queue,
 				      &engine->execlist_retired_req_list);
+		spin_unlock_bh(&engine->execlist_lock);

-		spin_unlock_irq(&engine->execlist_lock);
 		intel_execlists_retire_requests(engine);
 	}

@@ -2968,9 +2970,9 @@ i915_gem_retire_requests(struct drm_device *dev)
 		i915_gem_retire_requests_ring(engine);
 		idle &= list_empty(&engine->request_list);
 		if (i915.enable_execlists) {
-			spin_lock_irq(&engine->execlist_lock);
+			spin_lock_bh(&engine->execlist_lock);
 			idle &= list_empty(&engine->execlist_queue);
-			spin_unlock_irq(&engine->execlist_lock);
+			spin_unlock_bh(&engine->execlist_lock);

 			intel_execlists_retire_requests(engine);
 		}

drivers/gpu/drm/i915/i915_irq.c

Lines changed: 1 addition & 1 deletion
@@ -1323,7 +1323,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir, int test_shift)
 	if (iir & (GT_RENDER_USER_INTERRUPT << test_shift))
 		notify_ring(engine);
 	if (iir & (GT_CONTEXT_SWITCH_INTERRUPT << test_shift))
-		intel_lrc_irq_handler(engine);
+		tasklet_schedule(&engine->irq_tasklet);
 }

 static irqreturn_t gen8_gt_irq_handler(struct drm_i915_private *dev_priv,

drivers/gpu/drm/i915/intel_lrc.c

Lines changed: 22 additions & 14 deletions
@@ -131,6 +131,7 @@
  * preemption, but just sampling the new tail pointer).
  *
  */
+#include <linux/interrupt.h>

 #include <drm/drmP.h>
 #include <drm/i915_drm.h>
@@ -418,20 +419,18 @@ static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
 {
 	struct drm_i915_private *dev_priv = rq0->i915;

-	/* BUG_ON(!irqs_disabled()); */
-
 	execlists_update_context(rq0);

 	if (rq1)
 		execlists_update_context(rq1);

-	spin_lock(&dev_priv->uncore.lock);
+	spin_lock_irq(&dev_priv->uncore.lock);
 	intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);

 	execlists_elsp_write(rq0, rq1);

 	intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
-	spin_unlock(&dev_priv->uncore.lock);
+	spin_unlock_irq(&dev_priv->uncore.lock);
 }

 static void execlists_context_unqueue(struct intel_engine_cs *engine)
@@ -538,22 +537,22 @@ get_context_status(struct intel_engine_cs *engine, unsigned int read_pointer,

 /**
  * intel_lrc_irq_handler() - handle Context Switch interrupts
- * @ring: Engine Command Streamer to handle.
+ * @engine: Engine Command Streamer to handle.
  *
  * Check the unread Context Status Buffers and manage the submission of new
  * contexts to the ELSP accordingly.
  */
-void intel_lrc_irq_handler(struct intel_engine_cs *engine)
+static void intel_lrc_irq_handler(unsigned long data)
 {
+	struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
 	struct drm_i915_private *dev_priv = engine->dev->dev_private;
 	u32 status_pointer;
 	unsigned int read_pointer, write_pointer;
 	u32 csb[GEN8_CSB_ENTRIES][2];
 	unsigned int csb_read = 0, i;
 	unsigned int submit_contexts = 0;

-	spin_lock(&dev_priv->uncore.lock);
-	intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);

 	status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(engine));

@@ -578,8 +577,7 @@ void intel_lrc_irq_handler(struct intel_engine_cs *engine)
 		      _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
 				    engine->next_context_status_buffer << 8));

-	intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
-	spin_unlock(&dev_priv->uncore.lock);
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);

 	spin_lock(&engine->execlist_lock);

@@ -621,7 +619,7 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)

 	i915_gem_request_reference(request);

-	spin_lock_irq(&engine->execlist_lock);
+	spin_lock_bh(&engine->execlist_lock);

 	list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
 		if (++num_elements > 2)
@@ -646,7 +644,7 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
 	if (num_elements == 0)
 		execlists_context_unqueue(engine);

-	spin_unlock_irq(&engine->execlist_lock);
+	spin_unlock_bh(&engine->execlist_lock);
 }

 static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
@@ -1033,9 +1031,9 @@ void intel_execlists_retire_requests(struct intel_engine_cs *engine)
 		return;

 	INIT_LIST_HEAD(&retired_list);
-	spin_lock_irq(&engine->execlist_lock);
+	spin_lock_bh(&engine->execlist_lock);
 	list_replace_init(&engine->execlist_retired_req_list, &retired_list);
-	spin_unlock_irq(&engine->execlist_lock);
+	spin_unlock_bh(&engine->execlist_lock);

 	list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
 		struct intel_context *ctx = req->ctx;
@@ -2016,6 +2014,13 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *engine)
 	if (!intel_engine_initialized(engine))
 		return;

+	/*
+	 * Tasklet cannot be active at this point due intel_mark_active/idle
+	 * so this is just for documentation.
+	 */
+	if (WARN_ON(test_bit(TASKLET_STATE_SCHED, &engine->irq_tasklet.state)))
+		tasklet_kill(&engine->irq_tasklet);
+
 	dev_priv = engine->dev->dev_private;

 	if (engine->buffer) {
@@ -2089,6 +2094,9 @@ logical_ring_init(struct drm_device *dev, struct intel_engine_cs *engine)
 	INIT_LIST_HEAD(&engine->execlist_retired_req_list);
 	spin_lock_init(&engine->execlist_lock);

+	tasklet_init(&engine->irq_tasklet,
+		     intel_lrc_irq_handler, (unsigned long)engine);
+
 	logical_ring_init_platform_invariants(engine);

 	ret = i915_cmd_parser_init_ring(engine);

drivers/gpu/drm/i915/intel_lrc.h

Lines changed: 0 additions & 1 deletion
@@ -118,7 +118,6 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 			       struct drm_i915_gem_execbuffer2 *args,
 			       struct list_head *vmas);

-void intel_lrc_irq_handler(struct intel_engine_cs *engine);
 void intel_execlists_retire_requests(struct intel_engine_cs *engine);

 #endif /* _INTEL_LRC_H_ */

drivers/gpu/drm/i915/intel_ringbuffer.h

Lines changed: 2 additions & 1 deletion
@@ -266,7 +266,8 @@ struct intel_engine_cs {
 	} semaphore;

 	/* Execlists */
-	spinlock_t execlist_lock;
+	struct tasklet_struct irq_tasklet;
+	spinlock_t execlist_lock; /* used inside tasklet, use spin_lock_bh */
 	struct list_head execlist_queue;
 	struct list_head execlist_retired_req_list;
 	unsigned int next_context_status_buffer;
