
Commit 171ba05

cminyard authored and Jiri Slaby committed
ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
commit 021c5b3 upstream.

The code for resizing the trace ring buffers has to run the per-cpu resize on the CPU itself. The code was using preempt_off() and running the code for the current CPU directly, otherwise calling schedule_work_on().

At least on RT this could result in the following:

|BUG: sleeping function called from invalid context at kernel/rtmutex.c:673
|in_atomic(): 1, irqs_disabled(): 0, pid: 607, name: bash
|3 locks held by bash/607:
|CPU: 0 PID: 607 Comm: bash Not tainted 3.12.15-rt25+ #124
|(rt_spin_lock+0x28/0x68)
|(free_hot_cold_page+0x84/0x3b8)
|(free_buffer_page+0x14/0x20)
|(rb_update_pages+0x280/0x338)
|(ring_buffer_resize+0x32c/0x3dc)
|(free_snapshot+0x18/0x38)
|(tracing_set_tracer+0x27c/0x2ac)

probably via

|cd /sys/kernel/debug/tracing/
|echo 1 > events/enable ; sleep 2
|echo 1024 > buffer_size_kb

If we just always use schedule_work_on(), there's no need for the preempt_off(). So do that.

Link: http://lkml.kernel.org/p/[email protected]

Reported-by: Stanislav Meduna <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Jiri Slaby <[email protected]>
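For illustration only (not part of the commit), the following is a minimal sketch of the pattern the fix adopts: instead of disabling preemption and running the update directly when the target CPU happens to be the current one, the resize path always hands the work to that CPU with schedule_work_on() and waits for it to complete. The names my_cpu_buffer, my_update_handler(), my_cpu_buffer_init() and resize_one_cpu() are hypothetical stand-ins for the ring-buffer internals; only schedule_work_on(), cpu_online(), the work_struct and the completion APIs are real kernel interfaces. Because the worker always runs on the target CPU, no preempt_disable()/preempt_enable() bracketing is needed, which is what made the old code trip the sleeping-function check on PREEMPT_RT.

/*
 * Illustrative sketch only -- simplified from the pattern in the commit.
 * my_cpu_buffer, my_update_handler(), my_cpu_buffer_init() and
 * resize_one_cpu() are hypothetical names, not the actual ring-buffer code.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/cpumask.h>

struct my_cpu_buffer {
        struct work_struct      update_pages_work;
        struct completion       update_done;
};

/* Runs on the target CPU via the workqueue. */
static void my_update_handler(struct work_struct *work)
{
        struct my_cpu_buffer *cb =
                container_of(work, struct my_cpu_buffer, update_pages_work);

        /* ... update this CPU's pages here ... */

        complete(&cb->update_done);
}

static void my_cpu_buffer_init(struct my_cpu_buffer *cb)
{
        INIT_WORK(&cb->update_pages_work, my_update_handler);
        init_completion(&cb->update_done);
}

static void resize_one_cpu(struct my_cpu_buffer *cb, int cpu)
{
        if (!cpu_online(cpu)) {
                /* An offline CPU cannot race with us; update directly. */
                /* ... rb_update_pages()-style direct update ... */
                return;
        }

        /*
         * Always run the update on the CPU that owns the buffer.
         * No preempt_disable() needed, so this is safe on PREEMPT_RT.
         */
        schedule_work_on(cpu, &cb->update_pages_work);
        wait_for_completion(&cb->update_done);
}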
1 parent 1e5e555 commit 171ba05

File tree

1 file changed: 4 additions, 20 deletions


kernel/trace/ring_buffer.c

Lines changed: 4 additions & 20 deletions
@@ -1700,22 +1700,14 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
                         if (!cpu_buffer->nr_pages_to_update)
                                 continue;
 
-                        /* The update must run on the CPU that is being updated. */
-                        preempt_disable();
-                        if (cpu == smp_processor_id() || !cpu_online(cpu)) {
+                        /* Can't run something on an offline CPU. */
+                        if (!cpu_online(cpu)) {
                                 rb_update_pages(cpu_buffer);
                                 cpu_buffer->nr_pages_to_update = 0;
                         } else {
-                                /*
-                                 * Can not disable preemption for schedule_work_on()
-                                 * on PREEMPT_RT.
-                                 */
-                                preempt_enable();
                                 schedule_work_on(cpu,
                                                 &cpu_buffer->update_pages_work);
-                                preempt_disable();
                         }
-                        preempt_enable();
                 }
 
                 /* wait for all the updates to complete */
@@ -1753,22 +1745,14 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
 
                 get_online_cpus();
 
-                preempt_disable();
-                /* The update must run on the CPU that is being updated. */
-                if (cpu_id == smp_processor_id() || !cpu_online(cpu_id))
+                /* Can't run something on an offline CPU. */
+                if (!cpu_online(cpu_id))
                         rb_update_pages(cpu_buffer);
                 else {
-                        /*
-                         * Can not disable preemption for schedule_work_on()
-                         * on PREEMPT_RT.
-                         */
-                        preempt_enable();
                         schedule_work_on(cpu_id,
                                          &cpu_buffer->update_pages_work);
                         wait_for_completion(&cpu_buffer->update_done);
-                        preempt_disable();
                 }
-                preempt_enable();
 
                 cpu_buffer->nr_pages_to_update = 0;
                 put_online_cpus();
