Commit 04aa530
genirq: Always force thread affinity
Sankara reported that the genirq core code fails to adjust the affinity of an interrupt thread in several cases:

1) On request/setup_irq() the call to setup_affinity() happens before the new action is registered, so the new thread is not notified.

2) For secondary shared interrupts nothing notifies the new thread to change its affinity.

3) Interrupts which have the IRQ_NO_BALANCING flag set do not move the thread either.

Fix this by setting the thread affinity flag right at thread creation time. This ensures that the thread moves to the right place under all circumstances. Requires a check in irq_thread_check_affinity for an existing affinity mask (CONFIG_CPU_MASK_OFFSTACK=y).

Reported-and-tested-by: Sankara Muthukrishnan <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1209041738200.2754@ionos
Signed-off-by: Thomas Gleixner <[email protected]>
1 parent f3de44e commit 04aa530

1 file changed: 21 additions, 2 deletions
kernel/irq/manage.c

@@ -732,6 +732,7 @@ static void
 irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 {
        cpumask_var_t mask;
+       bool valid = true;
 
        if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
                return;
@@ -746,10 +747,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
        }
 
        raw_spin_lock_irq(&desc->lock);
-       cpumask_copy(mask, desc->irq_data.affinity);
+       /*
+        * This code is triggered unconditionally. Check the affinity
+        * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
+        */
+       if (desc->irq_data.affinity)
+               cpumask_copy(mask, desc->irq_data.affinity);
+       else
+               valid = false;
        raw_spin_unlock_irq(&desc->lock);
 
-       set_cpus_allowed_ptr(current, mask);
+       if (valid)
+               set_cpus_allowed_ptr(current, mask);
        free_cpumask_var(mask);
 }
 #else
@@ -954,6 +963,16 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
                 */
                get_task_struct(t);
                new->thread = t;
+               /*
+                * Tell the thread to set its affinity. This is
+                * important for shared interrupt handlers as we do
+                * not invoke setup_affinity() for the secondary
+                * handlers as everything is already set up. Even for
+                * interrupts marked with IRQF_NO_BALANCING this is
+                * correct as we want the thread to move to the cpu(s)
+                * on which the requesting code placed the interrupt.
+                */
+               set_bit(IRQTF_AFFINITY, &new->thread_flags);
        }
 
        if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
