
Commit 6a6dcae

Ming Lei authored and KAGA-KOKO committed
blk-mq: Build default queue map via group_cpus_evenly()
The default queue mapping builder of blk_mq_map_queues() doesn't take NUMA topology into account, so the resulting mapping is poor: CPUs belonging to different NUMA nodes can be assigned to the same queue. IOPS is observed to drop by ~30% when two jobs run on the same null_blk hctx from two CPUs on different NUMA nodes, compared with two CPUs on the same node.

Address the issue by reusing group_cpus_evenly() to build the queue mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA locality.

Performance also becomes more stable once the queue mapping respects NUMA locality. For example, on a two-node arm64 machine with 160 CPUs, node 0 (CPU 0~79) and node 1 (CPU 80~159):

1) modprobe null_blk nr_devices=1 submit_queues=2

2) run 'fio (t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)' and observe that IOPS becomes much more stable across multiple runs:

- unpatched: IOPS is 2.5M ~ 4.5M
- patched: IOPS is 4.3M ~ 5.0M

Many drivers may benefit from the change, such as nvme pci poll, nvme tcp, ...

Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Garry <[email protected]>
Reviewed-by: Jens Axboe <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent f7b3ea8 commit 6a6dcae
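To make the locality problem concrete, here is a small user-space toy model (not kernel code and not part of this commit): it contrasts a naive round-robin CPU-to-queue assignment with a locality-grouped one on a hypothetical 8-CPU, 2-node, 2-queue machine. The per-node grouping below merely stands in for what group_cpus_evenly() computes from the real topology.

/*
 * Toy model of the mapping change: 8 CPUs, CPUs 0-3 on node 0 and
 * CPUs 4-7 on node 1, spread over 2 queues.  Build and run with any
 * C compiler; this is an illustration, not the kernel algorithm.
 */
#include <stdio.h>

#define NR_CPUS   8
#define NR_QUEUES 2

static int cpu_to_node(int cpu)
{
        return cpu < 4 ? 0 : 1;        /* CPUs 0-3 -> node 0, 4-7 -> node 1 */
}

int main(void)
{
        int old_map[NR_CPUS], new_map[NR_CPUS];
        int cpu;

        /* Old builder (simplified): round-robin, blind to NUMA topology. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                old_map[cpu] = cpu % NR_QUEUES;

        /*
         * New builder (simplified): group CPUs by locality first, then
         * give each group its own queue, as group_cpus_evenly() does.
         */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                new_map[cpu] = cpu_to_node(cpu);

        printf("cpu  node  old_queue  new_queue\n");
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("%3d  %4d  %9d  %9d\n",
                       cpu, cpu_to_node(cpu), old_map[cpu], new_map[cpu]);
        return 0;
}

With the round-robin map, each queue is shared by CPUs from both nodes; with the grouped map, each queue stays within one node, which is the effect the IOPS numbers above reflect.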

File tree

1 file changed (+13, -50 lines)


block/blk-mq-cpumap.c

Lines changed: 13 additions & 50 deletions
@@ -10,66 +10,29 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/group_cpus.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
 #include "blk-mq.h"
 
-static int queue_index(struct blk_mq_queue_map *qmap,
-                       unsigned int nr_queues, const int q)
-{
-        return qmap->queue_offset + (q % nr_queues);
-}
-
-static int get_first_sibling(unsigned int cpu)
-{
-        unsigned int ret;
-
-        ret = cpumask_first(topology_sibling_cpumask(cpu));
-        if (ret < nr_cpu_ids)
-                return ret;
-
-        return cpu;
-}
-
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
-        unsigned int *map = qmap->mq_map;
-        unsigned int nr_queues = qmap->nr_queues;
-        unsigned int cpu, first_sibling, q = 0;
-
-        for_each_possible_cpu(cpu)
-                map[cpu] = -1;
-
-        /*
-         * Spread queues among present CPUs first for minimizing
-         * count of dead queues which are mapped by all un-present CPUs
-         */
-        for_each_present_cpu(cpu) {
-                if (q >= nr_queues)
-                        break;
-                map[cpu] = queue_index(qmap, nr_queues, q++);
+        const struct cpumask *masks;
+        unsigned int queue, cpu;
+
+        masks = group_cpus_evenly(qmap->nr_queues);
+        if (!masks) {
+                for_each_possible_cpu(cpu)
+                        qmap->mq_map[cpu] = qmap->queue_offset;
+                return;
         }
 
-        for_each_possible_cpu(cpu) {
-                if (map[cpu] != -1)
-                        continue;
-                /*
-                 * First do sequential mapping between CPUs and queues.
-                 * In case we still have CPUs to map, and we have some number of
-                 * threads per cores then map sibling threads to the same queue
-                 * for performance optimizations.
-                 */
-                if (q < nr_queues) {
-                        map[cpu] = queue_index(qmap, nr_queues, q++);
-                } else {
-                        first_sibling = get_first_sibling(cpu);
-                        if (first_sibling == cpu)
-                                map[cpu] = queue_index(qmap, nr_queues, q++);
-                        else
-                                map[cpu] = map[first_sibling];
-                }
+        for (queue = 0; queue < qmap->nr_queues; queue++) {
+                for_each_cpu(cpu, &masks[queue])
+                        qmap->mq_map[cpu] = qmap->queue_offset + queue;
         }
+        kfree(masks);
 }
 EXPORT_SYMBOL_GPL(blk_mq_map_queues);

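For broader context, here is a hedged sketch, not taken from this commit or from any specific driver, of how a block driver with no special routing needs might wire the default builder into its blk_mq_ops; the ->map_queues prototype is assumed to be the void-returning one used by kernels of this era, and example_map_queues/example_mq_ops are illustrative names.

/*
 * Hedged sketch only -- roughly how a driver lets the block layer build
 * its default queue map.  After this commit, that spreading follows
 * CPU/NUMA locality via group_cpus_evenly().
 */
#include <linux/blk-mq.h>

static void example_map_queues(struct blk_mq_tag_set *set)
{
        /* Spread this map's hardware queues over all possible CPUs. */
        blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
}

static const struct blk_mq_ops example_mq_ops = {
        /* .queue_rq and the other mandatory callbacks omitted in this sketch */
        .map_queues     = example_map_queues,
};

Note the fallback visible in the diff above: if group_cpus_evenly() cannot allocate its mask array, every possible CPU is mapped to qmap->queue_offset (the first queue of the map); otherwise the mask array is released with kfree() once the map has been built.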