From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org,
Laurence Oberman <loberman@redhat.com>,
Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible
Date: Mon, 5 Mar 2018 11:13:52 +0800 [thread overview]
Message-ID: <20180305031357.23950-1-ming.lei@redhat.com> (raw)
Hi,
This patchset tries to spread irq vectors among online CPUs as far as
possible, so that we can avoid allocating too few irq vectors with
online CPUs mapped.
For example, in an 8-core system where 4 CPU cores (4-7) are offline/not
present, on a device with 4 queues:
1) before this patchset
irq 39, cpu list 0-2
irq 40, cpu list 3-4,6
irq 41, cpu list 5
irq 42, cpu list 7
2) after this patchset
irq 39, cpu list 0,4
irq 40, cpu list 1,6
irq 41, cpu list 2,5
irq 42, cpu list 3,7
Without this patchset, only two vectors (39 and 40) can be active, but
all 4 irq vectors can be active after applying this patchset.
One disadvantage is that CPUs from different NUMA nodes can be mapped to
the same irq vector. Given that one CPU is generally enough to handle
one irq vector, it shouldn't be a big deal. Especially since otherwise
more vectors have to be allocated, or performance can be hurt with the
current assignment.
V2:
- address comments from Christoph
- mark irq_build_affinity_masks as static
- move constification of get_nodes_in_cpumask's parameter into one
prep patch
- add Reviewed-by tag
Thanks
Ming
Ming Lei (5):
genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
genirq/affinity: mark 'node_to_cpumask' as const for
get_nodes_in_cpumask()
genirq/affinity: move actual irq vector spread into one helper
genirq/affinity: support to do irq vectors spread starting from any
vector
genirq/affinity: irq vector spread among online CPUs as far as
possible
kernel/irq/affinity.c | 145 ++++++++++++++++++++++++++++++++------------------
1 file changed, 94 insertions(+), 51 deletions(-)
--
2.9.5
Thread overview: 9+ messages
2018-03-05 3:13 Ming Lei [this message]
2018-03-05 3:13 ` [PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask Ming Lei
2018-03-05 3:13 ` [PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask() Ming Lei
2018-03-05 3:13 ` [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper Ming Lei
2018-03-05 16:28 ` kbuild test robot
2018-03-08 7:48 ` Christoph Hellwig
2018-03-08 10:05 ` Ming Lei
2018-03-05 3:13 ` [PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector Ming Lei
2018-03-05 3:13 ` [PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible Ming Lei