public inbox for linux-nvme@lists.infradead.org
From: Keith Busch <kbusch@kernel.org>
To: Pratyush Yadav <ptyadav@amazon.de>
Cc: Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Jens Axboe <axboe@kernel.dk>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-pci: do not set the NUMA node of device if it has none
Date: Wed, 26 Jul 2023 10:17:20 -0600
Message-ID: <ZMFHEK95WGwtYbid@kbusch-mbp.dhcp.thefacebook.com>
In-Reply-To: <mafs0cz0e8zc6.fsf_-_@amazon.de>

On Wed, Jul 26, 2023 at 05:30:33PM +0200, Pratyush Yadav wrote:
> On Wed, Jul 26 2023, Christoph Hellwig wrote:
> > On Wed, Jul 26, 2023 at 10:58:36AM +0300, Sagi Grimberg wrote:
> >>>> For example, AWS EC2's i3.16xlarge instance does not expose NUMA
> >>>> information for the NVMe devices. This means all NVMe devices have
> >>>> NUMA_NO_NODE by default. Without this patch, random 4k read performance
> >>>> measured via fio on CPUs from node 1 (around 165k IOPS) is almost 50%
> >>>> less than CPUs from node 0 (around 315k IOPS). With this patch, CPUs on
> >>>> both nodes get similar performance (around 315k IOPS).
> >>>
> >>> irqbalance doesn't work with this driver though: the interrupts are
> >>> managed by the kernel. Is there some other reason to explain the perf
> >>> difference?
> 
> Hmm, I did not know that. I have not gone and looked at the code but I
> think the same reasoning should hold, just with s/irqbalance/kernel. If
> the kernel IRQ balancer sees the device is on node 0, it would deliver
> its interrupts to CPUs on node 0.
> 
> In my tests I can see that without this patch the interrupts for the
> NVMe queues are sent only to CPUs on node 0. With this patch, CPUs
> from both nodes receive the interrupts.
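
The per-node numbers quoted above (~165k vs ~315k IOPS) read like the
result of a node-pinned random read job. A minimal fio sketch along those
lines; the device name, CPU list, and job parameters are assumptions,
adjust them for the instance under test:

  fio --name=randread-node0 --filename=/dev/nvme0n1 --direct=1 \
      --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
      --runtime=30 --time_based --cpus_allowed=0-15 --group_reporting
  # Repeat with --cpus_allowed set to the node 1 CPU list and compare
  # the reported IOPS.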

Could you send the output of:

  numactl --hardware

and then with and without your patch:

  for i in $(cat /proc/interrupts | grep nvme0 | sed "s/^ *//g" | cut -d":" -f 1); do \
    cat /proc/irq/$i/{smp,effective}_affinity_list; \
  done

?
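
It would also help to see what the platform reports as the device's node,
with and without the patch. Assuming the controller is nvme0 (adjust the
name as needed):

  # -1 means NUMA_NO_NODE, i.e. the firmware exposed no locality info;
  # per the patch subject, it should stay -1 with the patch applied and
  # presumably show a node number without it.
  cat /sys/class/nvme/nvme0/device/numa_node
  # CPUs the kernel considers local to the device.
  cat /sys/class/nvme/nvme0/device/local_cpulist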

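One way to confirm these are kernel-managed interrupts, which user space
(irqbalance included) cannot move, is to try changing the affinity of one
of them; for a managed IRQ the write is expected to fail. A rough sketch,
assuming $i holds one of the nvme0 IRQ numbers from the loop above:

  # The kernel rejects user-space affinity changes for managed interrupts,
  # so this write should fail (typically with an I/O error):
  echo 0 > /proc/irq/$i/smp_affinity_list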


Thread overview: 16+ messages
2023-07-25 11:06 [PATCH] nvme-pci: do not set the NUMA node of device if it has none Pratyush Yadav
2023-07-25 14:35 ` Keith Busch
2023-07-26  7:58   ` Sagi Grimberg
2023-07-26 13:14     ` Christoph Hellwig
2023-07-26 15:30       ` Pratyush Yadav
2023-07-26 16:17         ` Keith Busch [this message]
2023-07-26 19:32           ` Pratyush Yadav
2023-07-26 22:25             ` Keith Busch
2023-07-28 18:09               ` Pratyush Yadav
2023-07-28 19:34                 ` Keith Busch
2023-08-04 14:50                   ` Pratyush Yadav
2023-08-04 15:19                     ` Keith Busch
2023-08-08 15:51                       ` Pratyush Yadav
2023-08-08 16:35                         ` Keith Busch
2024-07-23  9:49 ` Maurizio Lombardi
2024-07-23 14:39   ` Christoph Hellwig
