public inbox for linux-nvme@lists.infradead.org
From: Alexander Shumakovitch <shurik@jhu.edu>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: Read speed for a PCIe NVMe SSD is ridiculously slow on a multi-socket machine.
Date: Sat, 25 Mar 2023 00:33:14 +0000	[thread overview]
Message-ID: <ZB5BSJgeqaOkiXFF@hornet> (raw)
In-Reply-To: <e2df2f18-aaf9-89d5-6fed-aa1fb663f69c@opensource.wdc.com>

Hi Damien,

Just to add to my previous message, I've run the same set of tests on a
small SATA SSD boot drive (Kingston A400) attached to the same system, and
it turned out to be more or less node- and I/O-mode-agnostic, producing
consistent read speeds of about 450MB/sec in direct I/O mode and about
480MB/sec in cached I/O mode. In particular, cached reads on the "wrong"
NUMA node were significantly faster for this SATA SSD than for the NVMe
drive, which managed only about 170MB/sec (both drives are connected to
CPU #0).
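For reference, the per-node comparison can be reproduced by pinning the reader to one NUMA node with numactl. This is only a sketch: the device name /dev/nvme0n1, the node numbers, and the transfer sizes are placeholders for whatever matches the system under test, and the commands assume root.

```shell
# Drop the page cache first so cached vs. direct numbers are comparable.
sync && echo 3 > /proc/sys/vm/drop_caches

# Buffered read pinned to NUMA node 0 (the node the drive is attached to).
numactl --cpunodebind=0 --membind=0 \
    dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096

# Same buffered read pinned to the "wrong" node, for comparison.
sync && echo 3 > /proc/sys/vm/drop_caches
numactl --cpunodebind=1 --membind=1 \
    dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096

# Direct-I/O variant: bypasses the page cache entirely.
numactl --cpunodebind=1 --membind=1 \
    dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct
```

The --membind flag forces page-cache pages to be allocated on the chosen node as well, which separates cross-node memory traffic from cross-node interrupt/completion effects.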

So my question becomes: why is the NVMe driver susceptible to (very) slow
cached reads, while the AHCI one is not? Are there some fundamental
differences in how AHCI and NVMe block devices interact with the page
cache?

Thank you,

  --- Alex.

On Fri, Mar 24, 2023 at 05:43:42PM +0900, Damien Le Moal wrote:
> It is very unusual to use hdparm, a tool designed mainly for ATA devices, to
> benchmark an nvme device. At the very least, if you really want to measure the
> drive performance, you should add the --direct option (see man hdparm).
> 
> But a better way to test would be to use fio with io_uring or libaio IO engine
> doing multi-job & high QD --direct=1 IOs. That will give you the maximum
> performance of your device. Then remove the --direct=1 option to do buffered
> IOs, which will expose potential issues with your system memory bandwidth.
> 
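A fio invocation along the lines Damien describes might look like the following. The device name, block size, queue depth, and job count are illustrative placeholders, not tuned values; running against a raw block device is destructive-read-safe but still requires root.

```shell
# Maximum-throughput read test: io_uring engine, direct I/O,
# 4 jobs at queue depth 32 (all parameters are examples).
fio --name=nvme-read --filename=/dev/nvme0n1 --rw=read \
    --ioengine=io_uring --direct=1 --bs=128k \
    --iodepth=32 --numjobs=4 --group_reporting \
    --runtime=30 --time_based

# For the buffered-I/O comparison, drop --direct=1 and clear the
# page cache first:
#   sync && echo 3 > /proc/sys/vm/drop_caches
```

Combined with numactl pinning, the same command run from each node should show whether the slowdown is specific to buffered reads crossing the interconnect.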


Thread overview: 8+ messages
     [not found] <ZB1JgJ2DxyTMVUHB@hornet>
2023-03-24  8:43 ` Read speed for a PCIe NVMe SSD is ridiculously slow on a multi-socket machine Damien Le Moal
2023-03-24 21:19   ` Alexander Shumakovitch
2023-03-25  1:52     ` Damien Le Moal
2023-03-31  7:53       ` Alexander Shumakovitch
2023-03-25  0:33   ` Alexander Shumakovitch [this message]
2023-03-25  1:56     ` Damien Le Moal
2023-03-24 19:34 ` Keith Busch
2023-03-24 21:38   ` Alexander Shumakovitch
