linux-block.vger.kernel.org archive mirror
From: Bart Van Assche <bvanassche@acm.org>
To: Felipe Franciosi <felipe@nutanix.com>,
	"lsf-pc@lists.linux-foundation.org" 
	<lsf-pc@lists.linux-foundation.org>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [LSF/MM TOPIC] NVMe Performance: Userspace vs Kernel
Date: Fri, 15 Feb 2019 13:41:34 -0800	[thread overview]
Message-ID: <1550266894.31902.104.camel@acm.org> (raw)
In-Reply-To: <DAB8A2DA-37D3-4CBE-8AD7-356E3CE8B0D3@nutanix.com>

On Fri, 2019-02-15 at 21:19 +0000, Felipe Franciosi wrote:
> Hi All,
> 
> I'd like to attend LSF/MM this year and discuss kernel performance when accessing NVMe devices, specifically (but not limited to) Intel Optane Memory (which boasts very low latency and high
> IOPS/throughput per NVMe controller).
> 
> Over the last year or two, I have done extensive experimentation comparing applications using libaio to those using SPDK. For hypervisors, where storage devices can be exclusively accessed with
> userspace drivers (given the device can be dedicated to a single process), using SPDK has proven to be significantly faster and more efficient. That remains true even in the latest versions of the
> kernel.
> 
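
For reference, a minimal sketch of the kernel-side path being compared above: a
single 4 KiB read submitted through libaio against a raw block device. The device
path, block size and error handling are illustrative placeholders, not taken from
the measurements referenced in this thread (build with -laio):

#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    void *buf;

    /* Placeholder device; O_DIRECT bypasses the page cache, as in typical tests. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0 || io_setup(1, &ctx))
        return 1;
    if (posix_memalign(&buf, 4096, 4096))
        return 1;

    io_prep_pread(&cb, fd, buf, 4096, 0);        /* one 4 KiB read at offset 0 */
    if (io_submit(ctx, 1, cbs) != 1)             /* system call per submission */
        return 1;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) /* interrupt-driven completion */
        return 1;

    printf("res=%lld\n", (long long)ev.res);
    io_destroy(ctx);
    return 0;
}
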
> I have presented work focusing on hypervisors at several conferences during this time. Although I appreciate that LSF/MM is more discussion-oriented, I am linking a couple of these presentations for
> reference:
> 
> Flash Memory Summit 2018
> https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2018/20180808_SOFT-202-1_Franciosi.pdf
> 
> Linux Piter 2018
> https://linuxpiter.com/system/attachments/files/000/001/558/original/20181103_-_AHV_and_SPDK.pdf
> 
> For LSF/MM, instead of focusing on hypervisors, I would like to discuss what can be done to achieve better efficiency and performance when using the kernel. My data include detailed results
> covering various scenarios such as different NUMA configurations, IRQ affinities and polling modes.
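
To make the polling-mode part of that comparison concrete: completion polling can
be exercised from userspace without SPDK, e.g. with preadv2() and RWF_HIPRI
(kernel 4.6+, glibc 2.26+). A rough sketch, assuming the device's poll queues are
enabled (on recent kernels, e.g. via the nvme poll_queues module parameter) and
using a placeholder device path:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

int main(void)
{
    void *buf;
    struct iovec iov;

    if (posix_memalign(&buf, 4096, 4096))
        return 1;
    iov.iov_base = buf;
    iov.iov_len = 4096;

    /* Placeholder device; RWF_HIPRI polling only applies to direct I/O. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0)
        return 1;

    /* RWF_HIPRI asks the block layer to busy-poll for the completion
     * instead of sleeping on an interrupt, trading CPU time for latency. */
    ssize_t ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
    printf("preadv2 returned %zd\n", ret);
    return 0;
}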

Hi Felipe,

It seems like you may have missed the performance comparison between SPDK and io_uring
that Jens posted recently?

Bart.
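
For readers who have not seen the interface being referenced, here is a rough
sketch of a polled read through io_uring using liburing; the device path, ring
size and use of IORING_SETUP_IOPOLL are illustrative only and are not taken from
Jens' comparison (build with -luring):

#define _GNU_SOURCE
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    struct iovec iov;
    void *buf;

    if (posix_memalign(&buf, 4096, 4096))
        return 1;
    iov.iov_base = buf;
    iov.iov_len = 4096;

    /* Placeholder device; IORING_SETUP_IOPOLL requires O_DIRECT and a
     * driver/queue configuration that supports polling. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0)
        return 1;
    if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL))
        return 1;

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov, 1, 0);  /* one 4 KiB read at offset 0 */
    io_uring_submit(&ring);

    if (io_uring_wait_cqe(&ring, &cqe))        /* busy-polls on an IOPOLL ring */
        return 1;
    printf("res=%d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}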


Thread overview: 6+ messages
2019-02-15 21:19 [LSF/MM TOPIC] NVMe Performance: Userspace vs Kernel Felipe Franciosi
2019-02-15 21:41 ` Bart Van Assche [this message]
     [not found]   ` <11A6C7D0-A26D-410F-8EE3-9AF524DF2050@nutanix.com>
2019-02-16  1:01     ` Bart Van Assche
2019-02-16  1:54     ` Jens Axboe
2019-02-15 21:47 ` Keith Busch
2019-02-15 22:14   ` Felipe Franciosi

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message, import it into your mail client,
  and reply-to-all from there

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1550266894.31902.104.camel@acm.org \
    --to=bvanassche@acm.org \
    --cc=felipe@nutanix.com \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=lsf-pc@lists.linux-foundation.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
