Linux-NVME Archive on lore.kernel.org
From: Maxim Levitsky <mlevitsk@redhat.com>
Subject: No subject
Date: Thu, 21 Mar 2019 19:04:50 +0200
Message-ID: <8698ad583b1cfe86afc3d5440be630fc3e8e0680.camel@redhat.com>
In-Reply-To: <E217011E-C273-442D-AF99-84D3CD63FE77@nutanix.com>

On Thu, 2019-03-21 at 16:41 +0000, Felipe Franciosi wrote:
> > On Mar 21, 2019, at 4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
> > 
> > On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
> > > mdev-nvme seems like a duplication of SPDK.  The performance is not
> > > better and the features are more limited, so why focus on this approach?
> > > 
> > > One argument might be that the kernel NVMe subsystem wants to offer this
> > > functionality and loading the kernel module is more convenient than
> > > managing SPDK to some users.
> > > 
> > > Thoughts?
> > 
> > Doesn't SPDK bind a controller to a single process? mdev binds to
> > namespaces (or their partitions), so you could have many mdevs assigned
> > to many VMs accessing a single controller.
> 
> Yes, it binds to a single process which can drive the datapath of multiple
> virtual controllers for multiple VMs (similar to what you described for mdev).
> You can therefore efficiently poll multiple VM submission queues (and multiple
> device completion queues) from a single physical CPU.
> 
> The same could be done in the kernel, but the code gets complicated as you add
> more functionality to it. As this is a direct interface with an untrusted
> front-end (the guest), it's also arguably safer to do in userspace.
> 
> Worth noting: you can eventually have a single physical core polling all sorts
> of virtual devices (e.g. virtual storage or network controllers) very
> efficiently. And this is quite configurable, too. In the interest of fairness,
> performance or efficiency, you can choose to dynamically add or remove queues
> to the poll thread or spawn more threads and redistribute the work.
> 
> F.
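
Roughly, the single-poller model Felipe describes could look like the
sketch below: one thread walks a list of virtual submission queues
belonging to different guests, so adding or removing a queue is just a
list operation. All names here are hypothetical illustrations, not SPDK
or mdev-nvme APIs.

#include <stddef.h>

/* One guest-visible submission queue (hypothetical layout). */
struct vsq {
	struct vsq *next;               /* linked into the poller's list */
	unsigned int head;              /* entries already consumed */
	volatile unsigned int tail;     /* guest-written tail "doorbell" */
	void (*process)(struct vsq *q, unsigned int idx);
};

/* One pass of the poll loop: service every queue on this thread's list. */
static void poll_once(struct vsq *queues)
{
	for (struct vsq *q = queues; q != NULL; q = q->next) {
		while (q->head != q->tail) {    /* new submissions pending? */
			q->process(q, q->head);
			q->head++;              /* index wraparound elided */
		}
	}
}

Rebalancing then amounts to moving a vsq from one poller's list to
another, or spawning another thread with its own list, which is the
fairness/efficiency knob Felipe mentions.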

Note, though, that SPDK doesn't support sharing the device between the
host and the guests: it takes over the whole NVMe device, which forces
the kernel nvme driver to unbind from it.

My driver creates a polling thread per guest, but it's trivial to add an
option to use the same polling thread for many guests if there is a need
for that.
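
As a rough sketch of what such an option could look like in kernel terms
(hypothetical names, not the actual mdev-nvme code), the per-guest and
shared modes would differ only in how many guests sit on one poller's
list:

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>

struct guest_ctx {
	struct list_head node;          /* entry on a poller's guest list */
	/* per-guest virtual queues ... */
};

struct poller {
	struct task_struct *thread;
	struct list_head guests;        /* one guest_ctx in per-guest mode,
					   several in shared mode */
};

static int poller_fn(void *data)
{
	struct poller *p = data;
	struct guest_ctx *g;

	while (!kthread_should_stop()) {
		/* locking of the guest list vs. iteration elided */
		list_for_each_entry(g, &p->guests, node) {
			/* poll this guest's submission queues here */
		}
		cond_resched();         /* be fair to the rest of the CPU */
	}
	return 0;
}

A module parameter could then select between spawning one such poller
per guest or attaching every guest to a shared one.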

Best regards,
	Maxim Levitsky


Thread overview: 50+ messages
2019-03-19 14:41 No subject Maxim Levitsky
2019-03-19 14:41 ` [PATCH 1/9] vfio/mdev: add .request callback Maxim Levitsky
2019-03-19 14:41 ` [PATCH 2/9] nvme/core: add some more values from the spec Maxim Levitsky
2019-03-19 14:41 ` [PATCH 3/9] nvme/core: add NVME_CTRL_SUSPENDED controller state Maxim Levitsky
2019-03-19 14:41 ` [PATCH 4/9] nvme/pci: use the NVME_CTRL_SUSPENDED state Maxim Levitsky
2019-03-20  2:54   ` Fam Zheng
2019-03-19 14:41 ` [PATCH 5/9] nvme/pci: add known admin effects to augument admin effects log page Maxim Levitsky
2019-03-19 14:41 ` [PATCH 6/9] nvme/pci: init shadow doorbell after each reset Maxim Levitsky
2019-03-19 14:41 ` [PATCH 7/9] nvme/core: add mdev interfaces Maxim Levitsky
2019-03-20 11:46   ` Stefan Hajnoczi
2019-03-20 12:50     ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 8/9] nvme/core: add nvme-mdev core driver Maxim Levitsky
2019-03-19 14:41 ` [PATCH 9/9] nvme/pci: implement the mdev external queue allocation interface Maxim Levitsky
2019-03-19 14:58 ` [PATCH 0/9] RFC: NVME VFIO mediated device Maxim Levitsky
2019-03-25 18:52   ` [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS] Maxim Levitsky
2019-03-26  9:38     ` Stefan Hajnoczi
2019-03-26  9:50       ` Maxim Levitsky
2019-03-19 15:22 ` your mail Keith Busch
2019-03-19 23:49   ` Chaitanya Kulkarni
2019-03-20 16:44     ` Maxim Levitsky
2019-03-20 16:30   ` Maxim Levitsky
2019-03-20 17:03     ` Keith Busch
2019-03-20 17:33       ` Maxim Levitsky
2019-04-08 10:04   ` Maxim Levitsky
2019-03-20 11:03 ` No subject Felipe Franciosi
2019-03-20 19:08   ` Maxim Levitsky
2019-03-21 16:12     ` Stefan Hajnoczi
2019-03-21 16:21       ` Keith Busch
2019-03-21 16:41         ` Felipe Franciosi
2019-03-21 17:04           ` Maxim Levitsky [this message]
2019-03-22  7:54             ` Felipe Franciosi
2019-03-22 10:32               ` Maxim Levitsky
2019-03-22 15:30               ` Keith Busch
2019-03-25 15:44                 ` Felipe Franciosi
2019-03-20 15:08 ` [PATCH 0/9] RFC: NVME VFIO mediated device Bart Van Assche
2019-03-20 16:48   ` Maxim Levitsky
2019-03-20 15:28 ` Bart Van Assche
2019-03-20 16:42   ` Maxim Levitsky
2019-03-20 17:03     ` Alex Williamson
2019-03-21 16:13 ` your mail Stefan Hajnoczi
2019-03-21 17:07   ` Maxim Levitsky
2019-03-25 16:46     ` Stefan Hajnoczi
