From: zbhhbz <zbhhbz at yeah.net>
To: spdk@lists.01.org
Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
Date: Tue, 08 Mar 2022 14:46:19 +0000 [thread overview]
Message-ID: <a06bb73.44610.17f69feaa76.Coremail.zbhhbz@yeah.net> (raw)
In-Reply-To: CH0PR02MB8009E669C9C1779EF3C4ACF58B099@CH0PR02MB8009.namprd02.prod.outlook.com
OK, thanks.
I'm also curious about how SPDK interrupts the guest OS:
does the interrupt go through QEMU/KVM,
or does it go straight to the guest OS?
1. In vhost-user the guest OS is in poll mode,
so it should wait for an interrupt from the NIC. Will the interrupt from the NIC go through QEMU/KVM first?
2. In vfio-user, since the guest OS has direct access to the "PCIe address" (DRAM), can SPDK interrupt the guest OS directly (an NVMe interrupt)?
---- Original Message ----
| From | Thanos Makatos <thanos.makatos(a)nutanix.com> |
| Date | 2022-03-08 22:19 |
| To | Storage Performance Development Kit <spdk(a)lists.01.org> |
| Cc | |
| Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> -----Original Message-----
> From: zbhhbz <zbhhbz(a)yeah.net>
> Sent: 08 March 2022 14:16
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
>
> What I meant is that:
> in vhost-user:
> 1. the guest OS puts a (vhost) request in the virtqueue;
> 2. SPDK's polling then discovers this request;
> 3. SPDK puts an NVMe request in the actual device's queue and rings the
> doorbell.
> That is two request structs (not the actual data, but the request struct itself).
>
> in vfio-user:
> 1. the guest OS puts an (NVMe) request struct in the DRAM region;
> 2. SPDK discovers this, and then what? It still needs to inform the physical
> NVMe device, right?
> That is still two copies (DMA) of the NVMe request struct.
You're right, it does need to create a new request and put it in a queue that the physical controller can see. However, I believe the read and write payload doesn't require this additional step.
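To make this concrete, here is a minimal Python sketch of the point above (names and structures are illustrative, not SPDK APIs): the target builds a second request struct for the physical controller, but the payload buffer is passed by reference, so the data itself is never copied.

```python
# Hedged model of the request-translation step discussed above.
# Two request structs exist (guest's and target's), zero payload copies.
from dataclasses import dataclass

@dataclass
class GuestRequest:          # what the guest places in its queue
    opcode: str
    payload: bytearray       # lives in guest DRAM, shared with the target

@dataclass
class PhysRequest:           # what the target submits to the real controller
    opcode: str
    payload: bytearray       # same buffer, by reference: no copy

def translate(guest_req: GuestRequest) -> PhysRequest:
    # The target allocates a new request struct (the "second" request the
    # thread talks about) but points it at the original payload buffer.
    return PhysRequest(opcode=guest_req.opcode, payload=guest_req.payload)

buf = bytearray(b"block data")
g = GuestRequest("write", buf)
p = translate(g)
assert p.payload is g.payload    # zero-copy: the very same underlying buffer
```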
>
> At 2022-03-08 22:02:48, "Thanos Makatos" <thanos.makatos(a)nutanix.com>
> wrote:
> >> -----Original Message-----
> >> From: Liu, Xiaodong <xiaodong.liu(a)intel.com>
> >> Sent: 08 March 2022 14:00
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >> No, vhost-user does just one data DMA, from the guest DRAM region to the
> >> device buffer.
> >
> >Same for vfio-user: the guest has shared its memory with SPDK, so the
> >physical device can access that memory directly.
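The "shared memory, one DMA" point can be sketched as a guest-physical-to-host address translation table (field names invented for illustration; the real vhost-user/vfio-user region structures differ): once the regions are registered, the target hands the device a host address inside guest memory, so there is no bounce buffer.

```python
# Hedged model: the guest registers its memory regions with the target up
# front; any guest-physical address in a request resolves to a host address
# the physical device can DMA to or from directly.
from dataclasses import dataclass

@dataclass
class MemRegion:
    guest_phys: int   # start of the region in guest-physical space
    size: int
    host_base: int    # where the same bytes live in the target's address space

def gpa_to_host(regions, gpa):
    """Translate a guest-physical address to a host address, or raise."""
    for r in regions:
        if r.guest_phys <= gpa < r.guest_phys + r.size:
            return r.host_base + (gpa - r.guest_phys)
    raise ValueError(f"unmapped guest address {gpa:#x}")

regions = [MemRegion(guest_phys=0x1000_0000, size=0x1000,
                     host_base=0x7F00_0000_0000)]
assert gpa_to_host(regions, 0x1000_0010) == 0x7F00_0000_0010
```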
> >
> >>
> >> -----Original Message-----
> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> Sent: Tuesday, March 8, 2022 9:54 PM
> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >>
> >>
> >> OK, thanks.
> >> One more follow-up question:
> >> doesn't this lead to a double DMA of the data? One from the guest to the
> >> DRAM region, and one from the DRAM region to the device buffer.
> >>
> >>
> >> At 2022-03-08 21:31:44, "Thanos Makatos"
> <thanos.makatos(a)nutanix.com>
> >> wrote:
> >> >> -----Original Message-----
> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> Sent: 08 March 2022 13:20
> >> >> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and vhost-blk
> >> >>
> >> >> Thanks, this helps a lot.
> >> >>
> >> >> About vfio-user:
> >> >> I understand that with vfio-user the guest OS can issue DMA reads/writes
> >> >> to the "PCIe space" of a virtual device, but I'm confused:
> >> >> 1. Does the guest OS issue DMA reads/writes to a region on the actual
> >> >> physical device, or just to a DRAM region?
> >> >
> >> >A DRAM region.
> >> >
> >> >> If the guest OS directly accesses the physical device (DMA, IOMMU),
> >> >> where does SPDK stand?
> >> >> 2. Why does vfio-user need a socket? What kind of data does the socket
> >> >> carry?
> >> >
> >> >The vfio-user protocol allows a device to be emulated outside QEMU (the
> >> >vfio-user client), in a separate process (the vfio-user server, SPDK running
> >> >the nvmf/vfio-user target in our case). The UNIX domain socket is used
> >> >between QEMU and SPDK for initial device setup, virtual IRQs, and other
> >> >infrequent operations.
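The control-plane/data-plane split described above can be modeled in a few lines (message names are invented for illustration, not the real vfio-user protocol commands): the socket carries only infrequent setup traffic, while every I/O request goes through shared memory and is found by polling, never by a socket message.

```python
# Hedged model of the split: socketpair() stands in for the vfio-user UNIX
# domain socket; a plain list stands in for the shared-memory I/O queues.
import socket

client, server = socket.socketpair()   # control plane (setup, virtual IRQs)
shared_queue = []                      # data plane (guest DRAM queues)

def guest_setup():
    client.sendall(b"DMA_MAP")         # infrequent: travels over the socket

def guest_submit_io(req):
    shared_queue.append(req)           # hot path: shared memory only

guest_setup()
assert server.recv(64) == b"DMA_MAP"        # target saw the setup message
guest_submit_io({"opcode": "read", "lba": 0})
assert shared_queue[0]["opcode"] == "read"  # the I/O never touched the socket
```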
> >> >
> >> >> 3. Does vfio-user look like vhost-user, except with direct DMA
> >> >> access instead of shared-memory communication?
> >> >
> >> >vfio-user allows any kind of device to be emulated in a separate process
> >> (even non-PCI), while vhost-user is mainly for VirtIO devices.
> >> >
> >> >>
> >> >> thanks
> >> >>
> >> >> At 2022-03-08 17:40:12, "Thanos Makatos"
> >> <thanos.makatos(a)nutanix.com>
> >> >> wrote:
> >> >> >> -----Original Message-----
> >> >> >> From: zbhhbz <zbhhbz(a)yeah.net>
> >> >> >> Sent: 08 March 2022 09:29
> >> >> >> To: spdk <spdk(a)lists.01.org>
> >> >> >> Subject: [SPDK] Re: The difference between vhost-nvme and
> >> >> >> vhost-blk
> >> >> >>
> >> >> >> thanks, follow up questions:
> >> >> >> 1. If I still use vhost-user-blk-pci in QEMU with an NVMe SSD (bdev),
> >> >> >> I can't access the NVMe features (shadow doorbell) in the guest. Is
> >> >> >> that correct?
> >> >> >
> >> >> >Correct.
> >> >> >
> >> >> >> 2. In the vfio-user solution, does the interrupt from the NVMe SSD
> >> >> >> go through QEMU/KVM, or does it go straight to the guest kernel?
> >> >> >
> >> >> >It doesn't go to QEMU/KVM. It depends on how you've set it up in
> >> >> >SPDK: it can either go to the host kernel or to SPDK.
> >> >> >
> >> >> >> 3. When will vfio-user be available? Does KVM have the same
> >> >> >> dilemma here?
> >> >> >
> >> >> >vfio-user is under review in QEMU, so we can't predict when it will
> >> >> >be accepted upstream. This doesn't mean you can't use it, though; have
> >> >> >a look here:
> >> >> >https://github.com/nutanix/libvfio-user/blob/master/docs/spdk.md
> >> >> >
> >> >> >>
> >> >> >> thank you very much!
> >> >> >>
> >> >> >> ---- Original Message ----
> >> >> >> | From | Liu, Changpeng <changpeng.liu(a)intel.com> |
> >> >> >> | Date | 2022-03-08 16:33 |
> >> >> >> | To | Storage Performance Development Kit <spdk(a)lists.01.org> |
> >> >> >> | Cc | |
> >> >> >> | Subject | [SPDK] Re: The difference between vhost-nvme and vhost-blk |
> >> >> >> Previously SPDK extended QEMU with a separate driver to enable
> >> >> >> vhost-nvme, but that driver was not accepted by QEMU. So now we
> >> >> >> support emulated NVMe with a new solution, "vfio-user". Again, the
> >> >> >> QEMU driver for this is still under code review in the QEMU
> >> >> >> community, but SPDK already supports it.
> >> >> >>
> >> >> >> > -----Original Message-----
> >> >> >> > From: zbhhbz(a)yeah.net <zbhhbz(a)yeah.net>
> >> >> >> > Sent: Tuesday, March 8, 2022 4:29 PM
> >> >> >> > To: spdk(a)lists.01.org
> >> >> >> > Subject: [SPDK] The difference between vhost-nvme and vhost-blk
> >> >> >> >
> >> >> >> > Could someone help me understand the difference between
> >> >> >> > vhost-nvme and vhost-blk?
> >> >> >> > The online doc only shows there is vhost-user-blk-pci; why is there
> >> >> >> > no vhost-user-nvme-pci?
> >> >> >> > There is little documentation to be found in github/spdk, and QEMU
> >> >> >> > itself doesn't help either.
> >> >> >> > Thanks!
_______________________________________________
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
Thread overview: 13+ messages
2022-03-08 14:46 zbhhbz [this message]
2022-03-08 17:54 [SPDK] Re: The difference between vhost-nvme and vhost-blk zbhhbz
2022-03-08 16:45 Walker, Benjamin
2022-03-08 14:19 Thanos Makatos
2022-03-08 14:15 zbhhbz
2022-03-08 14:02 Thanos Makatos
2022-03-08 14:00 Liu, Xiaodong
2022-03-08 13:54 zbhhbz
2022-03-08 13:31 Thanos Makatos
2022-03-08 13:19 zbhhbz
2022-03-08 9:40 Thanos Makatos
2022-03-08 9:29 zbhhbz
2022-03-08 8:33 Liu, Changpeng