xen-devel.lists.xenproject.org archive mirror
From: Wei Liu <wei.liu2@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH RESEND] tools/libxl: add support for emulated NVMe drives
Date: Wed, 18 Jan 2017 12:02:22 +0000
Message-ID: <20170118120222.GT5089@citrix.com>
In-Reply-To: <5d2d7b224bf2437a8679427d0c45a915@AMSPEX02CL03.citrite.net>

On Wed, Jan 18, 2017 at 10:51:50AM +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Wei Liu [mailto:wei.liu2@citrix.com]
> > Sent: 18 January 2017 10:29
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: xen-devel@lists.xenproject.org; Ian Jackson <Ian.Jackson@citrix.com>;
> > Wei Liu <wei.liu2@citrix.com>
> > Subject: Re: [PATCH RESEND] tools/libxl: add support for emulated NVMe
> > drives
> > 
> > On Fri, Jan 13, 2017 at 02:00:41PM +0000, Paul Durrant wrote:
> > > Upstream QEMU supports emulation of NVM Express a.k.a. NVMe drives.
> > >
> > > This patch adds a new vdev type into libxl to allow such drives to be
> > > presented to HVM guests. Because the purpose of the new vdev is purely
> > > to configure emulation, the syntax only supports specification of
> > > whole disks. Also there is no need to introduce a new concrete VBD
> > > encoding for NVMe drives.
> > 
> > This seems to be in contradiction with the code below?
> >
> 
> No, there's no contradiction because the encoding is identical to the xvdX encoding. Perhaps I should state that in the comment.
>  

That would be appreciated.
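
For reference, my reading is that the new vdev would then slot into the xl
disk configuration like any other whole-disk vdev, along these lines (the
stanza below is my own illustration; the vdev spelling, image path and
format are assumptions, not taken from the patch):

  # Hypothetical xl disk specification using the proposed NVMe vdev
  disk = [ 'format=raw, vdev=nvme0, access=rw, target=/var/lib/xen/images/data.img' ]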

> > >
> > > NOTE: QEMU's emulation only supports a single NVMe namespace, so the
> > >       vdev syntax does not include specification of a namespace.
> > >       Also, current versions of SeaBIOS do not support booting from
> > >       NVMe devices, so the vdev should only be used for secondary
> > >       drives.
> > >
> > 
> > I don't know much about NVMe, but I presume we could just extend the
> > proposed syntax to support namespaces should the need arise?
> > 
> 
> Well, I could make the syntax nvme<device>n<namespace> (to match Linux's namespace naming) and insist that the namespace be 1 for the moment. Do you think that would be preferable?
> 

Fine by me. Let's wait a bit for other people to comment.
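
Just to illustrate (this mirrors Linux's own /dev/nvmeXnY naming rather
than anything in the patch), with the namespace pinned to 1 for now the
vdevs would look something like:

  vdev=nvme0n1   # first emulated NVMe drive, namespace 1
  vdev=nvme1n1   # second emulated NVMe drive, namespace 1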


> > > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > > ---
> > > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > > Cc: Wei Liu <wei.liu2@citrix.com>
> > > ---
> > >  docs/man/xen-vbd-interface.markdown.7 | 15 ++++++++-------
> > >  docs/man/xl-disk-configuration.pod.5  |  4 ++--
> > >  tools/libxl/libxl_device.c            |  8 ++++++++
> > >  tools/libxl/libxl_dm.c                |  6 ++++++
> > >  4 files changed, 24 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/docs/man/xen-vbd-interface.markdown.7 b/docs/man/xen-vbd-interface.markdown.7
> > > index 1c996bf..8fd378c 100644
> > > --- a/docs/man/xen-vbd-interface.markdown.7
> > > +++ b/docs/man/xen-vbd-interface.markdown.7
> > > @@ -8,12 +8,12 @@ emulated IDE, AHCI or SCSI disks.
> > >  The abstract interface involves specifying, for each block device:
> > >
> > >   * Nominal disk type: Xen virtual disk (aka xvd*, the default); SCSI
> > > -   (sd*); IDE or AHCI (hd*).
> > > +   (sd*); IDE or AHCI (hd*); NVMe.
> > 
> > NVMe (nvme*) ?
> 
> Yes.
> 
> > 
> > >
> > > -   For HVM guests, each whole-disk hd* and and sd* device is made
> > > -   available _both_ via emulated IDE resp. SCSI controller, _and_ as a
> > > -   Xen VBD.  The HVM guest is entitled to assume that the IDE or SCSI
> > > -   disks available via the emulated IDE controller target the same
> > > +   For HVM guests, each whole-disk hd*, sd* or nvme* device is made
> > > +   available _both_ via emulated IDE, SCSI controller or NVMe drive
> > > +   respectively _and_ as a Xen VBD.  The HVM guest is entitled to
> > > +   assume that the disks available via the emulation target the same
> > 
> > How do you expect the guest to deal with multipath NVMe devices? Maybe
> > we need to add unplug support for NVMe devices in QEMU?
> 
> That's true. For convenience, there would need to be a QEMU patch for unplug to allow the emulated device to be displaced by the PV one. I can document this as a shortcoming for now, if that's ok?
> 

The unplug functionality, as I understand it, is crucial to data integrity.
Documenting this as a shortcoming doesn't seem good enough.  Do we need to
wait until QEMU is ready before we can apply this patch?
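
For background, the reason unplug matters here: with the existing protocol
the guest's PV drivers probe the Xen platform device and then write a
bitmask telling the emulator which device classes to detach, so that only
the PV path touches the backing image. A rough guest-side sketch follows;
the port, magic and existing bits are as I remember them from the emulated
unplug documentation, and the NVMe bit is purely hypothetical (it is
exactly what QEMU currently lacks):

#include <sys/io.h>                    /* inw/outw; requires ioperm() from userspace */

#define XEN_PLATFORM_IOPORT    0x10    /* magic/unplug port on the platform device */
#define XEN_PLATFORM_MAGIC     0x49d2

#define UNPLUG_ALL_IDE_DISKS   0x0001
#define UNPLUG_ALL_NICS        0x0002
#define UNPLUG_AUX_IDE_DISKS   0x0004
#define UNPLUG_ALL_NVME_DISKS  0x0008  /* hypothetical: no such bit exists today */

static int unplug_emulated_disks(void)
{
    /* Probe for the platform device before asking it to unplug anything. */
    if (inw(XEN_PLATFORM_IOPORT) != XEN_PLATFORM_MAGIC)
        return -1;

    /* Ask the emulator to detach its disk models; without an NVMe bit the
     * emulated NVMe drive stays plugged in alongside the PV disk, and a
     * guest writing through both paths risks corrupting the image. */
    outw(UNPLUG_ALL_IDE_DISKS | UNPLUG_ALL_NVME_DISKS, XEN_PLATFORM_IOPORT);
    return 0;
}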

Wei.


>   Paul



Thread overview: 23+ messages
2017-01-13 14:00 [PATCH RESEND] tools/libxl: add support for emulated NVMe drives Paul Durrant
2017-01-18 10:28 ` Wei Liu
2017-01-18 10:51   ` Paul Durrant
2017-01-18 12:02     ` Wei Liu [this message]
2017-01-18 12:15       ` Paul Durrant
2017-01-18 12:20         ` Wei Liu
2017-01-18 15:07           ` Wei Liu
2017-01-19  8:58             ` Paul Durrant
2017-01-19 11:18               ` Wei Liu
  -- strict thread matches above, loose matches on Subject: below --
2017-03-22 13:09 Paul Durrant
2017-03-22 14:16 ` Ian Jackson
2017-03-22 14:22   ` Paul Durrant
2017-03-22 15:01     ` Ian Jackson
2017-03-22 15:21       ` Paul Durrant
2017-03-22 16:03         ` Ian Jackson
2017-03-22 16:31           ` Paul Durrant
2017-03-22 16:45             ` Paul Durrant
2017-03-22 17:02             ` Ian Jackson
2017-03-22 17:16               ` Paul Durrant
2017-03-22 17:31                 ` Ian Jackson
2017-03-22 17:41                   ` Paul Durrant
2017-03-22 17:48                     ` Ian Jackson
2017-03-23  8:55                       ` Paul Durrant
