From: Paul Durrant <Paul.Durrant@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
Wei Liu <wei.liu2@citrix.com>
Subject: Re: [PATCH RESEND] tools/libxl: add support for emulated NVMe drives
Date: Wed, 22 Mar 2017 17:41:27 +0000
Message-ID: <11751d6e8f2b405aa577cb64a61d165e@AMSPEX02CL03.citrite.net>
In-Reply-To: <22738.46339.565001.683740@mariner.uk.xensource.com>
> -----Original Message-----
> From: Ian Jackson [mailto:ian.jackson@eu.citrix.com]
> Sent: 22 March 2017 17:32
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Wei Liu <wei.liu2@citrix.com>
> Subject: RE: [PATCH RESEND] tools/libxl: add support for emulated NVMe
> drives
>
> Paul Durrant writes ("RE: [PATCH RESEND] tools/libxl: add support for
> emulated NVMe drives"):
> > This is my VM:
> >
> > root@brixham:~# xenstore-ls "/libxl/3"
> > device = ""
> > vbd = ""
> > 51712 = ""
> ...
> > params = "qcow2:/root/winrs2-pv1.qcow2"
>
> > No problem using xvda... still ends up as IDE primary master.
>
> Right. The question is more whether this confuses the guest. I don't
> think the tools will actually mind.
>
> I guess that was with xapi rather than libxl ?
Nope. It was libxl.
Windows PV drivers treat hd*, sd* and xvd* numbering in the same way: they just parse the disk number out and use it as the target number on the synthetic SCSI bus exposed to Windows.
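
For illustration, a rough sketch in C of that kind of parsing, following the encodings documented in xen-vbd-interface.markdown.7 (e.g. the 51712 devid in the xenstore output above is just 202 << 8, i.e. xvda, disk 0, partition 0). This is not the actual Windows frontend code, just the decoding it describes:

/* Decode a Xen VBD device number into (disk, partition), per the
 * encodings in xen-vbd-interface.markdown.7.  Illustration only. */
#include <stdio.h>

static int vbd_decode(unsigned int devno, unsigned int *disk,
                      unsigned int *partition)
{
    if (devno & (1u << 28)) {            /* extended xvd encoding */
        *disk = (devno >> 8) & 0xfffff;
        *partition = devno & 0xff;
        return 0;
    }
    switch (devno >> 8) {
    case 202:                            /* xvd, disks/partitions < 16 */
    case 8:                              /* sd */
        *disk = (devno >> 4) & 0xf;
        *partition = devno & 0xf;
        return 0;
    case 3:                              /* hd, disks 0-1 */
        *disk = (devno >> 6) & 0x1;
        *partition = devno & 0x3f;
        return 0;
    case 22:                             /* hd, disks 2-3 */
        *disk = 2 + ((devno >> 6) & 0x1);
        *partition = devno & 0x3f;
        return 0;
    default:
        return -1;                       /* not a recognised scheme */
    }
}

int main(void)
{
    unsigned int disk, partition;

    /* 51712 is the vbd devid from the xenstore-ls output above */
    if (!vbd_decode(51712, &disk, &partition))
        printf("disk %u, partition %u\n", disk, partition);
    return 0;
}
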
>
> > > So maybe they should reuse the hd* numbering ?
> >
> > That might be too limiting. The hd* numbering scheme doesn't stretch
> > very far.
>
> Indeed. sd is rather limited too.
>
> But, you say:
>
> Also, current versions of SeaBIOS do not support booting from
> NVMe devices, so the vdev should only be used for secondary drives.
>
> So currently this is mostly useful for testing ?
Yes. Just for testing at the moment.
>
> Normally the emulated devices are _intended_ for bootstrapping to an
> environment that can handle vbds. Which doesn't involve having very
> many of them.
>
> > > > That means modifications to PV frontends would be needed, which is
> > > > going to make things more difficult. Most OS find disks by UUID
> > > > these days anyway so I'm still not sure that just using xvd*
> > > > numbering would really be a problem.
> > >
> > > In terms of the "nominal disk type" discussed in
> > > xen-vbd-interface.markdown.7, I don't think these emulated devices,
> > > which get unplugged, should have a "nominal disk type" of "Xen
> > > virtual disk".
> >
> > Ok. I'll submit another patch to QEMU to distinguish between
> > IDE/SCSI disks and NVMe disks in the unplug protocol, come up with a
> > new PV numbering scheme and modify the Windows frontend to
> > understand it.
>
> Before you go away and do a lot of work, perhaps we should keep
> exploring whether my concerns are actually justified...
>
> Allocating a new numbering scheme might involve changing Linux guests
> too. (I haven't experimented with what happens if one specifies a
> reserved number.)
>
Yes, that's a fair point. IIRC the doc does say that guests should ignore numbers they don't understand... but who knows whether that's actually the case in practice.
Given that there's no booting from NVMe at the moment, even HVM Linux will only ever see the PV device: the emulated device will be unplugged early in boot, and the PV drivers are 'in box' in Linux. Windows is really the concern. There the PV drivers are installed after the OS has already seen the emulated device, so the PV device needs to appear with the same 'identity' as far as the storage stack is concerned. I'm pretty sure this worked when I tried it a few months back using xvd* numbering (while coming up with the QEMU patch), but I'll check again.
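
For reference, a minimal guest-side sketch of that unplug step, based on the protocol in Xen's docs/misc/hvm-emulated-unplug.markdown. The NVMe bit here is hypothetical -- it is exactly the distinction the proposed QEMU patch would add -- and the product/build-number handshake the real drivers perform first is omitted for brevity:

/* Sketch only: ask QEMU to unplug emulated disks via the Xen platform
 * device's unplug IO port.  Only meaningful inside an HVM guest; needs
 * ioperm() and hence root / CAP_SYS_RAWIO on Linux. */
#include <stdint.h>
#include <sys/io.h>                       /* ioperm, inw, outw (x86 glibc) */

#define XEN_UNPLUG_PORT        0x10
#define XEN_UNPLUG_MAGIC       0x49d2
#define UNPLUG_ALL_IDE_DISKS   (1 << 0)
#define UNPLUG_ALL_NICS        (1 << 1)
#define UNPLUG_AUX_IDE_DISKS   (1 << 2)
#define UNPLUG_NVME_DISKS      (1 << 3)   /* hypothetical new bit */

static int unplug_emulated_disks(void)
{
    if (inw(XEN_UNPLUG_PORT) != XEN_UNPLUG_MAGIC)
        return -1;                        /* unplug protocol not offered */

    /* Ask QEMU to unplug the emulated disks (and, with the proposed
     * bit, emulated NVMe drives) before the PV frontends take over. */
    outw(UNPLUG_ALL_IDE_DISKS | UNPLUG_NVME_DISKS, XEN_UNPLUG_PORT);
    return 0;
}

int main(void)
{
    if (ioperm(XEN_UNPLUG_PORT, 4, 1))
        return 1;
    return unplug_emulated_disks() ? 2 : 0;
}
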
Paul
> Ian.