From: Paul Durrant <Paul.Durrant@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [PATCH RESEND] tools/libxl: add support for emulated NVMe drives
Date: Thu, 23 Mar 2017 08:55:17 +0000	[thread overview]
Message-ID: <964dfcc62cdb47b4838fdf0fb3660127@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <22738.47327.234826.651030@mariner.uk.xensource.com>

> -----Original Message-----
> From: Ian Jackson [mailto:ian.jackson@eu.citrix.com]
> Sent: 22 March 2017 17:48
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Wei Liu <wei.liu2@citrix.com>
> Subject: RE: [PATCH RESEND] tools/libxl: add support for emulated NVMe
> drives
> 
> Paul Durrant writes ("RE: [PATCH RESEND] tools/libxl: add support for
> emulated NVMe drives"):
> > > I guess that was with xapi rather than libxl ?
> >
> > Nope. It was libxl.
> 
> That's weird.  You specify it as xvda in the config file ?
> 

Yes. You'd think that specifying xvda would mean no emulated device, but no: the disk still shows up as an emulated IDE device.
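
For reference, the disk line I'm using has this shape (illustrative only; the image path is made up for this example, and the positional parameters are target, format, vdev, access):

  disk = [ '/root/disk.qcow2,qcow2,xvda,rw' ]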

> > Windows PV drivers treat hd*, sd* and xvd* numbering in the same way...
> they just parse the disk number out and use that as the target number of the
> synthetic SCSI bus exposed to Windows.
> 
> What if there's an hda /and/ an xvda ?
> 

That's what I tried, and the device model fails to spawn because both names end up mapped to the same emulated IDE slot:

libxl: error: libxl_dm.c:2345:device_model_spawn_outcome: Domain 1:domain 1 device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1493:domcreate_devmodel_started: Domain 1:device model did not start: -3
libxl: error: libxl_dm.c:2459:kill_device_model: Device Model already exited
libxl: error: libxl_dom.c:38:libxl__domain_type: unable to get domain type for domid=1
libxl: error: libxl_domain.c:962:domain_destroy_callback: Domain 1:Unable to destroy guest
libxl: error: libxl_domain.c:889:domain_destroy_cb: Domain 1:Destruction of domain failed
root@brixham:~# tail -F /var/log/xen/qemu-dm-winrs2-1.hvm.log
qemu-system-i386:/root/events:12: WARNING: trace event 'xen_domid_restrict' does not exist
qemu-system-i386: -drive file=/root/disk.qcow2,if=ide,index=0,media=disk,format=qcow2,cache=writeback: drive with bus=0, unit=0 (index=0) exists

However, no such failure occurs if I choose 'nvme0' for my secondary disk, so it is unsafe to re-use xvd* numbering without at least further modification to libxl to ensure that there is only ever one disk N, whatever naming scheme is used.
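
To illustrate why the names collide, here's a rough sketch of the classic VBD number encodings (per Xen's docs/misc/xen-vbd-interface; simplified, ignoring the extended xvd encoding, and not the actual libxl parsing code - the function name is mine):

  #include <stdio.h>

  /* Extract the disk index from a classic Xen VBD number: hda and
   * xvda are both "disk 0", just expressed with different majors. */
  static int vbd_disk_index(unsigned int vbd)
  {
      unsigned int major = vbd >> 8, minor = vbd & 0xff;

      switch (major) {
      case 3:   return minor / 64;     /* hda, hdb: 64 minors per disk */
      case 22:  return 2 + minor / 64; /* hdc, hdd */
      case 8:   return minor / 16;     /* sda..: 16 minors per disk */
      case 202: return minor / 16;     /* xvda..: 16 minors per disk */
      default:  return -1;             /* extended/unknown encoding */
      }
  }

  int main(void)
  {
      printf("hda  -> disk %d\n", vbd_disk_index((3 << 8) | 0));   /* 0 */
      printf("xvda -> disk %d\n", vbd_disk_index((202 << 8) | 0)); /* 0 */
      printf("sdb  -> disk %d\n", vbd_disk_index((8 << 8) | 16));  /* 1 */
      return 0;
  }

So whichever of hd*, sd* or xvd* is used, the same disk index falls out, which is presumably why QEMU ends up being handed two drives at index 0.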

> > > Allocating a new numbering scheme might involve changing Linux guests
> > > too.  (I haven't experimented with what happens if one specifies a
> > > reserved number.)
> >
> > Yes, that's a point. IIRC the doc does say that guests should ignore numbers
> they don't understand... but who knows if this is actually the case.
> >
> > Given that there's no booting from NVMe at the moment, even HVM linux
> will only ever see the PV device since the emulated device will be unplugged
> early in boot and PV drivers are 'in box' in Linux. Windows is really the
> concern, where PV drivers are installed after the OS has seen the emulated
> device and thus the PV device needs to appear with the same 'identity' as far
> as the storage stack is concerned. I'm pretty sure this worked when I tried it a
> few months back using xvd* numbering (while coming up with the QEMU
> patch) but I'll check again.
> 
> I guess I'm trying to look forward to a "real" use case, which is
> presumably emulated NVME booting ?
> 
> If it's just for testing we might not care about a low limit on the
> number of devices, or the precise unplug behaviour.  Or we might
> tolerate having such tests require special configuration.
> 

The potential use for NVMe in the long run is actually to avoid using PV at all. QEMU's NVMe emulation is not as fast as QEMU acting as a PV backend, but it's not far off, and the advantage is that NVMe is a standard, so Windows has an in-box driver. So, having thought about it some more, we definitely should separate NVMe devices from IDE/SCSI devices in the unplug protocol, and a PV frontend that chooses to displace emulated NVMe does indeed need to be able to distinguish them.
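
To make that concrete, this is roughly the shape of unplug mask I have in mind (the IDE/NIC/aux-IDE bits are the ones documented in Xen's docs/misc/hvm-emulated-unplug.markdown, with names in the style QEMU's xen_platform.c uses; the NVMe bit position and name are placeholders of mine until the actual QEMU patch is posted):

  /* Existing bits, per docs/misc/hvm-emulated-unplug.markdown. */
  #define UNPLUG_ALL_IDE_DISKS  (1u << 0)
  #define UNPLUG_ALL_NICS       (1u << 1)
  #define UNPLUG_AUX_IDE_DISKS  (1u << 2)

  /* Proposed: a distinct bit so a PV frontend can displace emulated NVMe
   * independently of IDE/SCSI.  Bit position is illustrative only. */
  #define UNPLUG_NVME_DISKS     (1u << 3)

  /* A frontend handling both emulated IDE and NVMe would then write
   * something like the following to the platform device's unplug port
   * (after the usual magic/version handshake):
   *
   *   outw(UNPLUG_ALL_IDE_DISKS | UNPLUG_NVME_DISKS, unplug_ioport);
   */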

I'll post a patch to QEMU today to revise the unplug protocol and I'll check what happens when blkfront encounters a vbd number it doesn't understand.

  Paul

> Ian.
