From: Wei Liu <wei.liu2@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [HACKATHON] Data path and tapdisk3 session note
Date: Wed, 20 Apr 2016 18:33:06 +0100	[thread overview]
Message-ID: <20160420173306.GA11151@citrix.com> (raw)

Data path and tapdisk 3

* Stop using block protocol for Windows, status of pvscsi

Windows 8 (10?) is SCSI-only -- the PV driver fakes a SCSI device and
translates.

The proposal is to use pvscsi in the data path.

Juergen: pvscsi is on track; no script is provided in tree, but a
script is available. Planning to integrate that with libxl. Not sure
how to deal with device removal: deciding which device to delete, and
sharing. (Make sure a disk is not accidentally removed while it is
assigned to a different domain.)
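
A rough sketch of what the xl-level configuration could look like
once the script is integrated with libxl (syntax modelled on the
existing out-of-tree / xend-era vscsi notation; the final option
names may well differ):

  # pass host SCSI device /dev/sdb through as virtual SCSI
  # address 0:0:0:0 (host:channel:target:lun)
  vscsi = [ '/dev/sdb, 0:0:0:0' ]

Removal is the open question above: tearing down an entry has to
identify the right virtual device and check that the physical device
is not still assigned to another domain.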

* tapdisk 3 not in tree

Ross volunteered to work that out, but needs guidance. Contact
XenServer; maybe act as upstream.

There is work to make blktap3 build outside of the XenServer build
system.

No-one in XenServer supports that; no-one works on it.

XenServer PoV: they don't think tapdisk3 is maintained and don't
believe it to be the way forward; use qemu instead.

Ross to work out whether to take over maintenance or not. XenServer
will keep it in for compatibility reasons.

Ross: using blktap2, an old version. It seems that not many people
are interested in tapdisk3; can't switch to qemu at the moment.

Ian: tapdisk3 has similarities with qemu; it might be possible to
port Ross's tapdisk modifications to qemu, relieving the maintenance
burden.

Paul: qemu has better functionality.

* qdisk

qemu qdisk uses grant map / unmap, so performance is suboptimal.

qemu in dom0 is used for mounting PV disks for pygrub.

For PV domains, a qdisk is only created when the PV backend is
required.
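
For illustration, this kind of xl disk entry is what requests the
qdisk backend (standard xl disk syntax; the image path is made up):

  # backendtype=qdisk selects the qemu backend, e.g. for a qcow2 image
  disk = [ 'format=qcow2, vdev=xvda, access=rw, backendtype=qdisk, target=/var/lib/xen/images/guest.qcow2' ]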

One process per domain doesn't scale for guests with a large number
of disks; qemu is not multi-threaded.

Can we make it one process per disk? Probably not acceptable
upstream. Ian: maybe 4 disks per qemu?

Ian: there is a way to implement that, based on Stefano's work to
spawn multiple qemus.

* emulation / pv

Windows will be able to boot from NVMe; it would be good to add NVMe
disk support in libxl. qemu already has the required backend.
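
For context, qemu's NVMe device model can already be instantiated on
a plain qemu command line roughly as below; the missing piece is
libxl generating the equivalent -drive/-device pair (the file name
and IDs are illustrative):

  qemu-system-x86_64 \
    -drive file=/var/lib/xen/images/guest.img,if=none,format=raw,id=nvme0 \
    -device nvme,drive=nvme0,serial=guest-nvme-0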

Ian: use the vdev identifier to select the type of device you want.
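
A sketch of that idea: in the current xl disk syntax the vdev name
already determines how the disk is exposed (xvd* as a PV disk, hd* as
an emulated IDE disk for HVM guests), so an NVMe-style vdev could
select qemu's NVMe model in the same way. The nvme* form below is
hypothetical, not an existing option:

  disk = [ 'format=raw, vdev=hda, access=rw, target=/dev/vg/guest' ]
  # hypothetical extension: vdev=nvme0 to request the NVMe model
  # disk = [ 'format=raw, vdev=nvme0, access=rw, target=/dev/vg/guest' ]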

