From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Markus Armbruster' <armbru@redhat.com>,
'Kevin Wolf' <kwolf@redhat.com>,
Anthony Perard <anthony.perard@citrix.com>
Cc: Tim Smith <tim.smith@citrix.com>,
Stefano Stabellini <sstabellini@kernel.org>,
"qemu-block@nongnu.org" <qemu-block@nongnu.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Max Reitz <mreitz@redhat.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Qemu-devel] xen_disk qdevification
Date: Thu, 8 Nov 2018 14:00:31 +0000
Message-ID: <24d1c322d3ac4ee2af32efacb486e608@AMSPEX02CL03.citrite.net>
In-Reply-To: <871s7z5xg4.fsf@dusky.pond.sub.org>
> -----Original Message-----
> From: Markus Armbruster [mailto:armbru@redhat.com]
> Sent: 05 November 2018 15:58
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: 'Kevin Wolf' <kwolf@redhat.com>; Tim Smith <tim.smith@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; qemu-block@nongnu.org; qemu-
> devel@nongnu.org; Max Reitz <mreitz@redhat.com>; Anthony Perard
> <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [Qemu-devel] xen_disk qdevification
>
> Paul Durrant <Paul.Durrant@citrix.com> writes:
>
> >> -----Original Message-----
> >> From: Kevin Wolf [mailto:kwolf@redhat.com]
> >> Sent: 02 November 2018 11:04
> >> To: Tim Smith <tim.smith@citrix.com>
> >> Cc: xen-devel@lists.xenproject.org; qemu-devel@nongnu.org; qemu-
> >> block@nongnu.org; Anthony Perard <anthony.perard@citrix.com>; Paul
> >> Durrant
> >> <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> >> Max Reitz <mreitz@redhat.com>; armbru@redhat.com
> >> Subject: xen_disk qdevification (was: [PATCH 0/3] Performance
> >> improvements
> >> for xen_disk v2)
> >>
> >> Am 02.11.2018 um 11:00 hat Tim Smith geschrieben:
> >> > A series of performance improvements for disks using the Xen PV ring.
> >> >
> >> > These have had fairly extensive testing.
> >> >
> >> > The batching and latency improvements together boost the throughput
> >> > of small reads and writes by two to six percent (measured using fio
> >> > in the guest).
> >> >
> >> > Avoiding repeated calls to posix_memalign() reduced the dirty heap
> >> > from 25MB to 5MB in the case of a single datapath process while also
> >> > improving performance.
> >> >
> >> > v2 removes some checkpatch complaints and fixes the CCs
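
(For illustration, a minimal sketch of the buffer-reuse pattern the
posix_memalign() point above describes: keep a small pool of aligned
buffers and allocate only when the pool is empty, so the heap stops
churning. The buffer size, pool depth, and names are assumptions, not
taken from the actual patches.)

#define _POSIX_C_SOURCE 200112L  /* for posix_memalign() */
#include <stdlib.h>

#define BUF_SIZE (64 * 1024)     /* assumed fixed per-request buffer size */
#define POOL_MAX 16              /* assumed pool depth */

static void *pool[POOL_MAX];
static int pool_len;

/* Return an aligned buffer, reusing a pooled one when possible. */
static void *buf_get(void)
{
    void *p;

    if (pool_len > 0) {
        return pool[--pool_len];  /* reuse: no new dirty heap pages */
    }
    if (posix_memalign(&p, 4096, BUF_SIZE) != 0) {
        return NULL;
    }
    return p;
}

/* Return a buffer to the pool, freeing it only if the pool is full. */
static void buf_put(void *p)
{
    if (pool_len < POOL_MAX) {
        pool[pool_len++] = p;
    } else {
        free(p);
    }
}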
> >>
> >> Completely unrelated, but since you're the first person touching
> >> xen_disk in a while, you're my victim:
> >>
> >> At KVM Forum we discussed sending a patch to deprecate xen_disk because
> >> after all those years, it still hasn't been converted to qdev. Markus
> >> is
> >> currently fixing some other not yet qdevified block device, but after
> >> that xen_disk will be the only one left.
> >>
> >> A while ago, a downstream patch review found out that there are some
> >> QMP
> >> commands that would immediately crash if a xen_disk device were present
> >> because of the lacking qdevification. This is not the code quality
> >> standard I envision for QEMU. It's time for non-qdev devices to go.
> >>
> >> So if you guys are still interested in the device, could someone please
> >> finally look into converting it?
> >>
> >
> > I have a patch series to do exactly this. It's somewhat involved as I
> > need to convert the whole PV backend infrastructure. I will try to
> > rebase and clean up my series a.s.a.p.
>
> Awesome! Please coordinate with Anthony Perard to avoid duplicating
> work if you haven't done so already.

I've come across a problem that I'm not sure how best to deal with, so I'm looking for some advice.

I now have a qdevified PV disk backend, but I can't bring it up because it fails to acquire a write lock on the qcow2 it points at. This is because an emulated IDE drive is also using the same qcow2. That does not appear to be a problem for the non-qdev xen_disk, presumably because it does not open the qcow2 until the emulated device is unplugged, but I don't really want to introduce similar hackery in my new backend (i.e. I want it to attach to its drive, and hence open the qcow2, during realize).

So, I'm not sure what to do. Having both a PV backend and an emulated device use the same qcow2 is not itself a problem, because they will never actually operate simultaneously, so is there any way I can bypass the qcow2 lock check when I create the drive for my PV backend? (BTW, I tried re-using the drive created for the emulated device, but that doesn't work because there is a check for whether a drive is already attached to something.)
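
(For reference, the lock in question is QEMU's block-layer permission
system: a writer that does not share BLK_PERM_WRITE makes any second
writer on the same image fail with "Failed to get 'write' lock". Below is
a minimal sketch of that mechanism, assuming the ~3.x-era API;
blk_set_perm() and the BLK_PERM_* flags are real, but the helper and its
allow_other_writers knob are hypothetical, made up for illustration.)

#include "qemu/osdep.h"
#include "sysemu/block-backend.h"
#include "qapi/error.h"

static void pv_disk_acquire_perms(BlockBackend *blk,
                                  bool allow_other_writers,
                                  Error **errp)
{
    uint64_t perm = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE;
    /* Share everything except resizing... */
    uint64_t shared = BLK_PERM_ALL & ~BLK_PERM_RESIZE;

    if (!allow_other_writers) {
        /*
         * ...and, by default, writing. Unsharing BLK_PERM_WRITE is
         * exactly what makes a second writer on the same qcow2 (here,
         * the emulated IDE drive) fail its permission check.
         */
        shared &= ~BLK_PERM_WRITE;
    }

    if (blk_set_perm(blk, perm, shared, errp) < 0) {
        return;  /* realize should fail; errp explains the conflict */
    }
}

(The user-visible equivalent on emulated disks is the share-rw=on device
property, which simply puts BLK_PERM_WRITE back into the shared-permission
mask.)
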
Any ideas?
Paul