From: Paolo Bonzini <pbonzini@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC PATCH 09/16] qcow2: Move COW and L2 update into own coroutine
Date: Tue, 18 Sep 2012 16:59:50 +0200
Message-ID: <50588C66.3060008@redhat.com>
In-Reply-To: <505888C6.1090609@redhat.com>
On 18/09/2012 16:44, Kevin Wolf wrote:
>>> + qemu_co_mutex_unlock(&s->lock);
>>> + qemu_co_rwlock_rdlock(&s->l2meta_flush);
>>
>> Should this lock be taken in process_l2meta? It's a bit easier to follow.
> I'm pretty sure there was a reason, but it isn't obvious any more. I
> guess I should have put a comment there... Maybe it doesn't exist any
> more, or maybe it's not that obvious.
>
> The difference would be that while waiting for the lock, the original
> write request could complete instead of waiting as well, and that the
> lock is potentially taken only in a BH instead of immediately.
>
> What happens if bdrv_aio_flush() and bdrv_aio_writev() are both in
> flight? If the flush runs its stop_l2meta() after the write request has
> signalled completion, but before the COW coroutine has started, it gets
> the lock even though a COW must still be processed. I believe we could
> then return a successful flush when the metadata isn't really on disk yet.
Then you need two comments, because you also need to document that
process_l2meta expects the read lock to be taken.
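Something like this, just to illustrate (untested sketch; names as in
the patch):

    qemu_co_mutex_unlock(&s->lock);
    /*
     * Take the reader side of l2meta_flush _here_, before the write
     * request signals completion, so that a concurrent bdrv_aio_flush()
     * cannot grab the writer side in the window between the end of the
     * guest write and the start of the COW coroutine.
     */
    qemu_co_rwlock_rdlock(&s->l2meta_flush);

and, on the other side:

    /*
     * Called with s->l2meta_flush held for reading; released once the
     * COW data and the L2 update have been written.
     */
    static void coroutine_fn process_l2meta(void *opaque)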
> So if you agree, I think we need to leave it where it is.
We do, but there may be other alternatives:
1) Perhaps you can take the rdlock again in process_l2meta, and also add
an unlock here just after qemu_coroutine_enter? (See the first sketch
below.)
2) Perhaps you can replace the rwlock with an explicit CoQueue, for
which an rwlock is just a wrapper. In thread terms, that would be a
condition variable, which I find easier to reason about than an rwlock.
In this case, stop_l2meta would see that there is a pending write and
yield; the bottom half can then pick up the work. (Second sketch below.)
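For (1), I mean something along these lines (rough, untested sketch; I
am guessing the exact shape of the surrounding code from the patch):

    /* write path: hold a reader reference across the gap until the
     * l2meta coroutine has taken its own */
    qemu_co_mutex_unlock(&s->lock);
    qemu_co_rwlock_rdlock(&s->l2meta_flush);
    ...
    co = qemu_coroutine_create(process_l2meta);
    qemu_coroutine_enter(co, l2meta);
    qemu_co_rwlock_unlock(&s->l2meta_flush);  /* coroutine holds its own now */

    static void coroutine_fn process_l2meta(void *opaque)
    {
        ...
        qemu_co_rwlock_rdlock(&s->l2meta_flush);
        /* do COW and L2 table update */
        ...
        qemu_co_rwlock_unlock(&s->l2meta_flush);
    }

The coroutine's rdlock should not be able to block, because the caller
still holds a reader reference at that point, so no flush can have taken
the writer side in between.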
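And for (2), roughly (also untested; the in_flight counter and the queue
are made-up names for what the patch currently tracks with the rwlock):

    /* in BDRVQcowState */
    int     in_flight_l2meta;    /* pending/running COW + L2 updates */
    CoQueue l2meta_flush_queue;  /* flushes wait here */

    static void coroutine_fn stop_l2meta(BDRVQcowState *s)
    {
        /* wait until no COW/L2 update is pending or running */
        while (s->in_flight_l2meta > 0) {
            qemu_co_queue_wait(&s->l2meta_flush_queue);
        }
    }

    /* at the end of process_l2meta */
    if (--s->in_flight_l2meta == 0) {
        qemu_co_queue_restart_all(&s->l2meta_flush_queue);
    }

(plus a qemu_co_queue_init() in qcow2_open()). The write path would bump
in_flight_l2meta before it signals completion of the guest request, and
"kick all readers" becomes an explicit qemu_co_queue_restart_all().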
Do you actually need the writer side of the lock, since you're already
taking s->lock immediately after calling stop_l2meta? I.e., can
qcow2_{zero,discard}_clusters release s->lock, or not? If not, you're
just using the write side to kick all readers, and that would be a point
in favor of (2).
Paolo