From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: drjones@redhat.com, lersek@redhat.com, armbru@redhat.com,
qemu-devel@nongnu.org, famz@redhat.com, lcapitulino@redhat.com
Subject: Re: [Qemu-devel] [PATCH v3 10/12] Dump: add qmp command "query-dump"
Date: Tue, 1 Dec 2015 13:37:37 +0100
Message-ID: <565D9491.5000909@redhat.com>
In-Reply-To: <20151201123250.GA31109@pxdev.xzpeter.org>
On 01/12/2015 13:32, Peter Xu wrote:
> On Tue, Dec 01, 2015 at 10:54:48AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 01/12/2015 04:57, Peter Xu wrote:
>>>>> You need a mutex around the reads of ->status and ->written_size.
>>> Could I avoid using a mutex here? Let me try to explain my
>>> reasoning.
>>>
>>> Concurrent access should only happen between:
>>>
>>> - the detached dump thread doing its work (dump thread)
>>> - the user querying the dump status (main thread)
>>>
>>> What the dump thread is doing should be something like:
>>>
>>> - [start dumping]
>>> - inc written_size
>>> - inc written_size
>>> - ...
>>> - inc written_size
>>> - set ->status to COMPLETED|FAILED
>>> - [end dumping]
>>
>> Yes, it's possible but you need to use atomic_mb_read/atomic_mb_set to
>> write ->status. Otherwise a CPU can see the write to ->status before
>> some of the final writes to ->written_size.
>
> Hi, Paolo,
>
> Thanks for pointing that out. However, would it be confusing to use
> atomic_mb_{read|set} rather than using smp_rmb() and smp_wmb()
> directly? Like:
>
> In dump thread:
>
> - inc written_size
> - inc written_size
> - ...
> - inc written_size
> - smp_wmb()
> - atomic_set(status, COMPLETED|FAILED)
>
> In main thread:
>
> - atomic_read(status)
> - smp_rmb()
> - read written_size
>
> What I understand from the doc (which you seem to have written,
> thanks :)) is that atomic_mb_{read|set} is a pair of helpers for
> _one_ specific variable, making sure accesses to it stay ordered as
> long as we always use atomic_mb_* functions to access it. In the
> dump thread case, however, what matters is the read/write ordering
> of two variables (status and written_size).
atomic_mb_{read,set} does order accesses to the variable against
all other accesses. In this case I'd prefer it to smp_wmb/rmb, because
the writes to written_size are far from the writes to status.
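
For the dump code that would look roughly like this (only a sketch;
written_size and status are the fields from your series, the enum
value and the surrounding code are just illustrative):

    /* dump thread, single writer */
    s->written_size += size;
    ...
    /* orders all the preceding writes before the write to status */
    atomic_mb_set(&s->status, DUMP_STATUS_COMPLETED);

    /* main thread, query-dump */
    /* orders the read of status before the read of written_size below */
    status = atomic_mb_read(&s->status);
    written = s->written_size;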
Compare with thread-pool.c; on the writer side:

    req->ret = ret;
    /* Write ret before state.  */
    smp_wmb();
    req->state = THREAD_DONE;

and on the reader side:

    /* Read state before ret.  */
    smp_rmb();

    /* Schedule ourselves in case elem->common.cb() calls aio_poll() to
     * wait for another request that completed at the same time.
     */
    qemu_bh_schedule(pool->completion_bh);

    elem->common.cb(elem->common.opaque, elem->ret);
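
Mapped onto the dump fields, your smp_wmb()/smp_rmb() pairing would be
roughly (again only a sketch):

    /* dump thread */
    s->written_size += size;
    /* Write written_size before status.  */
    smp_wmb();
    atomic_set(&s->status, DUMP_STATUS_COMPLETED);

    /* main thread */
    status = atomic_read(&s->status);
    /* Read status before written_size.  */
    smp_rmb();
    written = s->written_size;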
It's a matter of taste though. What you wrote above is certainly okay
as well.
Paolo
Thread overview: 38+ messages
2015-11-30 11:32 [Qemu-devel] [PATCH v3 00/12] Add basic "detach" support for dump-guest-memory Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 01/12] dump-guest-memory: cleanup: removing dump_{error|cleanup}() Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 02/12] dump-guest-memory: add "detach" flag for QMP/HMP interfaces Peter Xu
2015-11-30 22:05 ` Eric Blake
2015-12-01 2:18 ` Peter Xu
2015-12-01 15:09 ` Paolo Bonzini
2015-12-02 2:31 ` Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 03/12] dump-guest-memory: using static DumpState, add DumpStatus Peter Xu
2015-11-30 13:00 ` Paolo Bonzini
2015-12-01 2:57 ` Peter Xu
2015-11-30 22:08 ` Eric Blake
2015-12-01 3:04 ` Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 04/12] dump-guest-memory: add dump_in_progress() helper function Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 05/12] dump-guest-memory: introduce dump_process() " Peter Xu
2015-11-30 12:55 ` Paolo Bonzini
2015-12-01 3:12 ` Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 06/12] dump-guest-memory: disable dump when in INMIGRATE state Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 07/12] dump-guest-memory: add "detach" support Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 08/12] dump-guest-memory: add qmp event DUMP_COMPLETED Peter Xu
2015-11-30 22:12 ` Eric Blake
2015-12-01 3:27 ` Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 09/12] DumpState: adding total_size and written_size fields Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 10/12] Dump: add qmp command "query-dump" Peter Xu
2015-11-30 12:56 ` Paolo Bonzini
2015-12-01 3:57 ` Peter Xu
2015-12-01 9:54 ` Paolo Bonzini
2015-12-01 12:32 ` Peter Xu
2015-12-01 12:37 ` Paolo Bonzini [this message]
2015-12-01 12:45 ` Peter Xu
2015-12-01 12:47 ` Paolo Bonzini
2015-12-01 13:03 ` Peter Xu
2015-11-30 22:17 ` Eric Blake
2015-12-01 4:40 ` Peter Xu
2015-12-01 13:43 ` Eric Blake
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 11/12] Dump: add hmp command "info dump" Peter Xu
2015-11-30 11:32 ` [Qemu-devel] [PATCH v3 12/12] Dump: enhance the documentations Peter Xu
2015-11-30 22:22 ` Eric Blake
2015-12-01 4:21 ` Peter Xu