From: Paolo Bonzini <pbonzini@redhat.com>
To: Mike Day <ncmike@ncultra.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
qemu-devel@nongnu.org, Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Qemu-devel] [RFC PATCH] convert ram_list to RCU DQ
Date: Wed, 28 Aug 2013 18:35:04 +0200
Message-ID: <521E26B8.7090801@redhat.com>
In-Reply-To: <1377705768-21996-1-git-send-email-ncmike@ncultra.org>
On 28/08/2013 18:02, Mike Day wrote:
> @@ -457,8 +459,9 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
> MemoryRegion *mr;
> ram_addr_t current_addr;
>
> + rcu_read_lock();
> if (!block)
> - block = QTAILQ_FIRST(&ram_list.blocks);
> + block = QLIST_FIRST_RCU(&ram_list.blocks);
>
> while (true) {
> mr = block->mr;
> @@ -469,9 +472,9 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
> }
> if (offset >= block->length) {
> offset = 0;
> - block = QTAILQ_NEXT(block, next);
> + block = QLIST_NEXT_RCU(block, next);
> if (!block) {
> - block = QTAILQ_FIRST(&ram_list.blocks);
> + block = QLIST_FIRST_RCU(&ram_list.blocks);
> complete_round = true;
> ram_bulk_stage = false;
> }
> @@ -526,6 +529,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
> }
> }
> }
> + rcu_read_unlock();
block lives across calls to ram_save_block, which is why the mutex was
taken in the caller (ram_save_iterate) rather than here. For a first
conversion, keeping the long RCU critical sections is fine; we don't
use RCU enough yet to care about delaying other call_rcu callbacks.
We can later push the check for ram_list.version down into
ram_save_block, which should let us make the critical section smaller.
But that would be a bit tricky, so it's better done in a separate patch.
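To make that concrete, here is a toy of the resume-with-version-check
pattern. It is written against userspace RCU (liburcu) rather than the
QEMU tree, and the names (blocks, list_version, cursor, visit_one) are
all invented for the sketch; build with "gcc demo.c -lurcu":

#include <urcu.h>           /* rcu_read_lock(), rcu_register_thread() */
#include <urcu/rculist.h>   /* cds_list_add_tail_rcu(), cds_list_entry() */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct block {
    int id;
    struct cds_list_head next;
};

static CDS_LIST_HEAD(blocks);
static uint32_t list_version;   /* writers bump this under their mutex */

/* Reader state that survives across critical sections. */
static struct block *cursor;
static uint32_t seen_version;

/* Visit one block per call, taking the read lock only around the
 * visit; assumes a non-empty list.  If a writer modified the list
 * since the last call, the cached cursor may point to a removed
 * block, so restart from the head instead of dereferencing it. */
static void visit_one(void)
{
    rcu_read_lock();
    if (!cursor || seen_version != list_version) {
        seen_version = list_version;
        cursor = cds_list_entry(rcu_dereference(blocks.next),
                                struct block, next);
    }
    printf("visiting block %d\n", cursor->id);

    /* Advance, wrapping around at the end of the list. */
    cursor = cds_list_entry(rcu_dereference(cursor->next.next),
                            struct block, next);
    if (&cursor->next == &blocks) {
        cursor = cds_list_entry(rcu_dereference(blocks.next),
                                struct block, next);
    }
    rcu_read_unlock();
}

int main(void)
{
    rcu_register_thread();          /* required before rcu_read_lock() */

    for (int i = 0; i < 3; i++) {
        struct block *b = malloc(sizeof(*b));
        b->id = i;
        cds_list_add_tail_rcu(&b->next, &blocks);
        list_version++;             /* a real writer holds a mutex here */
    }
    for (int i = 0; i < 5; i++) {
        visit_one();
    }
    rcu_unregister_thread();
    return 0;
}

The point is that the cached cursor is only ever dereferenced when
list_version is unchanged, i.e. when no writer can have removed it.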
> @@ -828,13 +829,18 @@ static inline void *host_from_stream_offset(QEMUFile *f,
> qemu_get_buffer(f, (uint8_t *)id, len);
> id[len] = 0;
>
> - QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> - if (!strncmp(id, block->idstr, sizeof(id)))
> - return memory_region_get_ram_ptr(block->mr) + offset;
> + rcu_read_lock();
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + if (!strncmp(id, block->idstr, sizeof(id))) {
> + ptr = memory_region_get_ram_ptr(block->mr) + offset;
> + goto unlock_out;
> + }
> }
>
> fprintf(stderr, "Can't find block %s!\n", id);
> - return NULL;
> +unlock_out:
> + rcu_read_unlock();
> + return ptr;
> }
Similarly, here the critical section includes the caller, and block
lives across calls to host_from_stream_offset. Again, for now just put
all of ram_load under one huge RCU critical section. Later we can use
ram_list.version to refresh the list and make the critical sections
smaller.
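A second, separate toy for the ram_load shape (again liburcu, and
host_from_block_id, load_all and struct ramblock are invented names):
the caller owns one coarse critical section, so whatever the lookup
returns stays valid until the unlock -- and that same coarse section
is what delays pending call_rcu callbacks:

#include <urcu.h>
#include <urcu/rculist.h>
#include <string.h>

struct ramblock {
    char idstr[64];
    void *host;
    struct cds_list_head next;
};

static CDS_LIST_HEAD(ramblocks);

/* Must be called with rcu_read_lock() held by the caller; the pointer
 * it returns is only guaranteed valid until the caller unlocks. */
static void *host_from_block_id(const char *id)
{
    struct ramblock *b;

    cds_list_for_each_entry_rcu(b, &ramblocks, next) {
        if (!strcmp(id, b->idstr)) {
            return b->host;
        }
    }
    return NULL;
}

static void load_all(void)
{
    rcu_read_lock();    /* one huge critical section around the load */
    void *host = host_from_block_id("pc.ram");  /* example block id */
    (void)host;
    /* ... in the real thing, parse the stream and look up each page;
     * any block a writer removes and hands to call_rcu() meanwhile
     * stays allocated until we unlock, which is the delayed
     * reclamation mentioned above ... */
    rcu_read_unlock();
}

int main(void)
{
    rcu_register_thread();
    load_all();
    rcu_unregister_thread();
    return 0;
}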
Paolo