From: liu ping fan <qemulist@gmail.com>
To: Avi Kivity <avi@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
Jan Kiszka <jan.kiszka@siemens.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
qemu-devel@nongnu.org, Anthony Liguori <anthony@codemonkey.ws>,
Stefan Hajnoczi <stefanha@gmail.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [patch v5 5/8] memory: introduce local lock for address space
Date: Mon, 29 Oct 2012 17:46:15 +0800
Message-ID: <CAJnKYQmq3pA-QygCrLkEmKxu-w3sLxOqiAzNeOwdZBeJKqcewg@mail.gmail.com>
In-Reply-To: <508E4D45.5060106@redhat.com>
On Mon, Oct 29, 2012 at 5:32 PM, Avi Kivity <avi@redhat.com> wrote:
> On 10/29/2012 01:48 AM, Liu Ping Fan wrote:
>> For those address spaces which want to be able to run outside the big
>> lock, they will be protected by their own local lock.
>>
>> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>> ---
>> memory.c | 11 ++++++++++-
>> memory.h | 5 ++++-
>> 2 files changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/memory.c b/memory.c
>> index 2f68d67..ff34aed 100644
>> --- a/memory.c
>> +++ b/memory.c
>> @@ -1532,9 +1532,15 @@ void memory_listener_unregister(MemoryListener *listener)
>> QTAILQ_REMOVE(&memory_listeners, listener, link);
>> }
>>
>> -void address_space_init(AddressSpace *as, MemoryRegion *root)
>> +void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock)
>
>
> Why not always use the lock? Even if the big lock is taken, it doesn't
> hurt. And eventually all address spaces will be fine-grained.
>
I had thought that only MMIO would move out of the big lock's
protection, while the other address spaces would pay the extra locking
cost without benefiting from it. So I left them under the big lock
until they are ready to move out as well.
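
To make the split concrete, something like the following is what I had
in mind (just an illustrative sketch against this patch's signature,
not code from the series; the ram_as/mmio_as names and the
init_example() helper are made up):

    #include "memory.h"

    /* Both address spaces and the helper below are made-up names,
     * only for illustration. */
    static AddressSpace ram_as;
    static AddressSpace mmio_as;

    static void init_example(MemoryRegion *ram_root, MemoryRegion *mmio_root)
    {
        /* Stays under the big lock: no local lock is allocated. */
        address_space_init(&ram_as, ram_root, false);

        /* MMIO dispatch gets its own local lock so it can later run
         * outside the big lock. */
        address_space_init(&mmio_as, mmio_root, true);
    }

That way only the address spaces we actually convert pay for the extra
mutex, and the others can be switched over by flipping the flag once
they are ready.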
>> {
>> memory_region_transaction_begin();
>> + if (lock) {
>> + as->lock = g_new(QemuMutex, 1);
>> + qemu_mutex_init(as->lock);
>> + } else {
>> + as->lock = NULL;
>> + }
>> as->root = root;
>> as->current_map = g_new(FlatView, 1);
>> flatview_init(as->current_map);
>> @@ -1553,6 +1559,9 @@ void address_space_destroy(AddressSpace *as)
>> QTAILQ_REMOVE(&address_spaces, as, address_spaces_link);
>> address_space_destroy_dispatch(as);
>> flatview_destroy(as->current_map);
>> + if (as->lock) {
>> + g_free(as->lock);
>> + }
>> g_free(as->current_map);
>> }
>>
>> diff --git a/memory.h b/memory.h
>> index 79393f1..12d1c56 100644
>> --- a/memory.h
>> +++ b/memory.h
>> @@ -22,6 +22,7 @@
>> #include "cpu-common.h"
>> #include "targphys.h"
>> #include "qemu-queue.h"
>> +#include "qemu-thread.h"
>> #include "iorange.h"
>> #include "ioport.h"
>> #include "int128.h"
>> @@ -164,6 +165,7 @@ typedef struct AddressSpace AddressSpace;
>> */
>> struct AddressSpace {
>> /* All fields are private. */
>> + QemuMutex *lock;
>> const char *name;
>> MemoryRegion *root;
>> struct FlatView *current_map;
>> @@ -801,8 +803,9 @@ void mtree_info(fprintf_function mon_printf, void *f);
>> *
>> * @as: an uninitialized #AddressSpace
>> * @root: a #MemoryRegion that routes addesses for the address space
>> + * @lock: if true, the physmap is protected by a local lock, otherwise by the big lock
>> */
>> -void address_space_init(AddressSpace *as, MemoryRegion *root);
>> +void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock);
>>
>>
>> /**
>>
>
>
> --
> error compiling committee.c: too many arguments to function
>