From: Avi Kivity <avi@redhat.com>
To: liu ping fan <qemulist@gmail.com>
Cc: kvm@vger.kernel.org, Stefan Hajnoczi <stefanha@gmail.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
qemu-devel@nongnu.org, Anthony Liguori <anthony@codemonkey.ws>,
Jan Kiszka <jan.kiszka@siemens.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 1/5] qom: adopt rwlock to protect accessing dev from removing it
Date: Thu, 26 Jul 2012 16:46:42 +0300
Message-ID: <50114A42.2050702@redhat.com>
In-Reply-To: <CAJnKYQmJMaKycc1Xq4W_W1VOzEbvoDwLa8f1O_FJRqvQbuhh7w@mail.gmail.com>
On 07/26/2012 04:21 PM, liu ping fan wrote:
> On Thu, Jul 26, 2012 at 9:15 PM, Avi Kivity <avi@redhat.com> wrote:
>> On 07/26/2012 04:14 PM, liu ping fan wrote:
>>>>
>>>> From the description above, I don't see why it can't be a mutex.
>>>>
>>> Searching the device tree (or MemoryRegion view) can often happen in
>>> parallel, especially in the mmio-dispatch code path.
>>
>> In mmio dispatch we have a pointer to the object; we don't need to
>> search anything. Is device tree search a hot path?
>>
> I think we need a lock to protect the searcher --phys_page_find()--
> from the deleter --DeviceClass::unmap-- so, an rwlock?
Better a lock on phys_map (because it is easily replaced by RCU later).
I think phys_map is also better isolated, so it will be easier to find
all the places that need protection and to avoid deadlock.
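
For concreteness, a minimal sketch of the mutex-on-phys_map idea in
plain C with GLib. Every name here (phys_map, phys_page_find,
phys_map_remove) is an illustrative stand-in, not the actual QEMU
structures or API: the real phys_map is a radix tree of
MemoryRegionSections, modeled below by a hash table keyed on guest
physical address. The comments mark where an RCU conversion would
slot in.

/* Toy model only: all names are assumptions, not QEMU API. */
#include <glib.h>
#include <stdint.h>

static GMutex      phys_map_mutex;  /* statically allocated GMutex needs no init */
static GHashTable *phys_map;        /* gint64 guest addr -> opaque section ptr   */

static void phys_map_init(void)
{
    phys_map = g_hash_table_new_full(g_int64_hash, g_int64_equal,
                                     g_free, NULL);
}

static void phys_map_insert(uint64_t addr, void *section)
{
    gint64 *key = g_new(gint64, 1);

    *key = (gint64)addr;
    g_mutex_lock(&phys_map_mutex);
    g_hash_table_insert(phys_map, key, section);
    g_mutex_unlock(&phys_map_mutex);
}

/* Reader side, i.e. what mmio dispatch would call.  A later RCU
 * conversion would replace the mutex with rcu_read_lock()/
 * rcu_read_unlock() around a lock-free walk of an immutable map. */
static void *phys_page_find(uint64_t addr)
{
    gint64 key = (gint64)addr;
    void *section;

    g_mutex_lock(&phys_map_mutex);
    section = g_hash_table_lookup(phys_map, &key);
    g_mutex_unlock(&phys_map_mutex);
    return section;
}

/* Writer side, i.e. what the unplug path (DeviceClass::unmap in this
 * series) would call to drop the device's mappings before the device
 * itself is freed.  Under RCU this would publish a new map and
 * reclaim the old one after a grace period instead of locking. */
static void phys_map_remove(uint64_t addr)
{
    gint64 key = (gint64)addr;

    g_mutex_lock(&phys_map_mutex);
    g_hash_table_remove(phys_map, &key);
    g_mutex_unlock(&phys_map_mutex);
}

Funneling all access through these two entry points is what the
"better isolated" argument buys: the later RCU swap touches only this
one spot, instead of an rwlock threaded through the whole device tree.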
--
error compiling committee.c: too many arguments to function