From: Jan Kiszka <jan.kiszka@web.de>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	liu ping fan <qemulist@gmail.com>,
	Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [RFC] use little granularity lock to substitue qemu_mutex_lock_iothread
Date: Fri, 22 Jun 2012 23:14:32 +0200	[thread overview]
Message-ID: <4FE4E038.6000404@web.de> (raw)
In-Reply-To: <4FE4D15C.2030708@codemonkey.ws>

On 2012-06-22 22:11, Anthony Liguori wrote:
> On 06/22/2012 05:37 AM, Jan Kiszka wrote:
>> On 2012-06-22 12:24, liu ping fan wrote:
>>> On Thu, Jun 21, 2012 at 11:23 PM, Jan Kiszka <jan.kiszka@siemens.com>
>>> wrote:
>>>> On 2012-06-21 16:49, Liu Ping Fan wrote:
>>>>> Nowadays, we use
>>>>> qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread() to
>>>>> protect against races when the emulated devices are accessed by
>>>>> vcpu threads & the iothread.
>>>>>
>>>>> But this lock is too big. We can break it down.
>>>>> These patches separate the protection of CPUArchState from that
>>>>> of the other devices, so we can have a per-cpu lock for each
>>>>> CPUArchState instead of the big lock.
>>>>
>>>> Anything that reduces lock dependencies is generally welcome. But can
>>>> you specify in more detail what you gain, and under which conditions?
>>>>
>>> In fact, there are several steps to breaking down the QEMU big lock.
>>> This step just aims to shrink the code area protected by
>>> qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread(). I am
>>> working on the following steps, which focus on breaking down the big
>>> lock when calling handle_{io,mmio}.
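 
(For illustration: a minimal standalone sketch of the per-CPU-lock idea
described above. The names are hypothetical, plain pthreads stand in
for QemuMutex, and this is not code from the series.)

  #include <pthread.h>

  /* Hypothetical per-CPU state: each vCPU carries its own lock instead
   * of relying on the global iothread mutex for everything it touches. */
  typedef struct PerCPU {
      pthread_mutex_t lock;      /* protects this CPU's architectural state */
      int interrupt_request;     /* example of state poked by other threads */
  } PerCPU;

  static void per_cpu_init(PerCPU *cpu)
  {
      pthread_mutex_init(&cpu->lock, NULL);
      cpu->interrupt_request = 0;
  }

  /* Another thread signals this CPU: only the per-CPU lock is taken,
   * the big lock stays out of the picture. */
  static void per_cpu_interrupt(PerCPU *cpu, int mask)
  {
      pthread_mutex_lock(&cpu->lock);
      cpu->interrupt_request |= mask;
      pthread_mutex_unlock(&cpu->lock);
  }
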
>>
>> Then let us discuss the strategy. This is important as it is unrealistic
>> to break up the lock for all code paths. We really need to focus on
>> goals that provide benefits for relevant use cases.
> 
> Stefan put together a proof of concept that implemented the data-plane
> portion of virtio-blk in a separate thread.  This is possible because of
> ioeventfd (we were able to select() on that fd in a separate thread).
> 
> The performance difference between virtio-blk-pci and
> virtio-blk-data-plane is staggering when dealing with a very large
> storage system.
> 
> So we'd like to get the infrastructure in place where we can start
> multithreading devices in QEMU so we can integrate this work.
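 
(A rough standalone sketch of that eventfd pattern, not the actual
data-plane code: a dedicated thread blocks in select() on the eventfd
and consumes the 8-byte counter when the guest kicks the queue. In QEMU
the fd would be registered with KVM as an ioeventfd; here the main
thread simply writes to it.)

  #include <pthread.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/eventfd.h>
  #include <sys/select.h>
  #include <unistd.h>

  static void *dataplane_thread(void *opaque)
  {
      int fd = *(int *)opaque;
      fd_set rfds;

      FD_ZERO(&rfds);
      FD_SET(fd, &rfds);

      /* Block until the "guest" kicks the eventfd. */
      if (select(fd + 1, &rfds, NULL, NULL, NULL) == 1) {
          uint64_t count;
          if (read(fd, &count, sizeof(count)) == sizeof(count)) {
              /* Process the virtqueue here, without the global mutex. */
              printf("got %llu kick(s)\n", (unsigned long long)count);
          }
      }
      return NULL;
  }

  int main(void)
  {
      int fd = eventfd(0, 0);
      uint64_t one = 1;
      pthread_t tid;

      pthread_create(&tid, NULL, dataplane_thread, &fd);
      write(fd, &one, sizeof(one));    /* stands in for the guest's notify */
      pthread_join(tid, NULL);
      close(fd);
      return 0;
  }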

Can you name the primary bits? We really need to see the whole picture
before adding new locks. They alone are not the solution.

> 
> The basic plan is to introduce granular locking starting at the KVM
> dispatch level until we can get to MemoryRegion dispatch.  We'll then
> have some way to indicate that a MemoryRegion's callbacks should be
> invoked without holding the QEMU global mutex.
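 
(Entirely hypothetical sketch of what such an opt-out could look like,
not existing QEMU API: a per-region flag consulted at dispatch time, so
converted devices run their callbacks under their own fine-grained
locks while everything else keeps the global mutex.)

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdint.h>

  static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* Simplified stand-in for a MemoryRegion; "unlocked" is the
   * hypothetical per-region flag saying "my callbacks do their own
   * locking". */
  typedef struct Region {
      bool unlocked;
      void (*write)(struct Region *r, uint64_t addr, uint64_t val);
  } Region;

  static void dispatch_write(Region *r, uint64_t addr, uint64_t val)
  {
      if (r->unlocked) {
          r->write(r, addr, val);            /* fine-grained locking inside */
      } else {
          pthread_mutex_lock(&global_mutex); /* legacy devices keep the BQL */
          r->write(r, addr, val);
          pthread_mutex_unlock(&global_mutex);
      }
  }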

I don't disagree, but this really looks like starting at the wrong end.
The changes are not isolated and surely not yet correct (run_on_cpu is
broken for TCG, for example).

Moreover, none of this locking should be needed with an in-kernel
irqchip. All touched state is thread-local or should be modifiable
atomically; if it isn't, let's fix *that*, which is more beneficial.
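 
(Generic illustration of that point, using gcc's atomic builtins rather
than any QEMU helper: a pending-interrupt mask that other threads update
with atomic read-modify-write operations needs no lock at all.)

  #include <stdint.h>

  /* A flags word that several threads may set concurrently; with atomic
   * read-modify-write operations there is nothing left to lock. */
  static uint32_t interrupt_request;

  static void set_interrupt(uint32_t mask)
  {
      __sync_fetch_and_or(&interrupt_request, mask);
  }

  static uint32_t take_interrupts(void)
  {
      /* Atomically fetch and clear all pending bits. */
      return __sync_fetch_and_and(&interrupt_request, 0);
  }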

Actually, cpu_lock is counterproductive, as it adds locking operations
to a path that will not need them later on in the normal configuration.
The user-space irqchip is a slow path and perfectly fine to handle under
the BQL, and so is VCPU control (pause/resume/run-on). It's better to
focus on the fast path first.

Jan


Thread overview: 15+ messages
     [not found] <1340290158-11036-1-git-send-email-qemulist@gmail.com>
2012-06-21 15:23 ` [Qemu-devel] [RFC] use little granularity lock to substitue qemu_mutex_lock_iothread Jan Kiszka
2012-06-22 10:24   ` liu ping fan
2012-06-22 10:37     ` Jan Kiszka
2012-06-22 20:11       ` Anthony Liguori
2012-06-22 21:14         ` Jan Kiszka [this message]
2012-06-22 21:44           ` Anthony Liguori
2012-06-22 22:27             ` Jan Kiszka
2012-06-22 22:56               ` Anthony Liguori
2012-06-23  9:10                 ` Jan Kiszka
     [not found] ` <1340290158-11036-2-git-send-email-qemulist@gmail.com>
2012-06-22 20:01   ` [Qemu-devel] [PATCH 1/2] CPUArchState: introduce per-cpu lock Anthony Liguori
     [not found] ` <1340290158-11036-3-git-send-email-qemulist@gmail.com>
2012-06-21 15:02   ` [Qemu-devel] [PATCH 2/2] kvm: use per-cpu lock to free vcpu thread out of the big lock Jan Kiszka
2012-06-22 20:06   ` Anthony Liguori
2012-06-23  9:32     ` liu ping fan
2012-06-24 14:09       ` liu ping fan
2012-06-21 15:06 [Qemu-devel] [RFC] use little granularity lock to substitue qemu_mutex_lock_iothread Liu Ping Fan
