From: Jan Kiszka <jan.kiszka@siemens.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>,
Liu Ping Fan <pingfank@linux.vnet.ibm.com>,
Alexander Graf <agraf@suse.de>,
Anthony Liguori <anthony@codemonkey.ws>,
kvm <kvm@vger.kernel.org>, Avi Kivity <avi@redhat.com>
Subject: Re: [PATCH] kvm: First step to push iothread lock out of inner run loop
Date: Wed, 27 Jun 2012 16:36:25 +0200
Message-ID: <4FEB1A69.9040104@siemens.com>
In-Reply-To: <20120626193420.GA19852@amt.cnet>
On 2012-06-26 21:34, Marcelo Tosatti wrote:
> The following plan would allow progressive conversion to parallel
> operation.
>
> Jan mentioned the MMIO handler->MMIO handler deadlock in a private message.
>
> Jan: if there are recursive MMIO accesses, can you detect that and skip
> such MMIO handlers in dev_can_use_lock()? Or blacklist them.
The problem is harder than it may appear at first sight. I checked our
code again, and it still contains at least one unhandled lockup
scenario. We could try to detect this, but it's tricky, maybe even
fragile in more complex scenarios (e.g. the risk of false positives
when using timeouts).
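
To make that a bit more concrete, here is a rough, standalone sketch
(plain pthreads, all names made up, not QEMU code) of what a
depth-counter based detection could look like. It only illustrates the
detection idea, not an answer to the lock-ordering question:

/* Illustrative only: detect nested MMIO dispatch with a per-thread
 * depth counter and fall back to the central "big" lock instead of
 * taking a second device lock from inside an MMIO handler. */

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread int mmio_dispatch_depth;   /* per-thread nesting level */

typedef struct Dev {
    pthread_mutex_t lock;   /* hypothetical fine-grained device lock */
    void (*write)(struct Dev *dev, uint64_t addr, uint64_t val);
} Dev;

static bool dev_can_use_lock(Dev *dev)
{
    /* A nested dispatch means one MMIO handler triggered another
     * (device-to-device access); taking a second device lock there
     * is what risks the lockup, so refuse the fine-grained path. */
    return mmio_dispatch_depth == 0;
}

static void mmio_dispatch(Dev *dev, uint64_t addr, uint64_t val)
{
    mmio_dispatch_depth++;
    if (dev_can_use_lock(dev)) {
        pthread_mutex_lock(&dev->lock);
        dev->write(dev, addr, val);
        pthread_mutex_unlock(&dev->lock);
    } else {
        pthread_mutex_lock(&big_lock);   /* serialize nested accesses */
        dev->write(dev, addr, val);
        pthread_mutex_unlock(&big_lock);
    }
    mmio_dispatch_depth--;
}

Whether a counter like this stays reliable in the more complex
scenarios (timer-driven re-entry, several vcpu threads poking the same
devices) is exactly the fragility concern above.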
Well, such mutual device-to-device requests are likely all
pathological, and I guess it would be OK to actually let the devices
lock up. But then we need some way to recover them, at least via a
virtual machine reset. That implies, of course, that they must not lock
up while holding the central lock...
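
In sketch form, again with made-up names and plain pthreads, "must not
lock up while holding the central lock" could mean trylock-and-bail
whenever a device has to be touched with the big lock held:

/* Illustrative only: do not block on a device lock while the central
 * lock is held.  Try it, and if the device is wedged, give up (retry
 * later or fail the access) so a machine reset can still get through. */

#include <pthread.h>
#include <stdbool.h>

static bool access_dev_with_big_lock_held(pthread_mutex_t *dev_lock,
                                          void (*access)(void *opaque),
                                          void *opaque)
{
    if (pthread_mutex_trylock(dev_lock) != 0) {
        /* Possibly a locked-up device; returning instead of waiting
         * keeps the big lock responsive, so reset handling can run
         * and reinitialize the device state. */
        return false;
    }
    access(opaque);
    pthread_mutex_unlock(dev_lock);
    return true;
}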
Need to look into details of your approach now.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
Thread overview: 33+ messages
[not found] <4FE4F56D.1020201@web.de>
2012-06-22 22:55 ` [PATCH] kvm: First step to push iothread lock out of inner run loop Jan Kiszka
2012-06-23 0:22 ` Marcelo Tosatti
2012-06-23 9:06 ` Marcelo Tosatti
2012-06-23 11:45 ` Jan Kiszka
2012-06-24 8:49 ` Avi Kivity
2012-06-24 14:08 ` [Qemu-devel] " Jan Kiszka
2012-06-24 14:31 ` Avi Kivity
2012-07-06 17:16 ` Jan Kiszka
2012-07-06 18:06 ` Jan Kiszka
2012-07-08 7:49 ` Avi Kivity
2012-06-24 13:34 ` liu ping fan
2012-06-24 14:08 ` Jan Kiszka
2012-06-24 14:35 ` Avi Kivity
2012-06-24 14:40 ` Jan Kiszka
2012-06-24 14:46 ` Avi Kivity
2012-06-24 14:51 ` Jan Kiszka
2012-06-24 14:56 ` Avi Kivity
2012-06-24 14:58 ` Jan Kiszka
2012-06-24 14:59 ` Avi Kivity
2012-06-23 9:22 ` Jan Kiszka
2012-06-28 1:11 ` Marcelo Tosatti
2012-06-26 19:34 ` Marcelo Tosatti
2012-06-27 7:39 ` Stefan Hajnoczi
2012-06-27 7:41 ` [Qemu-devel] " Stefan Hajnoczi
2012-06-27 11:09 ` Marcelo Tosatti
2012-06-27 11:19 ` [Qemu-devel] " Marcelo Tosatti
2012-06-28 8:45 ` Stefan Hajnoczi
2012-06-27 7:54 ` Avi Kivity
2012-06-27 14:36 ` Jan Kiszka [this message]
2012-06-28 14:10 ` [Qemu-devel] " Anthony Liguori
2012-06-28 15:12 ` Avi Kivity
2012-06-29 1:29 ` Marcelo Tosatti
2012-06-29 1:45 ` [Qemu-devel] " Marcelo Tosatti