From: Paolo Bonzini <pbonzini@redhat.com>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH 0/5] Spread the use of QEMU threading & locking API
Date: Thu, 05 Apr 2012 16:11:45 +0200 [thread overview]
Message-ID: <4F7DA821.1030409@redhat.com> (raw)
In-Reply-To: <4F7DA5AA.8070100@siemens.com>
On 05/04/2012 16:01, Jan Kiszka wrote:
> Either you signal via the fd or via a variable. Doing both won't work, as
> the state can only be in the eventfd/pipe (for external triggers). We
> could switch the mode of our QemuEvent on init, but that will become
> ugly, I'm afraid.
Yeah...
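
To make the distinction concrete, here is a minimal sketch of fd-based
signaling (illustrative names, not our actual EventNotifier code): the
whole state lives in the kernel-side eventfd counter, so there is no
userspace flag you could also test.

    /* Illustrative sketch only -- not the actual EventNotifier code.
     * With an eventfd, "set" is the write and "wait + reset" is the
     * read; the state is the kernel-side counter and nothing else. */
    #include <sys/eventfd.h>
    #include <stdint.h>
    #include <unistd.h>

    static void fd_event_set(int fd)
    {
        uint64_t one = 1;
        (void)write(fd, &one, sizeof(one));   /* state goes into the kernel */
    }

    static void fd_event_wait(int fd)
    {
        uint64_t cnt;
        (void)read(fd, &cnt, sizeof(cnt));    /* blocks until set, resets */
    }

    /* created with: int fd = eventfd(0, EFD_CLOEXEC); */

A variable-based event keeps the state in userspace instead, which is
exactly what lets the futex version skip the syscall on the fast path;
keeping both copies of the state in sync is the part that gets ugly.
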
>>>>> RCU patches which were even posted on the list). We already have a
>>>>> perfectly good name for EventNotifiers, and there's no reason to break
>>>>> the history of event-notifier.c.
>>> Have you measured whether the futex optimization is actually worth the
>>> effort, specifically compared to the fast path of a mutex/cond loop?
>>
>> A futex is 30% faster than the mutex/cond combination. It's called on
>> fast paths (call_rcu and, depending on how you implement RCU,
>> rcu_read_unlock) so it's important.
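
For reference, the fast path has roughly this shape -- a simplified
two-state sketch, not the real implementation, which also needs a third
"busy" state so that event_set can skip the FUTEX_WAKE when nobody is
waiting:

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <limits.h>

    typedef struct { int state; } Event;      /* 0 = clear, 1 = set */

    static void event_set(Event *ev)
    {
        /* Wake on every clear->set transition; the real thing uses a
         * third state to avoid this syscall when there are no waiters. */
        if (__atomic_exchange_n(&ev->state, 1, __ATOMIC_RELEASE) == 0) {
            syscall(SYS_futex, &ev->state, FUTEX_WAKE, INT_MAX,
                    NULL, NULL, 0);
        }
    }

    static void event_wait(Event *ev)
    {
        /* Fast path: if the event is already set there is no syscall
         * at all, unlike pthread_mutex_lock + pthread_cond_wait. */
        while (__atomic_load_n(&ev->state, __ATOMIC_ACQUIRE) == 0) {
            /* Sleeps only if the kernel still sees state == 0. */
            syscall(SYS_futex, &ev->state, FUTEX_WAIT, 0, NULL, NULL, 0);
        }
    }
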
>
> If RCU is the only user of this optimized signaling, then I would vote
> for doing it in the RCU layer directly. If there are also other users in
> sight that could benefit (because of mostly-set-rarely-reset patterns),
> I agree that a QemuEvent is the better home. Can you name more use cases
> in QEMU?
No idea, but the general use case is when you have something that is
multi-producer and single-consumer.
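
Sketching that pattern with the event above (enqueue/drain_all are
hypothetical placeholders): any number of producers publish an item and
set the event; the single consumer waits, clears it, then drains
everything that accumulated. call_rcu fits this shape, with the RCU
thread as the consumer.

    static Event work_ready;                  /* event from the sketch above */

    static void producer(void *item)
    {
        enqueue(item);                        /* hypothetical MPSC queue */
        event_set(&work_ready);               /* cheap if already set */
    }

    static void consumer_thread(void)
    {
        for (;;) {
            event_wait(&work_ready);
            /* Clear before draining: a producer racing with the drain
             * simply sets the event again, so no item is ever missed. */
            __atomic_store_n(&work_ready.state, 0, __ATOMIC_SEQ_CST);
            drain_all();                      /* hypothetical: consume queue */
        }
    }
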
Paolo
Thread overview: 25+ messages
2012-04-04 15:08 [Qemu-devel] [PATCH 0/5] Spread the use of QEMU threading & locking API Jan Kiszka
2012-04-04 15:08 ` [Qemu-devel] [PATCH 1/5] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
2012-04-04 15:08 ` [Qemu-devel] [PATCH 2/5] Switch POSIX compat AIO to QEMU abstractions Jan Kiszka
2012-04-04 15:08 ` [Qemu-devel] [PATCH 3/5] Use qemu_eventfd for POSIX AIO Jan Kiszka
2012-04-04 15:08 ` [Qemu-devel] [PATCH 4/5] Reorder POSIX compat AIO code Jan Kiszka
2012-04-04 15:08 ` [Qemu-devel] [PATCH 5/5] Switch compatfd to QEMU thread Jan Kiszka
2012-04-04 15:18 ` [Qemu-devel] [PATCH 0/5] Spread the use of QEMU threading & locking API Paolo Bonzini
2012-04-04 15:24 ` Jan Kiszka
2012-04-04 15:29 ` Paolo Bonzini
2012-04-04 15:38 ` Jan Kiszka
2012-04-04 15:43 ` Jan Kiszka
2012-04-04 16:05 ` Jan Kiszka
2012-04-04 16:39 ` Paolo Bonzini
2012-04-04 16:55 ` Jan Kiszka
2012-04-04 17:19 ` Jan Kiszka
2012-04-05 7:51 ` Paolo Bonzini
2012-04-05 10:55 ` Jan Kiszka
2012-04-05 11:07 ` Paolo Bonzini
2012-04-05 11:18 ` Jan Kiszka
2012-04-05 11:29 ` Paolo Bonzini
2012-04-05 12:04 ` Jan Kiszka
2012-04-05 12:48 ` Paolo Bonzini
[not found] ` <4F7D977A.1020905@siemens.com>
2012-04-05 13:40 ` Paolo Bonzini
2012-04-05 14:01 ` Jan Kiszka
2012-04-05 14:11 ` Paolo Bonzini [this message]
Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):
git send-email \
--in-reply-to=4F7DA821.1030409@redhat.com \
--to=pbonzini@redhat.com \
--cc=aliguori@us.ibm.com \
--cc=jan.kiszka@siemens.com \
--cc=kwolf@redhat.com \
--cc=qemu-devel@nongnu.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.