From: jgmyers@netscape.com (John Myers)
To: linux-kernel <linux-kernel@vger.kernel.org>
Subject: [Fwd: Re: Comparing the aio and epoll event frameworks.]
Date: Wed, 21 May 2003 10:22:34 -0700
Message-ID: <3ECBB5DA.40408@netscape.com>
Previously bounced due to some internal error on vger.
Davide Libenzi wrote:
>
>Hi John, you seem to have lost a few episodes of the epoll saga. You can
>use epoll in either Edge Triggered or Level Triggered mode
>
I was aware of that.
>You can easily do thread pooling also.
>
Using epoll with thread pooling has the problems I describe. You can
get multiple threads simultaneously handling the same event. This is
particularly true when using epoll in level-triggered mode.
ep_reinject_items() reinjects items immediately before returning from
sys_epoll_wait(), so a second thread calling epoll_wait() shortly
thereafter is likely to also get a copy of the event. In edge-triggered
mode the window is significantly narrower, but it is still there.
One can work around this issue by having user space maintain its own
globally locked data structure containing its idea of the current epoll
state, but this wastes CPU and becomes a likely site for locking
contention. The kernel is already serializing its own access to the
struct eventpoll; user space should be able to exploit that.
>Is poll/select a single threading API ?
>
Yes.
> A thread pooling one ?
>
No. You have to have a single thread calling poll/select on any given
set of file descriptors. The resulting events can then be farmed out to
threads using some other synchronization method, but the API can only
reasonably deliver events to that single calling thread.
Another difference I hadn't noticed before is that aio's
sys_io_getevents() uses wake-one semantics, whereas epoll's
sys_epoll_wait() appears to use wake-all semantics. Wake-one semantics
are important for thread pool callers in order to avoid thundering herd
performance problems. Aio unfortunately appears to wake up threads in
FIFO order, which results in pessimal use of cache. This should be
changed to LIFO order.