From: Larry McVoy <lm@bitmover.com>
To: Luca Veraldi <luca.veraldi@katamail.com>
Cc: uek32z@phoenix.hadiko.de, linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Efficient IPC mechanism on Linux
Date: Wed, 10 Sep 2003 07:53:17 -0700
Message-ID: <20030910145317.GA32321@work.bitmover.com>
In-Reply-To: <03ca01c37795$6497ac80$5aaf7450@wssupremo>
On Wed, Sep 10, 2003 at 02:16:40PM +0200, Luca Veraldi wrote:
> > I've read your posting on the lkml and also the answers
> > concerning IPC mechanisms on Linux.
> > You speak English very well - why don't you translate your
> > page into english, I think many people would be very interested
> > in it... at least I am ;) Unfortunately not many kernel hackers
> > are able to understand Italian, I think...
>
> Page is now in English since last night (Italian time).
> Please, refresh your browser.
>
> http://web.tiscali.it/lucavera/www/root/ecbm/index.htm
> for English users and
I read it and I must be missing something, which is possible; I need more
coffee.
I question the measurement methodology. Why didn't you grab the sources
to LMbench and use them to measure this? It may well be that you disagree
with how it measures things, which may be fine, but I think you'd benefit
from understanding it and thinking about it. It's also trivial to add
another test to the system; you can do it in a few lines of code.
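For reference, a new LMbench test really is just a few lines wrapped around
the BENCH()/micro() harness in bench.h. The skeleton below is sketched from
memory of the lmbench2 sources, so treat the details as approximate:

/*
 * Skeleton of an added LMbench micro-benchmark, using the BENCH()
 * and micro() timing harness from LMbench's bench.h.  Sketched from
 * memory of the lmbench2 sources; details may differ slightly.
 */
#include "bench.h"

void
doit(void)
{
	/* the operation to be timed goes here */
}

int
main(int ac, char **av)
{
	BENCH(doit(), 0);
	micro("my operation latency", get_n());
	return (0);
}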
I also question the results. I modified lat_pipe.c from LMbench to measure
a range of sizes; the code is included below. The results don't match yours
at all. This is on a 466MHz Celeron running Red Hat 7.1, Linux 2.4.2. The
time reported is the time to send and receive the data between two
processes, i.e.:
Process A                    Process B
write
    <context switch>
                             read
                             write
    <context switch>
read
In other words, the time printed is for a round trip. Your numbers
appear to be off by a factor of two: they look pretty similar to mine,
but as far as I can tell you are saying that it costs 4 usecs for a send
and 4 for a recv, and that's not true; the real round trip covers
2 sends, 2 receives, and 2 context switches.
     1 bytes: Pipe latency: 8.0272 microseconds
     8 bytes: Pipe latency: 7.8736 microseconds
    64 bytes: Pipe latency: 8.0279 microseconds
   512 bytes: Pipe latency: 10.0920 microseconds
  4096 bytes: Pipe latency: 19.6434 microseconds
 40960 bytes: Pipe latency: 313.3328 microseconds
 81920 bytes: Pipe latency: 1267.7174 microseconds
163840 bytes: Pipe latency: 3052.1020 microseconds
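The modified lat_pipe.c itself isn't preserved in this copy of the thread.
A minimal, self-contained sketch of the same round-trip measurement, using
plain POSIX calls rather than LMbench's harness (so the iteration count and
error handling are illustrative, not Larry's exact code), looks like this:

/*
 * Round-trip pipe latency: process A writes a buffer, process B
 * echoes it back, so one timed iteration is 2 writes, 2 reads,
 * and 2 context switches, matching the diagram above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define ITERS	1000

static void readn(int fd, char *buf, int n)	/* read exactly n bytes */
{
	int done = 0, m;

	while (done < n) {
		m = read(fd, buf + done, n - done);
		if (m <= 0) exit(1);
		done += m;
	}
}

int main(void)
{
	int sizes[] = { 1, 8, 64, 512, 4096, 40960, 81920, 163840 };
	char *buf = malloc(163840);
	unsigned s;

	for (s = 0; s < sizeof(sizes) / sizeof(sizes[0]); s++) {
		int size = sizes[s], i;
		int ab[2], ba[2];		/* A->B and B->A pipes */
		struct timeval t0, t1;
		double usecs;

		pipe(ab);
		pipe(ba);
		if (fork() == 0) {		/* process B: echo it back */
			for (i = 0; i < ITERS; i++) {
				readn(ab[0], buf, size);
				write(ba[1], buf, size);
			}
			exit(0);
		}
		gettimeofday(&t0, 0);
		for (i = 0; i < ITERS; i++) {	/* process A: round trips */
			write(ab[1], buf, size);
			readn(ba[0], buf, size);
		}
		gettimeofday(&t1, 0);
		usecs = (t1.tv_sec - t0.tv_sec) * 1e6 +
			(t1.tv_usec - t0.tv_usec);
		printf("%d bytes: Pipe latency: %.4f microseconds\n",
		    size, usecs / ITERS);
		wait(0);
		close(ab[0]); close(ab[1]);
		close(ba[0]); close(ba[1]);
	}
	return 0;
}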
I want to stick some other numbers in here from LMbench: the signal
handler cost and the select cost. On this machine it is about 5 usecs
to handle the signal and about 4 for select on 10 file descriptors.
If I were faced with the problem of moving data between processes at very
low cost, the path I'd choose would depend on whether it was a lot of data
or just an event notification. It would also depend on whether the
receiving process is doing anything else. Let's walk a couple of those
paths:
If all I want to do is let another process know that something has happened,
then a signal is darn close to as cheap as I can get. That's what SIGUSR1
and SIGUSR2 are for.
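A minimal sketch of the receiving side of that notification (the sender is
a single kill(pid, SIGUSR1); how it learns the pid is outside the sketch):

/*
 * Receiver side of a SIGUSR1 event notification.  SIGUSR1 is blocked
 * and waited for with sigsuspend() so a signal can't slip in between
 * the check of the flag and the wait.
 */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_event;

static void on_usr1(int sig)
{
	got_event = 1;		/* just note that something happened */
}

int main(void)
{
	struct sigaction sa;
	sigset_t mask, oldmask;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_usr1;
	sigaction(SIGUSR1, &sa, 0);

	sigemptyset(&mask);
	sigaddset(&mask, SIGUSR1);
	sigprocmask(SIG_BLOCK, &mask, &oldmask);

	printf("pid %d waiting for SIGUSR1\n", getpid());
	while (!got_event)
		sigsuspend(&oldmask);	/* atomically unblock and sleep */
	printf("got the event\n");
	return 0;
}

The sending side costs essentially the ~5 usec signal-handling number
measured above.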
If I wanted to move large quantities of data, I'd combine signals with
mmap and mutexes. This is easier than it sounds. Map a scratch file,
truncate it up to the size you need, start writing into it, and when you
have enough, signal the other side. It's a lot like the I/O loop in
many device drivers.
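A sketch of the writer's side of that loop follows. The scratch-file path,
chunk size, and choice of SIGUSR1 are assumptions for the example; a real
version also needs the mutex or a handshake so the writer never overwrites
data the reader hasn't consumed yet:

/*
 * Writer's side of the mmap-plus-signal scheme: map a scratch file,
 * truncate it up to size, fill it at memory speed, then tell the
 * reader (which has the same file mapped) with a signal.
 */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK	(1024 * 1024)

int send_chunk(pid_t reader, const char *data, size_t len)
{
	int fd;
	char *p;

	if (len > CHUNK)
		return -1;
	fd = open("/tmp/scratch", O_RDWR | O_CREAT, 0600);
	if (fd < 0)
		return -1;
	if (ftruncate(fd, CHUNK) < 0) {		/* "truncate it up" */
		close(fd);
		return -1;
	}
	p = mmap(0, CHUNK, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);				/* mapping stays valid */
	if (p == MAP_FAILED)
		return -1;
	memcpy(p, data, len);			/* runs at memory speed */
	kill(reader, SIGUSR1);			/* "you have data" */
	munmap(p, CHUNK);
	return 0;
}

The reader maps the same file once and pulls data out of the mapping when
the signal arrives; the data itself never goes through a kernel copy.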
So I guess I'm not seeing why there needs to be a new interface here.
This looks to me like you combine cheap messaging (signals, select,
or even pipes) with shared data (mmap). I don't know why you didn't
measure that; it's the obvious thing to measure, and you are going to be
running at memory speeds.
The only justification I can see for a different mechanism is if the
signaling really hurt, but it doesn't. What am I missing?
--
---
Larry McVoy              lm at bitmover.com          http://www.bitmover.com/lm