From: Chris Friesen <cfriesen@nortelnetworks.com>
To: Terje Eggestad <terje.eggestad@scali.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
	netdev@oss.sgi.com, linux-net@vger.kernel.org, davem@redhat.com
Subject: Re: anyone ever done multicast AF_UNIX sockets?
Date: Mon, 03 Mar 2003 12:09:37 -0500	[thread overview]
Message-ID: <3E638C51.2000904@nortelnetworks.com> (raw)
In-Reply-To: <1046695876.7731.78.camel@pc-16.office.scali.no>

Terje Eggestad wrote:
> On a single box you would use a shared memory segment to do this. It has
> the following advantages:
> - no syscalls at all

Unless you poll for messages on the receiving side, how do you trigger
the receiver to look for a message?  Shared memory has no file
descriptor, so there is nothing to select() or poll() on.
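
Just as a sketch (names are hypothetical, error handling omitted), the
usual workaround is to pair the segment with a unix datagram socket
that serves purely as a doorbell:

/* writer: data goes in shm, a 1-byte token on an AF_UNIX datagram
 * socket wakes the receiver (assumes the receiver has already bound
 * the hypothetical path below) */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define DOORBELL "/tmp/shm-doorbell"	/* hypothetical rendezvous path */

int main(void)
{
	int shm = shm_open("/linkstate", O_CREAT | O_RDWR, 0600);
	ftruncate(shm, 4096);
	char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_SHARED, shm, 0);
	strcpy(data, "eth0: link down");

	int sock = socket(AF_UNIX, SOCK_DGRAM, 0);
	struct sockaddr_un peer = { .sun_family = AF_UNIX };
	strncpy(peer.sun_path, DOORBELL, sizeof(peer.sun_path) - 1);
	char token = 1;		/* content is irrelevant; the fd becoming
				 * readable is the event */
	sendto(sock, &token, 1, 0, (struct sockaddr *)&peer, sizeof(peer));
	return 0;
}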

> - whenever the recipients need to use the info, they access the shm
> directly (you may need to use a semaphore to enforce consistency,
> or, if you're really pressed for time, spinlock on a shm location).
> There is no need for the recipients to copy the info into private
> data structs.

How do they know the information has changed?  Suppose one process
detects that the ethernet link has dropped.  How does it alert the
other processes that need to react?
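
The only answer I can see is to sleep on something that *does* have a
file descriptor.  Here is the receiving side of the same hypothetical
doorbell sketch (again, error handling omitted):

/* receiver: bind the doorbell, sleep in select(), then read the shm */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	int sock = socket(AF_UNIX, SOCK_DGRAM, 0);
	struct sockaddr_un me = { .sun_family = AF_UNIX };
	strncpy(me.sun_path, "/tmp/shm-doorbell", sizeof(me.sun_path) - 1);
	unlink(me.sun_path);
	bind(sock, (struct sockaddr *)&me, sizeof(me));

	int shm = shm_open("/linkstate", O_RDONLY, 0);
	const char *data = mmap(NULL, 4096, PROT_READ, MAP_SHARED, shm, 0);

	fd_set rd;
	FD_ZERO(&rd);
	FD_SET(sock, &rd);
	if (select(sock + 1, &rd, NULL, NULL, NULL) > 0) {
		char token;
		recv(sock, &token, 1, 0);	/* drain the doorbell */
		printf("state changed: %s\n", data);
	}
	return 0;
}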

> Why does it help you to know that there are no recipients versus the
> wrong number of recipients?  Or, asked differently, if you don't
> have a notion of who the recipients are or should be, why would you
> care if there are none?
> There are practically no real applications for this feature.

It's true that a nonzero number of listeners doesn't tell me much,
since I don't know whether the right one is among them.  However, if
I send a message and there were *no* listeners when I know there
should be at least one, then I can log the anomaly, raise an alarm,
or take whatever action is appropriate.
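
Plain unicast AF_UNIX datagrams already give the sender exactly that
signal; all I'm asking is that a multicast send report the same thing.
A sketch of the unicast case (hypothetical path):

/* unicast AF_UNIX already tells the sender when nobody is listening */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	int s = socket(AF_UNIX, SOCK_DGRAM, 0);
	struct sockaddr_un to = { .sun_family = AF_UNIX };
	strncpy(to.sun_path, "/tmp/event-listener", sizeof(to.sun_path) - 1);

	if (sendto(s, "link down", 9, 0,
		   (struct sockaddr *)&to, sizeof(to)) < 0 &&
	    (errno == ECONNREFUSED || errno == ENOENT))
		fprintf(stderr, "no listener -- raise an alarm\n");
	return 0;
}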

> Also: keep in mind that whether you do multicast or an explicit send
> to all, the data you're sending is copied from your buffer to the
> destination sockets' recv buffers either way.  If you're sending 1k
> you need somewhere between 250 and 1000 cycles to do the copy,
> depending on alignment.  I've measured the syscall overhead for a
> write(len=0) to be about 800 cycles on a P3 or Athlon, and about
> 2000 on a P4.  If you really have that many possible recipients, you
> should use a shm segment instead.  If you have only a few (~10) the
> overhead is at worst 20000 cycles, or 10 microseconds on a 2 GHz P4,
> to do a syscall for each.  Who cares...
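
(For reference, a crude sketch of how such a measurement can be made.
It assumes an x86 TSC and ignores everything that perturbs cycle
counts, so treat the output as ballpark only:

/* rough cycle count for write(len=0), in the spirit of the numbers
 * quoted above; x86 only */
#include <stdio.h>
#include <unistd.h>

static unsigned long long rdtsc(void)
{
	unsigned int lo, hi;
	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
	const int iters = 100000;
	char c = 0;
	int i;

	unsigned long long t0 = rdtsc();
	for (i = 0; i < iters; i++)
		write(STDOUT_FILENO, &c, 0);	/* len=0: pure syscall cost */
	unsigned long long t1 = rdtsc();

	fprintf(stderr, "~%llu cycles per write(len=0)\n",
		(t1 - t0) / iters);
	return 0;
}
)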

Granted, shared memory (or SysV message queues) is the fastest way to
transfer data between processes.  However, you still have to
implement some way to alert the receiver that a message is waiting
for it.
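
(A SysV queue does at least give you a blocking receive, as in the
sketch below, but no file descriptor, so it can't be multiplexed with
sockets in select().  Hypothetical example, error handling omitted:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct mbuf { long mtype; char mtext[64]; };

int main(void)
{
	int q = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
	struct mbuf m = { 1, "link down" };

	msgsnd(q, &m, strlen(m.mtext) + 1, 0);
	msgrcv(q, &m, sizeof(m.mtext), 0, 0);	/* blocks until a
						 * message arrives */
	printf("got: %s\n", m.mtext);

	msgctl(q, IPC_RMID, NULL);	/* clean up the queue */
	return 0;
}
)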

For large messages it may be sufficient to send a small unix socket
message alerting the receiver that data is waiting in shared memory.
For small messages, though, the cost of the copy is small compared to
the cost of the context switch, and unix multicast cuts the number of
context switches in half.

Chris

-- 
Chris Friesen                    | MailStop: 043/33/F10
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com


Thread overview: 24+ messages
2003-02-27 20:09 anyone ever done multicast AF_UNIX sockets? Chris Friesen
2003-02-27 22:21 ` Greg Daley
2003-02-28 13:33 ` jamal
2003-02-28 14:39   ` Chris Friesen
2003-03-01  3:18     ` jamal
2003-03-02  6:03       ` Chris Friesen
2003-03-02 14:11         ` jamal
2003-03-03 18:02           ` Chris Friesen
2003-03-03 12:51 ` Terje Eggestad
2003-03-03 12:35   ` David S. Miller
2003-03-03 17:09   ` Chris Friesen [this message]
2003-03-03 16:55     ` David S. Miller
2003-03-03 18:07       ` Chris Friesen
2003-03-03 17:56         ` David S. Miller
2003-03-03 19:11           ` Chris Friesen
2003-03-03 18:56             ` David S. Miller
2003-03-03 19:42               ` Terje Eggestad
2003-03-03 21:32                 ` Chris Friesen
2003-03-03 23:38                   ` Terje Eggestad
2003-03-03 19:39     ` Terje Eggestad
2003-03-03 22:29       ` Chris Friesen
2003-03-03 23:29         ` Terje Eggestad
2003-03-04  2:38           ` jamal
2003-03-03 18:18 ` Andi Kleen
