netdev.vger.kernel.org archive mirror
* Multicast socket behaviour?
@ 2008-08-13  8:21 Daniel Ng
  2008-08-13 11:19 ` Neil Horman
  2008-08-13 16:12 ` David Stevens
  0 siblings, 2 replies; 5+ messages in thread
From: Daniel Ng @ 2008-08-13  8:21 UTC (permalink / raw)
  To: netdev

Hi,

The below C code registers the socket with the multicast group 'HELLO_GROUP':

    mreq.imr_multiaddr.s_addr=inet_addr(HELLO_GROUP);
    mreq.imr_address.s_addr=htonl(INADDR_ANY);

    mreq.imr_ifindex = if_nametoindex("ppp1");

    if (setsockopt(fd,IPPROTO_IP,IP_ADD_MEMBERSHIP,&mreq,sizeof(mreq)) < 0) {
        perror("setsockopt");
        exit(1);
    }

I understand that if 'INADDR_ANY' is used, it is up to the kernel to decide 
what action to take.

From my experiments, it seems that the interface corresponding to the 
highest '224.0.0.0' entry in the routing table is used.

Would someone please explain why this is so? 

How difficult would it be to have the kernel join the HELLO_GROUP on *all* 
available multicast-capable interfaces? Why isn't this currently implemented?

I'm using 2.6.14.

Cheers,
Daniel




* Re: Multicast socket behaviour?
  2008-08-13  8:21 Multicast socket behaviour? Daniel Ng
@ 2008-08-13 11:19 ` Neil Horman
  2008-08-13 16:12 ` David Stevens
  1 sibling, 0 replies; 5+ messages in thread
From: Neil Horman @ 2008-08-13 11:19 UTC (permalink / raw)
  To: Daniel Ng; +Cc: netdev

On Wed, Aug 13, 2008 at 08:21:58AM +0000, Daniel Ng wrote:
> Hi,
> 
> The below C code registers the socket with the multicast group 'HELLO_GROUP':
> 
>     mreq.imr_multiaddr.s_addr=inet_addr(HELLO_GROUP);
>     mreq.imr_address.s_addr=htonl(INADDR_ANY);
> 
>     mreq.imr_ifindex = if_nametoindex("ppp1");
> 
>     if (setsockopt(fd,IPPROTO_IP,IP_ADD_MEMBERSHIP,&mreq,sizeof(mreq)) < 0) {
>         perror("setsockopt");
>         exit(1);
>     }
> 
> I understand that if 'INADDR_ANY' is used, it is up to the kernel to decide 
> what action to take.
> 
> From my experiments, it seems that the interface corresponding to the 
> highest '224.0.0.0' entry in the routing table is used.
> 
> Would someone please explain why this is so? 
> 
> How difficult would it be to have the kernel join the HELLO_GROUP on *all* 
> available multicast-capable interfaces? Why isn't this currently implemented?
> 
> I'm using 2.6.14.
> 
> Cheers,
> Daniel
> 

I had this same confusion on netdev a few weeks ago:
http://marc.info/?l=linux-netdev&m=121580189427089&w=2

It's working as it should, albeit somewhat counter-intuitively.  That said,
there may also be a bug in your kernel; you're several releases back.

Neil


-- 
/****************************************************
 * Neil Horman <nhorman@tuxdriver.com>
 * Software Engineer, Red Hat
 ****************************************************/


* Re: Multicast socket behaviour?
  2008-08-13  8:21 Multicast socket behaviour? Daniel Ng
  2008-08-13 11:19 ` Neil Horman
@ 2008-08-13 16:12 ` David Stevens
  2008-08-14  0:08   ` Daniel Ng
  1 sibling, 1 reply; 5+ messages in thread
From: David Stevens @ 2008-08-13 16:12 UTC (permalink / raw)
  To: Daniel Ng; +Cc: netdev, netdev-owner

netdev-owner@vger.kernel.org wrote on 08/13/2008 01:21:58 AM:

> Hi,
> 
> The below C code registers the socket with the multicast group 'HELLO_GROUP':
> 
>     mreq.imr_multiaddr.s_addr=inet_addr(HELLO_GROUP);
>     mreq.imr_address.s_addr=htonl(INADDR_ANY);
> 
>     mreq.imr_ifindex = if_nametoindex("ppp1");
> 
>     if (setsockopt(fd,IPPROTO_IP,IP_ADD_MEMBERSHIP,&mreq,sizeof(mreq)) < 0) {
>         perror("setsockopt");
>         exit(1);
>     }
> 
> I understand that if 'INADDR_ANY' is used, it is up to the kernel to decide 
> what action to take.

Daniel,
        Neil has already pointed you to a fuller description for your
question about joining on multiple interfaces, but just
to be clear, the code above has specified the interface, so there is
no choice for the kernel. The code above will always join on "ppp1".
You can specify the interface by address (imr_address), by index
(imr_ifindex, as above), or not at all.
        In the unlikely event that you wanted the kernel to
pick which interface you join the group on, you would need
both imr_address==INADDR_ANY and imr_ifindex==0.
        The sockets API has never included a "join on all
interfaces" mechanism. If you're looking for traffic from
a particular set of machines, you only need to join on a
single interface that includes that set. Joining on another
interface for the same set will give you duplicate traffic.
The same group number on a different interface is either an
entirely different set (from a different multicast routing
domain), or the same one (resulting in duplicate traffic). If
you want both of two sets, you need to join each separately. If you
only want one, you only need to join on one of them, and joining
on both would be bad.
        So, if you followed all that, the conclusion is that
it usually doesn't make sense for an app in a single multicast
routing domain to want to join on multiple interfaces, and if
you have multiple routing domains, joining each separately is
appropriate, especially since some of the interfaces may be
in the same routing domain even if some are not.

                                                                +-DLS



* Re: Multicast socket behaviour?
  2008-08-13 16:12 ` David Stevens
@ 2008-08-14  0:08   ` Daniel Ng
  2008-08-14  1:30     ` David Stevens
  0 siblings, 1 reply; 5+ messages in thread
From: Daniel Ng @ 2008-08-14  0:08 UTC (permalink / raw)
  To: netdev

David Stevens <dlstevens <at> us.ibm.com> writes:
> it usually doesn't make sense for an app in a single multicast
> routing domain to want to join on multiple interfaces

Now, what if I have a linear network of say 3 Linux boxes, connected together 
simply:

A----PPP0-------B-----PPP1------C

All machines are DVMRP multicast routers.

In order for B to hear the multicast packets originated from both A and C, 
wouldn't B need to join the DVMRP multicast group on *both* its PPP interfaces?

If so, then can this be done with a single socket? How?

Daniel



* Re: Multicast socket behaviour?
  2008-08-14  0:08   ` Daniel Ng
@ 2008-08-14  1:30     ` David Stevens
  0 siblings, 0 replies; 5+ messages in thread
From: David Stevens @ 2008-08-14  1:30 UTC (permalink / raw)
  To: Daniel Ng; +Cc: netdev

netdev-owner@vger.kernel.org wrote on 08/13/2008 05:08:58 PM:

> David Stevens <dlstevens <at> us.ibm.com> writes:
> > it usually doesn't make sense for an app in a single multicast
> > routing domain to want to join on multiple interfaces
> 
> Now, what if I have a linear network of say 3 Linux boxes, connected together 
> simply:
> 
> A----PPP0-------B-----PPP1------C
> 
> All machines are DVMRP multicast routers.
> 
> In order for B to hear the multicast packets originated from both A and C, 
> wouldn't B need to join the DVMRP multicast group on *both* its PPP 
> interfaces?

        I don't believe so. An application on B can join the
group on either PPP0 or PPP1 and the multicast router on
whichever one you join is responsible for forwarding any
packets for that group to all links that have a listener
for it.
        If B itself is the multicast router, then it may
touch the packets twice, but logically all three hosts are
in the same routing domain and an application can listen
to groups on any link. If your app joins on both links,
I'd expect you to get 2 copies of everything.

        I've done a lot of work on the host-side of multicasting,
but nothing with multicast routing, so I certainly could
be wrong, but I believe an app on B in this case shouldn't
join on both links, either.

                                                        +-DLS




end of thread, other threads:[~2008-08-14  1:30 UTC | newest]
