netdev.vger.kernel.org archive mirror
* multicast: bug or "feature"
@ 2007-10-17 19:58 Vlad Yasevich
  2007-10-17 20:10 ` David Stevens
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-17 19:58 UTC (permalink / raw)
  To: netdev; +Cc: Brian Haley

We've been trying to field some questions regarding multicast
behavior and one such behavior has stumped us.

I've reproduced the following behavior on 2.6.23.

The application opens 2 sockets.  One socket is the receiver
and it simply binds to 0.0.0.0:2000 and joins a multicast group
on interface eth0 (for the test we used 224.0.1.3).  The other
socket is the sender.  It turns off MULTICAST_LOOP, sets MULTICAST_IF
to eth1, and sends a packet to the group that the first socket
joined.

We are expecting to receive the data on the receiver socket, but
nothing comes back.

Running tcpdump on both interfaces during the test, I see the packet
on both interfaces, ie. I see it sent on eth0 and received on eth1 with
IP statistics going up appropriately.

Looking at the group memberships, I see the receiving interface as
part of the group and IGMP messages were on the wire.

So, before I spend time figuring out where the packet went and why,
I'd like to know if this is a Linux "feature".

Thanks
-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 19:58 multicast: bug or "feature" Vlad Yasevich
@ 2007-10-17 20:10 ` David Stevens
  2007-10-17 20:23   ` Vlad Yasevich
       [not found] ` <47166BD5.7090207@hp.com>
  2007-10-18 16:17 ` Vlad Yasevich
  2 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-17 20:10 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: Brian Haley, netdev, netdev-owner

I'm not clear on your configuration.

Are the sender and receiver running on the same machine? Are
you saying eth0 and eth1 are connected on the same link?

                                        +-DLS



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
       [not found] ` <47166BD5.7090207@hp.com>
@ 2007-10-17 20:19   ` Vlad Yasevich
  0 siblings, 0 replies; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-17 20:19 UTC (permalink / raw)
  To: Rick Jones; +Cc: netdev

Rick Jones wrote:
> Vlad Yasevich wrote:
>> We've been trying to field some questions regarding multicast
>> behavior and one such behavior has stumped us.
>>
>> I've reproduced the following behavior on 2.6.23.
>>
>> The application opens 2 sockets.  One socket is the receiver
>> and it simply binds to 0.0.0.0:2000 and joins a multicast group
>> on interface eth0 (for the test we used 224.0.1.3).  The other
>> socket is the sender.  It turns off MULTICAST_LOOP, sets MULTICAST_IF
>> to eth1, and sends a packet to the group that the first socket
>> joined.
>>
>> We are expecting to receive the data on the receiver socket, but
>> nothing comes back.
>>
>> Running tcpdump on both interfaces during the test, I see the packet
>> on both interfaces, ie. I see it sent on eth0 and received on eth1 with
>> IP statistics going up appropriately.
> 
> I think you got things switched - in the beginning you said that the
> receiver has joined a multicast group on eth0 and the sender set
> MULTICAST_IF to eth1, but in the paragraph just above you say
> transmission is on eth0 and reception is on eth1...

Yes, that was just a typo on my part.  The sent packet was on eth1
(the checksum wasn't filled in yet), and the received packet was
on eth0.

> 
> I am _guessing_ that the MULTICAST_LOOP stuff is implemented with "is
> this datagram's source IP one of the local IP's on the system" rather
> than "is this datagram from my own socket"
> 

Well, from what I've seen, MULTICAST_LOOP is checked only on output, so
that doesn't seem to be the issue.

Also, IPv6 works as expected, i.e. I can receive the data in the app
above if I use IPv6 multicast addresses.

-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 20:10 ` David Stevens
@ 2007-10-17 20:23   ` Vlad Yasevich
  2007-10-17 20:29     ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-17 20:23 UTC (permalink / raw)
  To: David Stevens; +Cc: Brian Haley, netdev

David Stevens wrote:
> I'm not clear on your configuration.
> 
> Are the sender and receiver running on the same machine? Are
> you saying eth0 and eth1 are connected on the same link?

Yes and Yes.

I know it's a strange config, but it works with IPv6.

Here is the info off the reproducing system:

# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:17:08:7D:47:18  
          inet addr:10.202.1.23  Bcast:10.202.255.255  Mask:255.255.0.0
          inet6 addr: 2001:1890:1109:a10:217:8ff:fe7d:4718/64 Scope:Global
          inet6 addr: fe80::217:8ff:fe7d:4718/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1094 errors:0 dropped:0 overruns:0 frame:0
          TX packets:669 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:107776 (105.2 KiB)  TX bytes:109874 (107.2 KiB)
          Base address:0x4000 Memory:f9de0000-f9e00000 

eth1      Link encap:Ethernet  HWaddr 00:18:FE:7F:49:C8  
          inet addr:10.202.1.26  Bcast:10.202.255.255  Mask:255.255.0.0
          inet6 addr: 2001:1890:1109:a10:218:feff:fe7f:49c8/64 Scope:Global
          inet6 addr: fe80::218:feff:fe7f:49c8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:52236 (51.0 KiB)  TX bytes:4467 (4.3 KiB)
          Interrupt:41 Memory:f6000000-f6012100 


# ip r l
10.202.0.0/16 dev eth0  proto kernel  scope link  src 10.202.1.23 
10.202.0.0/16 dev eth1  proto kernel  scope link  src 10.202.1.26 
default via 10.202.1.1 dev eth0


-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 20:23   ` Vlad Yasevich
@ 2007-10-17 20:29     ` David Stevens
  2007-10-17 21:25       ` Vlad Yasevich
  0 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-17 20:29 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: Brian Haley, netdev

Can you send the contents of /proc/net/igmp and the packet trace,
also? And the code?

                                        +-DLS


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 20:29     ` David Stevens
@ 2007-10-17 21:25       ` Vlad Yasevich
  2007-10-17 23:11         ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-17 21:25 UTC (permalink / raw)
  To: David Stevens; +Cc: Brian Haley, netdev

[-- Attachment #1: Type: text/plain, Size: 782 bytes --]

David Stevens wrote:
> Can you send the contents of /proc/net/igmp and the packet trace,
> also? And the code?
> 
>                                         +-DLS
> 

# cat /proc/net/igmp
Idx     Device    : Count Querier       Group    Users Timer    Reporter
1       lo        :     0      V3
                                010000E0     1 0:00000000               0
2       eth0      :     3      V2
                                010000E0     1 0:00000000               0
3       eth1      :     4      V2
                                030100E0     1 0:00000000               1
                                010000E0     1 0:00000000               0
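
(For reference: the Group column is the in_addr printed as a
little-endian hex word, so 010000E0 is 224.0.0.1 and 030100E0 is
224.0.1.3 -- i.e. the test group shows up here as joined on eth1.)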



Source attached.  The trace only shows a single udp packet and you can
re-create it with the attached small apps.

-vlad

[-- Attachment #2: client.c --]
[-- Type: text/x-csrc, Size: 984 bytes --]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
	struct sockaddr_storage dest;
	struct sockaddr_storage src;
	struct sockaddr_in *s = (struct sockaddr_in *)&src;
	struct sockaddr_in *d = (struct sockaddr_in *)&dest;
	int sock;
	char msg[] = "Hello Multicast";
	int off = 0;

	memset(&dest, 0, sizeof(dest));
	memset(&src, 0, sizeof(src));

	if (argc < 3) {
		printf("Usage: <src ip> <mcast dest>\n");
		return 1;
	}

	d->sin_family = s->sin_family = AF_INET;
	d->sin_port = htons(2000);

	inet_aton(argv[1], &s->sin_addr);
	inet_aton(argv[2], &d->sin_addr);

	sock = socket(AF_INET, SOCK_DGRAM, 0);
	if (sock < 0) {
		perror("socket");
		return 1;
	}

	/* Don't loop our own multicast back to receivers on this host. */
	if (setsockopt(sock, IPPROTO_IP, IP_MULTICAST_LOOP, &off,
		       sizeof(off))) {
		perror("setsockopt");
		return 1;
	}
	/* Send outgoing multicast via the interface owning the source IP. */
	if (setsockopt(sock, IPPROTO_IP, IP_MULTICAST_IF,
		       &s->sin_addr, sizeof(s->sin_addr))) {
		perror("setsockopt");
		return 1;
	}

	sendto(sock, msg, sizeof(msg), 0, (struct sockaddr *)d, sizeof(*d));

	close(sock);
	return 0;
}

[-- Attachment #3: server.c --]
[-- Type: text/x-csrc, Size: 852 bytes --]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
	struct sockaddr_in addr;
	socklen_t addr_len = sizeof(addr);
	struct ip_mreq req;
	int sock;
	ssize_t len;
	char msg[256];

	memset(&addr, 0, sizeof(addr));

	if (argc < 3) {
		printf("Usage: <interface ip> <mcast group>\n");
		return 1;
	}

	sock = socket(AF_INET, SOCK_DGRAM, 0);
	if (sock < 0) {
		perror("socket");
		return 1;
	}

	/* Bind to 0.0.0.0:2000 -- receive on any local address. */
	addr.sin_family = AF_INET;
	addr.sin_port = htons(2000);
	if (bind(sock, (struct sockaddr *)&addr, sizeof(addr))) {
		perror("bind");
		return 1;
	}

	inet_aton(argv[1], &req.imr_interface);
	inet_aton(argv[2], &req.imr_multiaddr);

	/* Join the group on the interface that owns argv[1]. */
	if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &req,
		       sizeof(req))) {
		perror("setsockopt");
		return 1;
	}

	len = recvfrom(sock, msg, sizeof(msg) - 1, 0,
		       (struct sockaddr *)&addr, &addr_len);
	if (len < 0) {
		perror("recvfrom");
		return 1;
	}
	msg[len] = '\0';
	printf("Message received: %s\n", msg);

	close(sock);
	return 0;
}

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 21:25       ` Vlad Yasevich
@ 2007-10-17 23:11         ` David Stevens
  2007-10-18  1:20           ` Vlad Yasevich
  0 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-17 23:11 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: Brian Haley, netdev

You're joining the group on interface eth1, which is the
sender, right? You need to be a member on eth0 to receive it
there. I think your program needs another argument, to
specify the receiving interface, which you want to be
different from the sending interface.

                                        +-DLS


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 23:11         ` David Stevens
@ 2007-10-18  1:20           ` Vlad Yasevich
  0 siblings, 0 replies; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-18  1:20 UTC (permalink / raw)
  To: David Stevens; +Cc: Brian Haley, netdev



David Stevens wrote:
> You're joining the group on interface eth1, which is the
> sender, right?

I may have switched the ordering in the last test I ran,
but I always join the group on the interface different
from the one I send on.

-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-17 19:58 multicast: bug or "feature" Vlad Yasevich
  2007-10-17 20:10 ` David Stevens
       [not found] ` <47166BD5.7090207@hp.com>
@ 2007-10-18 16:17 ` Vlad Yasevich
  2007-10-19 11:43   ` Herbert Xu
  2 siblings, 1 reply; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-18 16:17 UTC (permalink / raw)
  To: netdev; +Cc: Brian Haley, David Stevens

Vlad Yasevich wrote:
> We've been trying to field some questions regarding multicast
> behavior and one such behavior has stumped us.
> 
> I've reproduced the following behavior on 2.6.23.
> 
> The application opens 2 sockets.  One socket is the receiver
> and it simply binds to 0.0.0.0:2000 and joins a multicast group
> on interface eth0 (for the test we used 224.0.1.3).  The other
> socket is the sender.  It turns off MULTICAST_LOOP, sets MULTICAST_IF
> to eth1, and sends a packet to the group that the first socket
> joined.
> 
> We are expecting to receive the data on the receiver socket, but
> nothing comes back.
> 
> Running tcpdump on both interfaces during the test, I see the packet
> on both interfaces, ie. I see it sent on eth0 and received on eth1 with
> IP statistics going up appropriately.
> 
> Looking at the group memberships, I see the receiving interface as
> part of the group and IGMP messages were on the wire.
> 
> So, before I spend time figuring out where the packet went and why,
> I'd like to know if this is a Linux "feature".
> 

Ok, so I've traced the failure down to fib_validate_source().

Because the packet we received was sourced from one of our own
addresses, we end up finding a RTN_LOCAL route and fail out
of that function with -EINVAL.
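
The check boils down to roughly this (paraphrased from the 2.6-era
net/ipv4/fib_frontend.c, not the literal source):

	/* fib_validate_source(): look up a route for the packet's
	 * *source* address.  One of our own addresses resolves to an
	 * RTN_LOCAL route, so anything other than plain unicast is
	 * rejected and the caller drops the packet with -EINVAL. */
	if (fib_lookup(&fl, &res))
		goto last_resort;
	if (res.type != RTN_UNICAST)
		goto e_inval_res;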

I can see the reason for this behavior and I think dropping
in this case is fine.

Now, to figure out what IPv6 does different and why it works.
Seems to me that the two should have the same behavior.

-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-18 16:17 ` Vlad Yasevich
@ 2007-10-19 11:43   ` Herbert Xu
  2007-10-19 15:21     ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Herbert Xu @ 2007-10-19 11:43 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: netdev, brian.haley, dlstevens

Vlad Yasevich <vladislav.yasevich@hp.com> wrote:
>
> Now, to figure out what IPv6 does different and why it works.
> Seems to me that the two should have the same behavior.

IPv6 on Linux uses a per-interface addressing model as opposed
to the per-host model used by IPv4.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 11:43   ` Herbert Xu
@ 2007-10-19 15:21     ` David Stevens
  2007-10-19 16:49       ` Vlad Yasevich
  0 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-19 15:21 UTC (permalink / raw)
  To: Herbert Xu; +Cc: brian.haley, netdev, netdev-owner, Vlad Yasevich

netdev-owner@vger.kernel.org wrote on 10/19/2007 04:43:27 AM:

> Vlad Yasevich <vladislav.yasevich@hp.com> wrote:
> >
> > Now, to figure out what IPv6 does different and why it works.
> > Seems to me that the two should have the same behavior.
> 
> IPv6 on Linux uses a per-interface addressing model as opposed
> to the per-host model used by IPv4.

        For link-local addresses, yes.

        It's really a security feature; the ordinary
case where you'd receive something on an interface that's
using one of your source addresses is when someone is spoofing
you, has a duplicate address, or maybe an (unintentional)
routing loop. All of those are error cases, so dropping a
received packet that claims to be sent by you is a reasonable
thing to do.
        If you're getting link-local source addresses for your
IPv6 multicast packets, that may explain it. The link-local
addresses are required to be unique and valid only for that
link, so IPv6 should not consider a different interface's
link-local address as "local" for a destination address, nor
treat a packet using that source address as bogus.
        For a global address, v4 and v6 use the same rules--
for a destination you can receive it on any interface for
any global address. So, if your source address was a global
IPv6 address and it worked, I'd guess IPv6 just isn't checking
the source address. I don't know that it's required by RFC for
either v4 or v6, though it's probably a good idea.

                                                        +-DLS


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 15:21     ` David Stevens
@ 2007-10-19 16:49       ` Vlad Yasevich
  2007-10-19 17:43         ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-19 16:49 UTC (permalink / raw)
  To: David Stevens; +Cc: Herbert Xu, brian.haley, netdev, netdev-owner

David Stevens wrote:
> netdev-owner@vger.kernel.org wrote on 10/19/2007 04:43:27 AM:
> 
>> Vlad Yasevich <vladislav.yasevich@hp.com> wrote:
>>> Now, to figure out what IPv6 does different and why it works.
>>> Seems to me that the two should have the same behavior.
>> IPv6 on Linux uses a per-interface addressing model as opposed
>> to the per-host model used by IPv4.
> 
>         For link-local addresses, yes.
> 
>         It's really a security feature; the ordinary
> case where you'd receive something on an interface that's
> using one of your source addresses is when someone is spoofing
> you, has a duplicate address, or maybe an (unintentional)
> routing loop. All of those are error cases, so dropping a
> received packet that claims to be sent by you is a reasonable
> thing to do.

I can see this as a good feature for unicast, but I'm starting to
doubt it just a little bit for multicast.  With IGMPv3/MLDv2 and source
filtering, this could be done as a filter on individual sockets.

The problem then becomes IGMPv2 and MLDv1.
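
For example, a receiver could drop traffic sourced from one of our
own addresses with something like this (a sketch using the RFC 3678
socket API; the addresses are taken from the test setup above):

	/* Block one source within an already-joined (group, interface):
	 * group 224.0.1.3 joined on eth0 (10.202.1.23), filtering out
	 * packets sourced from eth1's address (10.202.1.26). */
	struct ip_mreq_source mreq;

	inet_aton("224.0.1.3", &mreq.imr_multiaddr);
	inet_aton("10.202.1.23", &mreq.imr_interface);
	inet_aton("10.202.1.26", &mreq.imr_sourceaddr);

	if (setsockopt(sock, IPPROTO_IP, IP_BLOCK_SOURCE, &mreq,
		       sizeof(mreq)))
		perror("setsockopt");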

>         If you're getting link-local source addresses for your
> IPv6 multicast packets, that may explain it. The link-local
> addresses are required to be unique and valid only for that
> link, so IPv6 should not consider a different interface's
> link-local address as "local" for a destination address, nor
> treat a packet using that source address as bogus.

Looks like the only time IPv6 does any type of source filtering
is when CONFIG_IPV6_MULTIPLE_TABLES is turned on.

I need to turn this on and see if I get the same results or not.

>         For a global address, v4 and v6 use the same rules--
> for a destination you can receive it on any interface for
> any global address. So, if your source address was a global
> IPv6 address and it worked, I'd guess IPv6 just isn't checking
> the source address. I don't know that it's required by RFC for
> either v4 or v6, though it's probably a good idea.

I've reproduced the multicast traffic using both global and link-local
addresses (both source and destination were of the same scope in the
tests, i.e. either global or link-local).

So, it appears that IPv6 didn't do any source verification for multicast
traffic.

-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 16:49       ` Vlad Yasevich
@ 2007-10-19 17:43         ` David Stevens
  2007-10-19 18:43           ` Vlad Yasevich
  0 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-19 17:43 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: brian.haley, Herbert Xu, netdev, netdev-owner

I don't know why you'd want it to be different for multicasting. If you
want to hear your own multicasts, you should use MULTICAST_LOOP;
hearing them off the wire indicates all the same bad things -- a forger,
a duplicate address or a routing loop. Those aren't any better for
multicasting than they are for unicasting, that I can see.

Why would you want that to be delivered, other than (apparently)
to test the multicast capability of a pair of interfaces while using
only 1 machine?

                                        +-DLS



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 17:43         ` David Stevens
@ 2007-10-19 18:43           ` Vlad Yasevich
  2007-10-19 20:23             ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Vlad Yasevich @ 2007-10-19 18:43 UTC (permalink / raw)
  To: David Stevens; +Cc: brian.haley, Herbert Xu, netdev, netdev-owner

David Stevens wrote:
> I don't know why you'd want it to be different for multicasting. If you
> want to hear your own multicasts, you should use MULTICAST_LOOP;
> hearing them off the wire indicates all the same bad things -- a forger,
> a duplicate address or a routing loop. Those aren't any better for
> multicasting than they are for unicasting, that I can see.
> 
> Why would you want that to be delivered, other than (apparently)
> to test the multicast capability of a pair of interfaces while using
> only 1 machine?
> 

I don't really, but the customer is complaining.  The customer is
transitioning from BSD to Linux, but wishes to support both.  The
behaviors are different: client applications that assume certain
behavior on BSD do not work on Linux.

It's just a pain.  I was more wondering why v4 and v6 did things
differently in this case.

-vlad

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 18:43           ` Vlad Yasevich
@ 2007-10-19 20:23             ` David Stevens
  2007-10-19 20:39               ` Brian Haley
  0 siblings, 1 reply; 17+ messages in thread
From: David Stevens @ 2007-10-19 20:23 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: brian.haley, Herbert Xu, netdev, netdev-owner

        From looking at the code, it appears that
fib_validate_source() is failing just because of the rp_filter.
Do you have rp_filter set to nonzero?
        If so, it may do what you want just by setting it
to 0:

sysctl -w net.ipv4.conf.all.rp_filter=0

                                        +-DLS


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 20:23             ` David Stevens
@ 2007-10-19 20:39               ` Brian Haley
  2007-10-19 21:25                 ` David Stevens
  0 siblings, 1 reply; 17+ messages in thread
From: Brian Haley @ 2007-10-19 20:39 UTC (permalink / raw)
  To: David Stevens; +Cc: Vlad Yasevich, Herbert Xu, netdev, netdev-owner

Hi David,

David Stevens wrote:
>         From looking at the code, it appears that validate
> source is failing just because of the rp_filter. Do you have
> rp_filter set to nonzero?
>         If so, it may do what you want just by setting that
> to 0:
> 
> sysctl -w net.ipv4.conf.all.rp_filter=0

rp_filter is set to zero, it's the "if (res.type != RTN_UNICAST)" check 
in fib_validate_source() that's doing it.  If I add a new 
"accept_local_addr" sysctl to ipv4_devconf to allow RTN_LOCAL here, 
everything works just fine.  I just don't know how palatable that would 
be to upstream...
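
For illustration, the change would look something like this (a sketch
of the idea only, not an actual patch; ACCEPT_LOCAL_ADDR is the
hypothetical new per-device knob):

	/* In fib_validate_source(): optionally accept packets whose
	 * source address is one of our own (i.e. resolves to an
	 * RTN_LOCAL route) instead of unconditionally rejecting them. */
	if (res.type != RTN_UNICAST &&
	    !(res.type == RTN_LOCAL &&
	      IN_DEV_CONF_GET(in_dev, ACCEPT_LOCAL_ADDR)))
		goto e_inval_res;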

-Brian

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: multicast: bug or "feature"
  2007-10-19 20:39               ` Brian Haley
@ 2007-10-19 21:25                 ` David Stevens
  0 siblings, 0 replies; 17+ messages in thread
From: David Stevens @ 2007-10-19 21:25 UTC (permalink / raw)
  To: Brian Haley; +Cc: Herbert Xu, netdev, netdev-owner, Vlad Yasevich

I don't know about a new knob, but it's the same notion
as rp_filter, so why not use rpf for RTN_LOCAL types?
I.e., allow RTN_LOCAL and RTN_UNICAST at the top, but
check rpf if the devs aren't equal or the route is RTN_LOCAL....

It doesn't seem like a good thing to rely on in the first place,
though; usually receiving things from yourself is a bad thing. :-)

                                        +-DLS


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2007-10-19 21:25 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-10-17 19:58 multicast: bug or "feature" Vlad Yasevich
2007-10-17 20:10 ` David Stevens
2007-10-17 20:23   ` Vlad Yasevich
2007-10-17 20:29     ` David Stevens
2007-10-17 21:25       ` Vlad Yasevich
2007-10-17 23:11         ` David Stevens
2007-10-18  1:20           ` Vlad Yasevich
     [not found] ` <47166BD5.7090207@hp.com>
2007-10-17 20:19   ` Vlad Yasevich
2007-10-18 16:17 ` Vlad Yasevich
2007-10-19 11:43   ` Herbert Xu
2007-10-19 15:21     ` David Stevens
2007-10-19 16:49       ` Vlad Yasevich
2007-10-19 17:43         ` David Stevens
2007-10-19 18:43           ` Vlad Yasevich
2007-10-19 20:23             ` David Stevens
2007-10-19 20:39               ` Brian Haley
2007-10-19 21:25                 ` David Stevens
