* [IPv6] "sendmsg: invalid argument" to multicast group after some time
@ 2008-08-31 18:20 Bernhard Schmidt
2008-09-01 5:49 ` David Stevens
2008-09-01 13:03 ` David Stevens
0 siblings, 2 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-08-31 18:20 UTC (permalink / raw)
To: netdev
Hello all,
this is about the same box as the message from Remi an hour ago, but
most probably not related.
I'm running a Teredo (RFC 4380) relay on an i386 Xen domU with vanilla
kernel 2.6.26 using the integrated pv_ops feature. The relay function
is implemented in a userspace daemon called Miredo, which provides a tun
interface to the OS into which native IPv6 traffic for 2001::/32 is
routed. The traffic is then handled in the userspace daemon and emitted
encapsulated in IPv4/UDP. Apart from a few scalability problems which seem to be
related to the neighbor or route cache size it works fine. The machine
is doing around 2kpps of IPv6 traffic (which means 4kpps of IPv4+IPv6).
As there are a couple of similar relays globally anycasted I'm supposed
to withdraw the route from BGP if the daemon or the machine fails. To do
this I'm running ripngd from the Quagga routing suite, which announces
the Teredo prefix to my core routers using RIPng (RFC 2080). At the
kernel level, RIPng is basically periodic UDP to the link-local
multicast address [ff02::9]:521.
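At the socket level, such an announcement amounts to roughly the following (a hedged sketch, not Quagga's actual code; the interface name and payload are placeholders):

```python
import socket

# RFC 2080: RIPng speaks UDP on port 521 to the link-local group ff02::9.
RIPNG_GROUP = "ff02::9"
RIPNG_PORT = 521

def ripng_destination(scope_id: int):
    # The 4-tuple maps onto sockaddr_in6: (addr, port, flowinfo, scope_id).
    # A link-local destination needs a nonzero scope_id (interface index).
    return (RIPNG_GROUP, RIPNG_PORT, 0, scope_id)

def send_announcement(payload: bytes, ifname: str = "eth0") -> None:
    # Hypothetical helper; needs a multicast-capable interface that is up.
    scope_id = socket.if_nametoindex(ifname)
    with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
        # RFC 2080 requires updates to be sent with hop limit 255.
        s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 255)
        s.sendto(payload, ripng_destination(scope_id))
```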
Every few hours this announcement fails (no announcements reach the core
routers anymore, which kill the routing entry after a timeout). ripngd
debugging claims that it could not send the announcement due to
"Invalid argument". There are no outgoing packets in tcpdump anymore.
I even get the same error when doing a multicast ping6:
miredo:~# ping6 -I eth0 ff02::9
PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
ping: sendmsg: Invalid argument
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.018 ms (DUP!)
(fe80::216:3eff:feb9:29f5 is the box itself; it's the only one that
ever answers, and even then with a duplicate)
ping6 to other multicast addresses, even in the same scope, works fine:
miredo:~# ping6 -I eth0 ff02::2
PING ff02::2(ff02::2) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from fe80::20c:86ff:fe9a:3819: icmp_seq=1 ttl=64 time=0.466 ms (DUP!)
64 bytes from fe80::20c:86ff:fe9a:2819: icmp_seq=1 ttl=64 time=0.476 ms (DUP!)
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=2 ttl=64 time=0.043 ms
Now the freaky part ... the multicast ping to ff02::9 (and thus the
RIPng announcements) start to work again when I restart the miredo
daemon. This is sort of unexpected because miredo does not deal with
this address (or multicast) at all.
miredo:~# /etc/init.d/miredo stop
Stopping Teredo IPv6 tunneling daemon: miredo.
miredo:~# /etc/init.d/miredo start
Starting Teredo IPv6 tunneling daemon: miredo.
miredo:~# ping6 -c 2 -I eth0 ff02::9
PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from fe80::2c0:9fff:fe4b:8ccf: icmp_seq=1 ttl=64 time=0.441 ms (DUP!)
64 bytes from fe80::20c:86ff:fe9a:3819: icmp_seq=1 ttl=64 time=0.458 ms (DUP!)
64 bytes from fe80::20c:86ff:fe9a:2819: icmp_seq=1 ttl=64 time=0.466 ms (DUP!)
Someone else running a miredo relay on Linux has reported the same
problem, only using ospf6d instead of ripngd:
2008/08/31 17:25:54 OSPF6: sendmsg failed: ifindex: 2: Invalid argument (22)
OSPFv3 is link-local multicast as well (own protocol on ff02::5),
restarting the miredo daemon fixed the problem for him as well.
Regards,
Bernhard
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-08-31 18:20 [IPv6] "sendmsg: invalid argument" to multicast group after some time Bernhard Schmidt
@ 2008-09-01 5:49 ` David Stevens
2008-09-01 9:09 ` Bernhard Schmidt
2008-09-01 13:03 ` David Stevens
1 sibling, 1 reply; 25+ messages in thread
From: David Stevens @ 2008-09-01 5:49 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: netdev, netdev-owner
My wild guess is that you have lost the multicast bit on
the interface, via a daemon that is simply overwriting the interface
flags rather than doing get, OR in the new flag, set.
When it's failing, you might try looking at the interface
flags to see if IFF_MULTICAST is still there.
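The read-modify-write update and the IFF_MULTICAST check described above can be sketched like this (assumes Linux's sysfs layout; 0x1000 is IFF_MULTICAST from <linux/if.h>):

```python
IFF_MULTICAST = 0x1000  # from <linux/if.h>

def has_multicast(flags: int) -> bool:
    # The check suggested above: is IFF_MULTICAST still set?
    return bool(flags & IFF_MULTICAST)

def add_flag(flags: int, flag: int) -> int:
    # Correct update: get the current flags, OR in the new one, set.
    # A buggy daemon would instead write an absolute value, silently
    # clearing bits such as IFF_MULTICAST.
    return flags | flag

def iface_flags(ifname: str) -> int:
    # On Linux, sysfs exposes the interface flags word as hex.
    with open(f"/sys/class/net/{ifname}/flags") as f:
        return int(f.read().strip(), 16)
```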
+-DLS
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 5:49 ` David Stevens
@ 2008-09-01 9:09 ` Bernhard Schmidt
0 siblings, 0 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-01 9:09 UTC (permalink / raw)
To: David Stevens; +Cc: netdev
On Sun, Aug 31, 2008 at 10:49:50PM -0700, David Stevens wrote:
Hello David,
> My wild guess is that you have lost the multicast bit on
> the interface, by a daemon that is just setting the interface
> flags rather than doing get/or in new/set.
>
> When it's failing, you might try looking at the interface
> flags to see if IFF_MULTICAST is still there.
It is. miredo isn't doing anything with the eth0 interface, and I can
still reach other multicast groups on the same interface.
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-08-31 18:20 [IPv6] "sendmsg: invalid argument" to multicast group after some time Bernhard Schmidt
2008-09-01 5:49 ` David Stevens
@ 2008-09-01 13:03 ` David Stevens
2008-09-01 17:01 ` Bernhard Schmidt
1 sibling, 1 reply; 25+ messages in thread
From: David Stevens @ 2008-09-01 13:03 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: netdev, netdev-owner
Well, it'd certainly be good to identify exactly where in
the sendmsg path you're getting the EINVAL from. If
possible, I'd suggest putting in some debugging code
and reproducing it.
If you can pick up the exact arguments when it's happening
via strace and send those here, that may help, but I wouldn't
expect those to be incorrect from ping6 only some of the
time...
I'll look into this some more if someone doesn't beat me to
it, but I'm travelling now so it will be a few days at least before
I get to it.
+-DLS
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 13:03 ` David Stevens
@ 2008-09-01 17:01 ` Bernhard Schmidt
2008-09-01 17:05 ` Bernhard Schmidt
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-01 17:01 UTC (permalink / raw)
To: David Stevens; +Cc: netdev
On Mon, Sep 01, 2008 at 06:03:31AM -0700, David Stevens wrote:
> Well, it'd certainly be good to identify exactly where in
> the sendmsg path you're getting the EINVAL from. If
> possible, I'd suggest putting in some debugging code
> and reproducing it.
Do you have any suggestions/patches? I'm not an experienced programmer,
and since it usually takes a couple of hours for this problem to appear
I probably would need a few years to get anything reasonable.
Btw, it's happening again right now (2.6.27-rc5); I've set a static
route so I can run some debugging without affecting the service. So if
you have any debugging commands you would like me to run, now is the
time.
> If you can pick up the exact arguments when it's happening
> via strace and send those here, that may help, but I wouldn't
> expect those to be incorrect from ping6 only some of the
> time...
The working group:
# strace -e recvmsg,sendmsg ping6 -c 1 -I eth0 ff02::2
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"0\0\0\0\24\0\2\0\22\37\274H4x\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1\10"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 108
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"@\0\0\0\24\0\2\0\22\37\274H4x\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 256
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\22\37\274H4x\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0004x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_PEEK|MSG_PROXY|MSG_WAITALL|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000}, 0) = 64
recvmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "fe80::216:3eff:feb9:29f5", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=if_nametoindex("eth0")}, msg_iov(1)=[{"\201\0\305\n4x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 4208}], msg_controllen=36, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=0x1d /* SCM_??? */, ...}, msg_flags=0}, 0) = 64
The non-working RIPng group:
# strace -e recvmsg,sendmsg ping6 -c 1 -I eth0 ff02::9
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"0\0\0\0\24\0\2\0/\37\274HSx\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1\10"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 108
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"@\0\0\0\24\0\2\0/\37\274HSx\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 256
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0/\37\274HSx\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H}\202\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_OOB|MSG_DONTROUTE|MSG_PEEK|MSG_CTRUNC|MSG_WAITALL|MSG_TRUNC|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000}, 0) = -1 EINVAL (Invalid argument)
recvmsg(3, 0xbf8c8350, MSG_ERRQUEUE|MSG_DONTWAIT) = -1 EAGAIN (Resource temporarily unavailable)
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H\354\212\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_CTRUNC}, 0) = -1 EINVAL (Invalid argument)
recvmsg(3, 0xbf8c8350, MSG_ERRQUEUE|MSG_DONTWAIT) = -1 EAGAIN (Resource temporarily unavailable)
ping: sendmsg: Invalid argument
recvmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "fe80::216:3eff:feb9:29f5", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=if_nametoindex("eth0")}, msg_iov(1)=[{"\201\0\374\223Sx\0\1/\37\274H}\202\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 4208}], msg_controllen=36, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=0x1d /* SCM_??? */, ...}, msg_flags=0}, 0) = 64
So the flags look different, but why?
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 17:01 ` Bernhard Schmidt
@ 2008-09-01 17:05 ` Bernhard Schmidt
2008-09-01 17:57 ` Pekka Savola
2008-09-02 13:57 ` Brian Haley
2 siblings, 0 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-01 17:05 UTC (permalink / raw)
To: David Stevens; +Cc: netdev
On Mon, Sep 01, 2008 at 07:01:01PM +0200, Bernhard Schmidt wrote:
> > If you can pick up the exact arguments when it's happening
> > via strace and send those here, that may help, but I wouldn't
> > expect those to be incorrect from ping6 only some of the
> > time...
BTW, these are the sendmsg() calls from ripngd (strace attached to the
running process):
sendmsg(5, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(521), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\2\1\0\0 \1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 \1"..., 24}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
Regards,
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 17:01 ` Bernhard Schmidt
2008-09-01 17:05 ` Bernhard Schmidt
@ 2008-09-01 17:57 ` Pekka Savola
2008-09-01 18:03 ` Bernhard Schmidt
2008-09-02 13:57 ` Brian Haley
2 siblings, 1 reply; 25+ messages in thread
From: Pekka Savola @ 2008-09-01 17:57 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, netdev
On Mon, 1 Sep 2008, Bernhard Schmidt wrote:
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58),
> inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0,
> sin6_scope_id=0},
> msg_iov(1)=[{"\200\0\0\0004x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"...,
> 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6,
> cmsg_type=, ...},
> msg_flags=MSG_PEEK|MSG_PROXY|MSG_WAITALL|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000},
> 0) = 64
versus:
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58),
> inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0,
> sin6_scope_id=0},
> msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H\354\212\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"...,
> 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6,
> cmsg_type=, ...}, msg_flags=MSG_CTRUNC}, 0) = -1 EINVAL (Invalid
> argument)
It seems that in the latter case, you haven't specified the interface
(sin6_scope_id=0), but in the former case you have. You can't send to
link-local multicast groups on a host with multiple interfaces if the
interface isn't specified.
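One way to see the scope requirement from userspace: glibc's getaddrinfo() accepts the "address%interface" zone notation and fills in sin6_scope_id. A sketch ("lo" is used only so it resolves on any Linux box):

```python
import socket

def linklocal_sockaddr(addr: str, port: int, ifname: str):
    # "ff02::9%eth0"-style notation resolves to a sockaddr whose last
    # element is the interface index -- the sin6_scope_id in question.
    info = socket.getaddrinfo(f"{addr}%{ifname}", port,
                              socket.AF_INET6, socket.SOCK_DGRAM)
    return info[0][4]  # (addr, port, flowinfo, scope_id)
```

With scope_id left at 0 and more than one candidate interface, the kernel has no way to pick a link for a link-local destination.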
--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 17:57 ` Pekka Savola
@ 2008-09-01 18:03 ` Bernhard Schmidt
2008-09-02 9:06 ` Pekka Savola
0 siblings, 1 reply; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-01 18:03 UTC (permalink / raw)
To: Pekka Savola; +Cc: David Stevens, netdev
On Mon, Sep 01, 2008 at 08:57:51PM +0300, Pekka Savola wrote:
Hello Pekka,
> On Mon, 1 Sep 2008, Bernhard Schmidt wrote:
>> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58),
>> inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0,
>> sin6_scope_id=0},
>> msg_iov(1)=[{"\200\0\0\0004x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"...,
>> 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=,
>> ...},
>> msg_flags=MSG_PEEK|MSG_PROXY|MSG_WAITALL|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000},
>> 0) = 64
>
> versus:
>
>> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58),
>> inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0,
>> sin6_scope_id=0},
>> msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H\354\212\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"...,
>> 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=,
>> ...}, msg_flags=MSG_CTRUNC}, 0) = -1 EINVAL (Invalid argument)
>
> It seems that in the latter case, you haven't specified the interface
> (sin6_scope_id=0), but in the former case you have. You can't send to
> link-local multicast groups on a host with multiple interfaces if the
> interface isn't specified.
I did specify the interface:
miredo:~# ping6 -I eth0 ff02::2
PING ff02::2(ff02::2) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.034 ms
[...]
miredo:~# ping6 -I eth0 ff02::9
PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
ping: sendmsg: Invalid argument
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.033 ms
I have no idea what the different flags mean or where they come from.
Also please note that broken ping6 is just a symptom, the real problem
is ripngd suddenly not being able to send to the multicast group
anymore.
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 18:03 ` Bernhard Schmidt
@ 2008-09-02 9:06 ` Pekka Savola
0 siblings, 0 replies; 25+ messages in thread
From: Pekka Savola @ 2008-09-02 9:06 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, netdev
On Mon, 1 Sep 2008, Bernhard Schmidt wrote:
> miredo:~# ping6 -I eth0 ff02::2
> PING ff02::2(ff02::2) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
> 64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.034 ms
> [...]
> miredo:~# ping6 -I eth0 ff02::9
> PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
> ping: sendmsg: Invalid argument
> 64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.033 ms
>
> I have no idea what the different flags mean or where they come from.
Sorry, as a matter of fact, in the quoted text sin6_scope_id was 0 in
both cases. I was looking at recvmsg, where it was different.
--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-01 17:01 ` Bernhard Schmidt
2008-09-01 17:05 ` Bernhard Schmidt
2008-09-01 17:57 ` Pekka Savola
@ 2008-09-02 13:57 ` Brian Haley
2008-09-02 15:00 ` Bernhard Schmidt
2 siblings, 1 reply; 25+ messages in thread
From: Brian Haley @ 2008-09-02 13:57 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, netdev
[-- Attachment #1: Type: text/plain, Size: 4168 bytes --]
Bernhard Schmidt wrote:
> The working group:
>
> # strace -e recvmsg,sendmsg ping6 -c 1 -I eth0 ff02::2
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"0\0\0\0\24\0\2\0\22\37\274H4x\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1\10"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 108
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"@\0\0\0\24\0\2\0\22\37\274H4x\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 256
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\22\37\274H4x\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0004x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_PEEK|MSG_PROXY|MSG_WAITALL|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000}, 0) = 64
> recvmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "fe80::216:3eff:feb9:29f5", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=if_nametoindex("eth0")}, msg_iov(1)=[{"\201\0\305\n4x\0\1\22\37\274H\366\v\5\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 4208}], msg_controllen=36, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=0x1d /* SCM_??? */, ...}, msg_flags=0}, 0) = 64
>
>
> The non-working RIPng group:
> # strace -e recvmsg,sendmsg ping6 -c 1 -I eth0 ff02::9
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"0\0\0\0\24\0\2\0/\37\274HSx\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1\10"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 108
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"@\0\0\0\24\0\2\0/\37\274HSx\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 256
> recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0/\37\274HSx\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H}\202\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_OOB|MSG_DONTROUTE|MSG_PEEK|MSG_CTRUNC|MSG_WAITALL|MSG_TRUNC|MSG_CONFIRM|MSG_FIN|MSG_SYN|MSG_RST|MSG_CMSG_CLOEXEC|0x8bc0000}, 0) = -1 EINVAL (Invalid argument)
> recvmsg(3, 0xbf8c8350, MSG_ERRQUEUE|MSG_DONTWAIT) = -1 EAGAIN (Resource temporarily unavailable)
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0Sx\0\1/\37\274H\354\212\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=MSG_CTRUNC}, 0) = -1 EINVAL (Invalid argument)
> recvmsg(3, 0xbf8c8350, MSG_ERRQUEUE|MSG_DONTWAIT) = -1 EAGAIN (Resource temporarily unavailable)
> ping: sendmsg: Invalid argument
> recvmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "fe80::216:3eff:feb9:29f5", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=if_nametoindex("eth0")}, msg_iov(1)=[{"\201\0\374\223Sx\0\1/\37\274H}\202\n\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 4208}], msg_controllen=36, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=0x1d /* SCM_??? */, ...}, msg_flags=0}, 0) = 64
>
>
> So the flags look different, but why?
Well, at least in the ping6 sources I have, msg_flags is never
initialized before the sendmsg() call, and since it's allocated on the
stack it can have random bits set. Can you rebuild your ping6 with the
attached patch and retry?
-Brian
[-- Attachment #2: ping6.diff --]
[-- Type: text/x-patch, Size: 376 bytes --]
diff -c ping6.c.orig ping6.c
*** ping6.c.orig 2008-09-02 09:29:30.000000000 -0400
--- ping6.c 2008-09-02 09:36:06.000000000 -0400
***************
*** 727,732 ****
--- 727,733 ----
mhdr.msg_namelen = sizeof(struct sockaddr_in6);
mhdr.msg_iov = &iov;
mhdr.msg_iovlen = 1;
+ mhdr.msg_flags = 0;
mhdr.msg_control = cmsgbuf;
mhdr.msg_controllen = cmsglen;
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-02 13:57 ` Brian Haley
@ 2008-09-02 15:00 ` Bernhard Schmidt
2008-09-02 15:48 ` Brian Haley
2008-09-09 0:34 ` David Stevens
0 siblings, 2 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-02 15:00 UTC (permalink / raw)
To: Brian Haley; +Cc: David Stevens, netdev
Hello Brian,
>> So the flags look different, but why?
> Well, at least in the ping6 sources I have, msg_flags is never
> initialized before the sendmsg() call, and since it's allocated on the
> stack it can have random bits set. Can you rebuild your ping6 with the
> attached patch and retry?
Done, no change.
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\3252\0\0010T\275H\274\314\7\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = 64
vs.
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\3162\0\1+T\275H\255K\16\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
Don't push too hard on ping6; I just included it to show that all
processes sending to this particular group are affected.
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-02 15:00 ` Bernhard Schmidt
@ 2008-09-02 15:48 ` Brian Haley
2008-09-09 0:34 ` David Stevens
1 sibling, 0 replies; 25+ messages in thread
From: Brian Haley @ 2008-09-02 15:48 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, netdev
Bernhard Schmidt wrote:
> Hello Brian,
>
>>> So the flags look different, but why?
>> Well, at least in the ping6 sources I have, msg_flags is never
>> initialized before the sendmsg() call, and since it's allocated on the
>> stack it can have random bits set. Can you rebuild your ping6 with the
>> attached patch and retry?
>
> Done, no change.
>
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\3252\0\0010T\275H\274\314\7\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = 64
>
> vs.
>
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\3162\0\1+T\275H\255K\16\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
>
> don't push too hard on ping6, I just included it to show that all
> processes are affected sending to this particular group.
That was just the obvious answer to why the flags were different. Since
EINVAL is too generic to point at one place in the kernel code path, I'd
second David Stevens' suggestion of finding where in the sendmsg() code
this is coming from.
Maybe you can trace the miredo daemon to see what it's doing that might
fix the problem if you don't want to start hacking in the kernel.
-Brian
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-02 15:00 ` Bernhard Schmidt
2008-09-02 15:48 ` Brian Haley
@ 2008-09-09 0:34 ` David Stevens
2008-09-09 0:38 ` Bernhard Schmidt
1 sibling, 1 reply; 25+ messages in thread
From: David Stevens @ 2008-09-09 0:34 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: Brian Haley, netdev
Bernhard,
I looked at this some more and didn't see anything obvious.
The send side doesn't need group membership to send, or anything
special, really. The only thing that comes to mind is that maybe you
have a bogus route installed (since you don't have a bogus interface
flag :-)).
Can you do an "ip -6 route list" when it's happening?
It might also be worthwhile to see the entire argument list, so
maybe use the "-s" option to strace to increase it; we
probably only need sendmsg(), so maybe:
strace -s 1024 -e trace=sendmsg -e verbose=sendmsg ping6 -I eth0 ....
I wanted to see more detail than strace could fit in the default length.
:-)
+-DLS
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 0:34 ` David Stevens
@ 2008-09-09 0:38 ` Bernhard Schmidt
2008-09-09 2:26 ` David Stevens
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-09 0:38 UTC (permalink / raw)
To: David Stevens; +Cc: Brian Haley, netdev
On Mon, Sep 08, 2008 at 05:34:00PM -0700, David Stevens wrote:
Hi David,
> I looked at this some more and didn't see anything obvious.
> The send side doesn't need group membership to send, or anything
> special, really. The only thing that comes to mind is that maybe you
> have a bogus route installed (since you don't have a bogus interface
> flag :-)).
> Can you do an "ip -6 route list" when it's happening?
Sure, here we go
miredo:~# ip -6 route list
2001::/32 via fe80::1 dev teredo metric 1024 mtu 1280 advmss 1220 hoplimit 4294967295
2001:1b10:100::1:1 via fe80::2c0:9fff:fe4b:8ccf dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100::1:2 via fe80::2c0:9fff:fe4b:8a4d dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100::21:1 via fe80::2c0:9fff:fe4b:8a4d dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100::53:1 via fe80::2c0:9fff:fe4b:8ccf dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100::119:1 via fe80::2c0:9fff:fe4b:8a4d dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100::1:9000:1 via fe80::2c0:9fff:fe4b:8a4d dev eth0 proto zebra metric 2 mtu 1500 advmss 1440 hoplimit 4294967295
2001:1b10:100:3::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev teredo proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
ff00::/8 dev eth0 metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
ff00::/8 dev teredo metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
default via 2001:1b10:100:3::1 dev eth0 metric 1 mtu 1500 advmss 1440 hoplimit 4294967295
> Also might be worthwhile to see the entire arg list, so
> maybe using the "-s" option to strace to increase it, and we
> probably only need sendmsg(), so maybe:
>
> strace -s 1024 -e trace=sendmsg -e verbose=sendmsg ping6 -I eth0 ....
>
> I wanted to see more detail than strace could fit in the default length.
> :-)
Working (all-hosts):
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\25S\0\3\0\305\305H\212/\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, MSG_CONFIRM) = 64
Non-working (RIPng group):
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305HCA\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305H\36F\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
(yes, ping6 is actually trying twice when it's broken, no idea why)
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 0:38 ` Bernhard Schmidt
@ 2008-09-09 2:26 ` David Stevens
2008-09-09 6:52 ` Rémi Denis-Courmont
2008-09-09 17:16 ` Pekka Savola
2 siblings, 0 replies; 25+ messages in thread
From: David Stevens @ 2008-09-09 2:26 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: Brian Haley, netdev, netdev-owner
Did you specify the interface on those?
I don't know whether or not I should trust that sin_scope_id==0,
which Brian also mentioned.
For link-local addresses, it must be specified, though you can
get there from an SO_BINDTODEVICE too. The error if it isn't
there is EINVAL, but sin6_scope_id is showing as 0 for both cases.
The send side shouldn't actually care anything about the group
number unless they matched different routes, which I don't see
in your routing table.
So, I think it's back to running it on a debug kernel that prints
something distinguishing from the different EINVAL cases
to narrow it down.
+-DLS
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 0:38 ` Bernhard Schmidt
2008-09-09 2:26 ` David Stevens
@ 2008-09-09 6:52 ` Rémi Denis-Courmont
2008-09-09 7:17 ` David Stevens
2008-09-09 17:16 ` Pekka Savola
2 siblings, 1 reply; 25+ messages in thread
From: Rémi Denis-Courmont @ 2008-09-09 6:52 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, Brian Haley, netdev
On Tue, 9 Sep 2008 02:38:53 +0200, Bernhard Schmidt <berni@birkenwald.de> wrote:
> ff00::/8 dev teredo metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
^^^^^^
Uho... that interface is not multicast-capable. Not sure how the kernel
handles the conflicting routes.
--
Rémi Denis-Courmont
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 6:52 ` Rémi Denis-Courmont
@ 2008-09-09 7:17 ` David Stevens
2008-09-09 10:06 ` Bernhard Schmidt
0 siblings, 1 reply; 25+ messages in thread
From: David Stevens @ 2008-09-09 7:17 UTC (permalink / raw)
To: Rémi Denis-Courmont
Cc: Bernhard Schmidt, Brian Haley, netdev, netdev-owner
netdev-owner@vger.kernel.org wrote on 09/08/2008 11:52:05 PM:
> On Tue, 9 Sep 2008 02:38:53 +0200, Bernhard Schmidt <berni@birkenwald.de> wrote:
> > ff00::/8 dev teredo metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
> ^^^^^^
> Uho... that interface is not multicast-capable. Not sure how the kernel
> handles the conflicting routes.
Multicast programs shouldn't rely on the routing
table at all, IMAO. Unicast routing has nothing at all
to do with multicast routing, where a single address
means copying and forwarding to multiple different
segments, and the address has nothing at all to do with
the routing topology.
That's why the scope_id should be set (and to
the proper index for eth0), but I think either strace
is lying to us here or they're using SO_BINDTODEVICE
or IPV6_MULTICAST_IF.
But when it's breaking, I imagine it is ending
up on the wrong interface and thus the EINVAL... I
just don't see how some addresses still work while
others fail at the same time. :-)
+-DLS
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 7:17 ` David Stevens
@ 2008-09-09 10:06 ` Bernhard Schmidt
2008-09-09 15:05 ` David Stevens
0 siblings, 1 reply; 25+ messages in thread
From: Bernhard Schmidt @ 2008-09-09 10:06 UTC (permalink / raw)
To: David Stevens; +Cc: Rémi Denis-Courmont, Brian Haley, netdev
On Tue, Sep 09, 2008 at 12:17:35AM -0700, David Stevens wrote:
> netdev-owner@vger.kernel.org wrote on 09/08/2008 11:52:05 PM:
> > On Tue, 9 Sep 2008 02:38:53 +0200, Bernhard Schmidt <berni@birkenwald.de> wrote:
> > > ff00::/8 dev teredo metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
> > ^^^^^^
> > Uho... that interface is not multicast-capable. Not sure how the kernel
> > handles the conflicting routes.
>
> Multicast programs shouldn't rely on the routing
> table at all, IMAO. Unicast routing has nothing at all
> to do with multicast routing, where a single address
> means copying and forwarding to multiple different
> segments, and the address has nothing at all to do with
> the routing topology.
I agree with you that it should not rely on it. However, it obviously
did (in some way):
miredo:~# ip -6 route del ff00::/8 dev teredo metric 256 mtu 1280 advmss 1220 hoplimit 4294967295
miredo:~# ping6 -I eth0 ff02::9
PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
64 bytes from fe80::216:3eff:feb9:29f5: icmp_seq=1 ttl=64 time=0.098 ms
64 bytes from fe80::2c0:9fff:fe4b:8ccf: icmp_seq=1 ttl=64 time=0.453 ms (DUP!)
64 bytes from fe80::20c:86ff:fe9a:3819: icmp_seq=1 ttl=64 time=0.467 ms (DUP!)
64 bytes from fe80::20c:86ff:fe9a:2819: icmp_seq=1 ttl=64 time=0.472 ms (DUP!)
I'm not convinced that this route was the culprit though, I think it
might be related to some sort of routing-table locking foo that got
resolved when I changed something. I'll keep it running (will take one
or two days max to reappear) and try deleting an arbitrary route when it
happens again.
Bernhard
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 10:06 ` Bernhard Schmidt
@ 2008-09-09 15:05 ` David Stevens
0 siblings, 0 replies; 25+ messages in thread
From: David Stevens @ 2008-09-09 15:05 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: Brian Haley, netdev, Rémi Denis-Courmont
Bernhard,
It'll definitely fall back to using the routing table
when the interface is not specified already, which is why
I was ranting about it (I think it should fail without a unicast
routing table lookup in that case, and always require the
application to provide the interface).
What is odd about your case is you specified
"-I eth0" on the ping6, so it should never be doing a
route lookup (but it also should have a nonzero scope_id).
That could be a bug in ping6. If it's passing
uninitialized flags to sendmsg, it clearly needs some work.
Then the question is whether RIPng is explicitly
providing the interface for the sendmsg or not. If it isn't,
that's the problem, but I'd expect it to-- to operate correctly,
it needs to know what interface it's talking on.
+-DLS
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 0:38 ` Bernhard Schmidt
2008-09-09 2:26 ` David Stevens
2008-09-09 6:52 ` Rémi Denis-Courmont
@ 2008-09-09 17:16 ` Pekka Savola
2008-09-09 20:13 ` David Miller
2 siblings, 1 reply; 25+ messages in thread
From: Pekka Savola @ 2008-09-09 17:16 UTC (permalink / raw)
To: Bernhard Schmidt; +Cc: David Stevens, Brian Haley, netdev
On Tue, 9 Sep 2008, Bernhard Schmidt wrote:
> Working (all-hosts):
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\25S\0\3\0\305\305H\212/\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, MSG_CONFIRM) = 64
>
> Non-working (RIPng group):
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305HCA\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
> sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305H\36F\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
I wonder if it's relevant that the flags argument in the working case
is MSG_CONFIRM but in the non-working case it is zero? I guess not,
but apart from the contents of msg_iov, that's the only difference..
--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-09-09 17:16 ` Pekka Savola
@ 2008-09-09 20:13 ` David Miller
0 siblings, 0 replies; 25+ messages in thread
From: David Miller @ 2008-09-09 20:13 UTC (permalink / raw)
To: pekkas; +Cc: berni, dlstevens, brian.haley, netdev
From: Pekka Savola <pekkas@netcore.fi>
Date: Tue, 9 Sep 2008 20:16:24 +0300 (EEST)
> On Tue, 9 Sep 2008, Bernhard Schmidt wrote:
> > Working (all-hosts):
> > sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0\25S\0\3\0\305\305H\212/\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, MSG_CONFIRM) = 64
> >
> > Non-working (RIPng group):
> > sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305HCA\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
> > sendmsg(3, {msg_name(28)={sa_family=AF_INET6, sin6_port=htons(58), inet_pton(AF_INET6, "ff02::9", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, msg_iov(1)=[{"\200\0\0\0MS\0\0012\305\305H\36F\r\0\10\t\n\v\f\r\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37 !\"#$%&'()*+,-./01234567"..., 64}], msg_controllen=32, {cmsg_len=32, cmsg_level=SOL_IPV6, cmsg_type=, ...}, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
>
> I wonder if it's relevant that the flags argument in the working
> case is MSG_CONFIRM but in the non-working case it is zero? I guess
> not, but apart from the contents of msg_iov, that's the only difference..
MSG_CONFIRM refreshes the route's neighbour ->confirmed timestamp with
the current value of jiffies.
I really can't see how that could lead to the observed behavior, but who
knows at this point :)
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
@ 2008-12-28 4:47 Eduard Guzovsky
0 siblings, 0 replies; 25+ messages in thread
From: Eduard Guzovsky @ 2008-12-28 4:47 UTC (permalink / raw)
To: netdev
> I even get the same error when doing a multicast ping6:
> miredo:~# ping6 -I eth0 ff02::9
> PING ff02::9(ff02::9) from fe80::216:3eff:feb9:29f5 eth0: 56 data bytes
> ping: sendmsg: Invalid argument
We had a similar problem in our lab network. I tracked down the source
of the "Invalid argument" error to ip6_output_finish(). Here is the
stack
-----edg ip6_output_finish: failed to find neighbour
[<c010647a>] show_trace_log_lvl+0x1a/0x30
[<c0106ba2>] show_trace+0x12/0x20
[<c0106c09>] dump_stack+0x19/0x20
[<f14ab019>] ip6_output2+0x279/0x290 [ipv6]
[<f14ab40f>] ip6_output+0x2df/0x830 [ipv6]
[<f14abce7>] ip6_push_pending_frames+0x247/0x420 [ipv6]
[<f14bde2f>] udp_v6_push_pending_frames+0x13f/0x1f0 [ipv6]
[<f14bf8fe>] udpv6_sendmsg+0x7ae/0xa60 [ipv6]
[<c02ea254>] inet_sendmsg+0x34/0x60
[<c0297adc>] sock_sendmsg+0xfc/0x120
[<c029835f>] sys_sendto+0xbf/0xe0
[<c0299a37>] sys_socketcall+0x187/0x260
[<c0105b7b>] syscall_call+0x7/0xb
=======================
ip6_output_finish() returns EINVAL because the route cache entry has
NULL as a "neighbour" pointer.
These invalid route cache entries are created when the IPv6 neighbour
table fills up (one potential reason for that is the combination of a
lot of multicast traffic to "ff02:..." groups and Xen hosts with
interfaces in promiscuous mode). In that case ndisc_get_neigh()
returns NULL, but in at least two places the routing code in
net/ipv6/route.c ignores this and inserts invalid entries into the
cache anyway.
This is especially bad for frequently used multicast addresses. The
garbage collector does not remove them from the cache, probably
because of the frequent updates of the "__use" count; you need to
flush the cache to get rid of them.
One way to work around the problem is to increase "gc_thresh3" for
the IPv6 neighbour table. That still leaves you open to DoS attacks.
Another way is to create permanent entries in the neighbour/routing tables.
In any case, the routing cache pollution problem has to be fixed. I
suggest the following patch. I do not know this code well and would
appreciate it if the code maintainers could comment on it.
Thanks,
-Ed
--- a/net/ipv6/route.c 2008-12-26 14:56:50.000000000 -0500
+++ b/net/ipv6/route.c 2008-12-26 14:57:19.000000000 -0500
@@ -638,6 +638,11 @@
rt->rt6i_nexthop = ndisc_get_neigh(rt->rt6i_dev,
&rt->rt6i_gateway);
+ if (rt->rt6i_nexthop == NULL) {
+ dst_free((struct dst_entry *)rt);
+ rt = NULL;
+ }
+
}
return rt;
@@ -991,9 +996,18 @@
dev_hold(dev);
if (neigh)
neigh_hold(neigh);
- else
+ else {
neigh = ndisc_get_neigh(dev, addr);
+ if (neigh == NULL) {
+ dev_put(dev);
+ in6_dev_put(idev);
+ dst_free((struct dst_entry *)rt);
+ rt = NULL;
+ goto out;
+ }
+ }
+
rt->rt6i_dev = dev;
rt->rt6i_idev = idev;
rt->rt6i_nexthop = neigh;
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
@ 2008-12-30 7:52 David Miller
2008-12-31 19:53 ` Eduard Guzovsky
0 siblings, 1 reply; 25+ messages in thread
From: David Miller @ 2008-12-30 7:52 UTC (permalink / raw)
To: eguzovsky; +Cc: berni, dlstevens, pekkas, netdev
Eduard, thanks for your analysis and RFC patch.
I agree this is an ugly situation.
Looking over this area the real problem is that the neighbour cache
can't do anything to apply back pressure on the routing cache when it
fills up with essentially unused multicast entries like this.
When we hit the upper limits (such as gc_thresh3) for the neighbour
cache, it tries to do things like neigh_forced_gc().
But this won't accomplish anything since all of these ipv6 multicast
routes have a reference on the neigh entries filling up the table, so
the forced GC won't be able to liberate them.
So you're absolutely right that the route cache pollution is the core
problem.
Looking at the IPV4 routing cache we have code which goes:
int err = arp_bind_neighbour(&rt->u.dst);
if (err) {
...
/* Neighbour tables are full and nothing
can be released. Try to shrink route cache,
it is most likely it holds some neighbour records.
*/
and then proceeds to try and forcefully flush some routing cache
entries.
So the real fix is that IPV6 should do something similar.
Something like the following (untested) patch:
diff --git a/include/net/ndisc.h b/include/net/ndisc.h
index ce532f2..1459ed3 100644
--- a/include/net/ndisc.h
+++ b/include/net/ndisc.h
@@ -155,9 +155,9 @@ static inline struct neighbour * ndisc_get_neigh(struct net_device *dev, const s
{
if (dev)
- return __neigh_lookup(&nd_tbl, addr, dev, 1);
+ return __neigh_lookup_errno(&nd_tbl, addr, dev);
- return NULL;
+ return ERR_PTR(-ENODEV);
}
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 18c486c..0db4129 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -627,6 +627,9 @@ static struct rt6_info *rt6_alloc_cow(struct rt6_info *ort, struct in6_addr *dad
rt = ip6_rt_copy(ort);
if (rt) {
+ struct neighbour *neigh;
+ int attempts = !in_softirq();
+
if (!(rt->rt6i_flags&RTF_GATEWAY)) {
if (rt->rt6i_dst.plen != 128 &&
ipv6_addr_equal(&rt->rt6i_dst.addr, daddr))
@@ -646,7 +649,35 @@ static struct rt6_info *rt6_alloc_cow(struct rt6_info *ort, struct in6_addr *dad
}
#endif
- rt->rt6i_nexthop = ndisc_get_neigh(rt->rt6i_dev, &rt->rt6i_gateway);
+ retry:
+ neigh = ndisc_get_neigh(rt->rt6i_dev, &rt->rt6i_gateway);
+ if (IS_ERR(neigh)) {
+ struct net *net = dev_net(rt->rt6i_dev);
+ int saved_rt_min_interval =
+ net->ipv6.sysctl.ip6_rt_gc_min_interval;
+ int saved_rt_elasticity =
+ net->ipv6.sysctl.ip6_rt_gc_elasticity;
+
+ if (attempts-- > 0) {
+ net->ipv6.sysctl.ip6_rt_gc_elasticity = 1;
+ net->ipv6.sysctl.ip6_rt_gc_min_interval = 0;
+
+ ip6_dst_gc(net->ipv6.ip6_dst_ops);
+
+ net->ipv6.sysctl.ip6_rt_gc_elasticity =
+ saved_rt_elasticity;
+ net->ipv6.sysctl.ip6_rt_gc_min_interval =
+ saved_rt_min_interval;
+ goto retry;
+ }
+
+ if (net_ratelimit())
+ printk(KERN_WARNING
+ "Neighbour table overflow.\n");
+ dst_free(&rt->u.dst);
+ return NULL;
+ }
+ rt->rt6i_nexthop = neigh;
}
@@ -945,8 +976,11 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev,
dev_hold(dev);
if (neigh)
neigh_hold(neigh);
- else
+ else {
neigh = ndisc_get_neigh(dev, addr);
+ if (IS_ERR(neigh))
+ neigh = NULL;
+ }
rt->rt6i_dev = dev;
rt->rt6i_idev = idev;
@@ -1887,6 +1921,7 @@ struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev,
{
struct net *net = dev_net(idev->dev);
struct rt6_info *rt = ip6_dst_alloc(net->ipv6.ip6_dst_ops);
+ struct neighbour *neigh;
if (rt == NULL)
return ERR_PTR(-ENOMEM);
@@ -1909,11 +1944,18 @@ struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev,
rt->rt6i_flags |= RTF_ANYCAST;
else
rt->rt6i_flags |= RTF_LOCAL;
- rt->rt6i_nexthop = ndisc_get_neigh(rt->rt6i_dev, &rt->rt6i_gateway);
- if (rt->rt6i_nexthop == NULL) {
+ neigh = ndisc_get_neigh(rt->rt6i_dev, &rt->rt6i_gateway);
+ if (IS_ERR(neigh)) {
dst_free(&rt->u.dst);
- return ERR_PTR(-ENOMEM);
+
+ /* We are casting this because that is the return
+ * value type. But a errno encoded pointer is the
+ * same regardless of the underlying pointer type,
+ * and that's what we are returning. So this is OK.
+ */
+ return (struct rt6_info *) neigh;
}
+ rt->rt6i_nexthop = neigh;
ipv6_addr_copy(&rt->rt6i_dst.addr, addr);
rt->rt6i_dst.plen = 128;
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-12-30 7:52 David Miller
@ 2008-12-31 19:53 ` Eduard Guzovsky
2009-01-04 23:56 ` David Miller
0 siblings, 1 reply; 25+ messages in thread
From: Eduard Guzovsky @ 2008-12-31 19:53 UTC (permalink / raw)
To: David Miller; +Cc: berni, dlstevens, pekkas, netdev
David,
Thank you for the patch. I tested a slightly modified version of it
(could not test it "verbatim" - we use a 2.6.18 based kernel) - it
works!
Question: I noticed that with your patch ndisc_dst_alloc() can still
return a dst_entry with NULL as a neighbour. Is this Ok?
Suggestion: how about creating a per-interface "catch all"
route+neighbour entry for all IPv6 link-level multicasts? This entry
would be used when no specific route/neighbour entry is configured,
and would prevent cache pollution with the useless entries created by
every incoming packet with a link-level multicast address.
Thanks,
-Ed
On Tue, Dec 30, 2008 at 2:52 AM, David Miller <davem@davemloft.net> wrote:
> [...]
* Re: [IPv6] "sendmsg: invalid argument" to multicast group after some time
2008-12-31 19:53 ` Eduard Guzovsky
@ 2009-01-04 23:56 ` David Miller
0 siblings, 0 replies; 25+ messages in thread
From: David Miller @ 2009-01-04 23:56 UTC (permalink / raw)
To: eguzovsky; +Cc: berni, dlstevens, pekkas, netdev
From: "Eduard Guzovsky" <eguzovsky@gmail.com>
Date: Wed, 31 Dec 2008 14:53:04 -0500
> Thank you for the patch. I tested a slightly modified version of it
> (could not test it "verbatim" - we use a 2.6.18 based kernel) - it
> works!
Thanks for testing.
> Question: I noticed that with your patch ndisc_dst_alloc() can still
> return a dst_entry with NULL as a neighbour. Is this Ok?
There is no ndisc_dst_alloc() in the current tree, that whole
area got redesigned and rearranged since the tree you are using.
Thread overview: 25+ messages
2008-08-31 18:20 [IPv6] "sendmsg: invalid argument" to multicast group after some time Bernhard Schmidt
2008-09-01 5:49 ` David Stevens
2008-09-01 9:09 ` Bernhard Schmidt
2008-09-01 13:03 ` David Stevens
2008-09-01 17:01 ` Bernhard Schmidt
2008-09-01 17:05 ` Bernhard Schmidt
2008-09-01 17:57 ` Pekka Savola
2008-09-01 18:03 ` Bernhard Schmidt
2008-09-02 9:06 ` Pekka Savola
2008-09-02 13:57 ` Brian Haley
2008-09-02 15:00 ` Bernhard Schmidt
2008-09-02 15:48 ` Brian Haley
2008-09-09 0:34 ` David Stevens
2008-09-09 0:38 ` Bernhard Schmidt
2008-09-09 2:26 ` David Stevens
2008-09-09 6:52 ` Rémi Denis-Courmont
2008-09-09 7:17 ` David Stevens
2008-09-09 10:06 ` Bernhard Schmidt
2008-09-09 15:05 ` David Stevens
2008-09-09 17:16 ` Pekka Savola
2008-09-09 20:13 ` David Miller
-- strict thread matches above, loose matches on Subject: below --
2008-12-28 4:47 Eduard Guzovsky
2008-12-30 7:52 David Miller
2008-12-31 19:53 ` Eduard Guzovsky
2009-01-04 23:56 ` David Miller