* network interface state
From: Ulrich Drepper @ 2007-11-14 20:59 UTC
To: netdev
Just FYI, with the current getaddrinfo code it is even more critical to
get to a point where I can cache network interface information and query
the kernel about whether it has changed. We now have to read the
RTM_GETADDR tables for every lookup; it was more limited with the old,
incomplete implementation.
Even something as simple as an RTM_SEQUENCE request that returns a
number bumped at every interface change would do.
Related: I need to know about the device type (the ARPHRD_* values) to
determine whether a device is for a native transport or a tunnel. What
I currently do is:
- at the beginning I get information about all interfaces using
  RTM_GETADDR
- then later I have to find the device type by
  + reading the RTM_GETLINK data to get to the device name
  + then using the name and ioctl(SIOCGIFHWADDR) I get the device type
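Concretely, the second step boils down to something like the following
(a minimal sketch; error handling is simplified):

#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_arp.h>
#include <unistd.h>

/* Return the ARPHRD_* device type for 'ifname', or -1 on error. */
static int device_type(const char *ifname)
{
    struct ifreq ifr;
    int type, fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    type = ioctl(fd, SIOCGIFHWADDR, &ifr) < 0
           ? -1 : ifr.ifr_hwaddr.sa_family; /* sa_family is ARPHRD_* */
    close(fd);
    return type;
}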
It would be so much nicer if the device type were part of the
RTM_GETADDR data, or at least the RTM_GETLINK data.
Any help on any of these issues?
--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖
* Re: network interface state
From: David Miller @ 2007-11-14 23:31 UTC
To: drepper; +Cc: netdev
From: Ulrich Drepper <drepper@redhat.com>
Date: Wed, 14 Nov 2007 12:59:52 -0800
> Just FYI, with the current getaddrinfo code it is even more critical to
> get to a point where I can cache network interface information and query
> the kernel about whether it has changed. We now have to read the
> RTM_GETADDR tables for every lookup; it was more limited with the old,
> incomplete implementation.
>
> Even something as simple as an RTM_SEQUENCE request that returns a
> number bumped at every interface change would do.
This sounds like a useful feature. Essentially you want a generation
ID that increments every time a configuration change is made?
Most daemons handle this by listening for events on the netlink
socket, but I understand how that might not be practical for
glibc.
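That pattern is just a netlink socket bound to the rtnetlink multicast
groups; any message arriving on it means the configuration may have
changed. A minimal sketch:

#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Open a socket subscribed to link and address change events. */
static int open_change_listener(void)
{
    struct sockaddr_nl sa;
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

    if (fd < 0)
        return -1;
    memset(&sa, 0, sizeof(sa));
    sa.nl_family = AF_NETLINK;
    sa.nl_groups = RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV6_IFADDR;
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(fd);
        return -1;
    }
    return fd; /* poll() this; readable means something changed */
}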
> Related: I need to know about the device type (the ARPHRD_* values) to
> determine whether a device is for a native transport or a tunnel. What
> I currently do is:
>
> - at the beginning I get information about all interfaces using
>   RTM_GETADDR
>
> - then later I have to find the device type by
>
>   + reading the RTM_GETLINK data to get to the device name
>
>   + then using the name and ioctl(SIOCGIFHWADDR) I get the device type
>
>
> It would be so much nicer if the device type were part of the
> RTM_GETADDR data, or at least the RTM_GETLINK data.
It's part of the link information; look at ifinfomsg->ifi_type.
In general, be suspicious if it seems netlink isn't providing
the same information that is available via the old ioctls :-)
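For example (an untested sketch; 'nh' points at one RTM_NEWLINK message
from the RTM_GETLINK dump reply):

#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Pull the ARPHRD_* type straight out of the link message. */
static int link_type(const struct nlmsghdr *nh)
{
    const struct ifinfomsg *ifi = NLMSG_DATA(nh);
    return ifi->ifi_type; /* e.g. ARPHRD_ETHER vs. ARPHRD_TUNNEL */
}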
* Re: network interface state
From: Ulrich Drepper @ 2007-11-15 0:12 UTC
To: David Miller; +Cc: netdev
David Miller wrote:
> Most daemons handle this by listening for events on the netlink
> socket, but I understand how that might not be practical for
> glibc.
Right, this cannot work. I have no inner loop that I can control, so I
cannot install a listener.
At some point, when we have non-sequential, hidden file descriptors,
I'll be able to leave a socket file descriptor open. But that's about
it. Even then the generation counter interface is likely to be the best
choice.
> It's part of the link information, Look in ifinfomsg->ifi_type
Great, I fixed up the code. I guess in the future, once I can cache the
data, I'll simply read the RTM_GETADDR and RTM_GETLINK data all at once
and be done with it.
BTW, is it possible to send both these requests out before starting to
read the results? This would reduce the amount of code quite a bit.
--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖
* Re: network interface state
From: David Miller @ 2007-11-15 0:22 UTC
To: drepper; +Cc: netdev
From: Ulrich Drepper <drepper@redhat.com>
Date: Wed, 14 Nov 2007 16:12:28 -0800
> David Miller wrote:
> > Most daemons handle this by listening for events on the netlink
> > socket, but I understand how that might not be practical for
> > glibc.
>
> Right, this cannot work. I have no inner loop that I can control, so I
> cannot install a listener.
>
> At some point, when we have non-sequential, hidden file descriptors,
> I'll be able to leave a socket file descriptor open. But that's about
> it. Even then the generation counter interface is likely to be the best
> choice.
Ok, I'll think about how to implement this.
> BTW, is it possible to send both these requests out before starting to
> read the results? This would reduce the amount of code quite a bit.
Unfortunately, that won't work. Like datagram protocols,
netlink assumes one message per sendmsg() call.
* Re: network interface state
From: Herbert Xu @ 2007-11-15 2:11 UTC
To: David Miller; +Cc: drepper, netdev
David Miller <davem@davemloft.net> wrote:
>
>> BTW, is it possible to send both these requests out before starting to
>> read the results? This would reduce the amount of code quite a bit.
>
> Unfortunately, that won't work. Like datagram protocols,
> netlink assumes one message per sendmsg() call.
Actually, netlink packets come with headers, so we do allow chaining;
see netlink_rcv_skb for details.
We don't make use of that on recvmsg(), although theoretically user
space is supposed to be ready to handle that too.
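The sender side would look roughly like this (an untested sketch,
assuming the dump path processes both requests from one datagram; the
replies can be told apart by their sequence numbers):

#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

struct req {
    struct nlmsghdr nh;
    struct rtgenmsg g;
};

/* Pack an RTM_GETLINK and an RTM_GETADDR dump request back to back
   into one datagram. sizeof(struct req) equals NLMSG_ALIGN of the
   message length, so the second header lands on an aligned boundary. */
static int send_both_dumps(int fd)
{
    struct req buf[2];

    memset(buf, 0, sizeof(buf));
    buf[0].nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg));
    buf[0].nh.nlmsg_type = RTM_GETLINK;
    buf[0].nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
    buf[0].nh.nlmsg_seq = 1;
    buf[0].g.rtgen_family = AF_UNSPEC;

    buf[1] = buf[0];
    buf[1].nh.nlmsg_type = RTM_GETADDR;
    buf[1].nh.nlmsg_seq = 2;

    return send(fd, buf, sizeof(buf), 0) == sizeof(buf) ? 0 : -1;
}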
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: network interface state
From: David Miller @ 2007-11-15 3:39 UTC
To: herbert; +Cc: drepper, netdev
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Thu, 15 Nov 2007 10:11:35 +0800
> David Miller <davem@davemloft.net> wrote:
> >
> >> BTW, is it possible to send both these requests out before starting to
> >> read the results? This would reduce the amount of code quite a bit.
> >
> > Unfortunately, that won't work. Like datagram protocols,
> > netlink assumes one message per sendmsg() call.
>
> Actually, netlink packets come with headers, so we do allow chaining;
> see netlink_rcv_skb for details.
My bad, you are indeed right.
Ulrich, you can therefore send both messages in one go if you
like.
* Re: network interface state
From: jamal @ 2007-11-15 13:58 UTC
To: Herbert Xu; +Cc: David Miller, drepper, netdev
On Thu, 2007-15-11 at 10:11 +0800, Herbert Xu wrote:
> We don't make use of that on recvmsg(), although theoretically user
> space is supposed to be ready to handle that too.
iproute2 handles that well. Anyone writing netlink apps should program
with the expectation that a single received datagram will include many
netlink messages.
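Roughly like this (a sketch; handle() stands in for whatever the
application does per message):

#include <sys/socket.h>
#include <linux/netlink.h>

static void handle(struct nlmsghdr *nh); /* application-specific */

/* Walk every netlink message packed into one received datagram. */
static void read_datagram(int fd)
{
    char buf[8192];
    struct nlmsghdr *nh;
    int len = recv(fd, buf, sizeof(buf), 0);

    if (len < 0)
        return;
    for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
         nh = NLMSG_NEXT(nh, len)) {
        if (nh->nlmsg_type == NLMSG_DONE || nh->nlmsg_type == NLMSG_ERROR)
            break;
        handle(nh);
    }
}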
On the concept of putting in some generation marker/counter:
it is one of those things that has bothered me as well for some time,
but I can't think of a clean way to solve it for every user of netlink.
One way to transport this from the kernel would be to stash it in the
netlink sequence number, but that would break users that expect a
specific sequence.
For the ifla/iflink, it should be trivial to solve by adding a marker in
the kernel that gets set to jiffies (or some incremental counter) every
time an event happens. You then transport this to user space as an
attribute anytime someone does a GET.
Clearly the best way to solve it is to be generic, but we would need to
revamp netlink totally.
Note that we do today signal to user space that a message was lost
because of a buffer overrun. So a hack (not applicable to the poster,
given they don't have a daemon) would be to listen to events and set
the rx socket buffer very small so you lose every message; the overrun
indication is then your change signal.
cheers,
jamal
* Re: network interface state
From: Milan Kocian @ 2008-01-04 20:58 UTC
To: David Miller; +Cc: netdev, drepper
On Wed, 2007-11-14 at 15:31 -0800, David Miller wrote:
> From: Ulrich Drepper <drepper@redhat.com>
> Date: Wed, 14 Nov 2007 12:59:52 -0800
>
> > Just FYI, with the current getaddrinfo code it is even more critical to
> > get to a point where I can cache network interface information and query
> > the kernel about whether it has changed. We now have to read the
> > RTM_GETADDR tables for every lookup; it was more limited with the old,
> > incomplete implementation.
> >
> > Even something as simple as an RTM_SEQUENCE request that returns a
> > number bumped at every interface change would do.
>
> This sounds like a useful feature. Essentially you want a generation
> ID that increments every time a configuration change is made?
>
> Most daemons handle this by listening for events on the netlink
> socket, but I understand how that might not be practical for
> glibc.
>
> > Related: I need to know about the device type (the ARPHRD_* values) to
> > determine whether a device is for a native transport or a tunnel. What
> > I currently do is:
> >
> > - at the beginning I get information about all interfaces using
> >   RTM_GETADDR
> >
> > - then later I have to find the device type by
> >
> >   + reading the RTM_GETLINK data to get to the device name
> >
> >   + then using the name and ioctl(SIOCGIFHWADDR) I get the device type
> >
> >
> > It would be so much nicer if the device type were part of the
> > RTM_GETADDR data, or at least the RTM_GETLINK data.
>
> It's part of the link information; look at ifinfomsg->ifi_type.
>
> In general, be suspicious if it seems netlink isn't providing
> the same information that is available via the old ioctls :-)
Sorry for the late and slightly off-topic question: is there any simple
way to differentiate virtual network devices (e.g. VLANs, bridges) from
real devices?
They have the same ifinfomsg->ifi_type as real devices (ARPHRD_ETHER).
I know how to differentiate VLANs via the IFLA_LINK attribute, but I
could not figure out how to differentiate bridges from real devices.
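For reference, the VLAN check I mean is roughly this (a sketch): a
stacked device carries an IFLA_LINK attribute pointing at its lower
device, while a bridge does not, which is exactly the problem.

#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Return 1 if this RTM_NEWLINK message carries IFLA_LINK, i.e. the
   device is stacked on another one (a VLAN, for example). */
static int has_lower_link(struct nlmsghdr *nh)
{
    struct ifinfomsg *ifi = NLMSG_DATA(nh);
    struct rtattr *rta = IFLA_RTA(ifi);
    int len = nh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifi));

    for (; RTA_OK(rta, len); rta = RTA_NEXT(rta, len))
        if (rta->rta_type == IFLA_LINK)
            return 1;
    return 0;
}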
Thanks for any answer.
regards,
milan kocian