* [BUG?] bonding, slave selection, carrier loss, etc.
From: Chris Friesen @ 2012-02-10 23:47 UTC (permalink / raw)
To: Jay Vosburgh, andy, netdev
Hi all,
I'm resurrecting an ancient discussion I had with Jay, because I think
the issue described below is still present and the code he talked about
submitting to close it doesn't appear to have ever gone in.
Basically in active/backup mode with mii monitoring there is a window
between the active slave device losing carrier and calling
netif_carrier_off() and the miimon code actually detecting the loss of
the carrier and selecting a new active slave.
The best solution would be for bonding to just register for notification
of the link going down. Presumably most drivers should be doing that
properly by now, and for devices that get interrupt-driven notification
of link status changes this would allow the bonding code to react much
quicker.
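To make the size of that window concrete, here is a toy user-space model (illustrative only; `next_poll_detect_ms` and `stale_window_ms` are invented helpers, not bonding code): with polling, a carrier loss is only noticed at the next miimon tick, while a notifier would see it immediately.

```c
#include <assert.h>

/* Toy model of the miimon detection window (not kernel code).
 * Polls fire at 0, miimon_ms, 2*miimon_ms, ...; a carrier loss at
 * t_loss_ms is only noticed at the next tick at or after it. */
static long next_poll_detect_ms(long t_loss_ms, long miimon_ms)
{
    long ticks = t_loss_ms / miimon_ms;
    if (ticks * miimon_ms < t_loss_ms)
        ticks++;
    return ticks * miimon_ms;
}

/* Length of the window during which xmit can still pick the dead slave. */
static long stale_window_ms(long t_loss_ms, long miimon_ms)
{
    return next_poll_detect_ms(t_loss_ms, miimon_ms) - t_loss_ms;
}
```

With miimon=100, a loss just after a poll tick goes unnoticed for nearly the full 100 ms; an interrupt-driven notifier would shrink the window to the notification latency.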
Barring that, I think something like the following is needed. This is
against 2.6.27, but could easily be reworked against current.
---------------------- drivers/net/bonding/bond_main.c -----------------------
index 8499558..e4445d8 100644
@@ -4313,20 +4313,33 @@ static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_d
 	read_lock(&bond->lock);
 	read_lock(&bond->curr_slave_lock);
 
 	if (!BOND_IS_OK(bond)) {
 		goto out;
 	}
 
 	if (!bond->curr_active_slave)
 		goto out;
 
+	/* Verify that the active slave is actually up before
+	 * trying to send packets.  If it isn't, then
+	 * trigger the selection of a new active slave.
+	 */
+	if (!IS_UP(bond->curr_active_slave->dev)) {
+		read_unlock(&bond->curr_slave_lock);
+		write_lock(&bond->curr_slave_lock);
+		bond_select_active_slave(bond);
+		write_unlock(&bond->curr_slave_lock);
+		read_lock(&bond->curr_slave_lock);
+		if (!bond->curr_active_slave)
+			goto out;
+	}
 	res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev);
 
 out:
 	if (res) {
 		/* no suitable interface, frame not sent */
 		dev_kfree_skb(skb);
 	}
 	read_unlock(&bond->curr_slave_lock);
 	read_unlock(&bond->lock);
 	return 0;
Chris
On 03/27/2009 06:00 PM, Jay Vosburgh wrote:
> Chris Friesen<cfriesen@nortel.com> wrote:
>
>> In a much earlier version of the bonding driver we ran into problems
>> where we could have lost carrier on one of the slaves, but at the time
>> of xmit the bonding driver hadn't yet switched to a better slave.
>> Because of this we added a patch very much like the one below.
>>
>> A quick glance at the current bonding code would seem to indicate that
>> there could still be a window between the active slave device losing
>> carrier and calling netif_carrier_off() and the miimon code actually
>> detecting the loss of the carrier and selecting a new active slave.
>> Do I have this correct? If so, would the patch below be correct?
>
> Yes, the window is equal to whatever the monitoring interval is
> (for miimon) or double the interval for ARP.
>
> Your patch, I think, would work, but it's suboptimal in that it
> only affects one mode, and doesn't resolve any of the bigger issues with
> the link monitoring system in bonding (see below). Trying to do the
> equivalent in other modes may have issues; some modes require RTNL to be
> held when changing slave states, so it's difficult to do that from the
> transmit routine.
>
>> On a related note--assuming the net driver can detect link loss and
>> is properly calling netif_carrier_off() why do we still need to poll
>> the status in the bonding driver? Isn't there some way to hook into
>> the network stack and get notified when the carrier goes down?
>
> This is actually something I'm working on now.
>
> There are notifier callbacks that are tied to a driver calling
> netif_carrier_on or _off. The problem is that a bunch of older (mostly
> 10/100, although acenic is a gigabit) drivers don't do netif_carrier_on
> or _off, or check their link state on a ridiculously long interval, so
> simply dropping the current miimon implementation and replacing it with
> the event notifier may not be feasible for backwards compatibility
> reasons. Heck, I've still got 3c59x and acenic cards in my test
> systems, neither of which do netif_carrier correctly; I can't be the
> only one.
>
> An additional goal is to permit the state change notifications
> (or miimon) and the ARP monitor to run concurrently. Sadly, the current
> "link state" system can't handle two things simultaneously poking at the
> slave's link state; if, e.g., ARP says down, but MII/notifiers says up,
> then the link state can flap, so it needs a sort of "arbitrator."
>
> A minor advantage of reworking all of that is that it should end
> up being less code when all done, and should be more modular, so it'd be
> easier if somebody wanted to add, say, an ICMP probe monitor.
>
> I'll probably be posting an RFC patch next week.
>
> -J
>
> ---
> -Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
>
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Jay Vosburgh @ 2012-02-11 1:53 UTC (permalink / raw)
To: Chris Friesen; +Cc: andy, netdev
Chris Friesen <chris.friesen@genband.com> wrote:
>I'm resurrecting an ancient discussion I had with Jay, because I think
>the issue described below is still present and the code he talked about
>submitting to close it doesn't appear to have ever gone in.
Yah, I never got it to work quite right; I don't remember
exactly why.
>Basically in active/backup mode with mii monitoring there is a window
>between the active slave device losing carrier and calling
>netif_carrier_off() and the miimon code actually detecting the loss of
>the carrier and selecting a new active slave.
>
>The best solution would be for bonding to just register for notification
>of the link going down. Presumably most drivers should be doing that
>properly by now, and for devices that get interrupt-driven notification
>of link status changes this would allow the bonding code to react much
>quicker.
A quick look at some drivers shows that at least acenic still
doesn't do netif_carrier_off, so converting entirely to a notifier-based
failover mechanism would break drivers that work today.
Adding a notifier callback as an additional path into something
like bond_miimon_commit may be feasible.
>Barring that, I think something like the following is needed. This is
>against 2.6.27, but could easily be reworked against current.
>
>
>
>---------------------- drivers/net/bonding/bond_main.c -----------------------
>index 8499558..e4445d8 100644
>@@ -4313,20 +4313,33 @@ static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_d
> read_lock(&bond->lock);
> read_lock(&bond->curr_slave_lock);
>
> if (!BOND_IS_OK(bond)) {
> goto out;
> }
>
> if (!bond->curr_active_slave)
> goto out;
>
>+ /* Verify that the active slave is actually up before
>+ * trying to send packets. If it isn't, then
>+ * trigger the selection of a new active slave.
>+ */
>+ if (!IS_UP(bond->curr_active_slave->dev)) {
>+ read_unlock(&bond->curr_slave_lock);
>+ write_lock(&bond->curr_slave_lock);
>+ bond_select_active_slave(bond);
>+ write_unlock(&bond->curr_slave_lock);
>+ read_lock(&bond->curr_slave_lock);
>+ if (!bond->curr_active_slave)
>+ goto out;
>+ }
The problem here is going to be that bond_select_active_slave()
should be called with RTNL held (because the notifier calls it makes
require RTNL), and I'm not sure it's permissible to acquire RTNL in a
driver transmit function.
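One way to sidestep that constraint, sketched below with invented names (a user-space model, not a patch): have the transmit path merely flag the condition and defer the actual reselection to a work item running in process context, where taking RTNL is permissible.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of deferring reselection out of the xmit path (invented names). */
struct bond_model {
    bool active_up;
    bool reselect_pending;   /* set from xmit, cleared by the worker */
    int  reselects;          /* how many times the worker ran */
};

/* Fast path: cheap check only; defer the real work. */
static void xmit_path(struct bond_model *b)
{
    if (!b->active_up)
        b->reselect_pending = true;   /* schedule_work() in real code */
}

/* Slow path: pretend we hold RTNL here and pick a new slave. */
static void reselect_worker(struct bond_model *b)
{
    if (!b->reselect_pending)
        return;
    b->reselect_pending = false;
    b->active_up = true;              /* stand-in for bond_select_active_slave() */
    b->reselects++;
}
```

The cost is that frames queued between the flag being set and the worker running are still lost, so this narrows the window rather than closing it.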
-J
> res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev);
>
> out:
> if (res) {
> /* no suitable interface, frame not sent */
> dev_kfree_skb(skb);
> }
> read_unlock(&bond->curr_slave_lock);
> read_unlock(&bond->lock);
> return 0;
>
>Chris
>
>
>
>
>On 03/27/2009 06:00 PM, Jay Vosburgh wrote:
>> Chris Friesen<cfriesen@nortel.com> wrote:
>>
>>> In a much earlier version of the bonding driver we ran into problems
>>> where we could have lost carrier on one of the slaves, but at the time
>>> of xmit the bonding driver hadn't yet switched to a better slave.
>>> Because of this we added a patch very much like the one below.
>>>
>>> A quick glance at the current bonding code would seem to indicate that
>>> there could still be a window between the active slave device losing
>>> carrier and calling netif_carrier_off() and the miimon code actually
>>> detecting the loss of the carrier and selecting a new active slave.
>>> Do I have this correct? If so, would the patch below be correct?
>>
>> Yes, the window is equal to whatever the monitoring interval is
>> (for miimon) or double the interval for ARP.
>>
>> Your patch, I think, would work, but it's suboptimal in that it
>> only affects one mode, and doesn't resolve any of the bigger issues with
>> the link monitoring system in bonding (see below). Trying to do the
>> equivalent in other modes may have issues; some modes require RTNL to be
>> held when changing slave states, so it's difficult to do that from the
>> transmit routine.
>>
>>> On a related note--assuming the net driver can detect link loss and
>>> is properly calling netif_carrier_off() why do we still need to poll
>>> the status in the bonding driver? Isn't there some way to hook into
>>> the network stack and get notified when the carrier goes down?
>>
>> This is actually something I'm working on now.
>>
>> There are notifier callbacks that are tied to a driver calling
>> netif_carrier_on or _off. The problem is that a bunch of older (mostly
>> 10/100, although acenic is a gigabit) drivers don't do netif_carrier_on
>> or _off, or check their link state on a ridiculously long interval, so
>> simply dropping the current miimon implementation and replacing it with
>> the event notifier may not be feasible for backwards compatibility
>> reasons. Heck, I've still got 3c59x and acenic cards in my test
>> systems, neither of which do netif_carrier correctly; I can't be the
>> only one.
>>
>> An additional goal is to permit the state change notifications
>> (or miimon) and the ARP monitor to run concurrently. Sadly, the current
>> "link state" system can't handle two things simultaneously poking at the
>> slave's link state; if, e.g., ARP says down, but MII/notifiers says up,
>> then the link state can flap, so it needs a sort of "arbitrator."
>>
>> A minor advantage of reworking all of that is that it should end
>> up being less code when all done, and should be more modular, so it'd be
>> easier if somebody wanted to add, say, an ICMP probe monitor.
>>
>> I'll probably be posting an RFC patch next week.
>>
>> -J
>>
>> ---
>> -Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
>>
>
>
>--
>Chris Friesen
>Software Developer
>GENBAND
>chris.friesen@genband.com
>www.genband.com
>
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Ben Hutchings @ 2012-02-11 18:52 UTC (permalink / raw)
To: Jay Vosburgh; +Cc: Chris Friesen, andy, netdev
On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
> Chris Friesen <chris.friesen@genband.com> wrote:
>
> >I'm resurrecting an ancient discussion I had with Jay, because I think
> >the issue described below is still present and the code he talked about
> >submitting to close it doesn't appear to have ever gone in.
>
> Yah, I never got it to work quite right; I don't remember
> exactly why.
>
> >Basically in active/backup mode with mii monitoring there is a window
> >between the active slave device losing carrier and calling
> >netif_carrier_off() and the miimon code actually detecting the loss of
> >the carrier and selecting a new active slave.
> >
> >The best solution would be for bonding to just register for notification
> >of the link going down. Presumably most drivers should be doing that
> >properly by now, and for devices that get interrupt-driven notification
> >of link status changes this would allow the bonding code to react much
> >quicker.
>
> A quick look at some drivers shows that at least acenic still
> doesn't do netif_carrier_off, so converting entirely to a notifier-based
> failover mechanism would break drivers that work today.
[...]
It might be worth having some sort of feature flag (in priv_flags) that
indicates whether the driver updates the link state. Alternately,
disable polling of a device once you see a notification.
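Ben's second idea — stop polling a slave once it proves it reports carrier itself — could look roughly like this (user-space sketch with invented names, not the bonding code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: keep polling a slave until the first carrier notification
 * proves the driver reports link state itself, then stop polling it. */
struct slave_state {
    bool link_up;
    bool needs_poll;   /* start true: assume the driver is silent */
};

static void slave_init(struct slave_state *s)
{
    s->link_up = true;
    s->needs_poll = true;
}

/* Called from the (hypothetical) netdev notifier path. */
static void on_carrier_event(struct slave_state *s, bool up)
{
    s->link_up = up;
    s->needs_poll = false;  /* driver evidently does netif_carrier_* */
}

/* Called from the periodic miimon work item. */
static void on_poll_tick(struct slave_state *s, bool mii_says_up)
{
    if (s->needs_poll)
        s->link_up = mii_says_up;
    /* else: trust the notifier, ignore the poll result */
}
```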
Ben.
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Chris Friesen @ 2012-02-13 18:16 UTC (permalink / raw)
To: Ben Hutchings; +Cc: Jay Vosburgh, andy, netdev
On 02/11/2012 12:52 PM, Ben Hutchings wrote:
> On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
>> Chris Friesen<chris.friesen@genband.com> wrote:
>>> The best solution would be for bonding to just register for notification
>>> of the link going down. Presumably most drivers should be doing that
>>> properly by now, and for devices that get interrupt-driven notification
>>> of link status changes this would allow the bonding code to react much
>>> quicker.
>>
>> A quick look at some drivers shows that at least acenic still
>> doesn't do netif_carrier_off, so converting entirely to a notifier-based
>> failover mechanism would break drivers that work today.
> [...]
>
> It might be worth having some sort of feature flag (in priv_flags) that
> indicates whether the driver updates the link state. Alternately,
> disable polling of a device once you see a notification.
This makes a lot of sense to me... it is suboptimal to still be polling
when most people who care about bonding reliability are going to be
using Ethernet hardware with interrupt-based link change notification.
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Stephen Hemminger @ 2012-02-13 18:48 UTC (permalink / raw)
To: Chris Friesen; +Cc: Ben Hutchings, Jay Vosburgh, andy, netdev
On Mon, 13 Feb 2012 12:16:59 -0600
Chris Friesen <chris.friesen@genband.com> wrote:
> On 02/11/2012 12:52 PM, Ben Hutchings wrote:
> > On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
> >> Chris Friesen<chris.friesen@genband.com> wrote:
>
> >>> The best solution would be for bonding to just register for notification
> >>> of the link going down. Presumably most drivers should be doing that
> >>> properly by now, and for devices that get interrupt-driven notification
> >>> of link status changes this would allow the bonding code to react much
> >>> quicker.
> >>
> >> A quick look at some drivers shows that at least acenic still
> >> doesn't do netif_carrier_off, so converting entirely to a notifier-based
> >> failover mechanism would break drivers that work today.
> > [...]
> >
> > It might be worth having some sort of feature flag (in priv_flags) that
> > indicates whether the driver updates the link state. Alternately,
> > disable polling of a device once you see a notification.
Just fix the drivers to update link state.
The whole mii polling method of bonding is really leftover from the era of
10 years ago when network drivers were stupid and didn't handle carrier.
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Chris Friesen @ 2012-02-13 19:18 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Ben Hutchings, Jay Vosburgh, andy, netdev
On 02/13/2012 12:48 PM, Stephen Hemminger wrote:
> On Mon, 13 Feb 2012 12:16:59 -0600
> Chris Friesen<chris.friesen@genband.com> wrote:
>
>> On 02/11/2012 12:52 PM, Ben Hutchings wrote:
>>> On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
>>>> Chris Friesen<chris.friesen@genband.com> wrote:
>>
>>>>> The best solution would be for bonding to just register for notification
>>>>> of the link going down. Presumably most drivers should be doing that
>>>>> properly by now, and for devices that get interrupt-driven notification
>>>>> of link status changes this would allow the bonding code to react much
>>>>> quicker.
>>>>
>>>> A quick look at some drivers shows that at least acenic still
>>>> doesn't do netif_carrier_off, so converting entirely to a notifier-based
>>>> failover mechanism would break drivers that work today.
>>> [...]
>>>
>>> It might be worth having some sort of feature flag (in priv_flags) that
>>> indicates whether the driver updates the link state. Alternately,
>>> disable polling of a device once you see a notification.
>
> Just fix the drivers to update link state.
> The whole mii polling method of bonding is really leftover from the era of
> 10 years ago when network drivers were stupid and didn't handle carrier.
In the interest of getting the bonding driver fixed sooner rather than
later, I'd prefer something that didn't require fixing up all the
network drivers first.
Once all the drivers are fixed up (assuming people care enough about
older drivers to do so) then we could remove the option and make it
mandatory.
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Ben Hutchings @ 2012-02-13 20:24 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Chris Friesen, Jay Vosburgh, andy, netdev
On Mon, 2012-02-13 at 10:48 -0800, Stephen Hemminger wrote:
> On Mon, 13 Feb 2012 12:16:59 -0600
> Chris Friesen <chris.friesen@genband.com> wrote:
>
> > On 02/11/2012 12:52 PM, Ben Hutchings wrote:
> > > On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
> > >> Chris Friesen<chris.friesen@genband.com> wrote:
> >
> > >>> The best solution would be for bonding to just register for notification
> > >>> of the link going down. Presumably most drivers should be doing that
> > >>> properly by now, and for devices that get interrupt-driven notification
> > >>> of link status changes this would allow the bonding code to react much
> > >>> quicker.
> > >>
> > >> A quick look at some drivers shows that at least acenic still
> > >> doesn't do netif_carrier_off, so converting entirely to a notifier-based
> > >> failover mechanism would break drivers that work today.
> > > [...]
> > >
> > > It might be worth having some sort of feature flag (in priv_flags) that
> > > indicates whether the driver updates the link state. Alternately,
> > > disable polling of a device once you see a notification.
>
> Just fix the drivers to update link state.
> The whole mii polling method of bonding is really leftover from the era of
> 10 years ago when network drivers were stupid and didn't handle carrier.
Lots of hardware doesn't generate link interrupts. Our SFC4000 was
supposed to generate events for link changes, but this didn't work
reliably and so we poll regularly in the driver. I think the older
drivers fail to update carrier because of similar hardware limitations.
If you want to remove link polling from the bonding driver then it has
to live *somewhere*. Rather than requiring every affected driver to
implement the timer or delayed work item, I would suggest you put that
in the networking core and then require drivers to either provide a link
polling function or specify that they don't require polling. Then
export the obvious implementations using ethtool or MII so that drivers
don't have to replicate those.
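A minimal model of that division of labour (invented types — `fake_netdev` is not the real `net_device`): drivers either supply a poll callback or leave it NULL to declare that they report link changes themselves, and the core timer only touches the former.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of core-owned link polling (invented types, not net core API). */
struct fake_netdev {
    bool carrier;
    /* NULL means "driver reports link itself, never poll me" */
    bool (*link_poll)(struct fake_netdev *);
};

/* One polling pass run from a (hypothetical) core timer. */
static void core_link_poll_pass(struct fake_netdev **devs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (devs[i]->link_poll)
            devs[i]->carrier = devs[i]->link_poll(devs[i]);
}

/* Stand-in for a generic MII/ethtool-based poll implementation. */
static bool always_down_poll(struct fake_netdev *dev)
{
    (void)dev;
    return false;
}
```

The generic MII and ethtool implementations Ben mentions would slot in as ready-made `link_poll` functions exported by the core.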
Ben.
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
* Re: [BUG?] bonding, slave selection, carrier loss, etc.
From: Jay Vosburgh @ 2012-02-13 20:37 UTC (permalink / raw)
To: Ben Hutchings; +Cc: Stephen Hemminger, Chris Friesen, andy, netdev
Ben Hutchings <bhutchings@solarflare.com> wrote:
>On Mon, 2012-02-13 at 10:48 -0800, Stephen Hemminger wrote:
>> On Mon, 13 Feb 2012 12:16:59 -0600
>> Chris Friesen <chris.friesen@genband.com> wrote:
>>
>> > On 02/11/2012 12:52 PM, Ben Hutchings wrote:
>> > > On Fri, 2012-02-10 at 17:53 -0800, Jay Vosburgh wrote:
>> > >> Chris Friesen<chris.friesen@genband.com> wrote:
>> >
>> > >>> The best solution would be for bonding to just register for notification
>> > >>> of the link going down. Presumably most drivers should be doing that
>> > >>> properly by now, and for devices that get interrupt-driven notification
>> > >>> of link status changes this would allow the bonding code to react much
>> > >>> quicker.
>> > >>
>> > >> A quick look at some drivers shows that at least acenic still
>> > >> doesn't do netif_carrier_off, so converting entirely to a notifier-based
>> > >> failover mechanism would break drivers that work today.
>> > > [...]
>> > >
>> > > It might be worth having some sort of feature flag (in priv_flags) that
>> > > indicates whether the driver updates the link state. Alternately,
>> > > disable polling of a device once you see a notification.
>>
>> Just fix the drivers to update link state.
>> The whole mii polling method of bonding is really leftover from the era of
>> 10 years ago when network drivers were stupid and didn't handle carrier.
>
>Lots of hardware doesn't generate link interrupts. Our SFC4000 was
>supposed to generate events for link changes, but this didn't work
>reliably and so we poll regularly in the driver. I think the older
>drivers fail to update carrier because of similar hardware limitations.
>
>If you want to remove link polling from the bonding driver then it has
>to live *somewhere*. Rather than requiring every affected driver to
>implement the timer or delayed work item, I would suggest you put that
>in the networking core and then require drivers to either provide a link
>polling function or specify that they don't require polling. Then
>export the obvious implementations using ethtool or MII so that drivers
>don't have to replicate those.
I think it's probably better all around to leave the miimon
(link polling) stuff in bonding alone for those drivers that need it,
and then add a notifier check that will do link down/up on demand if the
particular device does netif_carrier (which will be the majority).
If bonding is running miimon and gets a notifier from a driver,
then it can stop the polling (as Ben suggests). For the usual case
(drivers that support netif_carrier), this will be right after the
device is enslaved, because devices are enslaved in a down state and are
set administratively up as part of the enslavement process.
The only tricky bits are:
- ensuring that the ARP monitor and the notifiers don't conflict
if there is disagreement about the link state and cause flapping of the
perceived link state.
- handling drivers like 3c59x that do their own handling, but
run on a very long poll in the driver (5 seconds for 3c59x). I suspect
that if use_carrier=0 is set in bonding, then continuing to run the
miimon poll would handle this for most devices (because use_carrier=0
instructs bonding to check the device mii registers rather than relying
on the driver to set carrier). If use_carrier=0 doesn't work, then
bonding wouldn't detect a link change any faster than the driver is
reporting it anyway.
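The arbitrator Jay alludes to could, under one possible "any monitor saying down wins" policy, be modelled like this (invented names, user-space sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of a link-state arbitrator (invented, not bonding code):
 * each monitor holds its own verdict; the combined state only moves
 * when the verdicts allow it, so one disagreeing source cannot flap
 * the perceived link. */
enum verdict { V_UNKNOWN, V_UP, V_DOWN };

struct link_arbiter {
    enum verdict carrier;  /* from netif_carrier notifier / miimon */
    enum verdict arp;      /* from the ARP monitor */
    bool link_up;          /* last agreed state */
};

static void arbiter_update(struct link_arbiter *a)
{
    if (a->carrier == V_UP && a->arp != V_DOWN)
        a->link_up = true;
    else if (a->carrier == V_DOWN || a->arp == V_DOWN)
        a->link_up = false;
    /* otherwise: keep the previous state rather than flapping */
}
```

With "down wins", sustained disagreement (ARP down, carrier up) parks the link in the down state instead of flapping; other policies would fit the same shape.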
-J
---
-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
Thread overview: 8+ messages
[not found] <49CD5B93.7010407@nortel.com>
[not found] ` <31087.1238198438@death.nxdomain.ibm.com>
2012-02-10 23:47 ` [BUG?] bonding, slave selection, carrier loss, etc Chris Friesen
2012-02-11 1:53 ` Jay Vosburgh
2012-02-11 18:52 ` Ben Hutchings
2012-02-13 18:16 ` Chris Friesen
2012-02-13 18:48 ` Stephen Hemminger
2012-02-13 19:18 ` Chris Friesen
2012-02-13 20:24 ` Ben Hutchings
2012-02-13 20:37 ` Jay Vosburgh