* Suggestion for hot-replace
@ 2012-11-25 6:37 H. Peter Anvin
2012-11-25 10:13 ` Piergiorgio Sartor
2012-11-25 17:59 ` joystick
0 siblings, 2 replies; 12+ messages in thread
From: H. Peter Anvin @ 2012-11-25 6:37 UTC (permalink / raw)
To: Linux RAID Mailing List
I was looking at the hot-replace (want_replacement) feature, and I had a
thought: it would be nice to have this in a form which *didn't* fail the
incumbent drive after the operation is over, and instead turned it into
a spare. This would make it much easier and safer to periodically
rotate and test any hot spares in the system. The main problem with hot
spares is that you don't actually know if they work properly until there
is a failover...
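For reference, today the operation is triggered through the per-device
sysfs state file; a minimal sketch (device names are examples, run as root):

    # Sketch of the current hot-replace flow (example names: md0, sdc).
    # Assumes a spare is already attached to the array.
    from pathlib import Path

    state = Path("/sys/block/md0/md/dev-sdc/state")
    state.write_text("want_replacement\n")
    # md rebuilds onto the spare; once the copy completes, sdc is marked
    # faulty, which is the behaviour this mail proposes to change.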
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
* Re: Suggestion for hot-replace
2012-11-25 6:37 Suggestion for hot-replace H. Peter Anvin
@ 2012-11-25 10:13 ` Piergiorgio Sartor
2012-11-25 12:31 ` Tommy Apel Hansen
2012-11-25 17:59 ` joystick
1 sibling, 1 reply; 12+ messages in thread
From: Piergiorgio Sartor @ 2012-11-25 10:13 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Linux RAID Mailing List
On Sat, Nov 24, 2012 at 10:37:49PM -0800, H. Peter Anvin wrote:
> I was looking at the hot-replace (want_replacement) feature, and I
> had a thought: it would be nice to have this in a form which
> *didn't* fail the incumbent drive after the operation is over, and
> instead turned it into a spare. This would make it much easier and
> safer to periodically rotate and test any hot spares in the system.
> The main problem with hot spares is that you don't actually know if
> they work properly until there is a failover...
>
I'd go for this one.
Actually, this was also my original thinking for
the "proactive replacement".
The only additional thing that should be done is
to keep the spare in sleep mode until it is needed
(either for a hot replacement or for a real replacement).
bye,
--
piergiorgio
* Re: Suggestion for hot-replace
2012-11-25 10:13 ` Piergiorgio Sartor
@ 2012-11-25 12:31 ` Tommy Apel Hansen
2012-11-25 14:51 ` Piergiorgio Sartor
2012-11-25 15:31 ` Roy Sigurd Karlsbakk
0 siblings, 2 replies; 12+ messages in thread
From: Tommy Apel Hansen @ 2012-11-25 12:31 UTC (permalink / raw)
To: Linux RAID Mailing List
On Sunday 25 November 2012 11:13:06 Piergiorgio Sartor wrote:
> On Sat, Nov 24, 2012 at 10:37:49PM -0800, H. Peter Anvin wrote:
> > I was looking at the hot-replace (want_replacement) feature, and I
> > had a thought: it would be nice to have this in a form which
> > *didn't* fail the incumbent drive after the operation is over, and
> > instead turned it into a spare. This would make it much easier and
> > safer to periodically rotate and test any hot spares in the system.
> > The main problem with hot spares is that you don't actually know if
> > they work properly until there is a failover...
>
> I'd go for this one.
>
> Actually, this was also my original thinking for
> the "proactive replacement".
>
> The only additional thing that should be done is
> to keep the spare in sleep mode until it is needed
> (either for a hot replacement or for a real replacement).
>
> bye,
Hello, personally I would vote for an option to rotate spares into an
array like Peter suggests; keeping a drive idle doesn't guarantee that
it's actually operational.
/Tommy
* Re: Suggestion for hot-replace
2012-11-25 12:31 ` Tommy Apel Hansen
@ 2012-11-25 14:51 ` Piergiorgio Sartor
2012-11-25 15:31 ` Roy Sigurd Karlsbakk
1 sibling, 0 replies; 12+ messages in thread
From: Piergiorgio Sartor @ 2012-11-25 14:51 UTC (permalink / raw)
To: Tommy Apel Hansen; +Cc: Linux RAID Mailing List
On Sun, Nov 25, 2012 at 01:31:06PM +0100, Tommy Apel Hansen wrote:
> On Sunday 25 November 2012 11:13:06 Piergiorgio Sartor wrote:
> > On Sat, Nov 24, 2012 at 10:37:49PM -0800, H. Peter Anvin wrote:
> > > I was looking at the hot-replace (want_replacement) feature, and I
> > > had a thought: it would be nice to have this in a form which
> > > *didn't* fail the incumbent drive after the operation is over, and
> > > instead turned it into a spare. This would make it much easier and
> > > safer to periodically rotate and test any hot spares in the system.
> > > The main problem with hot spares is that you don't actually know if
> > > they work properly until there is a failover...
> >
> > I'd go for this one.
> >
> > Actually, this was also my original thinking for
> > the "proactive replacement".
> >
> > The only additional thing that should be done is
> > to keep the spare in sleep mode until it is needed
> > (either for a hot replacement or for a real replacement).
> >
> > bye,
>
> Hello, personally I would vote for an option to rotate spares into an
> array like Peter suggests; keeping a drive idle doesn't guarantee that
> it's actually operational.
The point is that the "Power_On_Hours" parameter of SMART
is quite a good hint about a drive's expected lifetime.
Or, better, that parameter can be used to decide when to
change a disk, independently of anything else.
In other words, it would be possible to decide to change
a disk (change, not rotate with the spare) every 10000 hrs.
If the spares are not idle, then this SMART parameter will
no longer be reliable.
This means that the ideal operation would be to rotate
the spare so that, for example, each disk has a 1000-hour
lifetime difference from all the others.
Let's say a 4+1 HDD RAID-5 should result in disks having
"Power_On_Hours" of 1000, 2000, 3000, 4000 and 5000.
As soon as the oldest disk is X hours older than the spare,
it will be rotated (X could be 1000, in this case).
When a disk reaches 10000 hours (for example), it is removed
from the array and a new spare is required.
Again, this is possible only if the running time of each
disk is tracked properly, which means spares must stay idle.
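As an illustration of such a policy, a sketch only (device names and
thresholds are invented, and smartctl output formats vary by drive):

    # Sketch of the staggered-rotation policy described above.
    import re, subprocess

    MEMBERS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]
    SPARE = "/dev/sde"
    ROTATE_DELTA = 1000   # rotate when the oldest member leads the spare by this
    RETIRE_AT = 10000     # retire any disk past this many hours

    def power_on_hours(dev):
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        return int(re.search(r"Power_On_Hours.*?(\d+)\s*$", out,
                             re.MULTILINE).group(1))

    hours = {d: power_on_hours(d) for d in MEMBERS + [SPARE]}
    oldest = max(MEMBERS, key=hours.get)
    if hours[oldest] >= RETIRE_AT:
        print(f"{oldest}: retire from the array, order a new spare")
    elif hours[oldest] - hours[SPARE] >= ROTATE_DELTA:
        print(f"{oldest}: rotate out, {SPARE} takes its place")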
bye,
--
piergiorgio
* Re: Suggestion for hot-replace
2012-11-25 12:31 ` Tommy Apel Hansen
2012-11-25 14:51 ` Piergiorgio Sartor
@ 2012-11-25 15:31 ` Roy Sigurd Karlsbakk
2012-11-25 15:36 ` Tommy Apel Hansen
2012-11-25 18:01 ` Mikael Abrahamsson
1 sibling, 2 replies; 12+ messages in thread
From: Roy Sigurd Karlsbakk @ 2012-11-25 15:31 UTC (permalink / raw)
To: Tommy Apel Hansen; +Cc: Linux RAID Mailing List
> Hello, personally I would vote for an option to rotate spares into an
> array like Peter suggests; keeping a drive idle doesn't guarantee that
> it's actually operational.
The only problem with this is that, if you do it frequently, it'll degrade performance.
Btw, is there a way to replace a drive without failing one? In RAID-5, a common issue is to have a failed drive and then find bad sectors on another. In this setting (and possibly others), it'd be good to have md replace the drive while it is still active (as can be done in ZFS).
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.
--
* Re: Suggestion for hot-replace
2012-11-25 15:31 ` Roy Sigurd Karlsbakk
@ 2012-11-25 15:36 ` Tommy Apel Hansen
2012-11-25 15:42 ` Piergiorgio Sartor
2012-11-25 18:01 ` Mikael Abrahamsson
1 sibling, 1 reply; 12+ messages in thread
From: Tommy Apel Hansen @ 2012-11-25 15:36 UTC (permalink / raw)
To: Roy Sigurd Karlsbakk; +Cc: Linux RAID Mailing List
On Sunday 25 November 2012 16:31:45 Roy Sigurd Karlsbakk wrote:
> > Hello, personally I would vote for an option to rotate spares into an
> > array like Peter suggests; keeping a drive idle doesn't guarantee that
> > it's actually operational.
>
> The only problem with this is that, if you do it frequently, it'll
> degrade performance.
>
> Btw, is there a way to replace a drive without failing one? In RAID-5, a
> common issue is to have a failed drive and then find bad sectors on
> another. In this setting (and possibly others), it'd be good to have md
> replace the drive while it is still active (as can be done in ZFS).
Well, both options serve a purpose, but say you rotate a spare into the array
and it then fails on spin-up: you would have a degraded array. And since your
implementation plan states that a drive cannot be "older" than X hours, you
would end up in an endless loop, whereas the other option would suggest
zeroing the former drive and reinstating it.
/Tommy
* Re: Suggestion for hot-replace
2012-11-25 15:36 ` Tommy Apel Hansen
@ 2012-11-25 15:42 ` Piergiorgio Sartor
0 siblings, 0 replies; 12+ messages in thread
From: Piergiorgio Sartor @ 2012-11-25 15:42 UTC (permalink / raw)
To: Tommy Apel Hansen; +Cc: Roy Sigurd Karlsbakk, Linux RAID Mailing List
On Sun, Nov 25, 2012 at 04:36:34PM +0100, Tommy Apel Hansen wrote:
> On Sunday 25 November 2012 16:31:45 Roy Sigurd Karlsbakk wrote:
> > > Hello, personally I would vote for an option to rotate spares into an
> > > array like Peter suggests; keeping a drive idle doesn't guarantee that
> > > it's actually operational.
> >
> > The only problem with this is that, if you do it frequently, it'll
> > degrade performance.
> >
> > Btw, is there a way to replace a drive without failing one? In RAID-5, a
> > common issue is to have a failed drive and then find bad sectors on
> > another. In this setting (and possibly others), it'd be good to have md
> > replace the drive while it is still active (as can be done in ZFS).
>
> Well, both options serve a purpose, but say you rotate a spare into the
> array and it then fails on spin-up: you would have a degraded array. And
> since your implementation plan states that a drive cannot be "older" than
> X hours, you would end up in an endless loop, whereas the other option
> would suggest zeroing the former drive and reinstating it.
I do not know if you were replying to my message; anyhow,
the spare can fail in any case, idle or not.
This is a situation the system should be able to
cope with, for example by testing the spare before
starting the hot-replace operation.
Which is good in any case.
bye,
--
piergiorgio
* Re: Suggestion for hot-replace
2012-11-25 6:37 Suggestion for hot-replace H. Peter Anvin
2012-11-25 10:13 ` Piergiorgio Sartor
@ 2012-11-25 17:59 ` joystick
2012-11-25 21:49 ` NeilBrown
1 sibling, 1 reply; 12+ messages in thread
From: joystick @ 2012-11-25 17:59 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: linux-raid
On 11/25/12 07:37, H. Peter Anvin wrote:
> I was looking at the hot-replace (want_replacement) feature, and I had
> a thought: it would be nice to have this in a form which *didn't* fail
> the incumbent drive after the operation is over, and instead turned it
> into a spare. This would make it much easier and safer to
> periodically rotate and test any hot spares in the system. The main
> problem with hot spares is that you don't actually know if they work
> properly until there is a failover...
>
> -hpa
>
Sorry, I don't agree.
Firstly, it causes confusion. If you want a replacement, in 90% of cases
it means that the current drive is defective. If you put the replaced
drive into the spare pool instead of kicking it out, then you have to
remember (by serial number?) which one it was in order to actually
remove it from the system. If you forget to note it down, then you are
in serious trouble, because if that "spare" then gets caught in another
(or the same) array needing a recovery, you will have a high probability
of exotic and unexpected multiple-failure situations.
Also, if you are uncertain of the health of your spares, risking your
array by throwing one into it is definitely unwise. There are other
techniques to test a spare that don't involve risking your array on it:
you can remove one spare from the spare pool (best if you have 2+
spares, but it can also be done with 1), read/write all of it several
times as a validation, then re-add it to the spare pool. Even just
reading it from beginning to end with dd could be enough, and for that
you don't even have to remove it from the spare pool. And this doesn't
degrade the array's performance, while your suggestion would.
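Such a read-only check is trivial to script; a sketch (the device name
is an example):

    # Sketch: read a spare end to end and report the first media error.
    import sys

    DEV = "/dev/sde"            # example device
    CHUNK = 4 * 1024 * 1024     # 4 MiB per read
    done = 0
    with open(DEV, "rb", buffering=0) as f:
        while True:
            try:
                buf = f.read(CHUNK)
            except OSError as e:
                sys.exit(f"read error at byte {done}: {e}")
            if not buf:
                break
            done += len(buf)
    print(f"OK: {done} bytes read without error")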
Thirdly, if you really want that (IMHO unwise) behaviour, it's easy to
implement from userspace without asking the MD developers to do so:
monitor the replacement process, and as soon as you see it terminating
and the target drive in Failed status, remove it and re-add it as a
spare. That's it.
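A sketch of that loop (example names; a real tool would watch events
rather than poll):

    # Sketch of the userspace approach: wait for the replacement to finish,
    # then turn the failed incumbent back into a spare.
    import subprocess, time
    from pathlib import Path

    MD, DEV = "md0", "sdc"      # example array and device
    state = Path(f"/sys/block/{MD}/md/dev-{DEV}/state")

    while "faulty" not in state.read_text():
        time.sleep(10)          # replacement still running

    subprocess.run(["mdadm", f"/dev/{MD}", "--remove", f"/dev/{DEV}"], check=True)
    subprocess.run(["mdadm", f"/dev/{MD}", "--add", f"/dev/{DEV}"], check=True)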
* Re: Suggestion for hot-replace
2012-11-25 15:31 ` Roy Sigurd Karlsbakk
2012-11-25 15:36 ` Tommy Apel Hansen
@ 2012-11-25 18:01 ` Mikael Abrahamsson
1 sibling, 0 replies; 12+ messages in thread
From: Mikael Abrahamsson @ 2012-11-25 18:01 UTC (permalink / raw)
To: Roy Sigurd Karlsbakk; +Cc: Tommy Apel Hansen, Linux RAID Mailing List
On Sun, 25 Nov 2012, Roy Sigurd Karlsbakk wrote:
> Btw, is there a way to replace a drive without failing one? In RAID-5, a
> common issue is to have a failed drive and then find bad sectors on
> another. In this setting (and possibly others), it'd be good to have md
> replace the drive while still active (like what can be done in ZFS).
That is what "want_replacement" does. You need kernel 3.3 or later, however.
<http://pl.digipedia.org/usenet/thread/19071/39605/> is the thread where
someone else asked a similar question.
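The flow is roughly this (a sketch with example device names):

    # Sketch: replace /dev/sdb in md0 without degrading the array.
    import subprocess
    from pathlib import Path

    # 1. attach the new disk as a spare
    subprocess.run(["mdadm", "/dev/md0", "--add", "/dev/sdd"], check=True)
    # 2. ask md to rebuild onto it while sdb stays in service
    Path("/sys/block/md0/md/dev-sdb/state").write_text("want_replacement\n")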
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: Suggestion for hot-replace
2012-11-25 17:59 ` joystick
@ 2012-11-25 21:49 ` NeilBrown
2012-11-25 23:43 ` H. Peter Anvin
0 siblings, 1 reply; 12+ messages in thread
From: NeilBrown @ 2012-11-25 21:49 UTC (permalink / raw)
To: joystick; +Cc: H. Peter Anvin, linux-raid
On Sun, 25 Nov 2012 18:59:19 +0100 joystick <joystick@shiftmail.org> wrote:
> On 11/25/12 07:37, H. Peter Anvin wrote:
> > I was looking at the hot-replace (want_replacement) feature, and I had
> > a thought: it would be nice to have this in a form which *didn't* fail
> > the incumbent drive after the operation is over, and instead turned it
> > into a spare. This would make it much easier and safer to
> > periodically rotate and test any hot spares in the system. The main
> > problem with hot spares is that you don't actually know if they work
> > properly until there is a failover...
> >
> > -hpa
> >
>
> Sorry, I don't agree.
>
> Firstly, it causes confusion. If you want a replacement, in 90% of cases
> it means that the current drive is defective. If you put the replaced
> drive into the spare pool instead of kicking it out, then you have to
> remember (by serial number?) which one it was in order to actually
> remove it from the system. If you forget to note it down, then you are
> in serious trouble, because if that "spare" then gets caught in another
> (or the same) array needing a recovery, you will have a high probability
> of exotic and unexpected multiple-failure situations.
>
> Also, if you are uncertain of the health of your spares, risking your
> array by throwing one into it is definitely unwise. There are other
> techniques to test a spare that don't involve risking your array on it:
> you can remove one spare from the spare pool (best if you have 2+
> spares, but it can also be done with 1), read/write all of it several
> times as a validation, then re-add it to the spare pool. Even just
> reading it from beginning to end with dd could be enough, and for that
> you don't even have to remove it from the spare pool. And this doesn't
> degrade the array's performance, while your suggestion would.
>
> Thirdly, if you really want that (IMHO unwise) behaviour, it's easy to
> implement from userspace without asking the MD developers to do so:
> monitor the replacement process, and as soon as you see it terminating
> and the target drive in Failed status, remove it and re-add it as a
> spare. That's it.
I tend to agree with this position.
However it might make sense to record the reason that a device is marked
faulty and present this via a sysfs variable.
e.g.: manual, manual_replace, write_error, read_error ...
Then mdadm --monitor could notice the appearance of manual_replace faulty
devices and could convert them to spares.
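The monitor side might then look something like this (purely a sketch:
the per-device "fault_reason" attribute is only the proposal above and
does not exist today):

    # Hypothetical sketch of the mdadm --monitor side. The "fault_reason"
    # sysfs attribute is proposed above and does NOT exist.
    import subprocess
    from pathlib import Path

    md = Path("/sys/block/md0/md")
    for dev in md.glob("dev-*"):
        if "faulty" not in (dev / "state").read_text():
            continue
        if (dev / "fault_reason").read_text().strip() != "manual_replace":
            continue
        name = dev.name.removeprefix("dev-")
        subprocess.run(["mdadm", "/dev/md0", "--remove", f"/dev/{name}"], check=True)
        subprocess.run(["mdadm", "/dev/md0", "--add", f"/dev/{name}"], check=True)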
I'm not likely to write this code myself, but I would probably accept patches.
NeilBrown
* Re: Suggestion for hot-replace
2012-11-25 21:49 ` NeilBrown
@ 2012-11-25 23:43 ` H. Peter Anvin
2012-11-26 1:46 ` 王金浦
0 siblings, 1 reply; 12+ messages in thread
From: H. Peter Anvin @ 2012-11-25 23:43 UTC (permalink / raw)
To: NeilBrown, joystick; +Cc: linux-raid
The problem with this is that without automation the array is left with a needlessly faulty drive until the administrator can manually intervene. For automation it can be in the kernel or mdadm, but requiring an extra bit just for that is problematic.
NeilBrown <neilb@suse.de> wrote:
> On Sun, 25 Nov 2012 18:59:19 +0100 joystick <joystick@shiftmail.org> wrote:
>
>> On 11/25/12 07:37, H. Peter Anvin wrote:
>>> I was looking at the hot-replace (want_replacement) feature, and I had
>>> a thought: it would be nice to have this in a form which *didn't* fail
>>> the incumbent drive after the operation is over, and instead turned it
>>> into a spare. This would make it much easier and safer to periodically
>>> rotate and test any hot spares in the system. The main problem with
>>> hot spares is that you don't actually know if they work properly until
>>> there is a failover...
>>>
>>> -hpa
>>>
>>
>> Sorry, I don't agree.
>>
>> Firstly, it causes confusion. If you want a replacement, in 90% of cases
>> it means that the current drive is defective. If you put the replaced
>> drive into the spare pool instead of kicking it out, then you have to
>> remember (by serial number?) which one it was in order to actually
>> remove it from the system. If you forget to note it down, then you are
>> in serious trouble, because if that "spare" then gets caught in another
>> (or the same) array needing a recovery, you will have a high probability
>> of exotic and unexpected multiple-failure situations.
>>
>> Also, if you are uncertain of the health of your spares, risking your
>> array by throwing one into it is definitely unwise. There are other
>> techniques to test a spare that don't involve risking your array on it:
>> you can remove one spare from the spare pool (best if you have 2+
>> spares, but it can also be done with 1), read/write all of it several
>> times as a validation, then re-add it to the spare pool. Even just
>> reading it from beginning to end with dd could be enough, and for that
>> you don't even have to remove it from the spare pool. And this doesn't
>> degrade the array's performance, while your suggestion would.
>>
>> Thirdly, if you really want that (IMHO unwise) behaviour, it's easy to
>> implement from userspace without asking the MD developers to do so:
>> monitor the replacement process, and as soon as you see it terminating
>> and the target drive in Failed status, remove it and re-add it as a
>> spare. That's it.
>
> I tend to agree with this position.
>
> However it might make sense to record the reason that a device is marked
> faulty and present this via a sysfs variable.
>  e.g.: manual, manual_replace, write_error, read_error ...
>
> Then mdadm --monitor could notice the appearance of manual_replace faulty
> devices and could convert them to spares.
>
> I'm not likely to write this code myself, but I would probably accept
> patches.
>
> NeilBrown
--
Sent from my mobile phone. Please excuse brevity and lack of formatting.
* Re: Suggestion for hot-replace
2012-11-25 23:43 ` H. Peter Anvin
@ 2012-11-26 1:46 ` 王金浦
0 siblings, 0 replies; 12+ messages in thread
From: 王金浦 @ 2012-11-26 1:46 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: NeilBrown, joystick, linux-raid
2012/11/26 H. Peter Anvin <hpa@zytor.com>:
> The problem with this is that without automation the array is left with a needlessly faulty drive until the administrator can manually intervene. For automation it can be in the kernel or mdadm, but requiring an extra bit just for that is problematic.
>
> NeilBrown <neilb@suse.de> wrote:
>
>> On Sun, 25 Nov 2012 18:59:19 +0100 joystick <joystick@shiftmail.org> wrote:
>>
>>> On 11/25/12 07:37, H. Peter Anvin wrote:
>>>> I was looking at the hot-replace (want_replacement) feature, and I had
>>>> a thought: it would be nice to have this in a form which *didn't* fail
>>>> the incumbent drive after the operation is over, and instead turned it
>>>> into a spare. This would make it much easier and safer to periodically
>>>> rotate and test any hot spares in the system. The main problem with
>>>> hot spares is that you don't actually know if they work properly until
>>>> there is a failover...
>>>>
>>>> -hpa
>>>>
>>>
>>> Sorry, I don't agree.
>>>
>>> Firstly, it causes confusion. If you want a replacement, in 90% of cases
>>> it means that the current drive is defective. If you put the replaced
>>> drive into the spare pool instead of kicking it out, then you have to
>>> remember (by serial number?) which one it was in order to actually
>>> remove it from the system. If you forget to note it down, then you are
>>> in serious trouble, because if that "spare" then gets caught in another
>>> (or the same) array needing a recovery, you will have a high probability
>>> of exotic and unexpected multiple-failure situations.
>>>
>>> Also, if you are uncertain of the health of your spares, risking your
>>> array by throwing one into it is definitely unwise. There are other
>>> techniques to test a spare that don't involve risking your array on it:
>>> you can remove one spare from the spare pool (best if you have 2+
>>> spares, but it can also be done with 1), read/write all of it several
>>> times as a validation, then re-add it to the spare pool. Even just
>>> reading it from beginning to end with dd could be enough, and for that
>>> you don't even have to remove it from the spare pool. And this doesn't
>>> degrade the array's performance, while your suggestion would.
>>>
>>> Thirdly, if you really want that (IMHO unwise) behaviour, it's easy to
>>> implement from userspace without asking the MD developers to do so:
>>> monitor the replacement process, and as soon as you see it terminating
>>> and the target drive in Failed status, remove it and re-add it as a
>>> spare. That's it.
>>
>> I tend to agree with this position.
>>
>> However it might make sense to record the reason that a device is marked
>> faulty and present this via a sysfs variable.
>>  e.g.: manual, manual_replace, write_error, read_error ...
>>
>> Then mdadm --monitor could notice the appearance of manual_replace faulty
>> devices and could convert them to spares.
>>
>> I'm not likely to write this code myself, but I would probably accept
>> patches.
>>
>> NeilBrown
Hi,
Hannes (cc-ed) is working on a tool, md_monitor, which may meet your requirement.
Quoting from the README:
"
Automatic device failover detection with mdadm and md_monitor
Currently, mdadm detects any I/O failure on a device and will set the
affected device(s) to 'faulty'. The MD array is then set to 'degraded',
but continues to work, provided that enough disks for the given RAID
scenario are present.
The MD array then requires manual interaction to resolve this
situation. 1) If the device had a temporary failure (e.g. connection
loss with the storage array), it can be re-integrated with the degraded
MD array. 2) If the device had a permanent failure, it would need to be
replaced with a spare device.
"
https://github.com/hreinecke/md_monitor
I haven't tried it myself yet.
Regards,
Jack
end of thread, other threads:[~2012-11-26 1:46 UTC | newest]
Thread overview: 12+ messages
2012-11-25 6:37 Suggestion for hot-replace H. Peter Anvin
2012-11-25 10:13 ` Piergiorgio Sartor
2012-11-25 12:31 ` Tommy Apel Hansen
2012-11-25 14:51 ` Piergiorgio Sartor
2012-11-25 15:31 ` Roy Sigurd Karlsbakk
2012-11-25 15:36 ` Tommy Apel Hansen
2012-11-25 15:42 ` Piergiorgio Sartor
2012-11-25 18:01 ` Mikael Abrahamsson
2012-11-25 17:59 ` joystick
2012-11-25 21:49 ` NeilBrown
2012-11-25 23:43 ` H. Peter Anvin
2012-11-26 1:46 ` 王金浦