public inbox for linux-kernel@vger.kernel.org
* RAID5: mkraid --force /dev/md0 doesn't work properly
@ 2001-10-01  0:29 Evan Harris
  2001-10-01  0:41 ` Jakob Østergaard
  0 siblings, 1 reply; 8+ messages in thread
From: Evan Harris @ 2001-10-01  0:29 UTC (permalink / raw)
  To: Linux Kernel List


And yes, I'm using the real --force option.  :)

I have a 6 disk RAID5 scsi array that had one disk go offline through a
dying power supply, taking the array into degraded mode, and then another
went offline a couple of hours later from what I think was a loose cable.

The first drive to go offline was /dev/sde1.
The second to go offline was /dev/sdd1.

Both drives are actually fine after fixing the connection problems and a
reboot, but since the superblocks are out of sync, it won't init.

Here's the output from a raidstart /dev/md0:

(read) sdd1's sb offset: 35840896 [events: 00000009]
(read) sde1's sb offset: 35840896 [events: 00000008]
(read) sdf1's sb offset: 35840896 [events: 0000000b]
(read) sdg1's sb offset: 35840896 [events: 0000000b]
(read) sdh1's sb offset: 35840896 [events: 0000000b]
(read) sdi1's sb offset: 35840896 [events: 0000000b]
autorun ...
considering sdi1 ...
  adding sdi1 ...
  adding sdh1 ...
  adding sdg1 ...
  adding sdf1 ...
  adding sde1 ...
  adding sdd1 ...
created md0
bind<sdd1,1>
bind<sde1,2>
bind<sdf1,3>
bind<sdg1,4>
bind<sdh1,5>
bind<sdi1,6>
running: <sdi1><sdh1><sdg1><sdf1><sde1><sdd1>
now!
sdi1's event counter: 0000000b
sdh1's event counter: 0000000b
sdg1's event counter: 0000000b
sdf1's event counter: 0000000b
sde1's event counter: 00000008
sdd1's event counter: 00000009
md: superblock update time inconsistency -- using the most recent one
freshest: sdi1
md: kicking non-fresh sde1 from array!
unbind<sde1,5>
export_rdev(sde1)
md: kicking non-fresh sdd1 from array!
unbind<sdd1,4>
export_rdev(sdd1)
md0: removing former faulty sdd1!
md0: removing former faulty sde1!
md0: max total readahead window set to 5120k
md0: 5 data-disks, max readahead per data-disk: 1024k
raid5: device sdi1 operational as raid disk 5
raid5: device sdh1 operational as raid disk 4
raid5: device sdg1 operational as raid disk 3
raid5: device sdf1 operational as raid disk 2
raid5: not enough operational devices for md0 (2/6 failed)
RAID5 conf printout:
 --- rd:6 wd:4 fd:2
 disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:sdf1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:sdg1
 disk 4, s:0, o:1, n:4 rd:4 us:1 dev:sdh1
 disk 5, s:0, o:1, n:5 rd:5 us:1 dev:sdi1
raid5: failed to run raid set md0
pers->run() failed ...
do_md_run() returned -22
md0 stopped.
unbind<sdi1,3>
export_rdev(sdi1)
unbind<sdh1,2>
export_rdev(sdh1)
unbind<sdg1,1>
export_rdev(sdg1)
unbind<sdf1,0>
export_rdev(sdf1)
... autorun DONE.
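The "kicking non-fresh" decision above boils down to a superblock event-counter comparison. A toy sketch of that logic, with the counters copied from the log (illustrative only, not the kernel code; tie-breaking among equal counters is arbitrary here):

```python
# Toy model of md 0.90 "freshness" selection, using the event
# counters from the raidstart log above.
events = {
    "sdd1": 0x09, "sde1": 0x08, "sdf1": 0x0b,
    "sdg1": 0x0b, "sdh1": 0x0b, "sdi1": 0x0b,
}

newest = max(events.values())
fresh = {d for d, e in events.items() if e == newest}
kicked = sorted(set(events) - fresh)

print("fresh members:", sorted(fresh))   # sdf1 sdg1 sdh1 sdi1
print("kicked (non-fresh):", kicked)     # sdd1, sde1

# RAID5 tolerates exactly one missing member; here two are stale,
# so the array cannot start (the "2/6 failed" error above).
assert len(events) - len(fresh) > 1
```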

I set the first disk that went offline out with a failed-disk directive, and
tried to recover with a:

mkraid --force /dev/md0

I'm _positive_ that the /etc/raidtab is correct, but it fails to force the
update with:

DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
raid_disk conflict on /dev/sde1 and /dev/sdi1 (1)
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

Nothing is in syslog, and
mdstat only has:

Personalities : [raid5]
read_ahead not set
unused devices: <none>

Why won't --force go ahead and force the reset of the superblocks?  Even
though I'm sure there will be some filesystem inconsistencies, they should
be minor, and nearly all of the data should be recoverable if only mkraid
would go ahead and force it, so that it could be raidstart'ed.

Is there any lower level tool that will do what mkraid --force should but
isn't?  The data on this raid represents a large chunk of time invested.

Any help would be much appreciated.  Web searches have not turned up any
useful info, and I can't seem to find a more recent version of raidtools
than 0.90.0 from 990824.

For info, here is my raidtab:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           6
        nr-spare-disks          0
        chunk-size              256
        persistent-superblock   1
        device                  /dev/sdd1
        raid-disk               0
        device                  /dev/sde1
        raid-disk               1
        device                  /dev/sdf1
        raid-disk               2
        device                  /dev/sdg1
        raid-disk               3
        device                  /dev/sdh1
        raid-disk               4
        device                  /dev/sdi1
        raid-disk               5
        failed-disk     1

Thanks in advance.

Evan

-- 
| Evan Harris - eharris@puremagic.com - All flames to /dev/nul
|
| RIP Bill Hicks - "I don't mean to sound cold or cruel or vicious... but I
|                   am, so that's the way it comes out."



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01  0:29 Evan Harris
@ 2001-10-01  0:41 ` Jakob Østergaard
  2001-10-01  0:51   ` Evan Harris
  0 siblings, 1 reply; 8+ messages in thread
From: Jakob Østergaard @ 2001-10-01  0:41 UTC (permalink / raw)
  To: Evan Harris; +Cc: Linux Kernel List

On Sun, Sep 30, 2001 at 07:29:06PM -0500, Evan Harris wrote:
> 
> And yes, I'm using the real --force option.  :)

Good (hush now, it's a secret ;)

> 
> I have a 6 disk RAID5 scsi array that had one disk go offline through a
> dying power supply, taking the array into degraded mode, and then another
> went offline a couple of hours later from what I think was a loose cable.
> 
> The first drive to go offline was /dev/sde1.
> The second to go offline was /dev/sdd1.
> 
> Both drives are actually fine after fixing the connection problems and a
> reboot, but since the superblocks are out of sync, it won't init.

Ok.

...
[huge snip]
...
> 
> I set the first disk that went offline out with a failed-disk directive, and
> tried to recover with a:
> 
> mkraid --force /dev/md0

Good !

(to anyone reading this without having read the docs:  don't pull this trick
 unless you absolutely positively understand the consequences of screwing up
 here)

> 
> I'm _positive_ that the /etc/raidtab is correct, but it fails to force the
> update with:
> 
> DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
> handling MD device /dev/md0
> analyzing super-block
> raid_disk conflict on /dev/sde1 and /dev/sdi1 (1)
> mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
...

Read on


[snip]
> For info, here is my raidtab:
> 
> raiddev /dev/md0
>         raid-level              5
>         nr-raid-disks           6
>         nr-spare-disks          0
>         chunk-size              256
>         persistent-superblock   1
>         device                  /dev/sdd1
>         raid-disk               0
>         device                  /dev/sde1
>         raid-disk               1
>         device                  /dev/sdf1
>         raid-disk               2
>         device                  /dev/sdg1
>         raid-disk               3
>         device                  /dev/sdh1
>         raid-disk               4
>         device                  /dev/sdi1
>         raid-disk               5
>         failed-disk     1


Wrong !   device /dev/sdi1 is raid-disk 5, not failed-disk 1,
that's why mkraid is confused.

What you want is:
      device                  /dev/sdd1
      raid-disk               0
      device                  /dev/sde1
      raid-disk               1
      device                  /dev/sdf1
      raid-disk               2
      device                  /dev/sdg1
      raid-disk               3
      device                  /dev/sdh1
      raid-disk               4
      device                  /dev/sdi1
      failed-disk               5


Good luck,

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:


* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01  0:41 ` Jakob Østergaard
@ 2001-10-01  0:51   ` Evan Harris
  2001-10-01  3:56     ` Jakob Østergaard
  0 siblings, 1 reply; 8+ messages in thread
From: Evan Harris @ 2001-10-01  0:51 UTC (permalink / raw)
  To: Jakob Østergaard; +Cc: Linux Kernel List



Thanks for the fast reply!

I'm not sure I understand why drive 5 should be failed.  It is one of the
four disks with the most recently correct superblocks.  The disk with the
oldest superblock is #1.  Can you point me to documentation which explains
this better?  I'm a little afraid of doing that without reading more on it,
since it seems to mark yet another of the 4 remaining "good" drives as
"bad".

Also, should the failed-disk directive be substituted for the raid-disk
directive (as your example was), or should it be:

       device                   /dev/sdd1
       raid-disk                0
       device                   /dev/sde1
       raid-disk                1
       device                   /dev/sdf1
       raid-disk                2
       device                   /dev/sdg1
       raid-disk                3
       device                   /dev/sdh1
       raid-disk                4
       device                   /dev/sdi1
       raid-disk		5
       failed-disk              5

or should it really be:

       device                   /dev/sdd1
       raid-disk                0
       device                   /dev/sde1
       raid-disk                1
       failed-disk              1
       device                   /dev/sdf1
       raid-disk                2
       device                   /dev/sdg1
       raid-disk                3
       device                   /dev/sdh1
       raid-disk                4
       device                   /dev/sdi1
       raid-disk                5

Thanks!

Evan

-- 
| Evan Harris - Consultant, Harris Enterprises - eharris@puremagic.com
|
| Custom Solutions for your Software, Networking, and Telephony Needs

On Mon, 1 Oct 2001, Jakob Østergaard wrote:

> On Sun, Sep 30, 2001 at 07:29:06PM -0500, Evan Harris wrote:
> >
> > And yes, I'm using the real --force option.  :)
>
> Good (hush now, it's a secret ;)
>
> >
> > I have a 6 disk RAID5 scsi array that had one disk go offline through a
> > dying power supply, taking the array into degraded mode, and then another
> > went offline a couple of hours later from what I think was a loose cable.
> >
> > The first drive to go offline was /dev/sde1.
> > The second to go offline was /dev/sdd1.
> >
> > Both drives are actually fine after fixing the connection problems and a
> > reboot, but since the superblocks are out of sync, it won't init.
>
> Ok.
>
> ...
> [huge snip]
> ...
> >
> > I set the first disk that went offline out with a failed-disk directive, and
> > tried to recover with a:
> >
> > mkraid --force /dev/md0
>
> Good !
>
> (to anyone reading this without having read the docs:  don't pull this trick
>  unless you absolutely positively understand the consequences of screwing up
>  here)
>
> >
> > I'm _positive_ that the /etc/raidtab is correct, but it fails to force the
> > update with:
> >
> > DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
> > handling MD device /dev/md0
> > analyzing super-block
> > raid_disk conflict on /dev/sde1 and /dev/sdi1 (1)
> > mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
> ...
>
> Read on
>
>
> [snip]
> > For info, here is my raidtab:
> >
> > raiddev /dev/md0
> >         raid-level              5
> >         nr-raid-disks           6
> >         nr-spare-disks          0
> >         chunk-size              256
> >         persistent-superblock   1
> >         device                  /dev/sdd1
> >         raid-disk               0
> >         device                  /dev/sde1
> >         raid-disk               1
> >         device                  /dev/sdf1
> >         raid-disk               2
> >         device                  /dev/sdg1
> >         raid-disk               3
> >         device                  /dev/sdh1
> >         raid-disk               4
> >         device                  /dev/sdi1
> >         raid-disk               5
> >         failed-disk     1
>
>
> Wrong !   device /dev/sdi1 is raid-disk 5, not failed-disk 1,
> that's why mkraid is confused.
>
> What you want is:
>       device                  /dev/sdd1
>       raid-disk               0
>       device                  /dev/sde1
>       raid-disk               1
>       device                  /dev/sdf1
>       raid-disk               2
>       device                  /dev/sdg1
>       raid-disk               3
>       device                  /dev/sdh1
>       raid-disk               4
>       device                  /dev/sdi1
>       failed-disk               5
>
>
> Good luck,
>
> --
> ................................................................
> :   jakob@unthought.net   : And I see the elder races,         :
> :.........................: putrid forms of man                :
> :   Jakob Østergaard      : See him rise and claim the earth,  :
> :        OZ9ABN           : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
>



* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01  0:51   ` Evan Harris
@ 2001-10-01  3:56     ` Jakob Østergaard
  2001-10-01  6:25       ` Evan Harris
  0 siblings, 1 reply; 8+ messages in thread
From: Jakob Østergaard @ 2001-10-01  3:56 UTC (permalink / raw)
  To: Evan Harris; +Cc: Linux Kernel List

On Sun, Sep 30, 2001 at 07:51:25PM -0500, Evan Harris wrote:
> 
> Thanks for the fast reply!
> 
> I'm not sure I understand why drive 5 should be failed.  It is one of the
> four disks with the most recently correct superblocks.  The disk with the
> oldest superblock is #1.  Can you point me to documentation which explains
> this better?  I'm a little afraid of doing that without reading more on it,
> since it seems to mark yet another of the 4 remaining "good" drives as
> "bad".

Oh, sorry,   of course the oldest disk should be marked as failed.

But the way you mark a disk failed is to replace "raid-disk" with "failed-disk".

What you did in your configuration was to say that sde1 was disk 1, and sdi1 was
disk 5 *AND* disk 1 *AND* it was failed.

Replace "raid-disk" with "failed-disk" for the device that you want to mark
as failed.  Don't touch the numbers.
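Applied to the raidtab from the first mail, that would give something like the following (sde1 carries the oldest superblock, so it gets the failed-disk directive; slot numbers and device order untouched):

```
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           6
        nr-spare-disks          0
        chunk-size              256
        persistent-superblock   1
        device                  /dev/sdd1
        raid-disk               0
        device                  /dev/sde1
        failed-disk             1
        device                  /dev/sdf1
        raid-disk               2
        device                  /dev/sdg1
        raid-disk               3
        device                  /dev/sdh1
        raid-disk               4
        device                  /dev/sdi1
        raid-disk               5
```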

Cheers,

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:


* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01  3:56     ` Jakob Østergaard
@ 2001-10-01  6:25       ` Evan Harris
  2001-10-01  7:36         ` Peter Svensson
  0 siblings, 1 reply; 8+ messages in thread
From: Evan Harris @ 2001-10-01  6:25 UTC (permalink / raw)
  To: Jakob Østergaard; +Cc: Linux Kernel List



Ok, thanks.  I did that and it worked.  But I have (unfortunately) one more
question about how raid disks are used.  I've now remade and restarted the
raid, having left the oldest drive (/dev/sde1) as a failed-disk.  I do a
raidhotadd /dev/md0 /dev/sde1, and this starts the raid parity rebuild and
gives this status in /proc/mdstat:

md0 : active raid5 sde1[6] sdi1[5] sdh1[4] sdg1[3] sdf1[2] sdd1[0]
      179203840 blocks level 5, 256k chunk, algorithm 0 [6/5] [U_UUUU]
      [=>...................]  recovery =  8.4% (3023688/35840768)
finish=88.9min speed=6148K/sec

Now, my question is: the hotadd seems to have reordered the disks, so when
the rebuild is completed, do I need to reorder my raidtab to reflect this?
Like this?

        device                  /dev/sdd1
        raid-disk               0
        device                  /dev/sdf1
        raid-disk               1
        device                  /dev/sdg1
        raid-disk               2
        device                  /dev/sdh1
        raid-disk               3
        device                  /dev/sdi1
        raid-disk               4
        device                  /dev/sde1
        raid-disk               5

Or does the kernel still keep the drives in order as the raidtab already is,
even though they seem to be out of order in the syslog and /proc/mdstat?  If
I have to force the recreation of the superblocks at some later point, which
way will keep the data from being lost?

Thanks.  Evan

-- 
| Evan Harris - Consultant, Harris Enterprises - eharris@puremagic.com
|
| Custom Solutions for your Software, Networking, and Telephony Needs

On Mon, 1 Oct 2001, Jakob Østergaard wrote:

> On Sun, Sep 30, 2001 at 07:51:25PM -0500, Evan Harris wrote:
> >
> > Thanks for the fast reply!
> >
> > I'm not sure I understand why drive 5 should be failed.  It is one of the
> > four disks with the most recently correct superblocks.  The disk with the
> > oldest superblock is #1.  Can you point me to documentation which explains
> > this better?  I'm a little afraid of doing that without reading more on it,
> > since it seems to mark yet another of the 4 remaining "good" drives as
> > "bad".
>
> Oh, sorry,   of course the oldest disk should be marked as failed.
>
> But the way you mark a disk failed is to replace "raid-disk" with "failed-disk".
>
> What you did in your configuration was to say that sde1 was disk 1, and sdi1 was
> disk 5 *AND* disk 1 *AND* it was failed.
>
> Replace "raid-disk" with "failed-disk" for the device that you want to mark
> as failed.  Don't touch the numbers.
>
> Cheers,
>
> --
> ................................................................
> :   jakob@unthought.net   : And I see the elder races,         :
> :.........................: putrid forms of man                :
> :   Jakob Østergaard      : See him rise and claim the earth,  :
> :        OZ9ABN           : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
>



* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01  6:25       ` Evan Harris
@ 2001-10-01  7:36         ` Peter Svensson
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Svensson @ 2001-10-01  7:36 UTC (permalink / raw)
  To: Evan Harris; +Cc: Jakob Østergaard, Linux Kernel List

On Mon, 1 Oct 2001, Evan Harris wrote:

>
> md0 : active raid5 sde1[6] sdi1[5] sdh1[4] sdg1[3] sdf1[2] sdd1[0]
>       179203840 blocks level 5, 256k chunk, algorithm 0 [6/5] [U_UUUU]
>       [=>...................]  recovery =  8.4% (3023688/35840768)
> finish=88.9min speed=6148K/sec
>
> Now, my question is: the hotadd seems to have reordered the disks, so when
> the rebuild is completed, do I need to reorder my raidtab to reflect this?
> Like this?

Once the resync has completed, the hotadded disk will drop into its slot,
i.e. there is no need to change the numbers in /etc/raidtab; they will be
correct once the array has recovered.

Peter
--
Peter Svensson      ! Pgp key available by finger, fingerprint:
<petersv@psv.nu>    ! 8A E9 20 98 C1 FF 43 E3  07 FD B9 0A 80 72 70 AF
------------------------------------------------------------------------
Remember, Luke, your source will be with you... always...




* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
@ 2001-10-01 12:09 Chris Andrews
  2001-10-02  5:09 ` Jakob Østergaard
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Andrews @ 2001-10-01 12:09 UTC (permalink / raw)
  To: linux-kernel

Evan Harris (eharris@puremagic.com) said:

> I have a 6 disk RAID5 scsi array that had one disk go offline through a
> dying power supply, taking the array into degraded mode, and then another
> went offline a couple of hours later from what I think was a loose cable.

I had much the same happen, except that I lost 6 disks out of 12 (power
failure to one external rack of two), so I had no chance of starting in
degraded mode. In this situation, where there are not enough disks for a
viable raid, what is the recommended solution? In my case, there was nothing
wrong with the six disks, but their superblock event counters were out of
step.

Is the best idea to modify /etc/raidtab as discussed, and run mkraid with
the real force option? What I actually did was to hand-edit the superblocks
on the disks, and got the array going. That experience would lead me to
suggest that there's room for some more options to allow the use of disks
where there's actually nothing wrong, but right now the raid code won't use
them. I'm thinking of a set of '--ignore' options to raidstart:
--ignore-eventcounter, --ignore-failedflag, etc, which an admin could use as
an alternative to trying mkraid.

Right now it seems that software-raid works well, until it doesn't, at which
point you're stuck - there's very little in the way of tools or overrides to
sort problems out. Something other than 'try mkraid force as a last resort'
would be useful.

(If anyone thinks this is a good idea, yes, I am volunteering to provide
patches...)

Chris.



* Re: RAID5: mkraid --force /dev/md0 doesn't work properly
  2001-10-01 12:09 RAID5: mkraid --force /dev/md0 doesn't work properly Chris Andrews
@ 2001-10-02  5:09 ` Jakob Østergaard
  0 siblings, 0 replies; 8+ messages in thread
From: Jakob Østergaard @ 2001-10-02  5:09 UTC (permalink / raw)
  To: Chris Andrews; +Cc: linux-kernel

On Mon, Oct 01, 2001 at 01:09:50PM +0100, Chris Andrews wrote:
> Evan Harris (eharris@puremagic.com) said:
> 
> > I have a 6 disk RAID5 scsi array that had one disk go offline through a
> > dying power supply, taking the array into degraded mode, and then another
> > went offline a couple of hours later from what I think was a loose cable.
> 
> I had much the same happen, except that I lost 6 disks out of 12 (power
> failure to one external rack of two), so I had no chance of starting in
> degraded mode. In this situation, where there are not enough disks for a
> viable raid, what is the recommended solution? In my case, there was nothing
> wrong with the six disks, but their superblock event counters were out of
> step.

The solution is exactly the same as before:  re-create the array with N-1
disks, so that parity reconstruction will not begin.

Find the "oldest" disk and mark that one as failed - in case you lose an
entire rack of disks, any one of those should do.

Re-create the RAID, fsck (and don't worry about quite some inconsistency), and
most of your data should be back.
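In command form, the sequence sketched above looks roughly like this (raidtools 0.90; device names are examples taken from this thread, and remember that mkraid --force rewrites the superblocks, so a raidtab with re-ordered disks destroys the data):

```shell
# 1. In /etc/raidtab, change "raid-disk N" to "failed-disk N" for the
#    member with the oldest event counter; leave N and the order alone.
# 2. Rewrite the superblocks without triggering parity reconstruction:
mkraid --force /dev/md0
# 3. Expect some filesystem inconsistency; let fsck clean it up:
fsck -y /dev/md0
# 4. When satisfied the data is back, re-add the failed member to resync:
raidhotadd /dev/md0 /dev/sde1
```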

If you screw up (eg. re-order disks), your data will never know what hit them.

> 
> Is the best idea to modify /etc/raidtab as discussed, and run mkraid with
> the real force option? What I actually did was to hand-edit the superblocks
> on the disks, and got the array going. That experience would lead me to
> suggest that there's room for some more options to allow the use of disks
> where there's actually nothing wrong, but right now the raid code won't use
> them. I'm thinking of a set of '--ignore' options to raidstart:
> --ignore-eventcounter, --ignore-failedflag, etc, which an admin could use as
> an alternative to trying mkraid.

re-creating the RAID does exactly that: "hand-modifies" the superblocks to
let the array run again.

Your idea is pretty good:  if you did not have to re-write the superblocks from
the raidtab, you would not risk screwing up drive-ordering because of
inconsistent raidtabs.

I'd do a patch if I wasn't busy re-constructing/creating/moving/reconfiguring
RAID arrays right now  ;)

> 
> Right now it seems that software-raid works well, until it doesn't, at which
> point you're stuck - there's very little in the way of tools or overrides to
> sort problems out. Something other than 'try mkraid force as a last resort'
> would be useful.

You're not stuck.  You have plenty of options, just as you stated in your post.

With a hardware solution you'd be *stuck* - not as in "there's no pretty tool",
but as in "game over, sucker!"   ;)

But I agree with you that the process could be improved, and I really like your
suggestion with --ignore-eventcounter (or --try-recover maybe ?).

> 
> (If anyone thinks this is a good idea, yes, I am volunteering to provide
> patches...)

Aha !

I think it's a great idea !

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:


end of thread, other threads:[~2001-10-02  5:09 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-10-01 12:09 RAID5: mkraid --force /dev/md0 doesn't work properly Chris Andrews
2001-10-02  5:09 ` Jakob Østergaard
  -- strict thread matches above, loose matches on Subject: below --
2001-10-01  0:29 Evan Harris
2001-10-01  0:41 ` Jakob Østergaard
2001-10-01  0:51   ` Evan Harris
2001-10-01  3:56     ` Jakob Østergaard
2001-10-01  6:25       ` Evan Harris
2001-10-01  7:36         ` Peter Svensson
