* Help.
@ 2004-04-01 16:56 Jason C. Leach
2004-04-01 17:00 ` Help Måns Rullgård
2004-04-02 9:23 ` hardware raid with IDE == bad? Mauricio
0 siblings, 2 replies; 24+ messages in thread
From: Jason C. Leach @ 2004-04-01 16:56 UTC (permalink / raw)
To: linux-raid
Help.
^ permalink raw reply [flat|nested] 24+ messages in thread
* RAID10 Layouts
@ 2009-08-21 13:27 Info
2009-08-21 16:43 ` Goswin von Brederlow
0 siblings, 1 reply; 24+ messages in thread
From: Info @ 2009-08-21 13:27 UTC (permalink / raw)
To: linux-raid
Hello list,
I'm researching RAID10, trying to find the best setup for a two-drive SATA system. I have two WD 2TB drives for a media computer, and the most important requirement is data redundancy. I realize that RAID is no substitute for backups, but this is a backup for the backups, and the purpose here is data safety. The secondary goal is speed. It appears that RAID10 can give both.
My first question is about the RAID10 layout. Studying the man pages, it seems that the far layout gives 95% of the speed of RAID0, but with more seeking on writes, and that the offset layout retains much of that benefit while making writes more efficient. Which should I prefer, far or offset? Are they equally robust?
How safe is the data in far or offset mode? If a drive fails, will a complete, usable, bootable system exist on the other drive? (These two are the only drives in the system, which runs Debian Testing with Debian kernel 2.6.30-5.) Do I need any special GRUB settings?
What about this Intel firmware 'RAID'? Would it assist in any way? How does it relate (if at all) to the Linux md system? Should I set the BIOS to RAID, or leave it at AHCI?
How does this look:
# mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: RAID10 Layouts
2009-08-21 13:27 RAID10 Layouts Info
@ 2009-08-21 16:43 ` Goswin von Brederlow
2009-08-21 18:02 ` Info
0 siblings, 1 reply; 24+ messages in thread
From: Goswin von Brederlow @ 2009-08-21 16:43 UTC (permalink / raw)
To: Info; +Cc: linux-raid
Info@quantum-sci.net writes:
> Hello list,
>
> I'm researching RAID10, trying to find the best setup for a two-drive
> SATA system. I have two WD 2TB drives for a media computer, and the
> most important requirement is data redundancy. I realize that RAID is
> no substitute for backups, but this is a backup for the backups, and
> the purpose here is data safety. The secondary goal is speed. It
> appears that RAID10 can give both.
>
> My first question is about the RAID10 layout. Studying the man pages,
> it seems that the far layout gives 95% of the speed of RAID0, but
> with more seeking on writes, and that the offset layout retains much
> of that benefit while making writes more efficient. Which should I
> prefer, far or offset? Are they equally robust?
All raid10 layouts offer the same robustness. Which layout is best for
you really depends on your use case; probably the biggest factor will
be the average file size. My experience is that with large files the
far layout does not cost noticeable write speed while reading twice as
fast as raid1.
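For reference, the layout is chosen with mdadm's --layout option at
creation time. A quick sketch of the three variants (the device names
are only examples):
# near copies (n2) -- behaves like a plain mirror
mdadm --create /dev/md0 --level=raid10 --layout=n2 --raid-devices=2 /dev/sda2 /dev/sdb2
# far copies (f2) -- near-RAID0 sequential reads, longer seeks on writes
mdadm --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=2 /dev/sda2 /dev/sdb2
# offset copies (o2) -- a compromise between near and far
mdadm --create /dev/md0 --level=raid10 --layout=o2 --raid-devices=2 /dev/sda2 /dev/sdb2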
> How safe is the data in far or offset mode? If a drive fails, will
> a complete, usable, bootable system exist on the other drive?
> (These two are the only drives in the system, which runs Debian
> Testing with Debian kernel 2.6.30-5.) Do I need any special GRUB
> settings?
I don't think lilo or grub1 can boot from raid10 at all with offset or
far copies. With near copies the layout is identical to a simple raid1,
so that would boot.
So to be bootable even with a failed drive you should partition the
disk. Create a small raid1 for the system and a large raid10 for the
data.
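Something like this, as a sketch (sizes and partition numbers are only
an example):
# small raid1 for / so lilo/grub can boot from it
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
# big raid10 with far copies for the data
mdadm --create /dev/md1 --level=raid10 --layout=f2 --raid-devices=2 /dev/sda2 /dev/sdb2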
> What about this Intel firmware 'RAID'? Would it assist in any
> way? How does it relate (if at all) to the Linux md system?
> Should I set the BIOS to RAID, or leave it at AHCI?
I would stay away from any half-baked BIOS stuff. It will be no better
than Linux software raid but will tie you to that specific BIOS. If
your mainboard fails and the next one has a different BIOS, you can't
boot your disks.
> How does this look:
> # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
On partitions it is safe to use the 1.1 format. Saves you 4k. Yippee.
You should play with the chunk size though, and try with and without a
bitmap and with different bitmap sizes. A bitmap costs some write
performance, but it greatly speeds up resyncs after a crash or a
temporary drive failure.
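The bitmap can be added and removed later without recreating the
array, so it is cheap to experiment with; roughly:
# add an internal write-intent bitmap to an existing array
mdadm --grow /dev/md0 --bitmap=internal
# remove it again if the write penalty turns out too high
mdadm --grow /dev/md0 --bitmap=none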
MfG
Goswin
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: RAID10 Layouts
2009-08-21 16:43 ` Goswin von Brederlow
@ 2009-08-21 18:02 ` Info
2009-08-21 19:20 ` Help Info
0 siblings, 1 reply; 24+ messages in thread
From: Info @ 2009-08-21 18:02 UTC (permalink / raw)
To: linux-raid
Thank you Goswin.
On Friday 21 August 2009 09:43:28 Goswin von Brederlow wrote:
> I don't think lilo or grub1 can boot from raid10 at all with offset or
> far copies. With near copies the layout is identical to a simple
> raid1, so that would boot.
>
> So to be bootable even with a failed drive you should partition the
> disk. Create a small raid1 for the system and a large raid10 for the
> data.
Uh oh, I already set all 3 partitions for RAID10, but haven't switched over yet.
As it happens my / is on sda1 and /home is on sda3 (swap is sda2), so it'll be pretty easy to just make / RAID1. Do I need to make swap RAID1 rather than 10?
> I would stay away from any half-baked BIOS stuff. It will be no better
> than Linux software raid but will tie you to that specific BIOS. If
> your mainboard fails and the next one has a different BIOS, you can't
> boot your disks.
Thank you.
> > How does this look:
> > # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
>
> On partitions it is safe to use the 1.1 format. Saves you 4k. Yippee.
4k of what? One time only, or on every cluster? Any additional benefit to 1.2?
My system records MPEG-4 from DishNetwork satellite (R5000-HD), so it handles mostly files over 1GB. However, its most rigorous duty is scanning those videos for commercials and marking locations in a MySQL database. The disk light is constantly on and system response is sluggish while this is being done. I don't understand how an advanced drive like this can be so bogged down, but I hope RAID10 will speed things up. Maybe there is a way to increase the disk cache size?
> You should play with the chunk size though, and try with and without a
> bitmap and with different bitmap sizes. A bitmap costs some write
> performance, but it greatly speeds up resyncs after a crash or a
> temporary drive failure.
My partitions and data are so enormous that I can't really do any experimenting. I will definitely use a write-intent bitmap.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-21 18:02 ` Info
@ 2009-08-21 19:20 ` Info
2009-08-21 19:38 ` Help John Robinson
2009-08-22 6:14 ` Help Info
0 siblings, 2 replies; 24+ messages in thread
From: Info @ 2009-08-21 19:20 UTC (permalink / raw)
To: linux-raid
My God, the command is not working. I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid10 sdb3[1]
1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 94/446 pages [376KB], 2048KB chunk
md1 : active raid10 sdb2[1]
6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 0/25 pages [0KB], 128KB chunk
md0 : active raid10 sdb1[1]
78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 76/151 pages [304KB], 256KB chunk
unused devices: <none>
#
My system is half-converted and is now unbootable. What am I going to do?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-21 19:20 ` Help Info
@ 2009-08-21 19:38 ` John Robinson
2009-08-21 20:51 ` Help Info
2009-08-22 6:14 ` Help Info
1 sibling, 1 reply; 24+ messages in thread
From: John Robinson @ 2009-08-21 19:38 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 21/08/2009 20:20, Info@quantum-sci.net wrote:
> My God, the command is not working. I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md2 : active raid10 sdb3[1]
> 1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 94/446 pages [376KB], 2048KB chunk
>
> md1 : active raid10 sdb2[1]
> 6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 0/25 pages [0KB], 128KB chunk
>
> md0 : active raid10 sdb1[1]
> 78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 76/151 pages [304KB], 256KB chunk
Well, it won't let you remove the only thing keeping the array active.
Stop the array first with `mdadm --stop /dev/md0`. After that I think
you can just create your new RAID-1 array without doing anything else.
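Roughly this sequence, assuming sdb1 is the only member left (as your
mdstat shows) and that you don't need the data currently on it, since
re-creating the array destroys it:
# stop the degraded raid10 so the member device is released
mdadm --stop /dev/md0
# wipe the old raid10 superblock so nothing re-assembles it
mdadm --zero-superblock /dev/sdb1
# create the raid1, leaving the second slot open for sda1 later
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 missing /dev/sdb1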
Cheers,
John.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-21 19:20 ` Help Info
2009-08-21 19:38 ` Help John Robinson
@ 2009-08-22 6:14 ` Info
2009-08-22 9:34 ` Help NeilBrown
1 sibling, 1 reply; 24+ messages in thread
From: Info @ 2009-08-22 6:14 UTC (permalink / raw)
To: linux-raid
I'm not able to boot from my RAID devices. md0 is / (ext3, RAID1), but md1 and md2 are swap and JFS respectively (RAID10), created like this:
mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2 --chunk=256 --raid-disks=2 missing /dev/sdb2
It gives the initial kernel boot message but then says
invalid raid superblock magic on sdb2
invalid raid superblock magic on sdb3
... and halts progress. I have to hard-reset to continue. Why isn't the error more specific?
I've tried setting the metadata to 1.1, and tried adjusting mdadm.conf from /dev/md/1 to /dev/md1, but neither helped. The partitions are set to raid autodetect and the kernel parameter is set to md_autodetect. What could be wrong?
On Friday 21 August 2009 12:20:28 Info@quantum-sci.net wrote:
>
> My God, the command is not working. I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md2 : active raid10 sdb3[1]
> 1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 94/446 pages [376KB], 2048KB chunk
>
> md1 : active raid10 sdb2[1]
> 6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 0/25 pages [0KB], 128KB chunk
>
> md0 : active raid10 sdb1[1]
> 78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 76/151 pages [304KB], 256KB chunk
>
> unused devices: <none>
> #
>
>
> My system is half-converted and is now unbootable. What am I going to do?
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 6:14 ` Help Info
@ 2009-08-22 9:34 ` NeilBrown
2009-08-22 12:56 ` Help Info
0 siblings, 1 reply; 24+ messages in thread
From: NeilBrown @ 2009-08-22 9:34 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On Sat, August 22, 2009 4:14 pm, Info@quantum-sci.net wrote:
>
> I'm not able to boot from my RAID devices. md0 is / (ext3, RAID1), but
> md1 and md2 are swap and JFS respectively (RAID10), created like this:
> mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2
> --chunk=256 --raid-disks=2 missing /dev/sdb2
>
> It gives the initial kernel boot message but then says
> invalid raid superblock magic on sdb2
> invalid raid superblock magic on sdb3
>
> ... and halts progress. I have to hard-reset to continue. Why isn't the
> error more specific?
You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.
'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
You should not use 'raid autodetect' partitions; rather, the initrd
should use mdadm to assemble the arrays. Most distros seem to get this
right these days. Maybe you just need to rebuild your
initrd...
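As a sketch, all the initramfs really needs is an mdadm.conf naming
the arrays (the UUIDs below are placeholders; generate real entries
with `mdadm --detail --scan`) plus the assembly call it runs at boot:
# /etc/mdadm/mdadm.conf
ARRAY /dev/md0 metadata=1.2 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 metadata=1.2 UUID=11111111:11111111:11111111:11111111
# what the initramfs effectively runs early in the boot
mdadm --assemble --scan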
NeilBrown
>
> I've tried setting the metadata to 1.1, and tried adjusting mdadm.conf
> from /dev/md/1 to /dev/md1, but neither helped. The partitions are set
> to raid autodetect and the kernel parameter is set to md_autodetect.
> What could be wrong?
>
>
>
> On Friday 21 August 2009 12:20:28 Info@quantum-sci.net wrote:
>>
>> My God, the command is not working. I need to remove sdb1 from md0 so I
>> can change it from a RAID10 to RAID1, and it simply ignores my command:
>> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
>> # cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
>> [raid4] [multipath]
>> md2 : active raid10 sdb3[1]
>> 1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>> bitmap: 94/446 pages [376KB], 2048KB chunk
>>
>> md1 : active raid10 sdb2[1]
>> 6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>> bitmap: 0/25 pages [0KB], 128KB chunk
>>
>> md0 : active raid10 sdb1[1]
>> 78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>> bitmap: 76/151 pages [304KB], 256KB chunk
>>
>> unused devices: <none>
>> #
>>
>>
>> My system is half-converted and is now unbootable. What am I going to
>> do?
>>
>>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 9:34 ` Help NeilBrown
@ 2009-08-22 12:56 ` Info
2009-08-22 16:47 ` Help John Robinson
0 siblings, 1 reply; 24+ messages in thread
From: Info @ 2009-08-22 12:56 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 02:34:12 NeilBrown wrote:
> You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.
Thanks, Neil. However, that was an early attempt, before I knew RAID10 won't boot.
> 'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
> You should not use 'raid autodetect' partitions. Rather the initrd
> should use mdadm to assemble the arrays. Most distros seem to get this
> right these days. Maybe you just need to rebuild your
> initrd...
I am not using an initrd; I have all the RAID and disk drivers built into the (custom-compiled) kernel. The initrd uses mdadm to assemble the arrays? Maybe that is the problem.
I am using this procedure to build a RAID array from a live system:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
It is very lucid; however, I am slightly modifying it to use RAID10 on my second and third partitions. When I come to
update-initramfs -u
... the only initrd it updates is for an old stock kernel. It doesn't build one for any of my compiled kernels.
What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
Looking at dmesg, it does say that md1 and md2 do not have a valid v0.90 superblock. There is no other Linux raid partition type, so I guess it's got to be v0.90. Why do they make 1.1 and 1.2 then, if they do not work?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 12:56 ` Help Info
@ 2009-08-22 16:47 ` John Robinson
2009-08-22 18:12 ` Help Info
0 siblings, 1 reply; 24+ messages in thread
From: John Robinson @ 2009-08-22 16:47 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 22/08/2009 13:56, Info@quantum-sci.net wrote:
[...]
> It is very lucid; however, I am slightly modifying it to use RAID10 on my second and third partitions. When I come to
> update-initramfs -u
> ... the only initrd it updates is for an old stock kernel. It doesn't build one for any of my compiled kernels.
You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or
something similar with which you can build initramfs images for any kernel.
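For example, something like this (a sketch; the version string must
match a directory under /lib/modules):
# Debian: create an initramfs for a specific kernel version
update-initramfs -c -k 2.6.30-5
# Fedora/RHEL/CentOS equivalent
mkinitrd /boot/initrd-2.6.30-5.img 2.6.30-5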
> What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
Probably type DA, Non-FS data, though type FD will be fine even if
they're not auto-detected.
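If you want to change the type in place, something like this should
do it (a sketch; the disk and partition number are examples):
# set partition 3 of /dev/sda to type da (non-FS data)
sfdisk --change-id /dev/sda 3 da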
> Looking at dmesg, it does say that md1 and md2 do not have a valid v0.90 superblock. There is no other Linux raid partition type, so I guess it's got to be v0.90. Why do they make 1.1 and 1.2 then, if they do not work?
The newer metadata types have their benefits. Auto-detection is being
deprecated; I think it's because things which are only for boot-up time
are being pushed out of the permanently-loaded kernel into initramfs, so
they don't hang around wasting space on a running system. For example,
CentOS 5 uses autodetection, while Fedora 10 automatically puts mdadm in
the initramfs and runs it at the right time.
Cheers,
John.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 16:47 ` Help John Robinson
@ 2009-08-22 18:12 ` Info
2009-08-22 20:45 ` Help Info
2009-08-23 20:28 ` Help John Robinson
0 siblings, 2 replies; 24+ messages in thread
From: Info @ 2009-08-22 18:12 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 09:47:48 John Robinson wrote:
> You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or
> something similar with which you can build initramfs images for any kernel.
OK, once I changed the version to 0.90 it stopped right at the kernel banner on boot and hung. I was about to give up on RAID when your message came through, and I created the initrd.img file. I always compile my own kernels and don't depend on an initrd, but one now seems to be necessary. So, in Debian:
# mkinitramfs -o /boot/initrd.img-2.6.30-5 2.6.30-5
... reboot, and voilà, it did what it was supposed to for a change. I'm now resyncing my 2TB drives, which will take a good while.
> > What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
>
> Probably type DA, Non-FS data, though type FD will be fine even if
> they're not auto-detected.
It simply found 'bad magic' with FD, so that doesn't work with the newer versions. I tried both newer versions, but it's not possible. You don't sound quite sure about the partition type, so I'll stick with FD and 0.90. Thanks though, John.
Goswin says, "For scanning your videos raid10 with far layout is probably best with
a large read ahead." I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 18:12 ` Help Info
@ 2009-08-22 20:45 ` Info
2009-08-22 20:59 ` Help Guy Watkins
2009-08-23 20:28 ` Help John Robinson
1 sibling, 1 reply; 24+ messages in thread
From: Info @ 2009-08-22 20:45 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 11:12:35 Info@quantum-sci.net wrote:
> Goswin says, "For scanning your videos raid10 with far layout is probably best with
> a large read ahead." I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?
My gosh, it turns out this setting is astounding. You test your drive's speed with some large file, like so:
# time dd if={somelarge}.iso of=/dev/null bs=256k
... and check your drive's default readahead setting:
# blockdev --getra /dev/sda
256
... then test with various settings like 1024, 1536, 2048, 4096, 8192, and maybe 16384:
# blockdev --setra 4096 /dev/sda
Here are the results for my laptop. I can't test the HTPC with the array yet, as it's still syncing.
256 40.4 MB/s
1024 123 MB/s
1536 2.7 GB/s
2048 2.4 GB/s
4096 2.4 GB/s
8192 2.4 GB/s
16384 2.5 GB/s
I suspect it's best to use the smallest readahead that gives the best speed (in my case 1536), for two reasons:
- To save memory;
- So there isn't such a performance impact when the blocks are not sequential.
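A small loop makes the sweep easier to repeat (a sketch; adjust the
device and point it at your own large test file):
#!/bin/sh
# try a range of readahead values and time a large sequential read
for ra in 256 1024 1536 2048 4096 8192 16384; do
    blockdev --setra $ra /dev/sda
    echo "readahead=$ra"
    time dd if=/path/to/large.iso of=/dev/null bs=256k
done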
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: Help
2009-08-22 20:45 ` Help Info
@ 2009-08-22 20:59 ` Guy Watkins
[not found] ` <200908230631.46865.Info@quantum-sci.net>
0 siblings, 1 reply; 24+ messages in thread
From: Guy Watkins @ 2009-08-22 20:59 UTC (permalink / raw)
To: Info, linux-raid
} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Info@quantum-sci.net
} Sent: Saturday, August 22, 2009 4:45 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: Help
}
} On Saturday 22 August 2009 11:12:35 Info@quantum-sci.net wrote:
} > Goswin says, "For scanning your videos raid10 with far layout is
} probably best with
} > a large read ahead." I have the RAID10 blocksize set to 1024 for the
} video partition, but any idea how to set readahead?
}
} My gosh, it turns out this setting is astounding. You test your
} drive's speed with some large file, like so:
} # time dd if={somelarge}.iso of=/dev/null bs=256k
}
} ... and check your drive's default readahead setting:
} # blockdev --getra /dev/sda
} 256
}
} ... then test with various settings like 1024, 1536, 2048, 4096, 8192, and
} maybe 16384:
} # blockdev --setra 4096 /dev/sda
}
} Here are the results for my laptop. I can't test the HTPC with the array
} yet, as it's still syncing.
} 256 40.4 MB/s
} 1024 123 MB/s
} 1536 2.7 GB/s
} 2048 2.4 GB/s
} 4096 2.4 GB/s
} 8192 2.4 GB/s
} 16384 2.5 GB/s
}
} I suspect it's best to use the smallest readahead that gives the best
} speed (in my case 1536), for two reasons:
} - To save memory;
} - So there isn't such a performance impact when the blocks are not
} sequential.
The disk cache is being used. You should reboot between each test, use a
file much bigger than the amount of RAM you have, or use a different file
each time.
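A lighter-weight option than rebooting, if the kernel is 2.6.16 or
newer (needs root):
# flush the page cache (and dentries/inodes) before each timing run
sync
echo 3 > /proc/sys/vm/drop_caches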
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: Help
2009-08-22 18:12 ` Help Info
2009-08-22 20:45 ` Help Info
@ 2009-08-23 20:28 ` John Robinson
1 sibling, 0 replies; 24+ messages in thread
From: John Robinson @ 2009-08-23 20:28 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 22/08/2009 19:12, Info@quantum-sci.net wrote:
> On Saturday 22 August 2009 09:47:48 John Robinson wrote:
[...]
>>> What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
>> Probably type DA, Non-FS data, though type FD will be fine even if
>> they're not auto-detected.
>
> It simply found 'bad magic' with FD, so that doesn't work with the newer versions. I tried both newer versions, but it's not possible. You don't sound quite sure about the partition type, so I'll stick with FD and 0.90. Thanks though, John.
I said "probably" DA because that's what's been suggested by others
previously on this list. Others have simply used 83, but that's not
ideal because if the partitions appear to have filesystems on (e.g. the
metadata's not at the beginning), they might get auto-mounted without md
RAID. I'm sure FD will work fine with later metadata versions as long as
you have mdadm in your initramfs, and while as you've noted there'll be
a whinge in the boot log about it not being version 0.90, it's not going
to cause the kernel to lock up or anything like that.
Cheers,
John.
^ permalink raw reply [flat|nested] 24+ messages in thread
* help
@ 2006-08-23 19:21 Archie Cotton
0 siblings, 0 replies; 24+ messages in thread
From: Archie Cotton @ 2006-08-23 19:21 UTC (permalink / raw)
To: linux-net
^ permalink raw reply [flat|nested] 24+ messages in thread
* Help
@ 2006-02-04 2:21 Oren Ben-Menachem
0 siblings, 0 replies; 24+ messages in thread
From: Oren Ben-Menachem @ 2006-02-04 2:21 UTC (permalink / raw)
To: linux-raid
^ permalink raw reply [flat|nested] 24+ messages in thread
* help
@ 2004-10-20 5:05 Srinivasa S
2004-10-20 5:50 ` help Guy
2004-10-21 1:47 ` help Jon Lewis
0 siblings, 2 replies; 24+ messages in thread
From: Srinivasa S @ 2004-10-20 5:05 UTC (permalink / raw)
To: linux-raid
I've a RAID 5 setup with 3 disks of 1 GB each, which I'm using for
experimentation. A resync is in progress and is not ending at all; it's
almost 15 hrs since the resync started, and "cat /proc/mdstat" always
shows something like this:
Personalities : [raid5]
md0 : active raid5 sdj1[2] sdi1[1] sdh1[0]
2002688 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]
[>....................] resync = 0.0% (0/1001344)
finish=442287.5min speed=0K/sec
Is there anything wrong? Can I possibly stop the resync? The problem
I'm having is that whichever process tries to do I/O with the array
hangs. Please help. Thanks.
srinivasa s
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: help
2004-10-20 5:05 help Srinivasa S
@ 2004-10-20 5:50 ` Guy
2004-10-21 1:47 ` help Jon Lewis
1 sibling, 0 replies; 24+ messages in thread
From: Guy @ 2004-10-20 5:50 UTC (permalink / raw)
To: 'Srinivasa S', linux-raid
Your ETA is 307 days! Your computer will be obsolete by then! :)
Something is wrong; a re-sync of an array this small should take less
than 10 minutes. But I don't know what the problem is.
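A few things worth checking, roughly (assuming md0 is the array):
# the kernel's resync speed limits, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# watch whether the resync position moves at all
watch cat /proc/mdstat
# as a last resort, unmount everything and stop the array
mdadm --stop /dev/md0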
Guy
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Srinivasa S
Sent: Wednesday, October 20, 2004 1:06 AM
To: linux-raid@vger.kernel.org
Subject: help
I've a RAID 5 setup with 3 disks of 1 GB each, which I'm using for
experimentation. A resync is in progress and is not ending at all; it's
almost 15 hrs since the resync started, and "cat /proc/mdstat" always
shows something like this:
Personalities : [raid5]
md0 : active raid5 sdj1[2] sdi1[1] sdh1[0]
2002688 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]
[>....................] resync = 0.0% (0/1001344)
finish=442287.5min speed=0K/sec
Is there anything wrong? Can I possibly stop the resync? The problem
I'm having is that whichever process tries to do I/O with the array
hangs. Please help. Thanks.
srinivasa s
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: help
2004-10-20 5:05 help Srinivasa S
2004-10-20 5:50 ` help Guy
@ 2004-10-21 1:47 ` Jon Lewis
1 sibling, 0 replies; 24+ messages in thread
From: Jon Lewis @ 2004-10-21 1:47 UTC (permalink / raw)
To: Srinivasa S; +Cc: linux-raid
On Wed, 20 Oct 2004, Srinivasa S wrote:
> I've a RAID 5 setup with 3 disks of 1 GB each, which I'm using for
> experimentation. A resync is in progress and is not ending at all;
> it's almost 15 hrs since the resync started, and "cat /proc/mdstat"
> always shows something like this:
>
> Personalities : [raid5]
> md0 : active raid5 sdj1[2] sdi1[1] sdh1[0]
> 2002688 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]
> [>....................] resync = 0.0% (0/1001344)
> finish=442287.5min speed=0K/sec
What kind of disks? My bad Maxtor SATA drive caused similar issues.
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
^ permalink raw reply [flat|nested] 24+ messages in thread
* help
@ 2004-02-01 13:13 Rami Addady
0 siblings, 0 replies; 24+ messages in thread
From: Rami Addady @ 2004-02-01 13:13 UTC (permalink / raw)
To: linux-raid
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2009-08-27 12:47 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-04-01 16:56 Help Jason C. Leach
2004-04-01 17:00 ` Help Måns Rullgård
2004-04-02 9:23 ` hardware raid with IDE == bad? Mauricio
-- strict thread matches above, loose matches on Subject: below --
2009-08-21 13:27 RAID10 Layouts Info
2009-08-21 16:43 ` Goswin von Brederlow
2009-08-21 18:02 ` Info
2009-08-21 19:20 ` Help Info
2009-08-21 19:38 ` Help John Robinson
2009-08-21 20:51 ` Help Info
2009-08-22 6:14 ` Help Info
2009-08-22 9:34 ` Help NeilBrown
2009-08-22 12:56 ` Help Info
2009-08-22 16:47 ` Help John Robinson
2009-08-22 18:12 ` Help Info
2009-08-22 20:45 ` Help Info
2009-08-22 20:59 ` Help Guy Watkins
[not found] ` <200908230631.46865.Info@quantum-sci.net>
2009-08-24 23:08 ` Help Info
2009-08-24 23:38 ` Help NeilBrown
2009-08-25 13:18 ` Help Info
2009-08-27 12:47 ` Help Info
2009-08-23 20:28 ` Help John Robinson
2006-08-23 19:21 help Archie Cotton
2006-02-04 2:21 Help Oren Ben-Menachem
2004-10-20 5:05 help Srinivasa S
2004-10-20 5:50 ` help Guy
2004-10-21 1:47 ` help Jon Lewis
2004-02-01 13:13 help Rami Addady
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).