* "No such device" on --remove
@ 2007-05-08 17:04 Benjamin Schieder
2007-05-08 18:22 ` Michael Tokarev
2007-05-09 11:42 ` Bernd Schubert
0 siblings, 2 replies; 11+ messages in thread
From: Benjamin Schieder @ 2007-05-08 17:04 UTC (permalink / raw)
To: linux-raid
Hi list.
I recently had a crash on my RAID machine and now two out of five RAIDs
don't start anymore. I don't even understand the error:
blindcoder@crazyhorse:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md5 : active raid5 hdh8[0] hde8[3] hdf8[2] hdg8[1]
677139456 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md4 : active raid5 hdh7[0] hdg7[1] hde7[3] hdf7[2]
20988480 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid5 hdh6[2] hdg6[0] hde6[3] hdf6[1]
12000192 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md2 : inactive hdh5[4](S) hdg5[1] hde5[3] hdf5[2]
11983872 blocks
md1 : inactive hdg3[1] hdh3[4](S) hde3[3] hdf3[2]
11984128 blocks
md0 : active raid1 hdh1[0] hde1[3] hdf1[2] hdg1[1]
497856 blocks [4/4] [UUUU]
unused devices: <none>
blindcoder@crazyhorse:~$ su -
Password:
root@crazyhorse:~# mdadm -R /dev/md/2
mdadm: failed to run array /dev/md/2: Input/output error
root@crazyhorse:~# mdadm /dev/md/
0 1 2 3 4 5
root@crazyhorse:~# mdadm /dev/md/2 -r /dev/hdh5
mdadm: hot remove failed for /dev/hdh5: No such device
md1 and md2 are supposed to be raid5 arrays.
The relevant software versions are:
root@crazyhorse:~# mine -q linux26 mdadm
linux26 2.6.17.7 0
mdadm 2.5.6 0
Anyone got an idea?
Greetings,
Benjamin
--
Go away, or I will replace you with a very small shellscript!
http://shellscripts.org/
* Re: "No such device" on --remove
2007-05-08 17:04 "No such device" on --remove Benjamin Schieder
@ 2007-05-08 18:22 ` Michael Tokarev
2007-05-09 11:42 ` Bernd Schubert
1 sibling, 0 replies; 11+ messages in thread
From: Michael Tokarev @ 2007-05-08 18:22 UTC (permalink / raw)
To: Benjamin Schieder; +Cc: linux-raid
Benjamin Schieder wrote:
> Hi list.
>
> md2 : inactive hdh5[4](S) hdg5[1] hde5[3] hdf5[2]
> 11983872 blocks
> root@crazyhorse:~# mdadm -R /dev/md/2
> mdadm: failed to run array /dev/md/2: Input/output error
> root@crazyhorse:~# mdadm /dev/md/
> 0 1 2 3 4 5
> root@crazyhorse:~# mdadm /dev/md/2 -r /dev/hdh5
> mdadm: hot remove failed for /dev/hdh5: No such device
>
> md1 and md2 are supposed to be raid5 arrays.
The arrays are inactive. In this state an array can either be
shut down, or brought up by adding another disk with a proper
superblock. So running it isn't possible because the kernel thinks
the array is inconsistent, and removing isn't possible because
the array isn't running.
It's inactive because when mdadm tried to assemble it, it didn't
find enough devices with a recent-enough event counter. In other
words, the RAID superblocks on the individual drives are inconsistent
(some are "older" than others).
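You can see this by comparing the superblocks on the members, e.g.
(a sketch only; partition names taken from your mdstat, and the exact
output format depends on the mdadm version):

mdadm --examine /dev/hde5 | grep -i -e events -e 'update time'
mdadm --examine /dev/hdh5 | grep -i -e events -e 'update time'

The member that dropped out first shows a lower event count and an
older update time than the rest.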
If the problem is due to a power failure, fixing the situation is
usually just a matter of adding the -f (force) option to the mdadm
assemble line, forcing mdadm to increment the "almost-recent" drive's
event counter before bringing the array up.
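Something along these lines (a sketch only; stop the inactive array
first, and adjust the member list to whatever your arrays actually use):

mdadm --stop /dev/md/2
mdadm --assemble --force /dev/md/2 /dev/hde5 /dev/hdf5 /dev/hdg5 /dev/hdh5

If one member really is too far behind, assemble without it and --add
it back afterwards so it gets rebuilt.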
/mjt
* Re: "No such device" on --remove
2007-05-08 17:04 "No such device" on --remove Benjamin Schieder
2007-05-08 18:22 ` Michael Tokarev
@ 2007-05-09 11:42 ` Bernd Schubert
2007-05-09 21:23 ` Michael Tokarev
2007-05-10 21:09 ` Questions about the speed when MD-RAID array is being initialized Liang Yang
1 sibling, 2 replies; 11+ messages in thread
From: Bernd Schubert @ 2007-05-09 11:42 UTC (permalink / raw)
To: linux-raid
Benjamin Schieder wrote:
> root@crazyhorse:~# mdadm /dev/md/2 -r /dev/hdh5
> mdadm: hot remove failed for /dev/hdh5: No such device
>
> md1 and md2 are supposed to be raid5 arrays.
You are probably using udev, aren't you? Somehow there's presently
no /dev/hdh5, but to remove /dev/hdh5 from the RAID, mdadm needs this
device node. There's a workaround: create the device node in /dev using
mknod, and then you can remove it with mdadm.
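Roughly (a sketch; 34/69 is the conventional block major/minor for
hdh5, so verify it first, e.g. with cat /sys/block/hdh/hdh5/dev):

mknod /dev/hdh5 b 34 69
mdadm /dev/md/2 -r /dev/hdh5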
We are presently running into a similar problem, which I'm going to describe
on this list and the hotplug list right now.
Hope it helps,
Bernd
* Re: "No such device" on --remove
2007-05-09 11:42 ` Bernd Schubert
@ 2007-05-09 21:23 ` Michael Tokarev
2007-05-10 5:28 ` Benjamin Schieder
2007-05-10 21:09 ` Questions about the speed when MD-RAID array is being initialized Liang Yang
1 sibling, 1 reply; 11+ messages in thread
From: Michael Tokarev @ 2007-05-09 21:23 UTC (permalink / raw)
To: Bernd Schubert; +Cc: linux-raid
Bernd Schubert wrote:
> Benjamin Schieder wrote:
>
>
>> root@crazyhorse:~# mdadm /dev/md/2 -r /dev/hdh5
>> mdadm: hot remove failed for /dev/hdh5: No such device
>>
>> md1 and md2 are supposed to be raid5 arrays.
>
> You are probably using udev, aren't you? Somehow there's presently
> no /dev/hdh5, but to remove /dev/hdh5 from the RAID, mdadm needs this
> device node. There's a workaround: create the device node in /dev using
> mknod, and then you can remove it with mdadm.
If the /dev/hdh5 device node were missing, mdadm would complain
"No such file or directory" (ENOENT), instead of "No such device"
(ENODEV).
In this case, as I explained in my previous email, the arrays aren't
running, and the error refers to manipulations (md ioctls) on the existing
/dev/md/2.
It has nothing to do with udev.
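If in doubt, strace shows where the error really comes from
(illustrative; requires strace):

strace -e ioctl mdadm /dev/md/2 -r /dev/hdh5

The md ioctl (HOT_REMOVE_DISK) on /dev/md/2 failing, here with ENODEV,
is the kernel refusing the request, not a missing device node.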
/mjt
* Re: "No such device" on --remove
2007-05-09 21:23 ` Michael Tokarev
@ 2007-05-10 5:28 ` Benjamin Schieder
0 siblings, 0 replies; 11+ messages in thread
From: Benjamin Schieder @ 2007-05-10 5:28 UTC (permalink / raw)
To: linux-raid
On 10.05.2007 01:23:27, Michael Tokarev wrote:
> Bernd Schubert wrote:
> > Benjamin Schieder wrote:
> >
> >
> >> root@crazyhorse:~# mdadm /dev/md/2 -r /dev/hdh5
> >> mdadm: hot remove failed for /dev/hdh5: No such device
> >>
> >> md1 and md2 are supposed to be raid5 arrays.
> >
> > You are probably using udev, aren't you? Somehow there's presently
> > no /dev/hdh5, but to remove /dev/hdh5 from the RAID, mdadm needs this
> > device node. There's a workaround: create the device node in /dev using
> > mknod, and then you can remove it with mdadm.
>
> If the /dev/hdh5 device node were missing, mdadm would complain
> "No such file or directory" (ENOENT), instead of "No such device"
> (ENODEV).
>
> In this case, as I explained in my previous email, the arrays aren't
> running, and the error refers to manipulations (md ioctls) on the existing
> /dev/md/2.
>
> It has nothing to do with udev.
Ah, that's good to know. From the error message given, I also thought that it
couldn't find /dev/hdh5.
The --force option to -A worked fine, btw. Thanks for your help!
Greetings,
Benjamin
--
Benjamin 'blindCoder' Schieder
Registered Linux User #289529: http://counter.li.org
finger blindcoder@scavenger.homeip.net | gpg --import
--
/lusr/bin/brain: received signal: SIGIDIOT
* Questions about the speed when MD-RAID array is being initialized.
2007-05-09 11:42 ` Bernd Schubert
2007-05-09 21:23 ` Michael Tokarev
@ 2007-05-10 21:09 ` Liang Yang
2007-05-10 21:33 ` Justin Piszcz
2007-05-12 2:24 ` Benjamin Davenport
1 sibling, 2 replies; 11+ messages in thread
From: Liang Yang @ 2007-05-10 21:09 UTC (permalink / raw)
To: linux-raid
Hi,
I created an MD-RAID5 array using 8 Maxtor SAS disk drives (the chunk size is
256k). I have measured the data transfer speed for a single SAS disk drive
(the physical drive, not a filesystem on it); it is roughly 80~90MB/s.
However, I notice MD also reports a speed for the RAID5 array while it is
being initialized (cat /proc/mdstat). The speed reported by MD is not
constant; it ranges roughly from 70MB/s to 90MB/s (the average is 85MB/s,
which is very close to the single-disk data transfer speed).
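The line I mean is of roughly this form (the numbers here are only
illustrative, and the keyword may read resync or recovery depending on
how the array was created):

      [=====>...............]  recovery = 27.3% (40017920/146520064) finish=20.8min speed=85000K/sec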
I just have three questions:
1. What is the exact meaning of the array speed reported by MD? Is that
measured for the whole array (I used 8 disks) or for just a single underlying
disk? If it is for the whole array, then 70~90MB/s seems too low considering
8 disks are used for this array.
2. How is this speed measured and what is the I/O packet size being used
when the speed is measured?
3. From the beginning when the MD-RAID5 array is initialized to the end when
the initialization is done, the speed reported by MD gradually decreases from
90MB/s down to 70MB/s. Why does the speed change? Why does the speed
gradually decrease?
Could anyone give me some explanation?
I'm using RHEL 4U4 with 2.6.18 kernel. MDADM version is 1.6.
Thanks a lot,
Liang
* Re: Questions about the speed when MD-RAID array is being initialized.
2007-05-10 21:09 ` Questions about the speed when MD-RAID array is being initialized Liang Yang
@ 2007-05-10 21:33 ` Justin Piszcz
2007-05-10 21:38 ` Liang Yang
2007-05-10 23:03 ` Robin Hill
2007-05-12 2:24 ` Benjamin Davenport
1 sibling, 2 replies; 11+ messages in thread
From: Justin Piszcz @ 2007-05-10 21:33 UTC (permalink / raw)
To: Liang Yang; +Cc: linux-raid
On Thu, 10 May 2007, Liang Yang wrote:
> Hi,
>
> I created a MD-RAID5 array using 8 Maxtor SAS Disk Drives (chunk size is
> 256k). I have measured the data transfer speed for single SAS disk drive
> (physical drive, not filesystem on it), it is roughly about 80~90MB/s.
>
> However, I notice MD also reports the speed for the RAID5 array when it is
> being initialized (cat /proc/mdstat). The speed reported by MD is not
> constant which is roughly from 70MB/s to 90MB/s (average is 85MB/s which is
> very close to the single disk data transfer speed).
>
> I just have three questions:
> 1. What is the exact meaning of the array speed reported by MD? Is that
> measured for the whole array (I used 8 disks) or for just a single underlying
> disk? If it is for the whole array, then 70~90MB/s seems too low considering 8
> disks are used for this array.
>
> 2. How is this speed measured and what is the I/O packet size being used when
> the speed is measured?
>
> 3. From the beginning when the MD-RAID5 array is initialized to the end when the
> initialization is done, the speed reported by MD gradually decreases from 90MB/s
> down to 70MB/s. Why does the speed change? Why does the speed gradually
> decrease?
>
> Could anyone give me some explanation?
>
> I'm using RHEL 4U4 with 2.6.18 kernel. MDADM version is 1.6.
>
> Thanks a lot,
>
> Liang
For no. 3: because it starts from the fast end of the disk and works its
way to the slower part (slower speeds).
* Re: Questions about the speed when MD-RAID array is being initialized.
2007-05-10 21:33 ` Justin Piszcz
@ 2007-05-10 21:38 ` Liang Yang
2007-05-10 21:44 ` Justin Piszcz
2007-05-10 23:03 ` Robin Hill
1 sibling, 1 reply; 11+ messages in thread
From: Liang Yang @ 2007-05-10 21:38 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid
Could you please give me more details about this? What do you mean by the fast
end and the slow end of the disk? Do you mean the location on each disk
platter?
Thanks,
Liang
----- Original Message -----
From: "Justin Piszcz" <jpiszcz@lucidpixels.com>
To: "Liang Yang" <multisyncfe991@hotmail.com>
Cc: <linux-raid@vger.kernel.org>
Sent: Thursday, May 10, 2007 2:33 PM
Subject: Re: Questions about the speed when MD-RAID array is being
initialized.
>
>
> On Thu, 10 May 2007, Liang Yang wrote:
>
>> Hi,
>>
>> I created a MD-RAID5 array using 8 Maxtor SAS Disk Drives (chunk size is
>> 256k). I have measured the data transfer speed for single SAS disk drive
>> (physical drive, not filesystem on it), it is roughly about 80~90MB/s.
>>
>> However, I notice MD also reports the speed for the RAID5 array when it
>> is being initialized (cat /proc/mdstat). The speed reported by MD is not
>> constant which is roughly from 70MB/s to 90MB/s (average is 85MB/s which
>> is very close to the single disk data transfer speed).
>>
>> I just have three questions:
>> 1. What is the exact meaning of the array speed reported by MD? Is that
>> measured for the whole array (I used 8 disks) or for just a single
>> underlying disk? If it is for the whole array, then 70~90MB/s seems too
>> low considering 8 disks are used for this array.
>>
>> 2. How is this speed measured and what is the I/O packet size being used
>> when the speed is measured?
>>
>> 3. From the beginning when the MD-RAID5 array is initialized to the end when
>> the initialization is done, the speed reported by MD gradually decreases
>> from 90MB/s down to 70MB/s. Why does the speed change? Why does the speed
>> gradually decrease?
>>
>> Could anyone give me some explanation?
>>
>> I'm using RHEL 4U4 with 2.6.18 kernel. MDADM version is 1.6.
>>
>> Thanks a lot,
>>
>> Liang
>
> For no 3. because it starts from the fast end of the disk and works its
> way to the slower part (slower speeds).
* Re: Questions about the speed when MD-RAID array is being initialized.
2007-05-10 21:38 ` Liang Yang
@ 2007-05-10 21:44 ` Justin Piszcz
0 siblings, 0 replies; 11+ messages in thread
From: Justin Piszcz @ 2007-05-10 21:44 UTC (permalink / raw)
To: Liang Yang; +Cc: linux-raid
From http://partition.radified.com/partitioning_2.htm:
System and program files that wind up at the far end of the drive take
longer to access, and are transferred at a slower rate, which translates
into a less-responsive system. If you look at the graph of sustained
transfer rates (STRs) from the HD Tach benchmark posted here, you'll see
clearly that the outermost sectors of the drive transfer data the fastest.
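You can see the same effect on one of the member disks with plain
sequential reads (illustrative; adjust the device name, and pick a
skip= value near the end of the drive; skip= counts bs-sized blocks,
so megabytes here):

dd if=/dev/sda of=/dev/null bs=1M count=512 skip=0
dd if=/dev/sda of=/dev/null bs=1M count=512 skip=140000

The first read reports a noticeably higher MB/s than the second.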
On Thu, 10 May 2007, Liang Yang wrote:
> Could you please give me more details about this? What do you mean the fast
> end and slow end part of disk? Do you mean the location in each disk platter?
>
> Thanks,
>
> Liang
>
>
> ----- Original Message ----- From: "Justin Piszcz" <jpiszcz@lucidpixels.com>
> To: "Liang Yang" <multisyncfe991@hotmail.com>
> Cc: <linux-raid@vger.kernel.org>
> Sent: Thursday, May 10, 2007 2:33 PM
> Subject: Re: Questions about the speed when MD-RAID array is being
> initialized.
>
>
>>
>>
>> On Thu, 10 May 2007, Liang Yang wrote:
>>
>>> Hi,
>>>
>>> I created a MD-RAID5 array using 8 Maxtor SAS Disk Drives (chunk size is
>>> 256k). I have measured the data transfer speed for single SAS disk drive
>>> (physical drive, not filesystem on it), it is roughly about 80~90MB/s.
>>>
>>> However, I notice MD also reports the speed for the RAID5 array when it is
>>> being initialized (cat /proc/mdstat). The speed reported by MD is not
>>> constant which is roughly from 70MB/s to 90MB/s (average is 85MB/s which
>>> is very close to the single disk data transfer speed).
>>>
>>> I just have three questions:
>>> 1. What is the exact meaning of the array speed reported by MD? Is that
>>> measured for the whole array (I used 8 disks) or for just a single underlying
>>> disk? If it is for the whole array, then 70~90MB/s seems too low
>>> considering 8 disks are used for this array.
>>>
>>> 2. How is this speed measured and what is the I/O packet size being used
>>> when the speed is measured?
>>>
>>> 3. From the beginning when the MD-RAID5 array is initialized to the end when
>>> the initialization is done, the speed reported by MD gradually decreases from
>>> 90MB/s down to 70MB/s. Why does the speed change? Why does the speed
>>> gradually decrease?
>>>
>>> Could anyone give me some explanation?
>>>
>>> I'm using RHEL 4U4 with 2.6.18 kernel. MDADM version is 1.6.
>>>
>>> Thanks a lot,
>>>
>>> Liang
>>
>> For no 3. because it starts from the fast end of the disk and works its way
>> to the slower part (slower speeds).
* Re: Questions about the speed when MD-RAID array is being initialized.
2007-05-10 21:33 ` Justin Piszcz
2007-05-10 21:38 ` Liang Yang
@ 2007-05-10 23:03 ` Robin Hill
1 sibling, 0 replies; 11+ messages in thread
From: Robin Hill @ 2007-05-10 23:03 UTC (permalink / raw)
To: linux-raid
On Thu May 10, 2007 at 05:33:17PM -0400, Justin Piszcz wrote:
>
>
> On Thu, 10 May 2007, Liang Yang wrote:
>
> >Hi,
> >
> >I created a MD-RAID5 array using 8 Maxtor SAS Disk Drives (chunk size is
> >256k). I have measured the data transfer speed for single SAS disk drive
> >(physical drive, not filesystem on it), it is roughly about 80~90MB/s.
> >
> >However, I notice MD also reports the speed for the RAID5 array when it is
> >being initialized (cat /proc/mdstat). The speed reported by MD is not
> >constant which is roughly from 70MB/s to 90MB/s (average is 85MB/s which
> >is very close to the single disk data transfer speed).
> >
> >I just have three questions:
> >1. What is the exact meaning of the array speed reported by MD? Is that
> >measured for the whole array (I used 8 disks) or for just a single underlying
> >disk? If it is for the whole array, then 70~90MB/s seems too low
> >considering 8 disks are used for this array.
> >
> >2. How is this speed measured and what is the I/O packet size being used
> >when the speed is measured?
> >
> >3. From the beginning when the MD-RAID5 array is initialized to the end when
> >the initialization is done, the speed reported by MD gradually decreases from
> >90MB/s down to 70MB/s. Why does the speed change? Why does the speed
> >gradually decrease?
> >
> >Could anyone give me some explanation?
> >
> >I'm using RHEL 4U4 with 2.6.18 kernel. MDADM version is 1.6.
> >
> >Thanks a lot,
> >
> >Liang
> >
>
> For no 3. because it starts from the fast end of the disk and works its
> way to the slower part (slower speeds).
>
And I'd assume for no. 1 it's because it's only writing to a single disk
at this point, so it will obviously be limited to the transfer rate of a
single disk. RAID5 arrays are created as a degraded array, then the
final disk is "recovered" - this is done so that the array is ready for
use very quickly. So what you're seeing in /proc/mdstat is the speed of
calculating and writing the data for the final drive (and that is, unless
computationally limited, going to be the write speed of a single
drive).
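You can check this while the array is building (illustrative;
substitute your array's device name):

mdadm --detail /dev/md0 | grep -i -e state -e rebuild
cat /proc/mdstat

--detail reports the array as degraded and recovering, and mdstat shows
a "recovery" line rather than a "resync" one.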
HTH,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: Questions about the speed when MD-RAID array is being initialized.
2007-05-10 21:09 ` Questions about the speed when MD-RAID array is being initialized Liang Yang
2007-05-10 21:33 ` Justin Piszcz
@ 2007-05-12 2:24 ` Benjamin Davenport
1 sibling, 0 replies; 11+ messages in thread
From: Benjamin Davenport @ 2007-05-12 2:24 UTC (permalink / raw)
To: Liang Yang; +Cc: linux-raid
When mdadm creates a raid5 array, it creates it in degraded mode. Then it kicks
in the last device and runs the rebuild code, which reads from the other n-1
disks and writes to the nth disk as appropriate to satisfy parity requirements.
This allows you to max out the write bandwidth on a single disk, and is the
quickest way to make the array consistent.
The speed reported by mdstat is the speed at which the array resync is
completing. That is, it is the speed at which new areas of the disk are being
brought into a consistent state. Because that speed is, absent other IO or high
CPU load or other constraint, limited by the write bandwidth of your disk,
that's what you're seeing. The speed decreases as you progress across the disk
because the disk's write speed decreases across the disk.
If you run iostat, you'll see that one of the raid's component disks is
sequentially writing and the others are sequentially reading, all at the same
speed (modulo a small amount of jitter caused by quantized timing). If you
think about the algorithm involved, there's no faster way to do it, despite your
initial gut feeling that an 8-disk array should be able to resync faster than that.
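For example (a sketch; iostat comes with the sysstat package, and the
device names will differ on your box):

iostat -x 5

During the initial resync, the sd* lines should show seven members doing
almost pure reads and one doing almost pure writes, at roughly the same
rate.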
-Ben
Thread overview: 11+ messages
2007-05-08 17:04 "No such device" on --remove Benjamin Schieder
2007-05-08 18:22 ` Michael Tokarev
2007-05-09 11:42 ` Bernd Schubert
2007-05-09 21:23 ` Michael Tokarev
2007-05-10 5:28 ` Benjamin Schieder
2007-05-10 21:09 ` Questions about the speed when MD-RAID array is being initialized Liang Yang
2007-05-10 21:33 ` Justin Piszcz
2007-05-10 21:38 ` Liang Yang
2007-05-10 21:44 ` Justin Piszcz
2007-05-10 23:03 ` Robin Hill
2007-05-12 2:24 ` Benjamin Davenport