From: "Geir Råness" <post@pulz.no>
To: linux-raid@vger.kernel.org
Subject: Strange error with raid array
Date: Mon, 05 Apr 2004 20:13:32 +0200
Message-ID: <4071A1CC.5030805@pulz.no>
Hi,

I have two RAID arrays, both RAID 0.

For some weeks the array md0 had been throwing errors indicating that one of its disks was about to die, so I used it only as a temporary partition until the array failed.

Today was the day, but something else seems to have happened as well: the other array is now broken too, and I can't get it working again. I'm looking for suggestions on how to get it back up, since that array was fully working and its disks are new.

Here is the info on the arrays, taken from /etc/raidtab:
raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hde
    raid-disk             0
    device                /dev/hdh
    raid-disk             1

raiddev /dev/md1
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hdf
    raid-disk             0
    device                /dev/hdg
    raid-disk             1
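If it helps, the persistent superblocks on the member disks can be dumped with mdadm (assuming it is installed; I have been using the raidtools raidstart so far), for example:

mdadm --examine /dev/hdf
mdadm --examine /dev/hdg

That should print each disk's array UUID, event counter, and the slot it thinks it occupies, which might show whether md1's superblocks are still intact.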
The array md0 is the one with the broken disk. md1 is the array that is supposed to be working; it has never had any errors, and its disks are relatively new.

Here is the error output, taken from dmesg:
md: raidstart(pid 4682) used deprecated START_ARRAY ioctl. This will not be supported beyond 2.6
md: could not lock unknown-block(34,64).
md: could not import unknown-block(34,64), trying to run array nevertheless.
md: autorun ...
md: considering hde ...
md: adding hde ...
md: created md0
md: bind<hde>
md: running: <hde>
md0: setting max_sectors to 8, segment boundary to 2047
blk_queue_segment_boundary: set to minimum fff
raid0: looking at hde
raid0: comparing hde(78150656) with hde(78150656)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: FINAL 1 zones
raid0: too few disks (1 of 2) - aborting!
md: pers->run() failed ...
md :do_md_run() returned -22
md: md0 stopped.
md: unbind<hde>
md: export_rdev(hde)
md: ... autorun DONE.
md: raidstart(pid 4700) used deprecated START_ARRAY ioctl. This will not be supported beyond 2.6
md: could not lock unknown-block(34,0).
md: could not import unknown-block(34,0), trying to run array nevertheless.
md: autorun ...
md: considering hdf ...
md: adding hdf ...
md: created md1
md: bind<hdf>
md: running: <hdf>
md1: setting max_sectors to 8, segment boundary to 2047
blk_queue_segment_boundary: set to minimum fff
raid0: looking at hdf
raid0: comparing hdf(156290816) with hdf(156290816)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: FINAL 1 zones
raid0: too few disks (1 of 2) - aborting!
md: pers->run() failed ...
md :do_md_run() returned -22
md: md1 stopped.
md: unbind<hdf>
md: export_rdev(hdf)
md: ... autorun DONE.
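If I read the device numbers right, unknown-block(34,64) is /dev/hdh and unknown-block(34,0) is /dev/hdg (IDE major 34 is the ide3 channel, with minors 0 and 64 for master and slave), so in both cases it is the second disk of the array that the kernel could not open. That can be checked against the device nodes:

ls -l /dev/hdg /dev/hdh

which should list major 34 with minors 0 and 64 respectively.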
So I really need a suggestion as to what is wrong with md1, and how I can fix it.
Best Regards
Geir Råness