* vess raid stripes disappear
@ 2013-05-09 11:29 mashtin.bakir
2013-05-09 14:59 ` Keith Keller
2013-05-10 7:55 ` SOLVED " Stan Hoeppner
0 siblings, 2 replies; 11+ messages in thread
From: mashtin.bakir @ 2013-05-09 11:29 UTC (permalink / raw)
To: linux-raid
I have an interesting problem with a Vessraid 1830s.
We have a few of these that work fine but one seems
to lose its filesets. The only difference between the
good ones and the bad one is that the bad one has firmware
version 3.06 while the good ones are at 3.05 (This may
not be relevant).
Here's what happens. If I plug the raid into a 32 bit
RHEL5 box with large files enabled, syslog does pick
it up:
kernel: Vendor: Promise Model:VessRAID 1830s Rev: 0306
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sdc:2929686528 2048-byte hdwr sectors (5999998 MB)
Using the web gui, I can carve out partitions.
I make three stripes across 4 disks of 2 terabytes each
using RAID5.
cat /proc/partitions shows the raw devices as well as
the correct sizes. I then use gnu-parted (v3.1) to make the
filesets:
mklabel gpt
mkpart primray 0 0
set 1 raid on
I create the fileset using
mkfs.ext3 -m0 /dev/sdc1
I can then mount the FS and write to it.
If I either reboot the RAID or the host, the FS disappears,
i.e. cat /proc/partitions shows only sdc, not sdc1.
If I go back into parted, the label is intact,
but I can't even mkfs without re-creating the label/partition,
in which case I get:
...Have been written, but we have been unable to inform the kernel of the
change, probably because it/they are in use. As a result, the old partition(s)
will remain in use. You should reboot now before making further changes.
Ignore/Cancel? i
Any ideas?
Thanks
* Re: vess raid stripes disappear
2013-05-09 11:29 vess raid stripes disappear mashtin.bakir
@ 2013-05-09 14:59 ` Keith Keller
2013-05-09 15:10 ` Roman Mamedov
2013-05-10 7:55 ` SOLVED " Stan Hoeppner
1 sibling, 1 reply; 11+ messages in thread
From: Keith Keller @ 2013-05-09 14:59 UTC (permalink / raw)
To: linux-raid
On 2013-05-09, mashtin.bakir@gmail.com <mashtin.bakir@gmail.com> wrote:
> I have an interesting problem with a Vessraid 1830s.
Is this a hardware RAID controller? If so, you are probably looking for
a different group. This mailing list is specific to linux md software
RAID. You can try the comp.os.linux.misc newsgroup instead.
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: vess raid stripes disappear
2013-05-09 14:59 ` Keith Keller
@ 2013-05-09 15:10 ` Roman Mamedov
2013-05-09 15:32 ` Keith Keller
0 siblings, 1 reply; 11+ messages in thread
From: Roman Mamedov @ 2013-05-09 15:10 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On Thu, 09 May 2013 07:59:41 -0700
Keith Keller <kkeller@wombat.san-francisco.ca.us> wrote:
> On 2013-05-09, mashtin.bakir@gmail.com <mashtin.bakir@gmail.com> wrote:
> > I have an interesting problem with a Vessraid 1830s.
>
> Is this a hardware RAID controller? If so, you are probably looking for
> a different group. This mailing list is specific to linux md software
> RAID. You can try the comp.os.linux.misc newsgroup instead.
I (also) was under the impression that linux-raid is not just for md, but for all
kinds of RAID in GNU/Linux. It's just that historically it's mostly about md.
However, the poster is highly unlikely to find anyone experienced with an
obscure h/w RAID controller here (I am almost willing to bet that the majority of
the readers see the name "Vess RAID" for the first time in that very post).
My advice to them would be to switch the controller into a dumb SATA HBA mode
if possible, and use the Linux Software RAID (indeed, md). Here are some
reasons why: http://linux.yyz.us/why-software-raid.html
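If you do go that route, the md setup could look roughly like this (an untested
sketch; the device names are only placeholders for however the disks appear once
the controller is in HBA mode, and the mdadm.conf path varies by distribution):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]   # 4-disk RAID5
mkfs.ext3 -m0 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it is assembled at boot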
--
With respect,
Roman
* Re: vess raid stripes disappear
2013-05-09 15:10 ` Roman Mamedov
@ 2013-05-09 15:32 ` Keith Keller
2013-05-09 15:41 ` Adam Goryachev
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Keith Keller @ 2013-05-09 15:32 UTC (permalink / raw)
To: linux-raid
On 2013-05-09, Roman Mamedov <rm@romanrm.ru> wrote:
>
> I (also) was under the impression that linux-raid is not just for md, but for all
> kinds of RAID in GNU/Linux. It's just that historically it's mostly about md.
Indeed, the majordomo info says only this:
"Welcome to the linux-raid mailing list, hosted on vger.kernel.org.
Discussions on this list should be relevant to using RAID technologies
with Linux. The list has a wiki howto at http://raid.wiki.kernel.org/"
This is somewhat contradictory with the wiki overview:
"This site is the Linux-raid kernel list community-managed reference for
Linux software RAID as implemented in recent 2.6 kernels."
So I admit to being a bit confused! Is there a consensus among the
admins on what the list topics should be?
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: vess raid stripes disappear
2013-05-09 15:32 ` Keith Keller
@ 2013-05-09 15:41 ` Adam Goryachev
2013-05-09 16:23 ` David Brown
2013-05-09 21:27 ` NeilBrown
2 siblings, 0 replies; 11+ messages in thread
From: Adam Goryachev @ 2013-05-09 15:41 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On 10/05/13 01:32, Keith Keller wrote:
> On 2013-05-09, Roman Mamedov <rm@romanrm.ru> wrote:
>> I (also) was under the impression that linux-raid is not just for md, but for all
>> kinds of RAID in GNU/Linux. It's just that historically it's mostly about md.
> Indeed, the majordomo info says only this:
>
> "Welcome to the linux-raid mailing list, hosted on vger.kernel.org.
> Discussions on this list should be relevant to using RAID technologies
> with Linux. The list has a wiki howto at http://raid.wiki.kernel.org/"
>
> This is somewhat contradictory with the wiki overview:
>
> "This site is the Linux-raid kernel list community-managed reference for
> Linux software RAID as implemented in recent 2.6 kernels."
>
> So I admit to being a bit confused! Is there a consensus among the
> admins on what the list topics should be?
>
Also, should the wiki be updated or modified? I highly doubt anyone is
still running linux older than 2.6 (actually, I really believe there are
people out there who are, but I'm not sure they should be...).
Why does it need to be related to a specific version of Linux (2.6)?
Possibly it could say "as implemented in Linux kernel 2.6 or newer" or
"as implemented in the Linux kernel since version 2.6.0" or simply "as
implemented in the Linux kernel".
Regards,
Adam
--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
* Re: vess raid stripes disappear
2013-05-09 15:32 ` Keith Keller
2013-05-09 15:41 ` Adam Goryachev
@ 2013-05-09 16:23 ` David Brown
2013-05-09 21:33 ` Keith Keller
2013-05-09 21:27 ` NeilBrown
2 siblings, 1 reply; 11+ messages in thread
From: David Brown @ 2013-05-09 16:23 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On 09/05/13 17:32, Keith Keller wrote:
> On 2013-05-09, Roman Mamedov <rm@romanrm.ru> wrote:
>>
>> I (also) was under the impression that linux-raid is not just for md, but for all
>> kinds of RAID in GNU/Linux. It's just that historically it's mostly about md.
>
> Indeed, the majordomo info says only this:
>
> "Welcome to the linux-raid mailing list, hosted on vger.kernel.org.
> Discussions on this list should be relevant to using RAID technologies
> with Linux. The list has a wiki howto at http://raid.wiki.kernel.org/"
>
> This is somewhat contradictory with the wiki overview:
>
> "This site is the Linux-raid kernel list community-managed reference for
> Linux software RAID as implemented in recent 2.6 kernels."
>
> So I admit to being a bit confused! Is there a consensus among the
> admins on what the list topics should be?
>
I don't know if there is any "official" status here, but certainly most
threads are about md raid - and this is the best source of help and
information about md raid. But dm-raid is also discussed here, and
occasionally hardware raid comes up. There are some regulars here who
are very experienced with hardware raid and provide helpful advice -
there are times when hardware raid is a better solution than software raid.
No one can be familiar with all the hardware raid implementations in
common use - but maybe the OP will get lucky and someone here can help.
So it is certainly on-topic for the list, even if it is unlikely to
get a useful reply.
* Re: vess raid stripes disappear
2013-05-09 15:32 ` Keith Keller
2013-05-09 15:41 ` Adam Goryachev
2013-05-09 16:23 ` David Brown
@ 2013-05-09 21:27 ` NeilBrown
2 siblings, 0 replies; 11+ messages in thread
From: NeilBrown @ 2013-05-09 21:27 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On Thu, 09 May 2013 08:32:20 -0700 Keith Keller
<kkeller@wombat.san-francisco.ca.us> wrote:
> On 2013-05-09, Roman Mamedov <rm@romanrm.ru> wrote:
> >
> > I (also) was under the impression that linux-raid is not just for md, but for all
> > kinds of RAID in GNU/Linux. It's just that historically it's mostly about md.
>
> Indeed, the majordomo info says only this:
>
> "Welcome to the linux-raid mailing list, hosted on vger.kernel.org.
> Discussions on this list should be relevant to using RAID technologies
> with Linux. The list has a wiki howto at http://raid.wiki.kernel.org/"
>
> This is somewhat contradictory with the wiki overview:
>
> "This site is the Linux-raid kernel list community-managed reference for
> Linux software RAID as implemented in recent 2.6 kernels."
I don't see a contradiction.
The mailing list is for anything about RAID on Linux. The community formed
around the list are generally interested in various sorts of RAID on Linux.
Software RAID is a popular subtopic and for that subtopic there is a
supporting wiki. It doesn't claim to be a complete reference for linux-raid,
only for the software RAID subtopic of linux-raid.
But the 2.6 should go.
>
> So I admit to being a bit confused! Is there a consensus among the
> admins on what the list topics should be?
There aren't any admins. There are only subscribers and posters.
NeilBrown
* Re: vess raid stripes disappear
2013-05-09 16:23 ` David Brown
@ 2013-05-09 21:33 ` Keith Keller
0 siblings, 0 replies; 11+ messages in thread
From: Keith Keller @ 2013-05-09 21:33 UTC (permalink / raw)
To: linux-raid
On 2013-05-09, David Brown <david.brown@hesbynett.no> wrote:
>
> I don't know if there is any "official" status here, but certainly most
> threads are about md raid - and this is the best source of help and
> information about md raid. But dm-raid is also discussed here, and
> occasionally hardware raid comes up. There are some regulars here who
> are very experienced with hardware raid and provide helpful advice -
> there are times when hardware raid is a better solution than software raid.
>
> No one can be familiar with all the hardware raid implementations in
> common use - but maybe the OP will get lucky and someone here can help.
> So it is certainly on-topic for the list, even if it is unlikely to
> get a useful reply.
That all sounds completely reasonable. Thanks to everyone for the
clarifications!
--keith
--
kkeller@wombat.san-francisco.ca.us
* SOLVED Re: vess raid stripes disappear
2013-05-09 11:29 vess raid stripes disappear mashtin.bakir
2013-05-09 14:59 ` Keith Keller
@ 2013-05-10 7:55 ` Stan Hoeppner
2013-05-10 12:44 ` mashtin.bakir
1 sibling, 1 reply; 11+ messages in thread
From: Stan Hoeppner @ 2013-05-10 7:55 UTC (permalink / raw)
To: mashtin.bakir; +Cc: linux-raid
On 5/9/2013 6:29 AM, mashtin.bakir@gmail.com wrote:
> I have an interesting problem with a Vessraid 1830s.
> We have a few of these that work fine but one seems
> to lose its filesets. The only difference between the
> good ones and the bad one is that the bad one has firmware
> version 3.06 while the good ones are at 3.05 (This may
> not be relevant).
It's not a firmware problem Mashtin. The problem here is incomplete
education. More accurately, the problem is that you've confused
concepts of hardware RAID and Linux software RAID. I will attempt to
help you separate these so you understand the line in the sand
separating the two.
> Here's what happens. If I plug the raid into a 32 bit
> RHEL5 box with large files enabled, syslog does pick
> it up:
>
> kernel: Vendor: Promise Model:VessRAID 1830s Rev: 0306
> Type: Direct-Access ANSI SCSI revision: 05
> SCSI device sdc:2929686528 2048-byte hdwr sectors (5999998 MB)
The kernel sees a single 6TB SCSI device/LUN presented by the Promise
array..
> Using the web gui, I can carve out partitions.
The Promise web gui doesn't create partitions. That's the job of the
operating system. What it does allow you to do is carve out multiple
virtual drives from a single RAID set and export them as individual LUNs.
> I make three stripes across 4 disks of 2 terabytes each
> using RAID5.
This is not possible with the Promise firmware. I think you're simply
using incorrect terminology here. According to your dmesg output above
you have created a single hardware RAID5 array of 4 disks, one 6TB
virtual drive, and exported it as a single LUN.
...
> I then use gnu-parted (v3.1) to make the
> filesets:
parted doesn't create "filesets". It creates partitions. What are
"filesets"?
> mklabel gpt
> mkpart primray 0 0
Ok so you created a primary partition.
> set 1 raid on
^^^^^^^^^^^^^^^
THIS IS THE PROBLEM. "set 1 raid on" is used exclusively with Linux
software RAID. What this does is tell the Kernel to look for a software
RAID superblock on the partition and auto start the array. You are not
using md/RAID, but hardware RAID, so the superblock doesn't exist. This
is the source of your problem. This is where you have confused hardware
and software RAID concepts.
> I create the fileset using
Ok so when you say "fileset" you actually mean "file system".
> mkfs.ext3 -m0 /dev/sdc1
> I can then mount the FS and write to it.
>
> If I either reboot the RAID or the host, the FS disappears,
> i.e. cat /proc/partitions shows only sdc, not sdc1.
> If I go back into parted, the label is intact,
> but I can't even mkfs without re-creating the label/partition,
> in which case I get:
This is a direct result of "set 1 raid on" as explained above. You
should see other error messages in dmesg about no superblock being found.
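A quick way to confirm this (assuming the same /dev/sdc1) is to ask mdadm
directly; with no md superblock present it will typically report that none
was detected:

mdadm --examine /dev/sdc1   # expect something like: mdadm: No md superblock detected on /dev/sdc1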
> ...Have been written, but we have been unable to inform the kernel of the
> change, probably because it/they are in use. As a result, the old partition(s)
> will remain in use. You should reboot now before making further changes.
> Ignore/Cancel? i
Clearing the parted RAID flag on the partition should fix your problem,
assuming you haven't done anything else wonky WRT software RAID and this
partition that hasn't been presented here.
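Something along these lines should be all that is needed (untested here, and
assuming the same /dev/sdc and partition number 1 from your output):

parted /dev/sdc set 1 raid off   # clear the flag
partprobe /dev/sdc               # ask the kernel to re-read the partition table
cat /proc/partitions             # sdc1 should be listed again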
Always remember this: Any time you see "RAID" setup or configuration
referenced in Linux documentation or cheat sheets on the web, it is
invariably referring to a kernel software function, either md/RAID,
dm-raid, etc. It is never referring to hardware RAID devices. If you
have a hardware RAID device you will never configure anything RAID
related in Linux, whether it be parted, grub, md, dm, etc.
--
Stan
* Re: SOLVED Re: vess raid stripes disappear
2013-05-10 7:55 ` SOLVED " Stan Hoeppner
@ 2013-05-10 12:44 ` mashtin.bakir
2013-05-10 14:17 ` Stan Hoeppner
0 siblings, 1 reply; 11+ messages in thread
From: mashtin.bakir @ 2013-05-10 12:44 UTC (permalink / raw)
To: stan; +Cc: linux-raid
Thanks for your reply. When I run the identical commands on the otherwise
identical raid, stripes do get created and retained. But just to eliminate the
possibility, I re-created a couple of stripes without setting the raid flag and
once again, when the raid chassis was rebooted, they disappeared. I'm
thinking at this point that it's a hardware problem with the RAID controller.
Does that sound likely?
On Fri, May 10, 2013 at 3:55 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 5/9/2013 6:29 AM, mashtin.bakir@gmail.com wrote:
>> I have an interesting problem with a Vessraid 1830s.
>> We have a few of these that work fine but one seems
>> to lose its filesets. The only difference between the
>> good ones and the bad one is that the bad one has firmware
>> version 3.06 while the good ones are at 3.05 (This may
>> not be relevant).
>
> It's not a firmware problem Mashtin. The problem here is incomplete
> education. More accurately, the problem is that you've confused
> concepts of hardware RAID and Linux software RAID. I will attempt to
> help you separate these so you understand the line in the sand
> separating the two.
>
>> Here's what happens. If I plug the raid into a 32 bit
>> RHEL5 box with large files enabled, syslog does pick
>> it up:
>>
>> kernel: Vendor: Promise Model:VessRAID 1830s Rev: 0306
>> Type: Direct-Access ANSI SCSI revision: 05
>> SCSI device sdc:2929686528 2048-byte hdwr sectors (5999998 MB)
>
> The kernel sees a single 6TB SCSI device/LUN presented by the Promise
> array..
>
>> Using the web gui, I can carve out partitions.
>
> The Promise web gui doesn't create partitions. That's the job of the
> operating system. What it does allow you to do is carve out multiple
> virtual drives from a single RAID set and export them as individual LUNs.
>
>> I make three stripes across 4 disks of 2 terabytes each
>> using RAID5.
>
> This is not possible with the Promise firmware. I think you're simply
> using incorrect terminology here. According to your dmesg output above
> you have created a single hardware RAID5 array of 4 disks, one 6TB
> virtual drive, and exported it as a single LUN.
>
> ...
>> I then use gnu-parted (v3.1) to make the
>> filesets:
>
> parted doesn't create "filesets". It creates partitions. What are
> "filesets"?
>
>> mklabel gpt
>> mkpart primray 0 0
>
> Ok so you created a primary partition.
>
>> set 1 raid on
>
> ^^^^^^^^^^^^^^^
>
> THIS IS THE PROBLEM. "set 1 raid on" is used exclusively with Linux
> software RAID. What this does is tell the Kernel to look for a software
> RAID superblock on the partition and auto start the array. You are not
> using md/RAID, but hardware RAID, so the superblock doesn't exist. This
> is the source of your problem. This is where you have confused hardware
> and software RAID concepts.
>
>> I create the fileset using
>
> Ok so when you say "fileset" you actually mean "file system".
>
>> mkfs.ext3 -m0 /dev/sdc1
>> I can then mount the FS and write to it.
>>
>> If I either reboot the RAID or the host, the FS disappears,
>> i.e. cat /proc/partitions shows only sdc, not sdc1.
>> If I go back into parted, the label is intact,
>> but I can't even mkfs without re-creating the label/partition,
>> in which case I get:
>
> This is a direct result of "set 1 raid on" as explained above. You
> should see other error messages in dmesg about no superblock being found.
>
>> ...Have been written, but we have been unable to inform the kernel of the
>> change, probably because it/they are in use. As a result, the old partition(s)
>> will remain in use. You should reboot now before making further changes.
>> Ignore/Cancel? i
>
> Clearing the parted RAID flag on the partition should fix your problem,
> assuming you haven't done anything else wonky WRT software RAID and this
> partition that hasn't been presented here.
>
> Always remember this: Any time you see "RAID" setup or configuration
> referenced in Linux documentation or cheat sheets on the web, it is
> invariably referring to a kernel software function, either md/RAID,
> dm-raid, etc. It is never referring to hardware RAID devices. If you
> have a hardware RAID device you will never configure anything RAID
> related in Linux, whether it be parted, grub, md, dm, etc.
>
> --
> Stan
>
>
* Re: SOLVED Re: vess raid stripes disappear
2013-05-10 12:44 ` mashtin.bakir
@ 2013-05-10 14:17 ` Stan Hoeppner
0 siblings, 0 replies; 11+ messages in thread
From: Stan Hoeppner @ 2013-05-10 14:17 UTC (permalink / raw)
To: mashtin.bakir; +Cc: linux-raid
On 5/10/2013 7:44 AM, mashtin.bakir@gmail.com wrote:
> Thanks for your reply. When I run the identical commands on the otherwise
We need to get on the same page here, using the same terminology.
What commands? Are you referring to Linux commands or Promise RAID GUI
commands? There are no Linux commands that you would execute WRT an
external hardware RAID device.
> identical raid, stripes do get created and retained.
Please explain what you mean by "stripes do get created and retained".
Define "stripe" in this context. Is this a term Promise uses in their
documentation? I've used many brands of hardware RAID/SAN devices but
not Promise, which is pretty low end gear.
In industry standard parlance/jargon, with hardware RAID units, one
creates a RAID set consisting of a number of physical disks, a RAID
level, and a strip/chunk size. One then creates one or more of what are
typically called logical drives or virtual drives which are portions
carved out of the RAID set capacity. Then one assigns a LUN to each of
these logical/virtual drives and exports the LUN through one or more
external interfaces, be they SAS/SATA, fiber channel, or iSCSI.
> But just to eliminate the
> possibility, I re-created a couple of stripes without setting the raid flag and
> once again, when the raid chassis was rebooted, they disappeared.
Why are you rebooting the RAID box? That should never be necessary, except
possibly after a firmware upgrade. This sounds like a SCSI hot-plug
issue. You said this is with RHEL 5, correct? Is your support contract
with Red Hat still active? If so, I'd definitely talk to them about
this. If not, I'll do the best I can to assist.
> I'm
> thinking at this point that it's a hardware problem with the RAID controller.
> Does that sound likely?
It's possible, but the problem you're describing doesn't appear to be
hardware related, at least not in the absence of errors in dmesg. You've
provided none, so I assume there are none. It sounds like a GPT problem.
Eliminating the RAID flag should have fixed it. Keep in mind that my ability
to assist is limited by the quantity, accuracy, and relevance of the
information you provide. Thus far you're "telling" us what appears to
be wrong, but you're not "showing" us: logs, partition tables, etc.
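For example, the output of something like the following (adjust the device
name to however the LUN shows up on your host) would tell us a lot more:

dmesg | tail -50
cat /proc/partitions
parted /dev/sdc unit s print   # print the on-disk partition table in sectors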
To eliminate possible partition issues as the cause of your problem,
directly format a LUN that has no partitions associated with it. If you
do this by reusing a LUN that has already been partitioned, delete the
partitions first. It is preferable to use a clean LUN, though, to
eliminate partitioning completely from the test.
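Roughly, and assuming the clean LUN shows up as /dev/sdd (substitute whatever
yours is called; if you reuse the already-partitioned LUN instead, remove the
old partition first, e.g. parted /dev/sdc rm 1):

mkfs.ext3 -m0 /dev/sdd   # format the raw device directly, no partition table involved
mount /dev/sdd /mnt      # write something, then unmount
# reboot the array and/or the host and check whether the filesystem is still visible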
--
Stan
> On Fri, May 10, 2013 at 3:55 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 5/9/2013 6:29 AM, mashtin.bakir@gmail.com wrote:
>>> I have an interesting problem with a Vessraid 1830s.
>>> We have a few of these that work fine but one seems
>>> to lose its filesets. The only difference between the
>>> good ones and the bad one is that the bad one has firmware
>>> version 3.06 while the good ones are at 3.05 (This may
>>> not be relevant).
>>
>> It's not a firmware problem Mashtin. The problem here is incomplete
>> education. More accurately, the problem is that you've confused
>> concepts of hardware RAID and Linux software RAID. I will attempt to
>> help you separate these so you understand the line in the sand
>> separating the two.
>>
>>> Here's what happens. If I plug the raid into a 32 bit
>>> RHEL5 box with large files enabled, syslog does pick
>>> it up:
>>>
>>> kernel: Vendor: Promise Model:VessRAID 1830s Rev: 0306
>>> Type: Direct-Access ANSI SCSI revision: 05
>>> SCSI device sdc:2929686528 2048-byte hdwr sectors (5999998 MB)
>>
>> The kernel sees a single 6TB SCSI device/LUN presented by the Promise
>> array..
>>
>>> Using the web gui, I can carve out partitions.
>>
>> The Promise web gui doesn't create partitions. That's the job of the
>> operating system. What it does allow you to do is carve out multiple
>> virtual drives from a single RAID set and export them as individual LUNs.
>>
>>> I make three stripes across 4 disks of 2 terabytes each
>>> using RAID5.
>>
>> This is not possible with the Promise firmware. I think you're simply
>> using incorrect terminology here. According to your dmesg output above
>> you have created a single hardware RAID5 array of 4 disks, one 6TB
>> virtual drive, and exported it as a single LUN.
>>
>> ...
>>> I then use gnu-parted (v3.1) to make the
>>> filesets:
>>
>> parted doesn't create "filesets". It creates partitions. What are
>> "filesets"?
>>
>>> mklabel gpt
>>> mkpart primray 0 0
>>
>> Ok so you created a primary partition.
>>
>>> set 1 raid on
>>
>> ^^^^^^^^^^^^^^^
>>
>> THIS IS THE PROBLEM. "set 1 raid on" is used exclusively with Linux
>> software RAID. What this does is tell the Kernel to look for a software
>> RAID superblock on the partition and auto start the array. You are not
>> using md/RAID, but hardware RAID, so the superblock doesn't exist. This
>> is the source of your problem. This is where you have confused hardware
>> and software RAID concepts.
>>
>>> I create the fileset using
>>
>> Ok so when you say "fileset" you actually mean "file system".
>>
>>> mkfs.ext3 -m0 /dev/sdc1
>>> I can then mount the FS and write to it.
>>>
>>> If I either reboot the RAID or the host, the FS disappears,
>>> i.e. cat /proc/partitions shows only sdc, not sdc1.
>>> If I go back into parted, the label is intact,
>>> but I can't even mkfs without re-creating the label/partition,
>>> in which case I get:
>>
>> This is a direct result of "set 1 raid on" as explained above. You
>> should see other error messages in dmesg about no superblock being found.
>>
>>> ...Have been written, but we have been unable to inform the kernel of the
>>> change, probably because it/they are in use. As a result, the old partition(s)
>>> will remain in use. You should reboot now before making further changes.
>>> Ignore/Cancel? i
>>
>> Clearing the parted RAID flag on the partition should fix your problem,
>> assuming you haven't done anything else wonky WRT software RAID and this
>> partition that hasn't been presented here.
>>
>> Always remember this: Any time you see "RAID" setup or configuration
>> referenced in Linux documentation or cheat sheets on the web, it is
>> invariably referring to a kernel software function, either md/RAID,
>> dm-raid, etc. It is never referring to hardware RAID devices. If you
>> have a hardware RAID device you will never configure anything RAID
>> related in Linux, whether it be parted, grub, md, dm, etc.
>>
>> --
>> Stan
Thread overview: 11+ messages
2013-05-09 11:29 vess raid stripes disappear mashtin.bakir
2013-05-09 14:59 ` Keith Keller
2013-05-09 15:10 ` Roman Mamedov
2013-05-09 15:32 ` Keith Keller
2013-05-09 15:41 ` Adam Goryachev
2013-05-09 16:23 ` David Brown
2013-05-09 21:33 ` Keith Keller
2013-05-09 21:27 ` NeilBrown
2013-05-10 7:55 ` SOLVED " Stan Hoeppner
2013-05-10 12:44 ` mashtin.bakir
2013-05-10 14:17 ` Stan Hoeppner