* Can you help me on Linux SW RAID?
@ 2005-04-28 10:11 miele
2005-04-28 11:29 ` Jakob Oestergaard
0 siblings, 1 reply; 9+ messages in thread
From: miele @ 2005-04-28 10:11 UTC (permalink / raw)
To: jakob, linux-raid, mingo, bueso
Hi.
First of all, my compliments on your work on the Linux SW RAID HOWTO and the related mailing list.
I have a question for you.
I'm using two Red Hat nodes (kernel version 2.4.21...) with a shared (SCSI) disk array,
and I want to use SW RAID1 to store data on it.
So I followed the steps below on the first node (a rough sketch of the commands follows the list):
1) Using fdisk, I created a partition of type fd (Linux raid autodetect) on each device (disk) of my future RAID device /dev/md0. The partition takes up the entire space of each disk.
2) Set up /etc/raidtab to include my new RAID device /dev/md0 (using persistent-superblock so it is autodetected at startup).
3) Created my RAID device with mkraid (mkraid /dev/md0).
Then, since I want to use LVM too, I...
4) Created a physical volume on my RAID device /dev/md0 (pvcreate /dev/md0 ...).
5) Created the volume group containing the PV created above.
6) Created the logical volume on the VG created above.
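For reference, this is roughly what I did; the device names, sizes and VG/LV names are only examples, and I'm describing the setup I'm asking about, not recommending it:

# 1) Mark both shared partitions as type fd (Linux raid autodetect) with fdisk.
# 2) Describe the mirror in /etc/raidtab (raidtools syntax).
cat > /etc/raidtab <<'EOF'
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            64
    device                /dev/sdb1
    raid-disk             0
    device                /dev/sdc1
    raid-disk             1
EOF

# 3) Build the array.
mkraid /dev/md0

# 4-6) Layer LVM on top of the md device.
pvcreate /dev/md0
vgcreate vg_shared /dev/md0
lvcreate -L 10G -n lv_data vg_shared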
I haven't actually created anything on my LV yet (fs, raw device, ...), since that's not my goal right now, but...
it works! On the first node everything is functioning well.
I also rebooted the first node to check that autodetection works, and it does: my RAID device /dev/md0 starts correctly at boot.
Then I copied /etc/raidtab from the first node to the second node and rebooted it.
Great!!!
Everything is correctly autodetected and functioning on the second node as well!!!
The RAID device /dev/md0 is started and (to my surprise) LVM is already correctly set up...
...vgscan, lvscan and pvscan run on both nodes produce the same output!
Everything looks wonderful, but... just when everything seems OK and my work seems to be finished...
...I'm having serious doubts about this RAID configuration and its stability.
I fear that the SW RAID modules running on each of the two Linux nodes, working on the same data on the shared disk array, could produce conflicts, inconsistencies, or who knows what else...
Can you give me comments or suggestions about the way I've used SW RAID?
Thanks a lot in advance for your attention!
Bye,
Andrea Miele
* Re: Can you help me on Linux SW RAID?
2005-04-28 10:11 miele
@ 2005-04-28 11:29 ` Jakob Oestergaard
0 siblings, 0 replies; 9+ messages in thread
From: Jakob Oestergaard @ 2005-04-28 11:29 UTC (permalink / raw)
To: miele@inwind.it; +Cc: linux-raid, mingo, bueso
On Thu, Apr 28, 2005 at 12:11:37PM +0200, miele@inwind.it wrote:
>
> Hi.
>
...
> Everything looks wonderful, but... just when everything seems OK and my work seems to be finished...
> ...I'm having serious doubts about this RAID configuration and its stability.
With good reason - there is no way that this can work reliably.
> I fear that the SW RAID modules running on each of the two Linux nodes,
> working on the same data on the shared disk array, could produce
> conflicts, inconsistencies, or who knows what else...
This is exactly what will happen. It will not work.
>
> Can you give me comments or suggestions about the way I've used SW RAID?
>
You cannot share storage between multiple SW RAID "masters".
If you take SW RAID and LVM out of the equation, you will still have
problems with the filesystem - you cannot share storage between two
nodes by mounting the same storage. Again, both nodes will write to the
journal with no synchronization between them, both will cache data
locally with no cache synchronization, etc. etc. Disaster lies ahead.
Look into GFS for a solution to the filesystem problem.
You can use some of the linux-ha stuff to make sure that only one node
at a time will actually use the underlying storage, if that is an
acceptable solution (pure fail-over).
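For example, a minimal sketch using heartbeat v1 haresources syntax (node name, address, device and mount point are all made up):

# Sketch of a heartbeat (linux-ha v1) fail-over resource line.  Only the
# node that currently owns the resource group mounts the shared disk, so
# there is a single writer at any time (plain fail-over, no concurrency).
cat >> /etc/ha.d/haresources <<'EOF'
node1 IPaddr::192.168.1.50/24 Filesystem::/dev/sdb1::/shared::ext3
EOF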
But all in all, there's no plug'n'play solution for what you're trying
to accomplish.
Personally, I think I'd get a "storage box" which did the RAID for me
(that could be an off the shelf iSCSI/FC box, or it could be another
Linux box exporting a software RAID over iSCSI, it could be a lot of
things depending on your needs and budget), then, I'd look into GFS or
Oracle's recently opensourced shared filesystem for the shared
filesystem.
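As a rough sketch of the "another Linux box" option (assuming the iSCSI Enterprise Target; every name below is an example):

# On the storage box only: build the mirror and export it as one iSCSI LUN.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

cat >> /etc/ietd.conf <<'EOF'
Target iqn.2005-04.com.example:storage.md0
        Lun 0 Path=/dev/md0,Type=fileio
EOF
/etc/init.d/iscsi-target restart

# Both cluster nodes then log in to this single target and put a cluster
# filesystem (GFS or similar) on the LUN; the RAID itself runs on one box only.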
Maybe others on this list have better suggestions.
But first of all; stop what you are doing now, you are hurting yourself
and your data ;)
--
/ jakob
* Re: Can you help me on Linux SW RAID?
@ 2005-04-28 11:41 miele
2005-04-28 11:49 ` Lars Marowsky-Bree
0 siblings, 1 reply; 9+ messages in thread
From: miele @ 2005-04-28 11:41 UTC (permalink / raw)
To: jakob; +Cc: linux-raid, mingo, bueso
Thanks for your reply, Jakob!
Of course, I plan to use OCFS (Oracle Cluster File System) to share data access between the nodes...
...I have to implement an Oracle 9i RAC.
Anyway, I ran a test.
I mounted a normal fs on my LV, created a text file with "vi", and saved it on both nodes with different contents. It seems to work: to get around data buffering, after unmounting and remounting the fs, the file contains the values written by the last node.
The data-buffering problem should be solved by OCFS...
Do you still discourage my use of SW RAID?
Thanks.
Andrea
> On Thu, Apr 28, 2005 at 12:11:37PM +0200, miele@inwind.it wrote:
> >
> > Hi.
> >
> ...
> > Everything looks wonderful, but... just when everything seems OK and my work seems to be finished...
> > ...I'm having serious doubts about this RAID configuration and its stability.
>
> With good reason - there is no way that this can work reliably.
>
> > I fear that the SW RAID modules running on each of the two Linux nodes,
> > working on the same data on the shared disk array, could produce
> > conflicts, inconsistencies, or who knows what else...
>
> This is exactly what will happen. It will not work.
>
> >
> > Can you give me comments or suggestions about the way I've used SW RAID?
> >
>
> You cannot share storage between multiple SW RAID "masters".
>
> If you take SW RAID and LVM out of the equation, you will still have
> problems with the filesystem - you cannot share storage between two
> nodes by mounting the same storage. Again, both nodes will write to the
> journal with no synchronization between them, both will cache data
> locally with no cache synchronization, etc. etc. Disaster lies ahead.
>
> Look into GFS for a solution to the filesystem problem.
>
> You can use some of the linux-ha stuff to make sure that only one node
> at a time will actually use the underlying storage, if that is an
> acceptable solution (pure fail-over).
>
> But all in all, there's no plug'n'play solution for what you're trying
> to accomplish.
>
> Personally, I think I'd get a "storage box" which did the RAID for me
> (that could be an off the shelf iSCSI/FC box, or it could be another
> Linux box exporting a software RAID over iSCSI, it could be a lot of
> things depending on your needs and budget), then, I'd look into GFS or
> Oracle's recently opensourced shared filesystem for the shared
> filesystem.
>
> Maybe others on this list have better suggestions.
>
> But first of all; stop what you are doing now, you are hurting yourself
> and your data ;)
>
> --
>
> / jakob
>
>
* Re: Can you help me on Linux SW RAID?
2005-04-28 11:41 Can you help me on Linux SW RAID? miele
@ 2005-04-28 11:49 ` Lars Marowsky-Bree
0 siblings, 0 replies; 9+ messages in thread
From: Lars Marowsky-Bree @ 2005-04-28 11:49 UTC (permalink / raw)
To: miele@inwind.it, jakob; +Cc: linux-raid, mingo, bueso
On 2005-04-28T13:41:45, "miele@inwind.it" <miele@inwind.it> wrote:
> Of course, I plan to use OCFS (Oracle Cluster File System) to share
> data access between the nodes...
> ...I have to implement an Oracle 9i RAC.
OCFS2 is fine for that.
> Do you still discourage my use of SW RAID?
md is NOT CLUSTER-AWARE. It will not work, and it will eat your data.
You have been warned.
Run the RAID level of your choice on the SAN backend, and put OCFS2 on
top.
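Roughly like this - just a sketch; it assumes the o2cb cluster stack is already configured on both nodes, and the device name, label and mount point are examples:

# The SAN exports one RAID-protected LUN (here /dev/sdb1, an example name).
mkfs.ocfs2 -L oradata -N 2 /dev/sdb1    # run once, on one node
mount -t ocfs2 /dev/sdb1 /u02           # run on every node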
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business
* Re: Can you help me on Linux SW RAID?
@ 2005-04-28 12:57 miele
2005-04-28 13:06 ` Lars Marowsky-Bree
0 siblings, 1 reply; 9+ messages in thread
From: miele @ 2005-04-28 12:57 UTC (permalink / raw)
To: lmb; +Cc: jakob, linux-raid, mingo, bueso
OK.
So I can conclude that one solution is to have a disk array that does HW RAID.
:-)))
Anyway, leaving RAID aside, I think my nodes should correctly see the same data on the shared disk array if I use Oracle OCFS or Red Hat GFS.
Do you agree?
Have you experienced problems using OCFS instead of OCFS2?
> On 2005-04-28T13:41:45, "miele@inwind.it" <miele@inwind.it> wrote:
>
> > Of course, I plan to use OCFS (Oracle Cluster File System) to share
> > data access between the nodes...
> > ...I have to implement an Oracle 9i RAC.
>
> OCFS2 is fine for that.
>
> > Do you still discourage my use of SW RAID?
>
> md is NOT CLUSTER-AWARE. It will not work, and it will eat your data.
>
> You have been warned.
>
> Run the RAID level of your choice on the SAN backend, and put OCFS2 on
> top.
>
>
> Sincerely,
> Lars Marowsky-Brée <lmb@suse.de>
>
> --
> High Availability & Clustering
> SUSE Labs, Research and Development
> SUSE LINUX Products GmbH - A Novell Business
>
>
* Re: Can you help me on Linux SW RAID?
2005-04-28 12:57 miele
@ 2005-04-28 13:06 ` Lars Marowsky-Bree
2005-04-28 19:18 ` J. Ryan Earl
0 siblings, 1 reply; 9+ messages in thread
From: Lars Marowsky-Bree @ 2005-04-28 13:06 UTC (permalink / raw)
To: miele@inwind.it; +Cc: jakob, linux-raid, mingo, bueso
On 2005-04-28T14:57:29, "miele@inwind.it" <miele@inwind.it> wrote:
> Have you experienced problems using OCFS instead of OCFS2?
OCFS isn't available on 2.6. On 2.6, you have to use OCFS2.
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business
* RE: Can you help me on Linux SW RAID?
2005-04-28 13:06 ` Lars Marowsky-Bree
@ 2005-04-28 19:18 ` J. Ryan Earl
2005-04-28 19:56 ` Lars Marowsky-Bree
0 siblings, 1 reply; 9+ messages in thread
From: J. Ryan Earl @ 2005-04-28 19:18 UTC (permalink / raw)
To: Lars Marowsky-Bree, miele; +Cc: jakob, linux-raid, mingo, bueso
OCFS2 isn't stable yet; I wouldn't suggest using it for production systems.
Furthermore, where did the requirement for 2.6 (the kernel, I assume) come from?
Sounds like he was using RHEL3 anyway...
I'm not familiar with the 9i RAC setup, but I installed a 10g RAC that's in
production now. The only thing I use OCFS for is the Cluster Registry, disk
voting, and the Oracle parameter file. All database nodes have access to
the same set of raw LUNs on the SAN, managed via ASM (Automatic Storage
Management, a 10g feature), for keeping the actual data on. You could just put
all your Oracle files/data on OCFS, but that's not the highest-performance
solution.
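For what it's worth, the data disk group itself is just a one-off statement run against the ASM instance; the raw device paths below are made-up examples:

# Sketch: create an ASM disk group for the database files (10g).
export ORACLE_SID=+ASM
sqlplus / as sysdba <<'EOF'
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/raw/raw5', '/dev/raw/raw6';
EOF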
To have multiple nodes access non-Oracle data on a clustered filesystem,
go with GFS for sure, i.e. a normal POSIX-compliant filesystem. OCFSv1 isn't
POSIX-compliant and only properly stores Oracle-specific files, and though
OCFSv2 is meant to be a generic clustered filesystem, it's not production-ready,
as I mentioned before: http://oss.oracle.com/projects/ocfs2/
How you set up the RAID is a completely different issue from how you cluster
your data among the nodes.
-ryan
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org]On Behalf Of Lars Marowsky-Bree
Sent: Thursday, April 28, 2005 8:07 AM
To: miele@inwind.it
Cc: jakob; linux-raid; mingo; bueso
Subject: Re: Can you help me on Linux SW RAID?
On 2005-04-28T14:57:29, "miele@inwind.it" <miele@inwind.it> wrote:
> Have you experienced problems using OCFS instead of OCFS2?
OCFS isn't available on 2.6. On 2.6, you have to use OCFS2.
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business
* Re: Can you help me on Linux SW RAID?
2005-04-28 19:18 ` J. Ryan Earl
@ 2005-04-28 19:56 ` Lars Marowsky-Bree
0 siblings, 0 replies; 9+ messages in thread
From: Lars Marowsky-Bree @ 2005-04-28 19:56 UTC (permalink / raw)
To: J. Ryan Earl, miele; +Cc: jakob, linux-raid, mingo, bueso
On 2005-04-28T14:18:28, "J. Ryan Earl" <ryan@dynaconnections.com> wrote:
> OCFS2 isn't stable yet; I wouldn't suggest using it for production systems.
> Furthermore, where did the requirement for 2.6 (the kernel, I assume) come from?
> Sounds like he was using RHEL3 anyway...
That wasn't mentioned in the mail I replied to; anyone deploying a new
2.4-based system right now is a bit behind the times, in my opinion. I
hope 2.4 goes away soon ;-)
OCFS2 is already included in SLES9 SP2 beta; we'll ship it for
production use with SLES9 SP2 general availability.
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business