* [linux-lvm] HA Fileserver configuration recommendation sought.
@ 2004-11-19 9:47 Gary Mansell
2004-11-19 17:27 ` Chris Croswhite
2004-11-19 18:21 ` Dan Stromberg
0 siblings, 2 replies; 23+ messages in thread
From: Gary Mansell @ 2004-11-19 9:47 UTC (permalink / raw)
To: linux-lvm
[-- Attachment #1: Type: text/plain, Size: 2458 bytes --]
Dear all,
I currently run RHAS3.0 on a Dell PE2650 which is directly attached to
an EMC FC4500 2TB HW RAID 5 unit via two QLA2310F Fibre Channel cards. I
have EMC's Powerpath software installed to give me multiple paths to the
disk subsystem in case of controller or cable failure.
This configuration has served me well but is now about to be End Of
Life'd by Dell and I need to grow the storage. There is obviously quite
a price premium to pay for this sort of kit so I was hoping to find a
cheaper solution for the future.
I have seen the Nexsan ATABEAST which is a 16TB ATA array with dual
fibre channel controllers. It seems to be well liked and is reasonably
priced but my concern is how to achieve multiple paths to the disk. How
would you recommend achieving a highly available configuration with this
sort of disk subsystem?
Also, am I correct in thinking that the maximum filesystem size with
RHEL 3.0 is 2TB? Is it possible to get over this limitation with LVM or
some other method?
Any advice gladly received.
Thanks in advance
Gary Mansell
[-- Attachment #2: Type: text/html, Size: 2835 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 9:47 [linux-lvm] HA Fileserver configuration recommendation sought Gary Mansell
@ 2004-11-19 17:27 ` Chris Croswhite
2004-11-19 18:24 ` Garrick Staples
2004-11-19 19:45 ` David S.
2004-11-19 18:21 ` Dan Stromberg
1 sibling, 2 replies; 23+ messages in thread
From: Chris Croswhite @ 2004-11-19 17:27 UTC (permalink / raw)
To: Gary.Mansell, LVM general discussion and development
I use the ATABoy 2x (dual active/active FC) with great success. I use
md to create a failover path (see mdadm) on newer kernels (2.6.9) and I
have multiple 3.2T LUNs (multiple ATABoys). I use LVM2 to partition
some of the LUNs with great success.
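Roughly, the md side looks like this (device names are illustrative - two
paths to the same LUN):
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0
LVM then only ever sees /dev/md0, and md handles a failed path underneath.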
Also, if this is a heavily loaded file server, I would highly recommend
using a 2.6.8+ kernel and grabbing whatever patches you can for knfsd and
networking. And look at /proc/sys/vm/vfs_cache_pressure et al.
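For example (the value is illustrative, not a tested recommendation):
echo 50 > /proc/sys/vm/vfs_cache_pressure
Values below the default of 100 make the kernel keep dentry/inode caches
around longer, which can help a busy NFS server.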
Best of Luck.
On Fri, 2004-11-19 at 01:47, Gary Mansell wrote:
> Dear all,
>
> I currently run RHAS3.0 on a Dell PE2650 which is directly attached to
> an EMC FC4500 2TB HW RAID 5 unit via two QLA2310F Fibre Channel cards.
> I have EMC's Powerpath software installed to give me multiple paths to
> the disk subsystem in case of controller or cable failure.
>
> This configuration has served me well but is now about to be End Of
> Life'd by Dell and I need to grow the storage. There is obviously
> quite a price premium to pay for this sort of kit so I was hoping to
> find a cheaper solution for the future.
>
> I have seen the Nexsan ATABEAST which is a 16TB ATA array with dual
> fibre channel controllers. It seems to be well liked and is reasonably
> priced but my concern is how to achieve multiple paths to the disk.
> How would you recommend achieving a highly available configuration
> with this sort of disk subsystem?
>
> Also, am I correct in thinking that the maximum filesystem size with
> RHEL 3.0 is 2TB? Is it possible to get over this limitation with LVM
> or some other method?
>
> Any advice gladly received.
>
> Thanks in advance
>
> Gary Mansell
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 17:27 ` Chris Croswhite
@ 2004-11-19 18:24 ` Garrick Staples
2004-11-19 18:30 ` Chris Croswhite
2004-11-19 19:45 ` David S.
1 sibling, 1 reply; 23+ messages in thread
From: Garrick Staples @ 2004-11-19 18:24 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 553 bytes --]
On Fri, Nov 19, 2004 at 09:27:47AM -0800, Chris Croswhite alleged:
> I use the ATABoy 2x (dual active/active FC) with great success. I use
> md to create a failover path (see mdadm) on newer kernels (2.6.9) and I
> have multiple 3.2T LUNs (multiple ATABoys). I use LVM2 to partition
> some of the LUNs with great success.
How do you solve the problem of lvm2 seeing 3 duplicate PVs (2 paths and 1 md
device)? Are you using the filter in /etc/lvm/lvm.conf?
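Something like this, I mean, to accept only the md device and reject the
raw paths so the duplicate PVs are never scanned (patterns illustrative):
filter = [ "a|^/dev/md.*|", "r|.*|" ]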
--
Garrick Staples, Linux/HPCC Administrator
University of Southern California
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 17:27 ` Chris Croswhite
2004-11-19 18:24 ` Garrick Staples
@ 2004-11-19 19:45 ` David S.
1 sibling, 0 replies; 23+ messages in thread
From: David S. @ 2004-11-19 19:45 UTC (permalink / raw)
To: linux-lvm
> I use the ATABoy 2x (dual active/active FC) with great success. I use
> md to create a failover path (see mdadm) on newer kernels (2.6.9) and I
> have multiple 3.2T LUNs (multiple ATABoys). I use LVM2 to partition
> some of the LUNs with great success.
I tried using LVM2 on a 2.6.9 kernel to stripe together four 1T LUNs
from two 3ware 9000 SATA RAID controllers to make a ~4T file system,
but experienced file corruption under heavy I/O loads. In copying
large directories into the file system (via tar+rsh/ssh or rsync+rsh/ssh),
I'd get files that seemed to have blocks exchanged with a different file.
I saw this using XFS, ReiserFS, and ext3 as the file system in the LVM2
stripe. Some tests I made of a similar configuration using 'md' instead
of LVM2 were more successful, but I didn't put quite the same I/O load on
that configuration.
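The stripe was created along these lines (VG/LV names illustrative):
lvcreate -i 4 -I 64 -L 3900G -n stripedlv datavg
i.e. four stripes with a 64k stripe size across the four 1T LUNs.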
David S.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 9:47 [linux-lvm] HA Fileserver configuration recommendation sought Gary Mansell
2004-11-19 17:27 ` Chris Croswhite
@ 2004-11-19 18:21 ` Dan Stromberg
2004-11-19 19:24 ` David S.
1 sibling, 1 reply; 23+ messages in thread
From: Dan Stromberg @ 2004-11-19 18:21 UTC (permalink / raw)
To: Gary.Mansell, LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 3726 bytes --]
I have the results I obtained while experimenting with > 2T filesystems
on RHEL 3 and FC 2 at the following URL:
http://dcs.nac.uci.edu/~strombrg/nbd.html
It mostly covers NBD and ENBD, but I believe you'll find some things
relevant to what you're looking for in it as well - e.g. there is some md
and LVM coverage.
Another alternative is Lustre. It is a filesystem which purports to be
able to aggregate the disks of multiple storage computers into one or
more gigantic filesystems. It isn't stable for us, but it may be
someday. :) The Lustre vendor claims to be able to break not just the
2T barrier, but also the 16T barrier. We do have a Lustre filesystem of
over 16T set up now, but it remains to be seen what will happen when we
actually put over 16T of -data- in it.
Another option may be the "LBD" patches. I haven't tried them, nor even
studied them.
On Fri, 2004-11-19 at 09:47 +0000, Gary Mansell wrote:
> Dear all,
>
> I currently run RHAS3.0 on a Dell PE2650 which is directly attached to
> an EMC FC4500 2TB HW RAID 5 unit via two QLA2310F Fibre Channel cards.
> I have EMC's Powerpath software installed to give me multiple paths to
> the disk subsystem in case of controller or cable failure.
>
> This configuration has served me well but is now about to be End Of
> Life'd by Dell and I need to grow the storage. There is obviously
> quite a price premium to pay for this sort of kit so I was hoping to
> find a cheaper solution for the future.
>
> I have seen the Nexsan ATABEAST which is a 16TB ATA array with dual
> fibre channel controllers. It seems to be well liked and is reasonably
> priced but my concern is how to achieve multiple paths to the disk.
> How would you recommend achieving a highly available configuration
> with this sort of disk subsystem?
>
> Also, am I correct in thinking that the maximum filesystem size with
> RHEL 3.0 is 2TB? Is it possible to get over this limitation with LVM
> or some other method?
>
> Any advice gladly received.
>
> Thanks in advance
>
> Gary Mansell
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 18:21 ` Dan Stromberg
@ 2004-11-19 19:24 ` David S.
2004-11-20 0:13 ` Dan Stromberg
0 siblings, 1 reply; 23+ messages in thread
From: David S. @ 2004-11-19 19:24 UTC (permalink / raw)
To: LVM general discussion and development
>
> Another alternative is Lustre. It is a filesystem which purports to be
> able to aggregate the disks of multiple storage computers into one or
> more gigantic filesystems. It isn't stable for us, but it may be
> someday. :) The Lustre vendor claims to be able to break not just the
> 2T barrier, but also the 16T barrier. We do have a lustre filesystem of
> over 16T set up now, but it remains to be seen what will happen when we
> actually put over 16T of -data- in it.
>
> Another option may be the "LBD" patches. I haven't tried them, nor even
> studied about them.
You can use PVFS2 (http://www.pvfs.org/pvfs2/) to aggregate disks from
multiple servers into one big file system, though PVFS2 is specifically
designed for parallel applications and may not be suited to your
purposes. You can sort of do it with AFS (http://www.openafs.org),
with multiple file servers servicing the same name space. AFS is again
rather different from an "ordinary" file system, however. I know you
can do it with IBM's GPFS (http://www-1.ibm.com/servers/eserver/clusters/software/gpfs.html);
we use GPFS here on two servers attached to an IBM SAN to create a
6.5T file system. But if you decide to take a run at GPFS, don't
blame me.
David S.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-19 19:24 ` David S.
@ 2004-11-20 0:13 ` Dan Stromberg
2004-11-20 0:22 ` Garrick Staples
2004-11-20 0:33 ` [linux-lvm] HA Fileserver configuration recommendation sought David S.
0 siblings, 2 replies; 23+ messages in thread
From: Dan Stromberg @ 2004-11-20 0:13 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 863 bytes --]
On Fri, 2004-11-19 at 11:24 -0800, David S. wrote:
> You can use PVFS2 (http://www.pvfs.org/pvfs2/) to aggregate disks from
> multiple servers into one big file system, though PVFS2 is specifically
> designed for parallel applications and may not be suited to your
> purposes. You can sort of do it with AFS (http://www.openafs.org),
> with multiple file servers servicing the same name space. AFS is again
> rather different from an "ordinary" file system, however. I know you
> can do it with IBM's GPFS (http://www-1.ibm.com/servers/eserver/clusters/software/gpfs.html);
> we use GPFS here on two servers attached to an IBM SAN to create a
> 6.5T file system. But if you decide to take a run at GPFS, don't
> blame me.
Does PVFS2 have posix semantics these days?
My understanding is that IBM will only support GPFS on IBM hardware...
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 0:13 ` Dan Stromberg
@ 2004-11-20 0:22 ` Garrick Staples
2004-11-20 1:36 ` Dan Stromberg
2004-11-20 0:33 ` [linux-lvm] HA Fileserver configuration recommendation sought David S.
1 sibling, 1 reply; 23+ messages in thread
From: Garrick Staples @ 2004-11-20 0:22 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 1368 bytes --]
On Fri, Nov 19, 2004 at 04:13:45PM -0800, Dan Stromberg alleged:
> On Fri, 2004-11-19 at 11:24 -0800, David S. wrote:
> > You can use PVFS2 (http://www.pvfs.org/pvfs2/) to aggregate disks from
> > multiple servers into one big file system, though PVFS2 is specifically
> > designed for parallel applications and may not be suited to your
> > purposes. You can sort of do it with AFS (http://www.openafs.org),
> > with multiple file servers servicing the same name space. AFS is again
> > rather different from an "ordinary" file system, however. I know you
> > can do it with IBM's GPFS (http://www-1.ibm.com/servers/eserver/clusters/software/gpfs.html);
> > we use GPFS here on two servers attached to an IBM SAN to create a
> > 6.5T file system. But if you decide to take a run at GPFS, don't
> > blame me.
>
> Does PVFS2 have posix semantics these days?
No. I was at the pvfs2 BOF at SC04 last week and this is one of their selling
points... much greater scalability and performance by throwing out half of POSIX.
> My understanding is that IBM will only support GPFS on IBM hardware...
Nor is it really FOSS.
Ultimately, it seems to me that GFS over clustered LVM is the way to go; though
all the bits don't seem to be quite in place just yet.
--
Garrick Staples, Linux/HPCC Administrator
University of Southern California
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 0:22 ` Garrick Staples
@ 2004-11-20 1:36 ` Dan Stromberg
2004-11-20 2:08 ` Garrick Staples
0 siblings, 1 reply; 23+ messages in thread
From: Dan Stromberg @ 2004-11-20 1:36 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 868 bytes --]
On Fri, 2004-11-19 at 16:22 -0800, Garrick Staples wrote:
> > Does PVFS2 have posix semantics these days?
>
> No. I was at the pvfs2 BOF at SC04 last week and this is one of their selling
> points... much greater scalability and performance by throwing out half of POSIX.
Thanks for the info!
> > My understanding is that IBM will only support GPFS on IBM hardware...
>
> Nor is it really FOSS.
I see. Is it kind of like the Sun Java or Mozilla licenses?
> Ultimately, it seems to me that GFS over clustered LVM is the way to go; though
> all the bits don't seem to be quite in place just yet.
What's this "clustered LVM" thing? Is it the same as (or built upon)
NBD/ENBD? Does it have the 2 terabyte or 16 terabyte limits? Google
just turns up some mailing list hits at the top - is there a "stable"
release of CLVM yet?
Thanks!
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 1:36 ` Dan Stromberg
@ 2004-11-20 2:08 ` Garrick Staples
2004-11-20 2:18 ` Dan Stromberg
0 siblings, 1 reply; 23+ messages in thread
From: Garrick Staples @ 2004-11-20 2:08 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 1376 bytes --]
On Fri, Nov 19, 2004 at 05:36:26PM -0800, Dan Stromberg alleged:
> > > My understanding is that IBM will only support GPFS on IBM hardware...
> >
> > Nor is it really FOSS.
>
> I see. Is it kind of like the Sun Java or Mozilla licenses?
Last I checked, which has been a while now, it was a binary-only download,
built for only a few specific RH kernels.
Checking now, it's up on developerworks with a BSD license, but I don't
actually see any released files.
I'm half-way sure someone is going to correct me here :)
> > Ultimately, it seems to me that GFS over clustered LVM is the way to go; though
> > all the bits don't seem to be quite in place just yet.
>
> What's this "clustered LVM" thing? Is it the same as (or built upon)
> NBD/ENBD? Does it have the 2 terabyte or 16 terabyte limits? Google
> just turns up some mailing list hits at the top - is there a "stable"
> release of CLVM yet?
Cluster extensions for LVM. Given shared storage, a few configs, and running
clvmd, you can activate LVs on multiple machines at once. This is great for
GFS, where it replaces the gfs "pools".
http://sources.redhat.com/cluster/
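A rough sketch of what that looks like, assuming clvmd is already running
on every node (names illustrative):
vgcreate -cy sharedvg /dev/sdb     # mark the VG as clustered
lvcreate -L 100G -n gfslv sharedvg
vgchange -ay sharedvg              # on each node that will mount it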
As I said, not all of the right bits seem to be in place yet. But I expect
RHEL4 to be very capable in this department.
--
Garrick Staples, Linux/HPCC Administrator
University of Southern California
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 2:08 ` Garrick Staples
@ 2004-11-20 2:18 ` Dan Stromberg
2004-11-20 2:48 ` David Aquilina
2004-11-22 15:36 ` Kevin Anderson
0 siblings, 2 replies; 23+ messages in thread
From: Dan Stromberg @ 2004-11-20 2:18 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 1391 bytes --]
On Fri, 2004-11-19 at 18:08 -0800, Garrick Staples wrote:
> > > Ultimately, it seems to me that GFS over clustered LVM is the way to go; though
> > > all the bits don't seem to be quite in place just yet.
> >
> > What's this "clustered LVM" thing? Is it the same as (or built upon)
> > NBD/ENBD? Does it have the 2 terabyte or 16 terabyte limits? Google
> > just turns up some mailing list hits at the top - is there a "stable"
> > release of CLVM yet?
>
> Cluster extensions for LVM. Given shared storage, a few configs, and running
> clvmd, you can activate LVs on multiple machines at once. This is great for
> GFS, where it replaces the gfs "pools".
>
> http://sources.redhat.com/cluster/
>
> As I said, not all of the right bits seem to be in place yet. But I expect
> RHEL4 to be very capable in this department.
Last I heard:
Sistina (prior to the Redhat purchase) had a roadmap for GFS, which
included bringing it up to, but not surpassing, the 16 terabyte limit.
Since the Redhat purchase, my understanding is that this previous
roadmap has been scrapped, and there are no longer any immediate plans
to raise the GFS filesystem size limit from 2 terabytes to 16 terabytes
- in which case, you can pretty much just use NFS, unless you're stuck
with small disks or low density servers. :)
I'd -really- like to find out this isn't true.
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 2:18 ` Dan Stromberg
@ 2004-11-20 2:48 ` David Aquilina
2004-11-22 15:36 ` Kevin Anderson
1 sibling, 0 replies; 23+ messages in thread
From: David Aquilina @ 2004-11-20 2:48 UTC (permalink / raw)
To: LVM general discussion and development
On Fri, 19 Nov 2004 18:18:16 -0800, Dan Stromberg
<strombrg@dcs.nac.uci.edu> wrote:
> Since the Redhat purchase, my understanding is that this previous
> roadmap has been scrapped, and there are no longer any immediate plans
> to raise the GFS filesystem size limit from 2 terabytes to 16 terabytes
> - in which case, you can pretty much just use NFS, unless you're stuck
> with small disks or low density servers. :)
As far as I understand it, right now RHEL3's 2.4 kernel is the cause
of the 2TB hard limit. Once everything moves to RHEL4/2.6, you should
be able to get significantly higher than the current limit, especially
on AMD64 hardware.
I could be completely wrong, though.
--
David Aquilina, RHCE
dwaquilina@gmail.com
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 2:18 ` Dan Stromberg
2004-11-20 2:48 ` David Aquilina
@ 2004-11-22 15:36 ` Kevin Anderson
2004-11-23 0:42 ` Dan Stromberg
1 sibling, 1 reply; 23+ messages in thread
From: Kevin Anderson @ 2004-11-22 15:36 UTC (permalink / raw)
To: LVM general discussion and development
On Fri, 2004-11-19 at 20:18, Dan Stromberg wrote:
>
> Last I heard:
>
> Sistina (prior to the Redhat purchase) had a roadmap for GFS, which
> included bringing it up to, but not surpassing, the 16 terabyte limit.
Plans were always to go way past 16TB; it requires 64-bit hardware
architectures and the 2.6 kernel to push the limits.
>
> Since the Redhat purchase, my understanding is that this previous
> roadmap has been scrapped, and there are no longer any immediate plans
> to raise the GFS filesystem size limit from 2 terabytes to 16 terabytes
> - in which case, you can pretty much just use NFS, unless you're stuck
> with small disks or low density servers. :)
>
> I'd -really- like to find out this isn't true.
>
For the 2.6 kernel, GFS should be able to go to 16TB on 32-bit machines.
On 64-bit machines, the theoretical limit is something like 8 exabytes.
If anyone wants to pony up that amount of storage, we would be more than
happy to give it a go. In the meantime, smaller sizes are going to be
validated. So, plans have not been scrapped or even changed much from
Sistina to Red Hat.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-22 15:36 ` Kevin Anderson
@ 2004-11-23 0:42 ` Dan Stromberg
2004-11-23 2:10 ` [linux-lvm] Re: lvextend error on XFS Frank J. Buchholz
0 siblings, 1 reply; 23+ messages in thread
From: Dan Stromberg @ 2004-11-23 0:42 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 2121 bytes --]
On Mon, 2004-11-22 at 07:36, Kevin Anderson wrote:
> On Fri, 2004-11-19 at 20:18, Dan Stromberg wrote:
>
> >
> > Last I heard:
> >
> > Sistina (prior to the Redhat purchase) had a roadmap for GFS, which
> > included bringing it up to, but not surpassing, the 16 terabyte limit.
> Plans were always to go way past 16TB, requires 64bit hardware
> architectures and the 2.6 kernel to push the limits.
I see. Thank you for clearing that up.
I'm now guessing that the person who told me this was restricting his
vision to 32-bit hardware.
> >
> > Since the Redhat purchase, my understanding is that this previous
> > roadmap has been scrapped, and there are no longer any immediate plans
> > to raise the GFS filesystem size limit from 2 terabytes to 16 terabytes
> > - in which case, you can pretty much just use NFS, unless you're stuck
> > with small disks or low density servers. :)
> >
> > I'd -really- like to find out this isn't true.
> >
> For the 2.6 kernel, GFS should be able to go to 16TB on 32-bit machines.
> On 64-bit machines, the theoretical limit is something like 8 exabytes.
> If anyone wants to pony up that amount of storage, we would be more than
> happy to give it a go. In the meantime, smaller sizes are going to be
> validated. So, plans have not been scrapped or even changed much from
> Sistina to Red Hat.
What would GFS on kernel 2.6 do with a bunch of 32-bit machines with
disks attached to them, but a 64-bit client?
Is GFS for kernel 2.6 ready at this point? Or nearly so?
Thanks for the great info. :)
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] Re: lvextend error on XFS
2004-11-23 0:42 ` Dan Stromberg
@ 2004-11-23 2:10 ` Frank J. Buchholz
2004-11-23 8:38 ` David Greaves
0 siblings, 1 reply; 23+ messages in thread
From: Frank J. Buchholz @ 2004-11-23 2:10 UTC (permalink / raw)
To: LVM general discussion and development
>"Frank J. Buchholz" <frankb ercwc org> writes:
>
>>> "Frank J. Buchholz" <frankb ercwc org> writes:
>>>
>>>> Hello,
>>>>
>>>> I recently attempted to extend my logical volume. First I added an
>>>> additional physical volume to an existing volume group. This worked
>>>> fine.
>>>>
>>>> vgextend Volume00 /dev/sba
>>>> lvextend -L+100G /dev/Volume00/LogVol00
>>>>
>>>> However when it came time to run the lvextend command I received a
>>>> number of device-mapper errors. While I was trying to determine what
>>>> the errors were I noticed that the filesystem that sits on the logical
>>>> volume being extended was no longer available. I attempted to umount
>>>> the filesystem however the command froze. I then rebooted the system
>>>> without mounting the filesystem in question and manually mounted the
>>>> filesystem. XFS reported back that it could not locate the superblock.
>>>
>>> I have done lvextend followed by xfs_growfs many times without any
>>> problems. Have you checked for hardware errors?
>>
>> Unfortunately the timing is far too coincidental to be a hardware error.
>
>Since you had just added another disk, it may not be so coincidental
>after all.
>
>> Just after typing the following command and receiving the following error
>> lvm> lvextend -L+1G /dev/Volume00/LogVol00
>> Extending logical volume LogVol00 to 2.93 TB
>> device-mapper ioctl cmd 9 failed: Invalid argument
>> Couldn't load device 'Volume00-LogVol00'.
>> Problem reactivating LogVol00
>
>The last line would explain why the filesystem went offline.
>Something went wrong just after the LV had been extended, and was
>about to be reactivated.
>
>> I then noticed that the filesystem on LogVol00 was no longer available
>> and when I ran xfs_repair it stated the following:
>> # xfs_repair /dev/Volume00/LogVol00
>> Phase 1 - find and verify superblock...
>> superblock read failed, offset 0, size 524288, ag 0, rval 0
>>
>> fatal error -- Invalid argument
>
>What does "dmesg" have to say about this?
>
>I had a problem once with some strange errors from a disk. It turned
>out the cable wasn't plugged in properly.
>
>--
>Måns Rullgård
>mru inprovide com
Hello Måns
I now realize how I created this problem; I just don't know how to fix it.
I mistakenly added /dev/sba as the physical volume to a volume group
that contained /dev/sba1, the one partition on sba. These are
essentially one and the same. So when I executed the lvextend command,
device-mapper had an error. I'm honestly surprised it did anything,
especially write over the superblock on the filesystem.
Any direction on how I can recover from within LVM? I never was able to
run xfs_growfs, so I'm hoping the data in the filesystem still exists.
Thanks,
Frank
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] Re: lvextend error on XFS
2004-11-23 2:10 ` [linux-lvm] Re: lvextend error on XFS Frank J. Buchholz
@ 2004-11-23 8:38 ` David Greaves
2004-11-23 12:37 ` Frank J. Buchholz
0 siblings, 1 reply; 23+ messages in thread
From: David Greaves @ 2004-11-23 8:38 UTC (permalink / raw)
To: LVM general discussion and development
Frank J. Buchholz wrote:
>>> I then noticed that the filesystem on LogVol00 was no longer available
>>> and when I ran xfs_repair it stated the following:
>>> # xfs_repair /dev/Volume00/LogVol00 Phase 1 - find and verify
>>> superblock...
>>> superblock read failed, offset 0, size 524288, ag 0, rval 0
>>>
>>> fatal error -- Invalid argument
>>
> Hello Måns
> I now realize how I created this problem; I just don't know how to fix
> it.
>
> I mistakenly added /dev/sba as the physical volume to a volume group
> that contained /dev/sba1, the one partition on sba. These are
> essentially one and the same. So when I executed the lvextend command,
> device-mapper had an error. I'm honestly surprised it did anything,
> especially write over the superblock on the filesystem.
>
> Any direction on how I can recover from within LVM? I never was able
> to run xfs_growfs, so I'm hoping the data in the filesystem still
> exists.
try
xfs_repair -n -o assume_xfs /dev/Volume00/LogVol00
Then remove the -n
(man xfs_repair)
David
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] Re: lvextend error on XFS
2004-11-23 8:38 ` David Greaves
@ 2004-11-23 12:37 ` Frank J. Buchholz
0 siblings, 0 replies; 23+ messages in thread
From: Frank J. Buchholz @ 2004-11-23 12:37 UTC (permalink / raw)
To: LVM general discussion and development
David Greaves wrote:
> Frank J. Buchholz wrote:
>
>>>> I then noticed that the filesystem on LogVol00 was no longer available
>>>> and when I ran xfs_repair it stated the following:
>>>> # xfs_repair /dev/Volume00/LogVol00 Phase 1 - find and verify
>>>> superblock...
>>>> superblock read failed, offset 0, size 524288, ag 0, rval 0
>>>>
>>>> fatal error -- Invalid argument
>>>
>>>
>> Hello Måns
>> I now realize how I created this problem; I just don't know how to
>> fix it.
>>
>> I mistakenly added /dev/sba as the physical volume to a volume group
>> that contained /dev/sba1, the one partition on sba. These are
>> essentially one and the same. So when I executed the lvextend command,
>> device-mapper had an error. I'm honestly surprised it did anything,
>> especially write over the superblock on the filesystem.
>>
>> Any direction on how I can recover from within LVM? I never was able
>> to run xfs_growfs, so I'm hoping the data in the filesystem
>> still exists.
>
>
> try
> xfs_repair -n -o assume_xfs /dev/Volume00/LogVol00
> Then remove the -n
>
> (man xfs_repair)
>
> David
Unfortunately I've already tried this. Here are the results.
# xfs_repair -n -o assume_xfs /dev/Volume00/LogVol00
Phase 1 - find and verify superblock...
superblock read failed, offset 0, size 524288, ag 0, rval 0
fatal error -- Invalid argument
I've discussed this on the XFS list and they recommended I try to repair
this via LVM.
Given that I never ran xfs_growfs, is it possible to reduce the logical
volume back to the original size and then remove the physical volume
that caused the problem? Is there some way to recover using a
previous .vg file in /etc/lvm/archive?
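I'm guessing the rollback would look something like the following (the
archive filename here is hypothetical), but I don't want to try it blind:
vgcfgrestore --list Volume00
vgcfgrestore -f /etc/lvm/archive/Volume00_00042.vg Volume00
vgchange -ay Volume00
i.e. list the archived metadata, restore the version from just before the
failed lvextend, then reactivate.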
Thanks,
Frank
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 0:13 ` Dan Stromberg
2004-11-20 0:22 ` Garrick Staples
@ 2004-11-20 0:33 ` David S.
2004-11-20 1:40 ` Dan Stromberg
1 sibling, 1 reply; 23+ messages in thread
From: David S. @ 2004-11-20 0:33 UTC (permalink / raw)
To: LVM general discussion and development
On Fri, Nov 19, 2004 at 04:13:45PM -0800, Dan Stromberg wrote:
> On Fri, 2004-11-19 at 11:24 -0800, David S. wrote:
> > You can use PVFS2 (http://www.pvfs.org/pvfs2/) to aggregate disks from
> > multiple servers into on big file system. Though PVFS2 is specifically
> > designed for parallel applications, and may not be suited for your
> > purposes. You can sort-of do it with AFS (http://www.openafs.org),
> > with mutiple file servers servicing the same name space. AFS is again
> > rather different from an "ordinary" file system, however. I know you
> > can do it with IBM's GPFS (http://www-1.ibm.com/servers/eserver/clusters/software/gpfs.html);
> > we use GPFS here on two servers attached to an IBM SAN to create a
> > 6.5T file system. But if you decide to take a run at GPFS, don't
> > blame me.
>
> Does PVFS2 have posix semantics these days?
It has a kernel interface, so it supports the familiar open(2), lseek(2),
read(2), write(2), ... calls. It doesn't provide POSIX semantics.
>
> My understanding is that IBM will only support GPFS on IBM hardware...
>
Our SAN is from IBM, and our servers and most of our clients are
IBM-branded commodity x86 hardware. IBM will support GPFS on non-IBM
commodity hardware if you pay it enough. But you'd have to really
enjoy pain to pay for GPFS.
David S.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [linux-lvm] HA Fileserver configuration recommendation sought.
2004-11-20 0:33 ` [linux-lvm] HA Fileserver configuration recommendation sought David S.
@ 2004-11-20 1:40 ` Dan Stromberg
0 siblings, 0 replies; 23+ messages in thread
From: Dan Stromberg @ 2004-11-20 1:40 UTC (permalink / raw)
To: LVM general discussion and development
On Fri, 2004-11-19 at 16:33 -0800, David S. wrote:
> > Does PVFS2 have posix semantics these days?
>
> It has a kernel interface, so it supports the familiar open(2), lseek(2),
> read(2), write(2), ... calls. It doesn't provide POSIX semantics.
What's different, then? Are stat and friends still there and
non-neutered? Or are we talking more along the lines of just minor things
(to my current application) like sync/fsync differences?
> >
> > My understanding is that IBM will only support GPFS on IBM hardware...
> >
>
> Our SAN is from IBM, and our servers and most of our clients are
> IBM-branded commodity x86 hardware. IBM will support GPFS on non-IBM
> commodity hardware if you pay it enough. But you'd have to really
> enjoy pain to pay for GPFS.
Why is GPFS painful? Do you have an order-of-magnitude guesstimate of
how much you'd have to pay IBM to support GPFS on non-IBM hardware?
Thanks!
^ permalink raw reply [flat|nested] 23+ messages in thread
* [linux-lvm] lvextend error on XFS
@ 2004-11-22 20:48 Frank J. Buchholz
2004-11-22 21:02 ` [linux-lvm] " Måns Rullgård
0 siblings, 1 reply; 23+ messages in thread
From: Frank J. Buchholz @ 2004-11-22 20:48 UTC (permalink / raw)
To: linux-lvm
Hello,
I recently attempted to extend my logical volume. First I added an
additional physical volume to an existing volume group. This worked fine.
vgextend Volume00 /dev/sba
lvextend -L+100G /dev/Volume00/LogVol00
However when it came time to run the lvextend command I received a
number of device-mapper errors. While I was trying to determine what
the errors were I noticed that the filesystem that sits on the logical
volume being extended was no longer available. I attempted to umount
the filesystem however the command froze. I then rebooted the system
without mounting the filesystem in question and manually mounted the
filesystem. XFS reported back that it could not locate the superblock.
I attempted to run xfs_repair however this command failed at the first
stage because it could not find the superblock.
Is it possible to repair this problem either through LVM or XFS? I
noticed there are a number of archived .vg files in /etc/lvm/archive;
is it possible to restore LVM from one of these?
I am not running any snapshot copies, which is where I've read this
occurring before.
Thanks,
Frank
--
Frank J. Buchholz
Education and Research Services
828-350-2421 Office
^ permalink raw reply [flat|nested] 23+ messages in thread
* [linux-lvm] Re: lvextend error on XFS
2004-11-22 20:48 [linux-lvm] lvextend error on XFS Frank J. Buchholz
@ 2004-11-22 21:02 ` Måns Rullgård
0 siblings, 0 replies; 23+ messages in thread
From: Måns Rullgård @ 2004-11-22 21:02 UTC (permalink / raw)
To: linux-lvm
"Frank J. Buchholz" <frankb@ercwc.org> writes:
> Hello,
>
> I recently attempted to extend my logical volume. First I added an
> additional physical volume to an existing volume group. This worked
> fine.
>
> vgextend Volume00 /dev/sba
> lvextend -L+100G /dev/Volume00/LogVol00
>
> However when it came time to run the lvextend command I received a
> number of device-mapper errors. While I was trying to determine what
> the errors were I noticed that the filesystem that sits on the logical
> volume being extended was no longer available. I attempted to umount
> the filesystem however the command froze. I then rebooted the system
> without mounting the filesystem in question and manually mounted the
> filesystem. XFS reported back that it could not locate the superblock.
I have done lvextend followed by xfs_growfs many times without any
problems. Have you checked for hardware errors?
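For reference, the working sequence is just (mount point illustrative;
the filesystem stays mounted for xfs_growfs):
lvextend -L+100G /dev/Volume00/LogVol00
xfs_growfs /mnt/LogVol00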
--
Måns Rullgård
mru@inprovide.com
^ permalink raw reply [flat|nested] 23+ messages in thread
* [linux-lvm] Re: lvextend error on XFS
@ 2004-11-22 22:08 Frank J. Buchholz
2004-11-22 22:23 ` Måns Rullgård
0 siblings, 1 reply; 23+ messages in thread
From: Frank J. Buchholz @ 2004-11-22 22:08 UTC (permalink / raw)
To: linux-lvm
"Frank J. Buchholz" <frankb ercwc org> writes:
> Hello,
>
> I recently attempted to extend my logical volume. First I added an
> additional physical volume to an existing volume group. This worked
> fine.
>
> vgextend Volume00 /dev/sba
> lvextend -L+100G /dev/Volume00/LogVol00
>
> However when it came time to run the lvextend command I received a
> number of device-mapper errors. While I was trying to determine what
> the errors were I noticed that the filesystem that sits on the logical
> volume being extended was no longer available. I attempted to umount
> the filesystem however the command froze. I then rebooted the system
> without mounting the filesystem in question and manually mounted the
> filesystem. XFS reported back that it could not locate the superblock.
I have done lvextend followed by xfs_growfs many times without any
problems. Have you checked for hardware errors?
--
Måns Rullgård
mru inprovide com
Måns -
Unfortunately the timing is far too coincidental to be a hardware error.
Just after typing the following command and receiving the following error:
lvm> lvextend -L+1G /dev/Volume00/LogVol00
Extending logical volume LogVol00 to 2.93 TB
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'Volume00-LogVol00'.
Problem reactivating LogVol00
I then noticed that the filesystem on LogVol00 was no longer available
and when I ran xfs_repair it stated the following:
# xfs_repair /dev/Volume00/LogVol00
Phase 1 - find and verify superblock...
superblock read failed, offset 0, size 524288, ag 0, rval 0
fatal error -- Invalid argument
^ permalink raw reply [flat|nested] 23+ messages in thread
* [linux-lvm] Re: lvextend error on XFS
2004-11-22 22:08 Frank J. Buchholz
@ 2004-11-22 22:23 ` Måns Rullgård
0 siblings, 0 replies; 23+ messages in thread
From: Måns Rullgård @ 2004-11-22 22:23 UTC (permalink / raw)
To: linux-lvm
"Frank J. Buchholz" <frankb@ercwc.org> writes:
>> "Frank J. Buchholz" <frankb ercwc org> writes:
>>
>>> Hello,
>>>
>>> I recently attempted to extend my logical volume. First I added an
>>> additional physical volume to an existing volume group. This worked
>>> fine.
>>>
>>> vgextend Volume00 /dev/sba
>>> lvextend -L+100G /dev/Volume00/LogVol00
>>>
>>> However when it came time to run the lvextend command I received a
>>> number of device-mapper errors. While I was trying to determine what
>>> the errors were I noticed that the filesystem that sits on the logical
>>> volume being extended was no longer available. I attempted to umount
>>> the filesystem however the command froze. I then rebooted the system
>>> without mounting the filesystem in question and manually mounted the
>>> filesystem. XFS reported back that it could not locate the superblock.
>>
>> I have done lvextend followed by xfs_growfs many times without any
>> problems. Have you checked for hardware errors?
>
> Unfortunately the timing is far too coincidental to be a hardware error.
Since you had just added another disk, it may not be so coincidental
after all.
> Just after typing the following command and receiving the following error:
> lvm> lvextend -L+1G /dev/Volume00/LogVol00
> Extending logical volume LogVol00 to 2.93 TB
> device-mapper ioctl cmd 9 failed: Invalid argument
> Couldn't load device 'Volume00-LogVol00'.
> Problem reactivating LogVol00
The last line would explain why the filesystem went offline.
Something went wrong just after the LV had been extended, and was
about to be reactivated.
> I then noticed that the filesystem on LogVol00 was no longer available
> and when I ran xfs_repair it stated the following:
> # xfs_repair /dev/Volume00/LogVol00
> Phase 1 - find and verify superblock...
> superblock read failed, offset 0, size 524288, ag 0, rval 0
>
> fatal error -- Invalid argument
What does "dmesg" have to say about this?
I had a problem once with some strange errors from a disk. It turned
out the cable wasn't plugged in properly.
--
Måns Rullgård
mru@inprovide.com
^ permalink raw reply [flat|nested] 23+ messages in thread