* mkfs.xfs created filesystem larger than underlying device
@ 2009-06-24 20:48 Michael Moody
2009-06-24 22:24 ` Eric Sandeen
0 siblings, 1 reply; 8+ messages in thread
From: Michael Moody @ 2009-06-24 20:48 UTC (permalink / raw)
To: xfs@oss.sgi.com
Hello all.
I recently created an XFS filesystem on an x86_64 CentOS 5.3 system, using the tools from the distribution repository:
xfsprogs-2.9.4-1
Kernel 2.6.18-128.1.10.el5.centos.plus
It is a somewhat complex configuration:
An Areca RAID card with 16 1.5TB drives in RAID 6 with 1 hot spare (a 100GB volume was created for the OS; the rest is one large volume of ~19TB).
I used pvcreate /dev/sdb to create a physical volume for LVM on the 19TB volume.
I then used vgcreate to create a volume group of 17.64TB.
I used lvcreate to create 5 logical volumes: 4x4TB and 1x1.5TB.
On top of those logical volumes is drbd (/dev/drbd0-/dev/drbd4).
On top of the drbd volumes, I created a volume group of 17.50TB (/dev/drbd0-/dev/drbd4).
I created a logical volume of 17.49TB, on which I created an XFS filesystem with no options other than a label (mkfs.xfs /dev/Volume1-Rep-Store/Volume1-Replicated -L Replicated).
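For reference, the layering described above amounts to roughly the following sequence (the lower-layer VG name and LV names are hypothetical, and the drbd resource configuration is not shown; only the final VG/LV names appear in this message):

```shell
# Lower layer: LVM directly on the 19TB RAID volume (VG name hypothetical).
pvcreate /dev/sdb
vgcreate Volume1-Store /dev/sdb
for i in 0 1 2 3; do lvcreate -L 4T -n lv$i Volume1-Store; done
lvcreate -L 1.5T -n lv4 Volume1-Store
# drbd0..drbd4 are configured on top of lv0..lv4 (drbd.conf not shown).
# Upper layer: a second VG on the replicated devices, one large LV, then XFS.
vgcreate Volume1-Rep-Store /dev/drbd0 /dev/drbd1 /dev/drbd2 /dev/drbd3 /dev/drbd4
lvcreate -l 100%FREE -n Volume1-Replicated Volume1-Rep-Store
mkfs.xfs -L Replicated /dev/Volume1-Rep-Store/Volume1-Replicated
```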
The resulting filesystem is larger than the underlying logical volume:
--- Logical volume ---
LV Name                /dev/Volume1-Rep-Store/Volume1-Replicated
VG Name                Volume1-Rep-Store
LV UUID                fB0q3f-80Kq-yFuy-NjKl-pmlW-jeiX-uEruWC
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                17.49 TB
Current LE             4584899
Segments               5
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:5
/dev/mapper/Volume1--Rep--Store-Volume1--Replicated
18T 411M 18T 1% /mnt/Volume1
Why is this, and how can I fix it?
Thanks,
Michael
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: mkfs.xfs created filesystem larger than underlying device
2009-06-24 20:48 mkfs.xfs created filesystem larger than underlying device Michael Moody
@ 2009-06-24 22:24 ` Eric Sandeen
2009-06-24 22:26 ` Michael Moody
2009-06-24 22:33 ` Michael Moody
0 siblings, 2 replies; 8+ messages in thread
From: Eric Sandeen @ 2009-06-24 22:24 UTC (permalink / raw)
To: Michael Moody; +Cc: xfs@oss.sgi.com
Michael Moody wrote:
> I recently created an XFS filesystem on an x86_64 CentOS 5.3 system.
...
> I created a logical volume of 17.49TB, on which I created an XFS
> filesystem with no options other than a label (mkfs.xfs
> /dev/Volume1-Rep-Store/Volume1-Replicated -L Replicated).
>
> The resulting filesystem is larger than the underlying logical volume:
...
> /dev/mapper/Volume1--Rep--Store-Volume1--Replicated
>
> 18T 411M 18T 1% /mnt/Volume1
>
> Why is this, and how can I fix it?
I'm guessing that this is df -h rounding up. Try df without -h to see how
many 1K blocks you have, and compare that to the device size.
If it still looks wrong, can you include xfs_info output for
/mnt/Volume1 as well as the contents of /proc/partitions on your system?
I'd wager a beer that nothing is wrong, but that if something is wrong,
it's not xfs ;)
Thanks,
-Eric
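As a quick illustration of the rounding Eric describes: plain df reports 1 KiB blocks, and at this scale df -h rounds the 17.49 TiB figure up to "18T" (the numbers below are the ones reported later in this thread):

```shell
# `df` (no -h) reported 18779615232 1 KiB blocks; convert to TiB by hand.
kib=18779615232
whole_tib=$((kib / 1073741824))               # whole TiB (1 TiB = 2^30 KiB)
hundredths=$((kib * 100 / 1073741824 % 100))  # two fractional digits
echo "${whole_tib}.${hundredths} TiB"         # prints 17.48 TiB; -h shows 18T
```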
* RE: mkfs.xfs created filesystem larger than underlying device
2009-06-24 22:24 ` Eric Sandeen
@ 2009-06-24 22:26 ` Michael Moody
2009-06-24 23:02 ` Eric Sandeen
2009-06-24 22:33 ` Michael Moody
1 sibling, 1 reply; 8+ messages in thread
From: Michael Moody @ 2009-06-24 22:26 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs@oss.sgi.com
It still looks wrong:
[root@filer5 /]# xfs_info /mnt/Volume1/
meta-data=/dev/Volume1-Rep-Store/Volume1-Replicated isize=256 agcount=32, agsize=146716768 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=4694936576, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@filer5 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 92460904 1631680 86056716 2% /
/dev/sda1 101086 14393 81474 16% /boot
tmpfs 4089196 0 4089196 0% /dev/shm
/dev/mapper/Volume1--Rep--Store-Volume1--Replicated
18779615232 1056 18779614176 1% /mnt/Volume1
[root@filer5 /]# cat /proc/partitions
major minor #blocks name
8 0 97653504 sda
8 1 104391 sda1
8 2 2096482 sda2
8 3 95450197 sda3
8 16 18945308928 sdb
253 0 4294967296 dm-0
253 1 4294967296 dm-1
253 2 4294967296 dm-2
253 3 4294967296 dm-3
253 4 1610612736 dm-4
147 0 4294836188 drbd0
147 1 4294836188 drbd1
147 2 4294836188 drbd2
147 3 4294836188 drbd3
147 4 1610563548 drbd4
253 5 18779746304 dm-5
Michael S. Moody
Sr. Systems Engineer
Global Systems Consulting
Direct: (650) 265-4154
Web: http://www.GlobalSystemsConsulting.com
Engineering Support: support@gsc.cc
Billing Support: billing@gsc.cc
Customer Support Portal: http://my.gsc.cc
NOTICE - This message contains privileged and confidential information intended only for the use of the addressee named above. If you are not the intended recipient of this message, you are hereby notified that you must not disseminate, copy or take any action in reliance on it. If you have received this message in error, please immediately notify Global Systems Consulting, its subsidiaries or associates. Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the view of Global Systems Consulting, its subsidiaries and associates.
* RE: mkfs.xfs created filesystem larger than underlying device
2009-06-24 22:24 ` Eric Sandeen
2009-06-24 22:26 ` Michael Moody
@ 2009-06-24 22:33 ` Michael Moody
2009-06-27 11:33 ` Peter Grandi
1 sibling, 1 reply; 8+ messages in thread
From: Michael Moody @ 2009-06-24 22:33 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs@oss.sgi.com
In addition:
I experienced significant corruption. I had only about 3 files on the XFS filesystem, which was then exported via NFS. I ran nfs_stress.sh against it; the files ended up corrupt and the machine locked up. Ideas?
* Re: mkfs.xfs created filesystem larger than underlying device
2009-06-24 22:26 ` Michael Moody
@ 2009-06-24 23:02 ` Eric Sandeen
2009-06-24 23:05 ` Michael Moody
0 siblings, 1 reply; 8+ messages in thread
From: Eric Sandeen @ 2009-06-24 23:02 UTC (permalink / raw)
To: Michael Moody; +Cc: xfs@oss.sgi.com
Michael Moody wrote:
> It still looks wrong:
>
> [root@filer5 /]# xfs_info /mnt/Volume1/
> meta-data=/dev/Volume1-Rep-Store/Volume1-Replicated isize=256 agcount=32, agsize=146716768 blks
> = sectsz=512 attr=0
> data = bsize=4096 blocks=4694936576, imaxpct=25
> = sunit=0 swidth=0 blks, unwritten=1
> naming =version 2 bsize=4096
> log =internal bsize=4096 blocks=32768, version=1
> = sectsz=512 sunit=0 blks, lazy-count=0
> realtime =none extsz=4096 blocks=0, rtextents=0
4694936576*4096 = 19230460215296
> [root@filer5 /]# df
> Filesystem 1K-blocks Used Available Use% Mounted on
...
> /dev/mapper/Volume1--Rep--Store-Volume1--Replicated
> 18779615232 1056 18779614176 1% /mnt/Volume1
18779615232*1024 = 19230325997568
> [root@filer5 /]# cat /proc/partitions
> major minor #blocks name
>
...
> 253 5 18779746304 dm-5
18779746304*1024 = 19230460215296
so in bytes,
xfs_info says: 19230460215296
/proc/partitions says: 19230460215296 (same as above)
df says: 19230325997568 (a little smaller, but ok)
So, I don't see a problem here.
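Eric's sums can be reproduced directly, and the small df shortfall is in fact exactly the filesystem's 32768-block internal log, which df excludes from the total:

```shell
# Recompute the three reported sizes in bytes (shell arithmetic is 64-bit).
xfs_bytes=$((4694936576 * 4096))    # xfs_info: data blocks * bsize
part_bytes=$((18779746304 * 1024))  # /proc/partitions entry for dm-5
df_bytes=$((18779615232 * 1024))    # plain df, 1 KiB blocks
log_bytes=$((32768 * 4096))         # internal log from xfs_info
echo $((xfs_bytes - df_bytes)) $log_bytes   # both are 134217728
```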
<later....>
> I experienced significant corruption. I had only about 3 files on the
> XFS filesystem, which was then exported via nfs. I ran nfs_stress.sh
> against it, and my files ended up corrupt, and the machine locked up.
> Ideas?
No, not really, not on a kernel this old, and without details about what
was corrupt, what xfs_repair said, what dmesg said, what sysrq-t said, etc.
-Eric
* RE: mkfs.xfs created filesystem larger than underlying device
2009-06-24 23:02 ` Eric Sandeen
@ 2009-06-24 23:05 ` Michael Moody
2009-06-24 23:06 ` Eric Sandeen
0 siblings, 1 reply; 8+ messages in thread
From: Michael Moody @ 2009-06-24 23:05 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs@oss.sgi.com
Are there still known issues with NFS and XFS? I'm running the same test against a JFS-formatted filesystem (also exported via NFS) and so far have seen no issues. This is the latest centosplus kernel. Are there mount options that could cause corruption on XFS?
* Re: mkfs.xfs created filesystem larger than underlying device
2009-06-24 23:05 ` Michael Moody
@ 2009-06-24 23:06 ` Eric Sandeen
0 siblings, 0 replies; 8+ messages in thread
From: Eric Sandeen @ 2009-06-24 23:06 UTC (permalink / raw)
To: Michael Moody; +Cc: xfs@oss.sgi.com
Michael Moody wrote:
> Are there still known issues with NFS and XFS? I'm performing the
> same test against a jfs formatted filesystem (exported via NFS), and
> so far, no issues. This is the latest centosplus kernel. Are there
> mount options which could cause XFS to have corruption?
Not that I know of.
Without details about what was corrupt, what xfs_repair said, what dmesg
said, what sysrq-t said, etc. it's hard to say.
Could be 4k stack problems if it's x86.
-Eric
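The diagnostics Eric asks for could be collected along these lines (the device path is the one from earlier in the thread; xfs_repair -n is the read-only check mode, and sysrq-t dumps task state to the kernel log):

```shell
# Gather the information requested: repair dry-run, kernel log, task dump.
umount /mnt/Volume1                                       # repair needs it unmounted
xfs_repair -n /dev/Volume1-Rep-Store/Volume1-Replicated   # check only, no changes
dmesg | tail -n 100                                       # recent kernel messages
echo t > /proc/sysrq-trigger                              # sysrq-t: dump tasks
dmesg | tail -n 200                                       # capture the task dump
```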
* RE: mkfs.xfs created filesystem larger than underlying device
2009-06-24 22:33 ` Michael Moody
@ 2009-06-27 11:33 ` Peter Grandi
0 siblings, 0 replies; 8+ messages in thread
From: Peter Grandi @ 2009-06-27 11:33 UTC (permalink / raw)
To: Linux XFS
>> I recently created an XFS filesystem on an x86_64 CentOS 5.3
>> system.[ .. ] It is a somewhat complex configuration of:
>> Areca RAID card with 16 1.5TB drives in a RAID 6 with 1
>> hotspare (100GB volume was created for the OS, the rest was
>> one large volume of ~19TB)
>> I used pvcreate /dev/sdb to create a physical volume for LVM
>> on the 19TB volume.
>> I then used vgcreate to create a volume group of 17.64TB
>> I used lvcreate to create 5 logical volumes, 4x4TB, and 1x1.5TB
>> On top of those logical volumes is drbd (/dev/drbd0-/dev/drbd4)
>> On top of the drbd volumes, I created a volume group of 17.50TB
>> (/dev/drbd0-/dev/drbd4)
>> I created a logical volume of 17.49TB, upon which was created
>> an xfs filesystem with no options (mkfs.xfs mkfs.xfs
>> /dev/Volume1-Rep-Store/Volume1-Replicated -L Replicated)
One of the values of the XFS mailing list is the entertainment
provided by some (many) of the posts. This is one of the best.
> In addition: I experienced significant corruption. I had only
> about 3 files on the XFS filesystem, which was then exported
> via nfs. I ran nfs_stress.sh against it, and my files ended up
> corrupt, and the machine locked up. Ideas?
Even better.
(the purpose of this message is not just to give thanks for the
entertainment, but also perhaps to induce second thoughts about
the wisdom of the setup above; as a rule, though, people who do this
kind of thing tend to think that they know better)