public inbox for linux-xfs@vger.kernel.org
From: Eric Sandeen <sandeen@sandeen.net>
To: Michael Moody <michael@gsc.cc>
Cc: "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: mkfs.xfs created filesystem larger than underlying device
Date: Wed, 24 Jun 2009 17:24:55 -0500	[thread overview]
Message-ID: <4A42A7B7.3040403@sandeen.net> (raw)
In-Reply-To: <98D6DBD179F61A46AF5C064829A832A0185042D261@erebus.totalmanaged.com>

Michael Moody wrote:
> Hello all.
> 
> I recently created an XFS filesystem on an x86_64 CentOS 5.3 system. I
> used all tools in the repository:
> 
> xfsprogs-2.9.4-1
> 
> Kernel 2.6.18-128.1.10.el5.centos.plus
> 
> It is a somewhat complex configuration of:
> 
> Areca RAID card with 16 1.5TB drives in a RAID 6 with 1 hotspare (100GB
> volume was created for the OS, the rest was one large volume of ~19TB)
> 
> I used pvcreate /dev/sdb to create a physical volume for LVM on the 19TB
> volume.
> 
> I then used vgcreate to create a volume group of 17.64TB
> 
> I used lvcreate to create 5 logical volumes, 4x4TB, and 1x1.5TB
> 
> On top of those logical volumes is drbd (/dev/drbd0-/dev/drbd4)
> 
> On top of the drbd volumes, I created a volume group of 17.50TB
> (/dev/drbd0-/dev/drbd4)
> 
> I created a logical volume of 17.49TB, upon which was created an xfs
> filesystem with no options other than a label (mkfs.xfs
> /dev/Volume1-Rep-Store/Volume1-Replicated -L Replicated)
> 
> The resulting filesystem is larger than the underlying logical volume:
> 
> --- Logical volume ---
> 
>   LV Name                /dev/Volume1-Rep-Store/Volume1-Replicated
>   VG Name                Volume1-Rep-Store
>   LV UUID                fB0q3f-80Kq-yFuy-NjKl-pmlW-jeiX-uEruWC
>   LV Write Access        read/write
>   LV Status              available
>   # open                 1
>   LV Size                17.49 TB
>   Current LE             4584899
>   Segments               5
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     256
>   Block device           253:5
> 
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/Volume1--Rep--Store-Volume1--Replicated
>                        18T  411M   18T   1% /mnt/Volume1
> 
> Why is this, and how can I fix it?

I'm guessing that this is just df rounding up.  Try df without -h to see
how many 1K blocks you have, and compare that to the LV size.
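[Not part of the original mail: a minimal sketch of the ceiling-style
rounding that coreutils df -h applies, assuming 4 MiB extents as implied
by the lvdisplay output above (4584899 LEs for 17.49 TB). The function
name df_human is made up for illustration.]

```python
import math

def df_human(nbytes):
    """Roughly mimic coreutils 'df -h': scale by powers of 1024 and
    round *up* (ceiling), printing an integer once the value is >= 10
    and one decimal place below that."""
    units = "BKMGTP"
    v = float(nbytes)
    i = 0
    while v >= 1024 and i < len(units) - 1:
        v /= 1024.0
        i += 1
    if v >= 10:
        return f"{math.ceil(v)}{units[i]}"
    return f"{math.ceil(v * 10) / 10:.1f}{units[i]}"

# 4584899 logical extents of 4 MiB each, per the lvdisplay output above
lv_bytes = 4584899 * 4 * 1024**2
print(lv_bytes / 2**40)    # ~17.49 TiB
print(df_human(lv_bytes))  # "18T" -- 17.49 TiB rounds *up* to 18T
```

So a 17.49 TiB volume showing as "18T" is expected; nothing is actually
larger than the underlying device.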

If it still looks wrong, can you include xfs_info output for
/mnt/Volume1 as well as the contents of /proc/partitions on your system?

I'd wager a beer that nothing is wrong, but that if something is wrong,
it's not xfs ;)

Thanks,
-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 8+ messages
2009-06-24 20:48 mkfs.xfs created filesystem larger than underlying device Michael Moody
2009-06-24 22:24 ` Eric Sandeen [this message]
2009-06-24 22:26   ` Michael Moody
2009-06-24 23:02     ` Eric Sandeen
2009-06-24 23:05       ` Michael Moody
2009-06-24 23:06         ` Eric Sandeen
2009-06-24 22:33   ` Michael Moody
2009-06-27 11:33     ` Peter Grandi
