public inbox for linux-xfs@vger.kernel.org
From: "L.A. Walsh" <xfs@tlinx.org>
To: xfs-oss <xfs@oss.sgi.com>
Subject: why crc req on free-inobt & file-type-indir options?
Date: Thu, 06 Aug 2015 19:52:37 -0700	[thread overview]
Message-ID: <55C41D75.4040504@tlinx.org> (raw)


Could anyone point me at the discussion or literature as to why
the free-inode B-tree (finobt) and file-type-in-directory (ftype) options should *REQUIRE* the crc=1 option?

Ultimately isn't it about the users/customers and what they will want?

I'm not saying not to make it a default -- but why require it
just to try the other features?

Main reason I ask is that I have had disks get partly corrupted metadata
before, and it looks like the crc information just says "this disk
has errors, so assume ALL is LOST!"  One bit flip out of
32+Tb (4TB disks) taking out everything doesn't seem like great odds.
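For scale, a quick back-of-envelope calculation (hypothetical capacity figures, just to illustrate the odds complained about above) of what a single flipped bit represents on an array of that size:

```python
# Back-of-envelope: what fraction of a ~32 TB array is one flipped bit?
# (Hypothetical sizes; the point is the odds, not the exact capacity.)
TB = 10**12                      # decimal terabyte, as disk vendors count
array_bytes = 32 * TB            # e.g. an array built from 4 TB members
array_bits = array_bytes * 8

one_bit_fraction = 1 / array_bits
print(f"bits in array: {array_bits:.3e}")
print(f"one flipped bit is 1 part in {array_bits:.2e}")
```

So a single bad bit is roughly one part in 10^14 -- the complaint above is that detection of an error that small can render the whole filesystem suspect.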

Example:
sudo mkfs-xfs-raid SCR /dev/mapper/Data-Home2
mkfs.xfs -mcrc=1,finobt=1 -i maxpct=5,size=512 -l size=32752b,lazy-count=1 -d su=64k,sw=4 -s size=4096 -L SCR -f /dev/mapper/Data-Home2
meta-data=/dev/mapper/Data-Home2 isize=512    agcount=32, agsize=12582896 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1
data     =                       bsize=4096   blocks=402652672, imaxpct=5
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=32752, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_admin: WARNING - filesystem uses v1 dirs,limited functionality provided.
cache_node_purge: refcount was 1, not zero (node=0x891ea0)
xfs_admin: cannot read root inode (117)
cache_node_purge: refcount was 1, not zero (node=0x894410)
xfs_admin: cannot read realtime bitmap inode (117)
xfs_admin: WARNING - filesystem uses v1 dirs,limited functionality provided.
Clearing log and setting UUID
writing all SBs
bad sb version # 0xbda5 in AG 0
failed to set UUID in AG 0
new UUID = 55c29a43-19b6-ba02-2015-08051620352b
26.34sec 0.11usr 14.97sys (57.28% cpu)
Ishtar:law/bin> time sudo mkfs-xfs-raid SCR /dev/mapper/Data-Home2
mkfs.xfs -i maxpct=5,size=512 -l size=32752b,lazy-count=1 -d su=64k,sw=4 -s size=4096 -L SCR -f /dev/mapper/Data-Home2
meta-data=/dev/mapper/Data-Home2 isize=512    agcount=32, agsize=12582896 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=402652672, imaxpct=5
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=32752, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Clearing log and setting UUID
writing all SBs
new UUID = 55c29acd-2ce9-d15a-2015-08051622534b
In case you were curious about the ^^date^^time^^ embedded in the
UUID -- it gives me an idea of how long a disk (or partition) has been in service....
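That date-in-the-UUID trick can be sketched roughly like this -- my own reconstruction guessed from the UUIDs shown above (e.g. ...-2015-08051622534b), not the actual mkfs-xfs-raid script:

```python
import time
import uuid

def dated_uuid(ts=None):
    """Build a UUID-shaped string whose last two groups encode
    YYYY-MMDDhhmmss, so the label itself records when the fs was made.
    (A guess at the scheme; the real script may differ.)"""
    t = time.localtime(ts)
    head = uuid.uuid4().hex[:16]                # random first half (8-4-4)
    stamp = time.strftime("%Y-%m%d%H%M%S", t)   # e.g. "2015-0805162253"
    tail = stamp[5:] + uuid.uuid4().hex[:2]     # 10 digits + 2 hex pad = 12 chars
    return f"{head[:8]}-{head[8:12]}-{head[12:16]}-{stamp[:4]}-{tail}"

print(dated_uuid())
```

The resulting string still has the standard 8-4-4-4-12 shape, so tools that only check the format accept it while the tail stays human-readable.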


I don't see any benefit in something that fails the disk that quickly.
While I've heard a patch for the UUID handling is in the pipeline -- that's
one thing.  But if a bit error can reduce my disk to v1 dirs
as a side effect, it sounds like the potential for damage is far greater
with that option turned on.  Sure, it may make you **aware** of a
potential problem (or a new SW error) more quickly, but I didn't see
how it helped repair the disk when it was at fault.

Also, is it my imagination, or is mkfs.xfs taking longer -- occasionally
a lot longer, on the order of >60 seconds at the long end vs ~30 at the
lower end?  It sort of felt like a drop-caches was being done before
the really long one, but the memory usage didn't change, at
least part of the time.   Has anyone done any benchmarks both
ways on metadata-intensive workloads (I guess lots of mkdirs,
touch, rm, adding large ACLs) ...?
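The kind of workload asked about above could be approximated with a crude churn loop like the following -- my own sketch, not an established tool (fs_mark or similar would be the serious answer) -- run on a crc=0 and a crc=1 filesystem and compared:

```python
import os
import tempfile
import time

def metadata_churn(root, n=2000):
    """Create, stat, and remove n tiny dirs+files under root;
    return elapsed seconds. Pure metadata traffic, no data I/O."""
    start = time.perf_counter()
    for i in range(n):
        d = os.path.join(root, f"d{i}")
        os.mkdir(d)                   # the "mkdir"
        f = os.path.join(d, "f")
        open(f, "w").close()          # the "touch"
        os.stat(f)
        os.remove(f)                  # the "rm"
        os.rmdir(d)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    print(f"churned in {metadata_churn(tmp, 500):.2f}s")
```

Point it at a directory on each filesystem under test (rather than a tempdir) and remember to drop caches between runs, or the second pass mostly measures the page cache.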


Thanks much!
L. Walsh


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 13+ messages
2015-08-07  2:52 L.A. Walsh [this message]
2015-08-07  5:00 ` why crc req on free-inobt & file-type-indir options? Eric Sandeen
2015-08-07  8:14   ` L.A. Walsh
2015-08-07 17:01     ` Eric Sandeen
2015-08-07 22:55     ` Dave Chinner
2015-08-08  0:50       ` L.A. Walsh
2015-08-08  1:45         ` Eric Sandeen
2015-08-08  2:59           ` L.A. Walsh
2015-08-09  0:11             ` Dave Chinner
2015-08-13  0:24       ` L.A. Walsh
2015-08-07  8:17   ` L.A. Walsh
2015-08-07  5:36 ` Eric Sandeen
  -- strict thread matches above, loose matches on Subject: below --
2015-08-07  2:53 L.A. Walsh
