public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* Estimate xfs_repair run time
@ 2017-08-19 10:05 Gionatan Danti
  2017-08-19 12:14 ` Gionatan Danti
  2017-08-20  2:10 ` Eric Sandeen
  0 siblings, 2 replies; 10+ messages in thread
From: Gionatan Danti @ 2017-08-19 10:05 UTC (permalink / raw)
  To: linux-xfs

Hi list,
using XFS on relatively big filesystems (> 8 TB), I was wondering whether it 
is possible to estimate how long an emergency "xfs_repair" would take.

Some specific questions:
- will total time depend on how full the filesystem is (I think 
so...)?
- will total time depend on how data are laid out on the physical disks 
(i.e. fragmented vs sequential)?
- will total time scale down with increasing spindle count (i.e. single 
disk vs 4-way RAID10)?

A related question: I generally use LVM to segregate/isolate my 
virtual machine images. This way, even a completely blown-up 
filesystem on one LV cannot affect the other LVs.

How do you feel about using a single big LV + XFS + preallocated RAW 
disk images? Can a problem on the main XFS filesystem be contained to 
only some VM files, or does it really risk destroying the entire 
filesystem?
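For concreteness, "preallocated RAW disk image" here means something along
these lines (paths and sizes are only illustrative; a small size is used so
the snippet is harmless to run):

```shell
#!/bin/sh
# Sketch: preallocate a raw VM image so all of its blocks are reserved
# up front.  fallocate reserves the space without writing zeroes, so it
# returns almost instantly even for large images.
IMG="$(mktemp -d)/guest1.img"   # in practice e.g. /srv/vmimages/guest1.img
fallocate -l 1M "$IMG"          # in practice e.g. -l 40G
stat -c %s "$IMG"               # prints 1048576
```

(qemu-img users can get the same effect with
`qemu-img create -f raw -o preallocation=falloc`.)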

Thank you all.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 10:05 Estimate xfs_repair run time Gionatan Danti
@ 2017-08-19 12:14 ` Gionatan Danti
  2017-08-19 15:48   ` Darrick J. Wong
  2017-08-20  2:10 ` Eric Sandeen
  1 sibling, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2017-08-19 12:14 UTC (permalink / raw)
  To: linux-xfs; +Cc: g.danti

Sorry, reposting to CC me...

On 19-08-2017 12:05, Gionatan Danti wrote:
> Hi list,
> using XFS on relatively big filesystems (> 8 TB), I was wondering
> whether it is possible to estimate how long an emergency "xfs_repair"
> would take.
> 
> Some specific questions:
> - will total time depend on how full the filesystem is (I think 
> so...)?
> - will total time depend on how data are laid out on the physical disks
> (i.e. fragmented vs sequential)?
> - will total time scale down with increasing spindle count (i.e. single
> disk vs 4-way RAID10)?
> 
> A related question: I generally use LVM to segregate/isolate my
> virtual machine images. This way, even a completely blown-up
> filesystem on one LV cannot affect the other LVs.
> 
> How do you feel about using a single big LV + XFS + preallocated RAW
> disk images? Can a problem on the main XFS filesystem be contained to
> only some VM files, or does it really risk destroying the entire
> filesystem?
> 
> Thank you all.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 12:14 ` Gionatan Danti
@ 2017-08-19 15:48   ` Darrick J. Wong
  2017-08-19 16:15     ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Darrick J. Wong @ 2017-08-19 15:48 UTC (permalink / raw)
  To: Gionatan Danti; +Cc: linux-xfs

On Sat, Aug 19, 2017 at 02:14:23PM +0200, Gionatan Danti wrote:
> Sorry, reposting to CC me...
> 
> On 19-08-2017 12:05, Gionatan Danti wrote:
> >Hi list,
> >using XFS on relatively big filesystems (> 8 TB), I was wondering
> >whether it is possible to estimate how long an emergency "xfs_repair"
> >would take.
> >
> >Some specific questions:
> >- will total time depend on how full the filesystem is (I think
> >so...)?
> >- will total time depend on how data are laid out on the physical disks
> >(i.e. fragmented vs sequential)?
> >- will total time scale down with increasing spindle count (i.e. single
> >disk vs 4-way RAID10)?
> >
> >A related question: I generally use LVM to segregate/isolate my
> >virtual machine images. This way, even a completely blown-up
> >filesystem on one LV cannot affect the other LVs.
> >
> >How do you feel about using a single big LV + XFS + preallocated RAW
> >disk images? Can a problem on the main XFS filesystem be contained to
> >only some VM files, or does it really risk destroying the entire
> >filesystem?

Should work fine; be sure to set the extent size hint to reduce fragmentation.

--D

> >
> >Thank you all.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti@assyoma.it - info@assyoma.it
> GPG public key ID: FF5F32A8
> --
> To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 15:48   ` Darrick J. Wong
@ 2017-08-19 16:15     ` Gionatan Danti
  2017-08-19 16:20       ` Darrick J. Wong
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2017-08-19 16:15 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, g.danti

On 19-08-2017 17:48, Darrick J. Wong wrote:
> Should work fine; be sure to set the extent size hint to reduce 
> fragmentation.

Hi, do you mean tuning the "allocsize" mount option? If I remember 
correctly, this parameter should be auto-tuned by XFS during normal 
operation, right?

Any thoughts on the xfs_repair matter?
Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 16:15     ` Gionatan Danti
@ 2017-08-19 16:20       ` Darrick J. Wong
  2017-08-19 16:39         ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Darrick J. Wong @ 2017-08-19 16:20 UTC (permalink / raw)
  To: Gionatan Danti; +Cc: linux-xfs

On Sat, Aug 19, 2017 at 06:15:28PM +0200, Gionatan Danti wrote:
> On 19-08-2017 17:48, Darrick J. Wong wrote:
> >Should work fine; be sure to set the extent size hint to reduce
> >fragmentation.
> 
> Hi, do you mean tuning the "allocsize" mount option? If I remember
> correctly, this parameter should be auto-tuned by XFS during normal
> operation, right?

I meant the per-inode extent size hint.  See 'extsize' in the xfs_io manpage.
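For example (mount point and the 16m value are only illustrative), the hint
can be set on the directory that holds the images, so that files created
there afterwards inherit it:

```shell
#!/bin/sh
# Sketch, assuming an XFS filesystem mounted at the hypothetical path
# /srv/vmimages.  An extent size hint on a directory is inherited by
# files subsequently created inside it, so large raw images allocate in
# big contiguous chunks and fragment less.
xfs_io -c "extsize 16m" /srv/vmimages

# Query the current hint (printed in bytes in square brackets):
xfs_io -c "extsize" /srv/vmimages
```

Setting it once on the VM image directory is usually enough; existing files
keep whatever layout they already have.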

> Any thoughts on the xfs_repair matter?

{yes, yes, depends on which spindle(s) metadata end up on and how many
files cross AGs} in that order. :)

--D

> Thanks.
> 
> -- 
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti@assyoma.it - info@assyoma.it
> GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 16:20       ` Darrick J. Wong
@ 2017-08-19 16:39         ` Gionatan Danti
  2017-08-20  2:04           ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2017-08-19 16:39 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, g.danti

On 19-08-2017 18:20, Darrick J. Wong wrote:
> I meant the per-inode extent size hint.  See 'extsize' in the xfs_io 
> manpage.

I did not know about the per-file extent size hint. Thank you very much 
for pointing that out!

> {yes, yes, depends on which spindle(s) metadata end up on and how many
> files cross AGs} in that order. :)

Mmm ok, as expected :)
Do you have any direct experience of xfs_repair run times on big and 
full XFS filesystems?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 16:39         ` Gionatan Danti
@ 2017-08-20  2:04           ` Dave Chinner
  2017-08-20  5:05             ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2017-08-20  2:04 UTC (permalink / raw)
  To: Gionatan Danti; +Cc: Darrick J. Wong, linux-xfs

On Sat, Aug 19, 2017 at 06:39:58PM +0200, Gionatan Danti wrote:
> >{yes, yes, depends on which spindle(s) metadata end up on and how many
> >files cross AGs} in that order. :)
> 
> Mmm ok, as expected :)
> Do you have any direct experience of xfs_repair run times on big and
> full XFS filesystems?

Yes. And there's no hard and fast number we can give you, because
runtime will depend on the amount and density of the metadata, the
speed of the underlying storage, how much RAM you have and how badly
corrupted the filesystem is.

IOWs, runtime can vary from a few minutes to a few days as you vary
those parameters, and there's really no way to tell in advance where
in that massive variance any specific production filesystem will
land.

Doing a test run (xfs_repair -n) to get a ballpark figure during a
planned downtime period is the only way you'll get any idea of the
*best case* repair duration on a *clean* filesystem. But the moment
repair finds corruptions, all guesses you can make about runtime go
out the window...
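In command form, such a test run could look like this (device and mount
point are hypothetical; adjust to your setup):

```shell
#!/bin/sh
# Sketch: time a no-modify repair pass during planned downtime.
# xfs_repair must run against an unmounted device.
umount /dev/vg0/lv_data

# -n = no-modify: scan and report problems without changing anything.
# `time` reports the best-case duration on a clean filesystem.
time xfs_repair -n /dev/vg0/lv_data

mount /dev/vg0/lv_data /srv/data
```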

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-19 10:05 Estimate xfs_repair run time Gionatan Danti
  2017-08-19 12:14 ` Gionatan Danti
@ 2017-08-20  2:10 ` Eric Sandeen
  2017-08-20  5:07   ` Gionatan Danti
  1 sibling, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2017-08-20  2:10 UTC (permalink / raw)
  To: Gionatan Danti, linux-xfs

As for this question:

On 8/19/17 5:05 AM, Gionatan Danti wrote:
> Hi list,

...

> How do you feel about using a single big LV + XFS + preallocated RAW disk images? Can a problem on the main XFS filesystem be contained to only some VM files, or does it really risk destroying the entire filesystem?

It's pretty rare to have large-scale, filesystem-wide corruption.  The only times I've ever seen filesystems really get wiped out by corruption was when the underlying storage went haywire; a scrambled raid, etc.  The sort of thing no filesystem could cope with ...

-Eric

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-20  2:04           ` Dave Chinner
@ 2017-08-20  5:05             ` Gionatan Danti
  0 siblings, 0 replies; 10+ messages in thread
From: Gionatan Danti @ 2017-08-20  5:05 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Darrick J. Wong, linux-xfs, g.danti

Hi Dave,

On 20-08-2017 04:04, Dave Chinner wrote:
> 
> Yes. And there's no hard and fast number we can give you, because
> runtime will depend on the amount and density of the metadata, the
> speed of the underlying storage, how much RAM you have and how badly
> corrupted the filesystem is.
> 
> IOWs, runtime can vary from a few minutes to a few days as you vary
> those parameters, and there's really no way to tell in advance where
> in that massive variance any specific production filesystem will
> land.
> 
> Doing a test run (xfs_repair -n) to get a ballpark figure during a
> planned downtime period is the only way you'll get any idea of the
> *best case* repair duration on a *clean* filesystem. But the moment
> repair finds corruptions, all guesses you can make about runtime go
> out the window...

OK, so I at least need to try it and see how well xfs_repair does on a 
clean filesystem.

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Estimate xfs_repair run time
  2017-08-20  2:10 ` Eric Sandeen
@ 2017-08-20  5:07   ` Gionatan Danti
  0 siblings, 0 replies; 10+ messages in thread
From: Gionatan Danti @ 2017-08-20  5:07 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, g.danti

Hi Eric,

On 20-08-2017 04:10, Eric Sandeen wrote:
> It's pretty rare to have large-scale, filesystem-wide corruption.  The
> only times I've ever seen filesystems really get wiped out by
> corruption was when the underlying storage went haywire; a scrambled
> raid, etc.  The sort of thing no filesystem could cope with ...

This is reassuring! ;)

Thank you for sharing your experience.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-08-20  5:07 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-19 10:05 Estimate xfs_repair run time Gionatan Danti
2017-08-19 12:14 ` Gionatan Danti
2017-08-19 15:48   ` Darrick J. Wong
2017-08-19 16:15     ` Gionatan Danti
2017-08-19 16:20       ` Darrick J. Wong
2017-08-19 16:39         ` Gionatan Danti
2017-08-20  2:04           ` Dave Chinner
2017-08-20  5:05             ` Gionatan Danti
2017-08-20  2:10 ` Eric Sandeen
2017-08-20  5:07   ` Gionatan Danti

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox