* xfs preallocation timeout
From: Michael Monnerie @ 2009-02-11 14:22 UTC (permalink / raw)
To: xfs
Does /proc/sys/vm/dirty_expire_centisecs influence the maximum time XFS
waits before the "in-memory but not yet on disk" preallocation is
actually written to disk? If not, which parameter controls how long
files are collected in memory before they are really written out?
The longer, the better for performance, but with a greater risk of data
loss, of course.
I ask because I want to know how to prevent fragmentation when copying
files over the network to an XFS drive. Example: if a packet of 1024
bytes arrives every 1/100th of a second, it takes 0.64 s to accumulate
64 KB of data and 1.28 s for 128 KB. If the prealloc timeout is one
second, it is effectively unused in this case. So on a server where you
know files arrive at a certain speed, fine-tuning this prealloc timeout
could help prevent fragmentation.
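To make the arithmetic above concrete, here is a minimal sketch (the
value 3000 is the usual kernel default for dirty_expire_centisecs, an
assumption on my part, not something stated in this thread):

```python
PACKET_BYTES = 1024      # one packet arrives every 1/100 s
PACKET_INTERVAL = 0.01   # seconds between packets

def time_to_accumulate(total_bytes):
    """Seconds until total_bytes of data have arrived."""
    return (total_bytes / PACKET_BYTES) * PACKET_INTERVAL

print(time_to_accumulate(64 * 1024))    # 0.64 s for 64 KB
print(time_to_accumulate(128 * 1024))   # 1.28 s for 128 KB

# dirty_expire_centisecs is in hundredths of a second; with the usual
# default of 3000, dirty data only becomes eligible for writeback after
# ~30 s, long enough to accumulate the whole 128 KB before it is flushed.
DIRTY_EXPIRE_CENTISECS = 3000
print(DIRTY_EXPIRE_CENTISECS / 100 >= time_to_accumulate(128 * 1024))  # True
```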
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs preallocation timeout
From: Dave Chinner @ 2009-02-12 22:08 UTC (permalink / raw)
To: Michael Monnerie; +Cc: xfs
On Wed, Feb 11, 2009 at 03:22:11PM +0100, Michael Monnerie wrote:
> Does /proc/sys/vm/dirty_expire_centisecs influence the maximum time XFS
> takes before the "in-memory but not on disk" preallocation actually
> starts writing to disk?
Yes.
BTW, you are referring to "delayed allocation", not
"pre-allocation". Pre-allocation is the act of immediately
allocating space on disk without writing data; delayed allocation is
avoiding allocation until the data is to be written to disk...
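The distinction can be sketched in a few lines (file names are
throwaway temp files; this assumes Linux and Python >= 3.3 for
os.posix_fallocate, details not taken from this thread):

```python
import os
import tempfile

# Pre-allocation: ask the filesystem to reserve space immediately,
# before any data is written.  The 64 KB size is just an illustration.
fd, path = tempfile.mkstemp()
os.posix_fallocate(fd, 0, 64 * 1024)
print(os.fstat(fd).st_size)   # 65536 -- space reserved up front
os.close(fd)
os.unlink(path)

# Delayed allocation: a plain buffered write only dirties the page
# cache; XFS picks the actual disk blocks later, at writeback time.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 64 * 1024)
os.close(fd)
os.unlink(path)
```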
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs preallocation timeout
From: Michael Monnerie @ 2009-02-13 9:35 UTC (permalink / raw)
To: xfs
On Thursday, 12 February 2009, Dave Chinner wrote:
> Yes.
>
> BTW, you are referring to "delayed allocation", not
> "pre-allocation". Pre-allocation is the act of immediately
> allocating space on disk without writing data; delayed allocation is
> avoiding allocation until the data is to be written to disk...
Thank you, Dave. I didn't remember the correct wording. So for XFS,
/proc/sys/vm/dirty_expire_centisecs says how much data, but not
metadata, I could lose on a kernel crash/power outage etc. No need to
worry, right?
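For what it's worth, an application that cannot afford that window can
bound it itself with fsync(2), independent of dirty_expire_centisecs; a
minimal sketch (the file name is hypothetical):

```python
import os

# Write data that must survive a crash, then force it to stable
# storage instead of waiting for the writeback timers to expire.
with open("important.dat", "wb") as f:
    f.write(b"payload that must survive a crash")
    f.flush()                # push Python's userspace buffer to the kernel
    os.fsync(f.fileno())     # push the kernel's dirty pages to disk

os.remove("important.dat")
```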
mfg zmi