* [RFC] Defragmentation strategies
@ 2014-04-28 10:59 Dmitry Monakhov
2014-04-29 11:23 ` Lukáš Czerner
0 siblings, 1 reply; 2+ messages in thread
From: Dmitry Monakhov @ 2014-04-28 10:59 UTC (permalink / raw)
To: ext4 development
Hi.
In ext4 we have the EXT4_IOC_MOVE_EXT ioctl, which allows us to
migrate data blocks. At the moment the only defragmentation strategy
we have in e4defrag(8) is defragmentation of big files, but one can
imagine different defragmentation strategies for different file sizes
and different purposes. I would like to start a discussion about a
list of strategies which could be useful for us (a rough usage sketch
of the ioctl follows the list below):
* Big file defragmentation
A well-known strategy: make big files contiguous.
** Example: In practice, fragmentation of big files appears only in cases such as:
1) creating big files on a filesystem which has low free space
2) weird IO patterns (multi-threaded small-chunk random IO + fsync) or
punch_hole/collapse_range etc.
* Compact small old files into contiguous chunks.
** Example:
A news, mail, web or cache server contains a lot of small files in
each directory. Files are periodically created and unlinked after some
period of time. Files have different (unpredictable) lifetimes, which
results in a fragmented filesystem: the block allocator tries to pack
new files close to each other, but later unlinks result in
fragmentation. On a thin-provisioning target this also results in a
significant waste of space.
** Proposed strategy:
Scan a directory and collect its small old files into contiguous chunks.
The core idea is similar to how block allocations smaller than
s_mb_stream_request are handled, but at this point we have more
information about the file's history: if mtime is close to ctime, then
a future append is unlikely, so compaction is effective.
* Compact files according to their IO access pattern.
Various tracers can collect statistics about IO access patterns, so
we can place such blocks close to each other and reduce the number of seeks.
** Example:
1) The boot IO pattern is almost identical across boots
2) Firefox start-up speedup: http://glandium.org/blog/?p=1296
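
For reference, here is a rough sketch of how a userspace tool can
drive the ioctl, loosely following the per-file flow of e4defrag(8):
preallocate a donor file of the same length, then ask ext4 to swap the
donor's (hopefully contiguous) blocks into the original file. The
struct and ioctl number are defined locally because not all kernel
headers export them; the donor file name, the single-shot move (no
chunking) and the missing "is the donor layout actually better?"
check are simplifications.

/*
 * Sketch of one-shot file defragmentation via EXT4_IOC_MOVE_EXT:
 * preallocate a donor file and swap its blocks into the original.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/types.h>

/* Mirrors the kernel's struct move_extent; defined locally because
 * the definition is not exported by all kernel headers. */
struct move_extent {
	__u32 reserved;		/* must be zero */
	__u32 donor_fd;		/* donor file descriptor */
	__u64 orig_start;	/* logical start in the original file, in blocks */
	__u64 donor_start;	/* logical start in the donor file, in blocks */
	__u64 len;		/* number of blocks to move */
	__u64 moved_len;	/* out: number of blocks actually moved */
};
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)

int defrag_one_file(const char *path)
{
	char donor_path[PATH_MAX];
	struct move_extent me;
	struct stat st;
	int fd, donor_fd, ret = -1;

	fd = open(path, O_RDWR);
	if (fd < 0 || fstat(fd, &st) < 0)
		goto out_orig;

	/* Create the donor next to the original so the allocator aims
	 * at the same block group. */
	snprintf(donor_path, sizeof(donor_path), "%s.defrag_donor", path);
	donor_fd = open(donor_path, O_RDWR | O_CREAT | O_EXCL, 0600);
	if (donor_fd < 0)
		goto out_orig;

	/* Ask for one contiguous allocation covering the whole file. */
	if (fallocate(donor_fd, 0, 0, st.st_size) < 0)
		goto out_donor;

	memset(&me, 0, sizeof(me));
	me.donor_fd = donor_fd;
	/* st_blksize matches the filesystem block size on ext4. */
	me.len = (st.st_size + st.st_blksize - 1) / st.st_blksize;

	ret = ioctl(fd, EXT4_IOC_MOVE_EXT, &me);
	printf("%s: moved %llu of %llu blocks (%s)\n", path,
	       (unsigned long long)me.moved_len,
	       (unsigned long long)me.len, ret ? "failed" : "ok");

out_donor:
	close(donor_fd);
	unlink(donor_path);
out_orig:
	if (fd >= 0)
		close(fd);
	return ret;
}

A real tool would move the data in chunks and compare the layouts of
the original and the donor before committing to the move.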
* Re: [RFC] Defragmentation strategies
2014-04-28 10:59 [RFC] Defragmentation strategies Dmitry Monakhov
@ 2014-04-29 11:23 ` Lukáš Czerner
0 siblings, 0 replies; 2+ messages in thread
From: Lukáš Czerner @ 2014-04-29 11:23 UTC (permalink / raw)
To: Dmitry Monakhov; +Cc: ext4 development
On Mon, 28 Apr 2014, Dmitry Monakhov wrote:
> Date: Mon, 28 Apr 2014 14:59:14 +0400
> From: Dmitry Monakhov <dmonakhov@openvz.org>
> To: ext4 development <linux-ext4@vger.kernel.org>
> Subject: [RFC] Defragmentation strategies
>
>
> Hi.
> In ext4 we have the EXT4_IOC_MOVE_EXT ioctl, which allows us to
> migrate data blocks. At the moment the only defragmentation strategy
> we have in e4defrag(8) is defragmentation of big files, but one can
> imagine different defragmentation strategies for different file sizes
> and different purposes. I would like to start a discussion about a
> list of strategies which could be useful for us:
>
> * Big file defragmentation
> A well-known strategy: make big files contiguous.
> ** Example: In practice, fragmentation of big files appears only in cases such as:
> 1) creating big files on a filesystem which has low free space
> 2) weird IO patterns (multi-threaded small-chunk random IO + fsync) or
> punch_hole/collapse_range etc.
Hi,
Some files might not even be worth trying to optimize (like those
accessed with random IO); it would be better to use the available
contiguous space for files which will benefit more.
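
One cheap way to implement such a filter would be to count a file's
extents before bothering to move it: FS_IOC_FIEMAP with
fm_extent_count set to zero only reports the number of mapped
extents. A rough sketch (the "worth it" threshold is just an
illustration):

/*
 * Decide whether a file looks worth defragmenting by counting its
 * extents with FIEMAP. With fm_extent_count == 0 the kernel only
 * fills in fm_mapped_extents and returns no extent data.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

static int count_extents(int fd)
{
	struct fiemap fm;

	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = FIEMAP_MAX_OFFSET;	/* whole file */
	fm.fm_flags = FIEMAP_FLAG_SYNC;		/* flush delalloc first */
	fm.fm_extent_count = 0;			/* count only, no extent array */

	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0)
		return -1;
	return fm.fm_mapped_extents;
}

int worth_defragmenting(const char *path)
{
	int fd, n;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;
	n = count_extents(fd);
	close(fd);

	/* Arbitrary illustrative threshold: one extent is already
	 * ideal, and a couple of extents rarely justify the copy cost. */
	return n > 2;
}

(filefrag(8) reports the same extent-count information.)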
>
> * Compact small old files into contiguous chunks.
> ** Example:
> A news, mail, web or cache server contains a lot of small files in
> each directory. Files are periodically created and unlinked after
> some period of time. Files have different (unpredictable) lifetimes,
> which results in a fragmented filesystem: the block allocator tries
> to pack new files close to each other, but later unlinks result in
> fragmentation. On a thin-provisioning target this also results in a
> significant waste of space.
> ** Proposed strategy:
> Scan a directory and collect its small old files into contiguous
> chunks. The core idea is similar to how block allocations smaller
> than s_mb_stream_request are handled, but at this point we have more
> information about the file's history: if mtime is close to ctime,
> then a future append is unlikely, so compaction is effective.
Makes sense to me. Utilizing the information about when the file was
last modified might be useful for packing "read-only" files together,
which would make free-space fragmentation a little bit better.
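
A candidate filter along those lines could look roughly like the
sketch below: treat a file as packable when it is small (on the order
of the s_mb_stream_request cutoff), has not been written to for a
while, and its mtime is close to its ctime so a later append looks
unlikely. All of the thresholds are made-up examples.

/*
 * Illustrative candidate filter for the "pack small old files"
 * strategy: a file is considered settled when it is small, has not
 * been modified for a while, and mtime is close to ctime (so a
 * future append looks unlikely).
 */
#include <stdbool.h>
#include <sys/stat.h>
#include <time.h>

#define SMALL_FILE_BYTES	(64 * 1024)	/* "small" cutoff, example only */
#define SETTLED_AGE_SECS	(7 * 24 * 3600)	/* untouched for a week */
#define MTIME_CTIME_SLACK	60		/* mtime ~ ctime within a minute */

static bool is_pack_candidate(const struct stat *st)
{
	time_t now = time(NULL);
	time_t gap = st->st_ctime - st->st_mtime;

	if (!S_ISREG(st->st_mode) || st->st_size == 0)
		return false;
	if (st->st_size > SMALL_FILE_BYTES)
		return false;
	/* Old: nobody has written to it recently. */
	if (now - st->st_mtime < SETTLED_AGE_SECS)
		return false;
	/* mtime close to ctime: the file was written once and left
	 * alone, so compacting it is unlikely to be undone later. */
	if (gap < -MTIME_CTIME_SLACK || gap > MTIME_CTIME_SLACK)
		return false;
	return true;
}

Files passing such a filter could then be relocated with the same
donor-file + EXT4_IOC_MOVE_EXT flow sketched in the first message.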
>
> * Compact files according to their IO access pattern.
> Various tracers can collect statistics about IO access patterns, so
> we can place such blocks close to each other and reduce the number of seeks.
> ** Example:
> 1) The boot IO pattern is almost identical across boots
> 2) Firefox start-up speedup: http://glandium.org/blog/?p=1296
This also sounds good. Having a general solution, or a way to
configure or script how different files are "defragmented" with
different strategies, might be very useful.
This configuration file, script, recipe, or whatever you want to call
it would have to be user generated, so the way to create it should be
relatively easy to use. Of course we could provide generic ones as
well.
Thanks!
-Lukas