linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v2 0/8] Filesystem io types statistic
@ 2011-11-10 10:34 Zheng Liu
  2011-11-10 10:34 ` [PATCH v2 1/8] vfs: Add a new flag and related functions in buffer to count io types Zheng Liu
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Zheng Liu @ 2011-11-10 10:34 UTC (permalink / raw)
  To: linux-ext4, linux-fsdevel

Hi all,

v1->v2: totally redesign this mechanism

This patch set implements an I/O type statistics mechanism for filesystems.
It has been hooked into ext4 to let us know how ext4 is used by applications,
which is useful for analyzing how to improve both the filesystem and the
applications. For now only ext4 uses it, but other filesystems can also use
it to count their own I/O types.

An 'Issue' flag is added to the buffer_head and is set in submit_bh().
When a filesystem sees this flag set, it knows the request was actually
issued to the disk. Filesystems only need to check it in the read path,
because a filesystem already knows whether a write request hits the cache
or not, at least in ext4. The buffer must be locked while checking and
clearing this flag, but that does not cost much overhead.

In ext4, a per-cpu counter is defined and some functions are added to count
the I/O types of buffered and direct I/O. The one exception is __breadahead(),
because that function neither takes a buffer_head as an argument nor returns
one, so requests going through __breadahead() cannot currently be counted.

The I/O types counted in ext4 are as follows:
Metadata:
 - super block
 - group descriptor
 - inode bitmap
 - block bitmap
 - inode table
 - extent block
 - indirect block
 - dir index and entry
 - extended attribute
Data:
 - regular data block

The result is exported through sysfs and can be read from
/sys/fs/ext4/$DEVICE/io_stats. From it we can see how many metadata and
data requests are issued to the disk.

I have run some benchmarks to measure the overhead that the extra
lock_buffer() calls bring. The following fio job file was run on an SSD;
the result shows that the overhead is negligible.

FIO config file:
[global]
ioengine=sync
bs=4k
filename=/mnt/sda1/testfile
size=64G
runtime=300
group_reporting
loops=500

[read]
rw=randread
numjobs=4

[write]
rw=randwrite
numjobs=1

The result (iops):
        w/o         w/
READ:  16304      15906 (-2.44%)
WRITE:  1332       1353 (+1.58%)

Any comments or suggestions are welcome.

Regards,
Zheng


Thread overview: 21+ messages
2011-11-10 10:34 [PATCH v2 0/8] Filesystem io types statistic Zheng Liu
2011-11-10 10:34 ` [PATCH v2 1/8] vfs: Add a new flag and related functions in buffer to count io types Zheng Liu
2011-11-11 10:48   ` Steven Whitehouse
2011-11-11 15:36     ` Zheng Liu
2011-11-10 10:34 ` [PATCH v2 2/8] ext4: Add new data structures and related functions " Zheng Liu
2011-11-11 10:58   ` Steven Whitehouse
2011-11-11 15:45     ` Zheng Liu
2011-11-10 10:34 ` [PATCH v2 3/8] ext4: Count metadata request of read operations in buffered io Zheng Liu
2011-11-10 10:34 ` [PATCH v2 4/8] ext4: Count data " Zheng Liu
2011-11-10 10:34 ` [PATCH v2 5/8] ext4: Count metadata request of write " Zheng Liu
2011-11-10 10:34 ` [PATCH v2 6/8] ext4: Count data " Zheng Liu
2011-11-10 10:34 ` [PATCH v2 7/8] ext4: Count all requests in direct io Zheng Liu
2011-11-10 10:34 ` [PATCH v2 8/8] ext4: Show the result of io types statistic in sysfs Zheng Liu
2011-11-11 10:55 ` [PATCH v2 0/8] Filesystem io types statistic Steven Whitehouse
2011-11-11 15:32   ` Zheng Liu
2011-11-14 10:23     ` Steven Whitehouse
2011-11-14 13:35       ` Zheng Liu
2011-11-15 18:34         ` Aditya Kali
2011-11-16  8:43           ` Zheng Liu
2011-11-16 10:14             ` Steven Whitehouse
2011-11-18  2:48               ` Zheng Liu
