linux-fsdevel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
@ 2012-11-16  9:51 zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 01/16] vfs: introduce some data structures zwu.kernel
                   ` (16 more replies)
  0 siblings, 17 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

Hi guys,

  Any comments or ideas are appreciated, thanks.

NOTE:

  The patchset can be obtained via my kernel dev git on github:
git://github.com/wuzhy/kernel.git hot_tracking
  If you're interested, you can also review them via
https://github.com/wuzhy/kernel/commits/hot_tracking

  For more info, please check hot_tracking.txt in Documentation

TODO List:

 1.) Need to do scalability or performance tests. - Required
 2.) Need one simpler but more efficient temperature calculation function
 3.) How to save file temperatures across umount so that they can
     be preserved after a reboot - Optional

Changelog:

 - Solved the 64-bit inode number issue. [David Sterba]
 - Embedded struct hot_type in struct file_system_type [Darrick J. Wong]
 - Cleaned up some issues [David Sterba]
 - Used a static hot debugfs root [Greg KH]
 - Rewrote debugfs support based on seq_file operations. [Dave Chinner]
 - Refactored workqueue support. [Dave Chinner]
 - Turned some macros into tunables [Zhiyong, Zheng Liu]
       TIME_TO_KICK, and HEAT_UPDATE_DELAY
 - Introduced hot func registering framework [Zhiyong]
 - Removed global variable for hot tracking [Zhiyong]
 - Added xfs hot tracking support [Dave Chinner]
 - Added ext4 hot tracking support [Zheng Liu]
 - Cleaned up a lot of other issues [Dave Chinner]
 - Added memory shrinker [Dave Chinner]
 - Converted to one workqueue to update map info periodically [Dave Chinner]
 - Reduced new files and put all in fs/hot_tracking.[ch] [Dave Chinner]
 - Added btrfs hot tracking support [Zhiyong]
 - The first three patches can probably just be flattened into one.
                                        [Marco Stornelli, Dave Chinner]

Zhi Yong Wu (16):
  vfs: introduce some data structures
  vfs: add init and cleanup functions
  vfs: add I/O frequency update function
  vfs: add two map arrays
  vfs: add hooks to enable hot tracking
  vfs: add temp calculation function
  vfs: add map info update function
  vfs: add aging function
  vfs: add one work queue
  vfs: add FS hot type support
  vfs: register one shrinker
  vfs: add one ioctl interface
  vfs: add debugfs support
  proc: add two hot_track proc files
  btrfs: add hot tracking support
  vfs: add documentation

 Documentation/filesystems/00-INDEX         |    2 +
 Documentation/filesystems/hot_tracking.txt |  263 ++++++
 fs/Makefile                                |    2 +-
 fs/btrfs/ctree.h                           |    1 +
 fs/btrfs/super.c                           |   22 +-
 fs/compat_ioctl.c                          |    5 +
 fs/dcache.c                                |    2 +
 fs/direct-io.c                             |    6 +
 fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
 fs/hot_tracking.h                          |   52 ++
 fs/ioctl.c                                 |   74 ++
 include/linux/fs.h                         |    5 +
 include/linux/hot_tracking.h               |  152 ++++
 kernel/sysctl.c                            |   14 +
 mm/filemap.c                               |    6 +
 mm/page-writeback.c                        |   12 +
 mm/readahead.c                             |    7 +
 17 files changed, 1929 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/filesystems/hot_tracking.txt
 create mode 100644 fs/hot_tracking.c
 create mode 100644 fs/hot_tracking.h
 create mode 100644 include/linux/hot_tracking.h

-- 
1.7.6.5

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v1 hot_track 01/16] vfs: introduce some data structures
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 02/16] vfs: add init and cleanup functions zwu.kernel
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  One root structure, hot_info, is defined and hooked
up in the super_block; it holds the hot inode tree
root and other tracking state.
  A hot_inode_tree struct is added to keep track of
frequently accessed files; it is keyed by inode number
and contains hot_inode_items representing those files.
  Having these trees means that the VFS can quickly
determine the temperature of some data by doing some
calculations on the hot_freq_data struct that hangs
off of each tree item.
  Two item types are defined, hot_inode_item and
hot_range_item: the former represents one tracked file,
keeping track of its access frequency and the tree of
ranges in this file, while the latter represents one
range of that file.
  Each of the two structures contains a hot_freq_data
struct with its frequency-of-access metrics (number of
{reads, writes}, last {read, write} time, average delta
between {reads, writes}).
  Also, each hot_inode_item contains one hot_range_tree
struct, keyed by range start offset, which is used to
keep track of all the tracked ranges in this file.
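
As a rough userspace model of the hierarchy described above (plain
linked lists stand in for the kernel's red-black trees; all names
here are illustrative, not the kernel's):

```c
#include <stddef.h>
#include <stdint.h>

/* Per-object access metrics, loosely mirroring hot_freq_data. */
struct freq_data {
	uint32_t nr_reads, nr_writes;
	uint64_t avg_delta_reads, avg_delta_writes;
};

/* One tracked range inside a file, keyed by its start offset. */
struct range_item {
	uint64_t start, len;
	struct freq_data freq;
	struct range_item *next;	/* stand-in for the rbtree link */
};

/* One tracked file, keyed by inode number, owning its ranges. */
struct inode_item {
	uint64_t ino;
	struct freq_data freq;
	struct range_item *ranges;
	struct inode_item *next;	/* stand-in for the rbtree link */
};

/* Per-superblock root, loosely mirroring hot_info. */
struct hot_root {
	struct inode_item *inodes;
};

/* Linear lookup; the kernel walks an rbtree and allocates on miss. */
static struct inode_item *inode_lookup(struct hot_root *r, uint64_t ino)
{
	for (struct inode_item *i = r->inodes; i; i = i->next)
		if (i->ino == ino)
			return i;
	return NULL;
}
```

The real code replaces the lists with rbtrees so both lookups stay
O(log n) on hot paths.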

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/Makefile                  |    2 +-
 fs/dcache.c                  |    2 +
 fs/hot_tracking.c            |  109 ++++++++++++++++++++++++++++++++++++++++++
 fs/hot_tracking.h            |   22 ++++++++
 include/linux/hot_tracking.h |   79 ++++++++++++++++++++++++++++++
 5 files changed, 213 insertions(+), 1 deletions(-)
 create mode 100644 fs/hot_tracking.c
 create mode 100644 fs/hot_tracking.h
 create mode 100644 include/linux/hot_tracking.h

diff --git a/fs/Makefile b/fs/Makefile
index 1d7af79..f966dea 100644
--- a/fs/Makefile
+++ b/fs/Makefile
@@ -11,7 +11,7 @@ obj-y :=	open.o read_write.o file_table.o super.o \
 		attr.o bad_inode.o file.o filesystems.o namespace.o \
 		seq_file.o xattr.o libfs.o fs-writeback.o \
 		pnode.o drop_caches.o splice.o sync.o utimes.o \
-		stack.o fs_struct.o statfs.o
+		stack.o fs_struct.o statfs.o hot_tracking.o
 
 ifeq ($(CONFIG_BLOCK),y)
 obj-y +=	buffer.o bio.o block_dev.o direct-io.o mpage.o ioprio.o
diff --git a/fs/dcache.c b/fs/dcache.c
index 3a463d0..7d5be16 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -37,6 +37,7 @@
 #include <linux/rculist_bl.h>
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
+#include <linux/hot_tracking.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -3172,4 +3173,5 @@ void __init vfs_caches_init(unsigned long mempages)
 	mnt_init();
 	bdev_cache_init();
 	chrdev_init();
+	hot_cache_init();
 }
diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
new file mode 100644
index 0000000..ef7ff09
--- /dev/null
+++ b/fs/hot_tracking.c
@@ -0,0 +1,109 @@
+/*
+ * fs/hot_tracking.c
+ *
+ * Copyright (C) 2012 IBM Corp. All rights reserved.
+ * Written by Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ */
+
+#include <linux/list.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/hardirq.h>
+#include <linux/fs.h>
+#include <linux/blkdev.h>
+#include <linux/types.h>
+#include <linux/limits.h>
+#include "hot_tracking.h"
+
+/* kmem_cache pointers for slab caches */
+static struct kmem_cache *hot_inode_item_cachep __read_mostly;
+static struct kmem_cache *hot_range_item_cachep __read_mostly;
+
+/*
+ * Initialize the hot inode tree. Called once when hot
+ * tracking is set up for a superblock.
+ */
+static void hot_inode_tree_init(struct hot_info *root)
+{
+	root->hot_inode_tree.map = RB_ROOT;
+	spin_lock_init(&root->lock);
+}
+
+/*
+ * Initialize the hot range tree. Called whenever a new
+ * hot_inode_item is initialized.
+ */
+void hot_range_tree_init(struct hot_inode_item *he)
+{
+	he->hot_range_tree.map = RB_ROOT;
+	spin_lock_init(&he->lock);
+}
+
+/*
+ * Initialize a new hot_range_item structure. The item starts
+ * with a reference count of one and is released via
+ * hot_range_item_put()
+ */
+static void hot_range_item_init(struct hot_range_item *hr, loff_t start,
+				struct hot_inode_item *he)
+{
+	hr->start = start;
+	hr->len = RANGE_SIZE;
+	hr->hot_inode = he;
+	kref_init(&hr->hot_range.refs);
+	spin_lock_init(&hr->hot_range.lock);
+	hr->hot_range.hot_freq_data.avg_delta_reads = (u64) -1;
+	hr->hot_range.hot_freq_data.avg_delta_writes = (u64) -1;
+	hr->hot_range.hot_freq_data.flags = FREQ_DATA_TYPE_RANGE;
+}
+
+/*
+ * Initialize a new hot_inode_item structure. The item starts
+ * with a reference count of one and is released via
+ * hot_inode_item_put()
+ */
+static void hot_inode_item_init(struct hot_inode_item *he,
+				u64 ino,
+				struct hot_rb_tree *hot_inode_tree)
+{
+	he->i_ino = ino;
+	he->hot_inode_tree = hot_inode_tree;
+	kref_init(&he->hot_inode.refs);
+	spin_lock_init(&he->hot_inode.lock);
+	he->hot_inode.hot_freq_data.avg_delta_reads = (u64) -1;
+	he->hot_inode.hot_freq_data.avg_delta_writes = (u64) -1;
+	he->hot_inode.hot_freq_data.flags = FREQ_DATA_TYPE_INODE;
+	hot_range_tree_init(he);
+}
+
+/*
+ * Initialize kmem cache for hot_inode_item and hot_range_item.
+ */
+void __init hot_cache_init(void)
+{
+	hot_inode_item_cachep = kmem_cache_create("hot_inode_item",
+			sizeof(struct hot_inode_item), 0,
+			SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD,
+			NULL);
+	if (!hot_inode_item_cachep)
+		return;
+
+	hot_range_item_cachep = kmem_cache_create("hot_range_item",
+			sizeof(struct hot_range_item), 0,
+			SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD,
+			NULL);
+	if (!hot_range_item_cachep)
+		goto err;
+
+	return;
+
+err:
+	kmem_cache_destroy(hot_inode_item_cachep);
+}
+EXPORT_SYMBOL_GPL(hot_cache_init);
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
new file mode 100644
index 0000000..d58a461
--- /dev/null
+++ b/fs/hot_tracking.h
@@ -0,0 +1,22 @@
+/*
+ * fs/hot_tracking.h
+ *
+ * Copyright (C) 2012 IBM Corp. All rights reserved.
+ * Written by Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ */
+
+#ifndef __HOT_TRACKING__
+#define __HOT_TRACKING__
+
+#include <linux/workqueue.h>
+#include <linux/hot_tracking.h>
+
+/* values for hot_freq_data flags */
+#define FREQ_DATA_TYPE_INODE (1 << 0)
+#define FREQ_DATA_TYPE_RANGE (1 << 1)
+
+#endif /* __HOT_TRACKING__ */
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
new file mode 100644
index 0000000..aae8127
--- /dev/null
+++ b/include/linux/hot_tracking.h
@@ -0,0 +1,79 @@
+/*
+ *  include/linux/hot_tracking.h
+ *
+ * This file has definitions for VFS hot data tracking
+ * structures etc.
+ *
+ * Copyright (C) 2012 IBM Corp. All rights reserved.
+ * Written by Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ */
+
+#ifndef _LINUX_HOTTRACK_H
+#define _LINUX_HOTTRACK_H
+
+#include <linux/types.h>
+#include <linux/rbtree.h>
+#include <linux/kref.h>
+#include <linux/fs.h>
+
+struct hot_rb_tree {
+	struct rb_root map;
+	spinlock_t lock;
+};
+
+/*
+ * A frequency data struct holds values that are used to
+ * determine temperature of files and file ranges. These structs
+ * are members of hot_inode_item and hot_range_item
+ */
+struct hot_freq_data {
+	struct timespec last_read_time;
+	struct timespec last_write_time;
+	u32 nr_reads;
+	u32 nr_writes;
+	u64 avg_delta_reads;
+	u64 avg_delta_writes;
+	u32 flags;
+	u32 last_temp;
+};
+
+/* The common info for both following structures */
+struct hot_comm_item {
+	struct rb_node rb_node; /* rbtree index */
+	struct hot_freq_data hot_freq_data;  /* frequency data */
+	spinlock_t lock; /* protects object data */
+	struct kref refs;  /* prevents kfree */
+};
+
+/* An item representing an inode and its access frequency */
+struct hot_inode_item {
+	struct hot_comm_item hot_inode; /* node in hot_inode_tree */
+	struct hot_rb_tree hot_range_tree; /* tree of ranges */
+	spinlock_t lock; /* protect range tree */
+	struct hot_rb_tree *hot_inode_tree;
+	u64 i_ino; /* inode number from inode */
+};
+
+/*
+ * An item representing a range inside of
+ * an inode whose frequency is being tracked
+ */
+struct hot_range_item {
+	struct hot_comm_item hot_range;
+	struct hot_inode_item *hot_inode; /* associated hot_inode_item */
+	loff_t start; /* item offset in bytes in hot_range_tree */
+	size_t len; /* length in bytes */
+};
+
+struct hot_info {
+	struct hot_rb_tree hot_inode_tree;
+	spinlock_t lock; /* protects inode tree */
+};
+
+extern void __init hot_cache_init(void);
+
+#endif  /* _LINUX_HOTTRACK_H */
-- 
1.7.6.5


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v1 hot_track 02/16] vfs: add init and cleanup functions
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 01/16] vfs: introduce some data structures zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 03/16] vfs: add I/O frequency update function zwu.kernel
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add an initialization function to create some
key data structures when hot tracking is enabled,
and clean them up when hot tracking is disabled.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |  115 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/fs.h           |    4 ++
 include/linux/hot_tracking.h |    3 +
 3 files changed, 122 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index ef7ff09..1fd4d0e 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -76,12 +76,92 @@ static void hot_inode_item_init(struct hot_inode_item *he,
 	he->hot_inode_tree = hot_inode_tree;
 	kref_init(&he->hot_inode.refs);
 	spin_lock_init(&he->hot_inode.lock);
+	INIT_LIST_HEAD(&he->hot_inode.n_list);
 	he->hot_inode.hot_freq_data.avg_delta_reads = (u64) -1;
 	he->hot_inode.hot_freq_data.avg_delta_writes = (u64) -1;
 	he->hot_inode.hot_freq_data.flags = FREQ_DATA_TYPE_INODE;
 	hot_range_tree_init(he);
 }
 
+static void hot_range_item_free(struct kref *kref)
+{
+	struct hot_comm_item *comm_item = container_of(kref,
+		struct hot_comm_item, refs);
+	struct hot_range_item *hr = container_of(comm_item,
+		struct hot_range_item, hot_range);
+
+	rb_erase(&hr->hot_range.rb_node,
+		&hr->hot_inode->hot_range_tree.map);
+	kmem_cache_free(hot_range_item_cachep, hr);
+}
+
+/*
+ * Drop the reference count on a hot_range_item by one
+ * and free the structure if the count hits zero
+ */
+static void hot_range_item_put(struct hot_range_item *hr)
+{
+	kref_put(&hr->hot_range.refs, hot_range_item_free);
+}
+
+/* Frees the entire hot_range_tree. */
+static void hot_range_tree_free(struct hot_inode_item *he)
+{
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+	struct hot_range_item *hr;
+
+	/* Free all hot_range_items in this inode's range tree */
+	spin_lock(&he->lock);
+	while ((node = rb_first(&he->hot_range_tree.map))) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		hr = container_of(ci,
+			struct hot_range_item, hot_range);
+		hot_range_item_put(hr);
+	}
+	spin_unlock(&he->lock);
+}
+
+static void hot_inode_item_free(struct kref *kref)
+{
+	struct hot_comm_item *comm_item = container_of(kref,
+			struct hot_comm_item, refs);
+	struct hot_inode_item *he = container_of(comm_item,
+			struct hot_inode_item, hot_inode);
+
+	hot_range_tree_free(he);
+	rb_erase(&he->hot_inode.rb_node, &he->hot_inode_tree->map);
+	kmem_cache_free(hot_inode_item_cachep, he);
+}
+
+/*
+ * Drop the reference count on a hot_inode_item by one
+ * and free the structure if the count hits zero
+ */
+void hot_inode_item_put(struct hot_inode_item *he)
+{
+	kref_put(&he->hot_inode.refs, hot_inode_item_free);
+}
+EXPORT_SYMBOL_GPL(hot_inode_item_put);
+
+/* Frees the entire hot_inode_tree. */
+static void hot_inode_tree_exit(struct hot_info *root)
+{
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *he;
+
+	/* Free hot inode and range trees on fs root */
+	spin_lock(&root->lock);
+	while ((node = rb_first(&root->hot_inode_tree.map))) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		he = container_of(ci,
+			struct hot_inode_item, hot_inode);
+		hot_inode_item_put(he);
+	}
+	spin_unlock(&root->lock);
+}
+
 /*
  * Initialize kmem cache for hot_inode_item and hot_range_item.
  */
@@ -107,3 +187,38 @@ err:
 	kmem_cache_destroy(hot_inode_item_cachep);
 }
 EXPORT_SYMBOL_GPL(hot_cache_init);
+
+/*
+ * Initialize the data structures for hot data tracking.
+ */
+int hot_track_init(struct super_block *sb)
+{
+	struct hot_info *root;
+	int ret = -ENOMEM;
+
+	root = kzalloc(sizeof(struct hot_info), GFP_NOFS);
+	if (!root) {
+		printk(KERN_ERR "%s: Failed to allocate memory for "
+				"hot_info\n", __func__);
+		return ret;
+	}
+
+	hot_inode_tree_init(root);
+
+	sb->s_hot_root = root;
+
+	printk(KERN_INFO "VFS: Turning on hot data tracking\n");
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hot_track_init);
+
+void hot_track_exit(struct super_block *sb)
+{
+	struct hot_info *root = sb->s_hot_root;
+
+	hot_inode_tree_exit(root);
+	sb->s_hot_root = NULL;
+	kfree(root);
+}
+EXPORT_SYMBOL_GPL(hot_track_exit);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index b33cfc9..c541ae7 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -27,6 +27,7 @@
 #include <linux/lockdep.h>
 #include <linux/percpu-rwsem.h>
 #include <linux/blk_types.h>
+#include <linux/hot_tracking.h>
 
 #include <asm/byteorder.h>
 #include <uapi/linux/fs.h>
@@ -1321,6 +1322,9 @@ struct super_block {
 
 	/* Being remounted read-only */
 	int s_readonly_remount;
+
+	/* Hot data tracking*/
+	struct hot_info *s_hot_root;
 };
 
 /* superblock cache pruning functions */
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index aae8127..99d0f63 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -75,5 +75,8 @@ struct hot_info {
 };
 
 extern void __init hot_cache_init(void);
+extern int hot_track_init(struct super_block *sb);
+extern void hot_track_exit(struct super_block *sb);
+extern void hot_inode_item_put(struct hot_inode_item *he);
 
 #endif  /* _LINUX_HOTTRACK_H */
-- 
1.7.6.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v1 hot_track 03/16] vfs: add I/O frequency update function
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 01/16] vfs: introduce some data structures zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 02/16] vfs: add init and cleanup functions zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 04/16] vfs: add two map arrays zwu.kernel
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add some utility helpers to update the access
frequency of one file or a range within it.
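
The averaging these helpers use (see hot_rw_freq_calc() in the diff)
is a power-of-two exponential moving average: with FREQ_POWER = 4,
the new average keeps roughly 15/16 of the old value and mixes in
1/16 of the new (pre-scaled) inter-access delta. A small userspace
sketch of the same arithmetic, with illustrative names:

```c
#include <stdint.h>

#define FREQ_POWER 4	/* same weighting exponent as the patch */

/*
 * avg <- ((avg << P) - avg + (delta >> P)) >> P
 * i.e. keep 15/16 of the old average and add 1/16 of the
 * new delta, which is itself scaled down by 1/16.
 */
static uint64_t freq_avg_update(uint64_t avg, uint64_t delta_ns)
{
	uint64_t new_delta = delta_ns >> FREQ_POWER;

	avg = (avg << FREQ_POWER) - avg + new_delta;
	return avg >> FREQ_POWER;
}
```

All of it is integer shifts, so the hot path needs no division or
floating point.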

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |  178 ++++++++++++++++++++++++++++++++++++++++++
 fs/hot_tracking.h            |    5 +
 include/linux/hot_tracking.h |    4 +
 3 files changed, 187 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 1fd4d0e..6d396fe 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -162,6 +162,135 @@ static void hot_inode_tree_exit(struct hot_info *root)
 	spin_unlock(&root->lock);
 }
 
+struct hot_inode_item
+*hot_inode_item_lookup(struct hot_info *root, u64 ino)
+{
+	struct rb_node **p = &root->hot_inode_tree.map.rb_node;
+	struct rb_node *parent = NULL;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *entry;
+
+	/* walk tree to find insertion point */
+	spin_lock(&root->lock);
+	while (*p) {
+		parent = *p;
+		ci = rb_entry(parent, struct hot_comm_item, rb_node);
+		entry = container_of(ci, struct hot_inode_item, hot_inode);
+		if (ino < entry->i_ino)
+			p = &(*p)->rb_left;
+		else if (ino > entry->i_ino)
+			p = &(*p)->rb_right;
+		else {
+			spin_unlock(&root->lock);
+			kref_get(&entry->hot_inode.refs);
+			return entry;
+		}
+	}
+	spin_unlock(&root->lock);
+
+	entry = kmem_cache_zalloc(hot_inode_item_cachep, GFP_NOFS);
+	if (!entry)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock(&root->lock);
+	hot_inode_item_init(entry, ino, &root->hot_inode_tree);
+	rb_link_node(&entry->hot_inode.rb_node, parent, p);
+	rb_insert_color(&entry->hot_inode.rb_node,
+			&root->hot_inode_tree.map);
+	spin_unlock(&root->lock);
+
+	kref_get(&entry->hot_inode.refs);
+	return entry;
+}
+EXPORT_SYMBOL_GPL(hot_inode_item_lookup);
+
+static loff_t hot_range_end(struct hot_range_item *hr)
+{
+	if (hr->start + hr->len < hr->start)
+		return (loff_t)-1;
+
+	return hr->start + hr->len - 1;
+}
+
+static struct hot_range_item
+*hot_range_item_lookup(struct hot_inode_item *he,
+			loff_t start)
+{
+	struct rb_node **p = &he->hot_range_tree.map.rb_node;
+	struct rb_node *parent = NULL;
+	struct hot_comm_item *ci;
+	struct hot_range_item *entry;
+
+	/* walk tree to find insertion point */
+	spin_lock(&he->lock);
+	while (*p) {
+		parent = *p;
+		ci = rb_entry(parent, struct hot_comm_item, rb_node);
+		entry = container_of(ci, struct hot_range_item, hot_range);
+		if (start < entry->start)
+			p = &(*p)->rb_left;
+		else if (start > hot_range_end(entry))
+			p = &(*p)->rb_right;
+		else {
+			spin_unlock(&he->lock);
+			kref_get(&entry->hot_range.refs);
+			return entry;
+		}
+	}
+	spin_unlock(&he->lock);
+
+	entry = kmem_cache_zalloc(hot_range_item_cachep, GFP_NOFS);
+	if (!entry)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock(&he->lock);
+	hot_range_item_init(entry, start, he);
+	rb_link_node(&entry->hot_range.rb_node, parent, p);
+	rb_insert_color(&entry->hot_range.rb_node,
+			&he->hot_range_tree.map);
+	spin_unlock(&he->lock);
+
+	kref_get(&entry->hot_range.refs);
+	return entry;
+}
+
+/*
+ * Do the actual work of updating the read/write
+ * frequency averages.
+ */
+static void hot_rw_freq_calc(struct timespec old_atime,
+		struct timespec cur_time, u64 *avg)
+{
+	struct timespec delta_ts;
+	u64 new_delta;
+
+	delta_ts = timespec_sub(cur_time, old_atime);
+	new_delta = timespec_to_ns(&delta_ts) >> FREQ_POWER;
+
+	*avg = (*avg << FREQ_POWER) - *avg + new_delta;
+	*avg = *avg >> FREQ_POWER;
+}
+
+static void hot_freq_data_update(struct hot_freq_data *freq_data, bool write)
+{
+	struct timespec cur_time = current_kernel_time();
+
+	if (write) {
+		freq_data->nr_writes += 1;
+		hot_rw_freq_calc(freq_data->last_write_time,
+				cur_time,
+				&freq_data->avg_delta_writes);
+		freq_data->last_write_time = cur_time;
+	} else {
+		freq_data->nr_reads += 1;
+		hot_rw_freq_calc(freq_data->last_read_time,
+				cur_time,
+				&freq_data->avg_delta_reads);
+		freq_data->last_read_time = cur_time;
+	}
+}
+
 /*
  * Initialize kmem cache for hot_inode_item and hot_range_item.
  */
@@ -189,6 +318,55 @@ err:
 EXPORT_SYMBOL_GPL(hot_cache_init);
 
 /*
+ * Main function to update access frequency from read/writepage(s) hooks
+ */
+void hot_update_freqs(struct inode *inode, loff_t start,
+			size_t len, int rw)
+{
+	struct hot_info *root = inode->i_sb->s_hot_root;
+	struct hot_inode_item *he;
+	struct hot_range_item *hr;
+	loff_t cur, end;
+
+	if (!root || (len == 0))
+		return;
+
+	he = hot_inode_item_lookup(root, inode->i_ino);
+	if (IS_ERR(he)) {
+		WARN_ON(1);
+		return;
+	}
+
+	spin_lock(&he->hot_inode.lock);
+	hot_freq_data_update(&he->hot_inode.hot_freq_data, rw);
+	spin_unlock(&he->hot_inode.lock);
+
+	/*
+	 * Align ranges on RANGE_SIZE boundary
+	 * to prevent proliferation of range structs
+	 */
+	end = (start + len + RANGE_SIZE - 1) >> RANGE_BITS;
+	for (cur = (start >> RANGE_BITS); cur < end; cur++) {
+		hr = hot_range_item_lookup(he, cur);
+		if (IS_ERR(hr)) {
+			WARN(1, "hot_range_item_lookup returns %ld\n",
+				PTR_ERR(hr));
+			hot_inode_item_put(he);
+			return;
+		}
+
+		spin_lock(&hr->hot_range.lock);
+		hot_freq_data_update(&hr->hot_range.hot_freq_data, rw);
+		spin_unlock(&hr->hot_range.lock);
+
+		hot_range_item_put(hr);
+	}
+
+	hot_inode_item_put(he);
+}
+EXPORT_SYMBOL_GPL(hot_update_freqs);
+
+/*
  * Initialize the data structures for hot data tracking.
  */
 int hot_track_init(struct super_block *sb)
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index d58a461..8571186 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -19,4 +19,9 @@
 #define FREQ_DATA_TYPE_INODE (1 << 0)
 #define FREQ_DATA_TYPE_RANGE (1 << 1)
 
+/* size of sub-file ranges */
+#define RANGE_BITS 20
+#define RANGE_SIZE (1 << RANGE_BITS)
+#define FREQ_POWER 4
+
 #endif /* __HOT_TRACKING__ */
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index 99d0f63..b9992c0 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -78,5 +78,9 @@ extern void __init hot_cache_init(void);
 extern int hot_track_init(struct super_block *sb);
 extern void hot_track_exit(struct super_block *sb);
 extern void hot_inode_item_put(struct hot_inode_item *he);
+extern void hot_update_freqs(struct inode *inode, loff_t start,
+				size_t len, int rw);
+extern struct hot_inode_item *hot_inode_item_lookup(struct hot_info *root,
+						u64 ino);
 
 #endif  /* _LINUX_HOTTRACK_H */
-- 
1.7.6.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v1 hot_track 04/16] vfs: add two map arrays
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (2 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 03/16] vfs: add I/O frequency update function zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 05/16] vfs: add hooks to enable hot tracking zwu.kernel
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add two map arrays, each consisting of a number of
list heads, which are used to efficiently look up the
data temperature of a file or of its ranges.
  In each list of a map array, the nodes keep track
of the items currently at that temperature.
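
The map arrays amount to an array of list heads indexed by
temperature, so re-filing an item when its temperature changes is two
O(1) list operations. A hedged userspace sketch of that bucket scheme
(HEAT_MAP_SIZE matches the patch; everything else is illustrative):

```c
#include <stddef.h>

#define HEAT_MAP_BITS 8
#define HEAT_MAP_SIZE (1 << HEAT_MAP_BITS)	/* 256 temperature buckets */

/* Minimal doubly linked list, standing in for the kernel's list_head. */
struct list_node {
	struct list_node *prev, *next;
};

static void list_init(struct list_node *n) { n->prev = n->next = n; }

static void list_del(struct list_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

static void list_add(struct list_node *head, struct list_node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* One list head per temperature, as in struct hot_map_head. */
struct heat_map {
	struct list_node bucket[HEAT_MAP_SIZE];
};

static void heat_map_init(struct heat_map *m)
{
	for (int i = 0; i < HEAT_MAP_SIZE; i++)
		list_init(&m->bucket[i]);
}

/* Re-file an item under its newly computed temperature: O(1). */
static void heat_map_move(struct heat_map *m, struct list_node *item,
			  unsigned int temp)
{
	list_del(item);
	list_add(&m->bucket[temp & (HEAT_MAP_SIZE - 1)], item);
}
```

Because an empty node is self-linked, list_del() is safe whether or
not the item was already filed in a bucket.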

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |   60 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/hot_tracking.h |   16 +++++++++++
 2 files changed, 76 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 6d396fe..bd2c353 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -58,6 +58,7 @@ static void hot_range_item_init(struct hot_range_item *hr, loff_t start,
 	hr->hot_inode = he;
 	kref_init(&hr->hot_range.refs);
 	spin_lock_init(&hr->hot_range.lock);
+	INIT_LIST_HEAD(&hr->hot_range.n_list);
 	hr->hot_range.hot_freq_data.avg_delta_reads = (u64) -1;
 	hr->hot_range.hot_freq_data.avg_delta_writes = (u64) -1;
 	hr->hot_range.hot_freq_data.flags = FREQ_DATA_TYPE_RANGE;
@@ -89,6 +90,16 @@ static void hot_range_item_free(struct kref *kref)
 		struct hot_comm_item, refs);
 	struct hot_range_item *hr = container_of(comm_item,
 		struct hot_range_item, hot_range);
+	struct hot_info *root = container_of(
+			hr->hot_inode->hot_inode_tree,
+		struct hot_info, hot_inode_tree);
+
+	spin_lock(&hr->hot_range.lock);
+	if (!list_empty(&hr->hot_range.n_list)) {
+		list_del_init(&hr->hot_range.n_list);
+		root->hot_map_nr--;
+	}
+	spin_unlock(&hr->hot_range.lock);
 
 	rb_erase(&hr->hot_range.rb_node,
 		&hr->hot_inode->hot_range_tree.map);
@@ -128,6 +139,15 @@ static void hot_inode_item_free(struct kref *kref)
 			struct hot_comm_item, refs);
 	struct hot_inode_item *he = container_of(comm_item,
 			struct hot_inode_item, hot_inode);
+	struct hot_info *root = container_of(he->hot_inode_tree,
+		struct hot_info, hot_inode_tree);
+
+	spin_lock(&he->hot_inode.lock);
+	if (!list_empty(&he->hot_inode.n_list)) {
+		list_del_init(&he->hot_inode.n_list);
+		root->hot_map_nr--;
+	}
+	spin_unlock(&he->hot_inode.lock);
 
 	hot_range_tree_free(he);
 	rb_erase(&he->hot_inode.rb_node, &he->hot_inode_tree->map);
@@ -292,6 +312,44 @@ static void hot_freq_data_update(struct hot_freq_data *freq_data, bool write)
 }
 
 /*
+ * Initialize inode and range map info.
+ */
+static void hot_map_init(struct hot_info *root)
+{
+	int i;
+	for (i = 0; i < HEAT_MAP_SIZE; i++) {
+		INIT_LIST_HEAD(&root->heat_inode_map[i].node_list);
+		INIT_LIST_HEAD(&root->heat_range_map[i].node_list);
+		root->heat_inode_map[i].temp = i;
+		root->heat_range_map[i].temp = i;
+	}
+}
+
+static void hot_map_list_free(struct list_head *node_list,
+				struct hot_info *root)
+{
+	struct list_head *pos, *next;
+	struct hot_comm_item *node;
+
+	list_for_each_safe(pos, next, node_list) {
+		node = list_entry(pos, struct hot_comm_item, n_list);
+		list_del_init(&node->n_list);
+		root->hot_map_nr--;
+	}
+
+}
+
+/* Free inode and range map info */
+static void hot_map_exit(struct hot_info *root)
+{
+	int i;
+	for (i = 0; i < HEAT_MAP_SIZE; i++) {
+		hot_map_list_free(&root->heat_inode_map[i].node_list, root);
+		hot_map_list_free(&root->heat_range_map[i].node_list, root);
+	}
+}
+
+/*
  * Initialize kmem cache for hot_inode_item and hot_range_item.
  */
 void __init hot_cache_init(void)
@@ -382,6 +440,7 @@ int hot_track_init(struct super_block *sb)
 	}
 
 	hot_inode_tree_init(root);
+	hot_map_init(root);
 
 	sb->s_hot_root = root;
 
@@ -395,6 +454,7 @@ void hot_track_exit(struct super_block *sb)
 {
 	struct hot_info *root = sb->s_hot_root;
 
+	hot_map_exit(root);
 	hot_inode_tree_exit(root);
 	sb->s_hot_root = NULL;
 	kfree(root);
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index b9992c0..34a0530 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -20,6 +20,9 @@
 #include <linux/kref.h>
 #include <linux/fs.h>
 
+#define HEAT_MAP_BITS 8
+#define HEAT_MAP_SIZE (1 << HEAT_MAP_BITS)
+
 struct hot_rb_tree {
 	struct rb_root map;
 	spinlock_t lock;
@@ -41,12 +44,19 @@ struct hot_freq_data {
 	u32 last_temp;
 };
 
+/* List heads in hot map array */
+struct hot_map_head {
+	struct list_head node_list;
+	u8 temp;
+};
+
 /* The common info for both following structures */
 struct hot_comm_item {
 	struct rb_node rb_node; /* rbtree index */
 	struct hot_freq_data hot_freq_data;  /* frequency data */
 	spinlock_t lock; /* protects object data */
 	struct kref refs;  /* prevents kfree */
+	struct list_head n_list; /* list node index */
 };
 
 /* An item representing an inode and its access frequency */
@@ -72,6 +82,12 @@ struct hot_range_item {
 struct hot_info {
 	struct hot_rb_tree hot_inode_tree;
 	spinlock_t lock; /*protect inode tree */
+
+	/* map of inode temperature */
+	struct hot_map_head heat_inode_map[HEAT_MAP_SIZE];
+	/* map of range temperature */
+	struct hot_map_head heat_range_map[HEAT_MAP_SIZE];
+	unsigned int hot_map_nr;
 };
 
 extern void __init hot_cache_init(void);
-- 
1.7.6.5


^ permalink raw reply related	[flat|nested] 22+ messages in thread
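The two map arrays added above bucket every tracked inode and range into one of HEAT_MAP_SIZE = 256 temperature lists. The bucketing shift itself only appears in a later patch, but the mapping can be sketched in userspace as follows (the helper name `heat_map_bucket` is ours, not the kernel's): the top HEAT_MAP_BITS bits of a 32-bit temperature select the bucket.

```c
#include <stdint.h>

#define HEAT_MAP_BITS 8
#define HEAT_MAP_SIZE (1 << HEAT_MAP_BITS)

/* Index into heat_inode_map[]/heat_range_map[]: the top
 * HEAT_MAP_BITS bits of a 32-bit temperature select the bucket. */
uint8_t heat_map_bucket(uint32_t temp)
{
	return (uint8_t)(temp >> (32 - HEAT_MAP_BITS));
}
```

So temperature 0 lands in the coldest bucket and 0xffffffff in the hottest (index 255).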

* [PATCH v1 hot_track 05/16] vfs: add hooks to enable hot tracking
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (3 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 04/16] vfs: add two map arrays zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 06/16] vfs: add temp calculation function zwu.kernel
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add hooks to the main read and write paths (buffered reads,
readahead, writeback and direct I/O) that implement hot data
tracking, and generally make the hot data functions a bit more
friendly.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/direct-io.c      |    6 ++++++
 mm/filemap.c        |    6 ++++++
 mm/page-writeback.c |   12 ++++++++++++
 mm/readahead.c      |    7 +++++++
 4 files changed, 31 insertions(+), 0 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index f86c720..51f13f4 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -37,6 +37,7 @@
 #include <linux/uio.h>
 #include <linux/atomic.h>
 #include <linux/prefetch.h>
+#include "hot_tracking.h"
 
 /*
  * How many user pages to map in one call to get_user_pages().  This determines
@@ -1297,6 +1298,11 @@ __blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	prefetch(bdev->bd_queue);
 	prefetch((char *)bdev->bd_queue + SMP_CACHE_BYTES);
 
+	/* Hot data tracking */
+	hot_update_freqs(inode, offset,
+			iov_length(iov, nr_segs),
+			rw & WRITE);
+
 	return do_blockdev_direct_IO(rw, iocb, inode, bdev, iov, offset,
 				     nr_segs, get_block, end_io,
 				     submit_io, flags);
diff --git a/mm/filemap.c b/mm/filemap.c
index 83efee7..6141374 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -33,6 +33,7 @@
 #include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
 #include <linux/memcontrol.h>
 #include <linux/cleancache.h>
+#include <linux/hot_tracking.h>
 #include "internal.h"
 
 /*
@@ -1224,6 +1225,11 @@ readpage:
 		 * PG_error will be set again if readpage fails.
 		 */
 		ClearPageError(page);
+
+		/* Hot data tracking */
+		hot_update_freqs(inode, (loff_t)page->index << PAGE_CACHE_SHIFT,
+				PAGE_CACHE_SIZE, 0);
+
 		/* Start the actual read. The read will unlock the page. */
 		error = mapping->a_ops->readpage(filp, page);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b..dc8f721 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -35,6 +35,7 @@
 #include <linux/buffer_head.h> /* __set_page_dirty_buffers */
 #include <linux/pagevec.h>
 #include <linux/timer.h>
+#include <linux/hot_tracking.h>
 #include <trace/events/writeback.h>
 
 /*
@@ -1903,13 +1904,24 @@ EXPORT_SYMBOL(generic_writepages);
 int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
 {
 	int ret;
+	loff_t start = 0;
+	size_t count = 0;
 
 	if (wbc->nr_to_write <= 0)
 		return 0;
+
+	start = mapping->writeback_index << PAGE_CACHE_SHIFT;
+	count = wbc->nr_to_write;
+
 	if (mapping->a_ops->writepages)
 		ret = mapping->a_ops->writepages(mapping, wbc);
 	else
 		ret = generic_writepages(mapping, wbc);
+
+	/* Hot data tracking */
+	hot_update_freqs(mapping->host, start,
+			(count - wbc->nr_to_write) * PAGE_CACHE_SIZE, 1);
+
 	return ret;
 }
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 7963f23..d1ab688 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -19,6 +19,7 @@
 #include <linux/pagemap.h>
 #include <linux/syscalls.h>
 #include <linux/file.h>
+#include <linux/hot_tracking.h>
 
 /*
  * Initialise a struct file's readahead state.  Assumes that the caller has
@@ -138,6 +139,12 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 out:
 	blk_finish_plug(&plug);
 
+	/* Hot data tracking */
+	hot_update_freqs(mapping->host,
+		(loff_t)(list_entry(pages->prev, struct page, lru)->index)
+			<< PAGE_CACHE_SHIFT,
+		(size_t)nr_pages * PAGE_CACHE_SIZE, 0);
+
 	return ret;
 }
 
-- 
1.7.6.5

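All four hooks funnel into hot_update_freqs(inode, start, len, rw) with a byte offset and length. As a userspace sketch of that arithmetic, assuming 4 KiB pages (PAGE_CACHE_SHIFT = 12; the helper names `hook_start` and `writeback_len` are ours): the readpage hook shifts a page index up to a byte offset, and the do_writepages() hook reports the number of pages actually written, i.e. the page budget consumed, in bytes.

```c
#include <stdint.h>

#define PAGE_CACHE_SHIFT 12                    /* assumes 4 KiB pages */
#define PAGE_CACHE_SIZE  (1ULL << PAGE_CACHE_SHIFT)

/* Start offset reported by the readpage hook:
 * page index shifted up to a byte offset. */
uint64_t hook_start(uint64_t page_index)
{
	return page_index << PAGE_CACHE_SHIFT;
}

/* Length reported by the do_writepages() hook: pages actually
 * written (budget before minus budget after), in bytes. */
uint64_t writeback_len(long nr_to_write_before, long nr_to_write_after)
{
	return (uint64_t)(nr_to_write_before - nr_to_write_after)
		* PAGE_CACHE_SIZE;
}
```

If writeback consumed 12 of a 16-page budget, the tracked write covers 12 * 4096 bytes starting at the writeback cursor.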

* [PATCH v1 hot_track 06/16] vfs: add temp calculation function
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c |   74 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/hot_tracking.h |   21 +++++++++++++++
 2 files changed, 95 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index bd2c353..3cb14e2 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -25,6 +25,14 @@
 static struct kmem_cache *hot_inode_item_cachep __read_mostly;
 static struct kmem_cache *hot_range_item_cachep __read_mostly;
 
+static u64 hot_raw_shift(u64 counter, u32 bits, bool dir)
+{
+	if (dir)
+		return counter << bits;
+	else
+		return counter >> bits;
+}
+
 /*
  * Initialize the inode tree. Should be called for each new inode
  * access or other user of the hot_inode interface.
@@ -312,6 +320,72 @@ static void hot_freq_data_update(struct hot_freq_data *freq_data, bool write)
 }
 
 /*
+ * hot_temp_calc() is responsible for distilling the six heat
+ * criteria down into a single temperature value for the data,
+ * which is an integer between 0 and HEAT_MAX_VALUE.
+ */
+static u32 hot_temp_calc(struct hot_freq_data *freq_data)
+{
+	u32 result = 0;
+
+	struct timespec ckt = current_kernel_time();
+	u64 cur_time = timespec_to_ns(&ckt);
+
+	u32 nrr_heat = (u32)hot_raw_shift((u64)freq_data->nr_reads,
+					NRR_MULTIPLIER_POWER, true);
+	u32 nrw_heat = (u32)hot_raw_shift((u64)freq_data->nr_writes,
+					NRW_MULTIPLIER_POWER, true);
+
+	u64 ltr_heat =
+	hot_raw_shift((cur_time - timespec_to_ns(&freq_data->last_read_time)),
+			LTR_DIVIDER_POWER, false);
+	u64 ltw_heat =
+	hot_raw_shift((cur_time - timespec_to_ns(&freq_data->last_write_time)),
+			LTW_DIVIDER_POWER, false);
+
+	u64 avr_heat =
+	hot_raw_shift((((u64) -1) - freq_data->avg_delta_reads),
+			AVR_DIVIDER_POWER, false);
+	u64 avw_heat =
+	hot_raw_shift((((u64) -1) - freq_data->avg_delta_writes),
+			AVW_DIVIDER_POWER, false);
+
+	/* ltr_heat is now guaranteed to be u32 safe */
+	if (ltr_heat >= hot_raw_shift((u64) 1, 32, true))
+		ltr_heat = 0;
+	else
+		ltr_heat = hot_raw_shift((u64) 1, 32, true) - ltr_heat;
+
+	/* ltw_heat is now guaranteed to be u32 safe */
+	if (ltw_heat >= hot_raw_shift((u64) 1, 32, true))
+		ltw_heat = 0;
+	else
+		ltw_heat = hot_raw_shift((u64) 1, 32, true) - ltw_heat;
+
+	/* avr_heat is now guaranteed to be u32 safe */
+	if (avr_heat >= hot_raw_shift((u64) 1, 32, true))
+		avr_heat = (u32) -1;
+
+	/* avw_heat is now guaranteed to be u32 safe */
+	if (avw_heat >= hot_raw_shift((u64) 1, 32, true))
+		avw_heat = (u32) -1;
+
+	nrr_heat = (u32)hot_raw_shift((u64)nrr_heat,
+		(3 - NRR_COEFF_POWER), false);
+	nrw_heat = (u32)hot_raw_shift((u64)nrw_heat,
+		(3 - NRW_COEFF_POWER), false);
+	ltr_heat = hot_raw_shift(ltr_heat, (3 - LTR_COEFF_POWER), false);
+	ltw_heat = hot_raw_shift(ltw_heat, (3 - LTW_COEFF_POWER), false);
+	avr_heat = hot_raw_shift(avr_heat, (3 - AVR_COEFF_POWER), false);
+	avw_heat = hot_raw_shift(avw_heat, (3 - AVW_COEFF_POWER), false);
+
+	result = nrr_heat + nrw_heat + (u32) ltr_heat +
+		(u32) ltw_heat + (u32) avr_heat + (u32) avw_heat;
+
+	return result;
+}
+
+/*
  * Initialize inode and range map info.
  */
 static void hot_map_init(struct hot_info *root)
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index 8571186..f33066f 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -24,4 +24,25 @@
 #define RANGE_SIZE (1 << RANGE_BITS)
 #define FREQ_POWER 4
 
+/* NRR/NRW heat unit = 2^X accesses */
+#define NRR_MULTIPLIER_POWER 20 /* NRR - number of reads since mount */
+#define NRR_COEFF_POWER 0
+#define NRW_MULTIPLIER_POWER 20 /* NRW - number of writes since mount */
+#define NRW_COEFF_POWER 0
+
+/* LTR/LTW heat unit = 2^X ns of age */
+#define LTR_DIVIDER_POWER 30 /* LTR - time elapsed since last read(ns) */
+#define LTR_COEFF_POWER 1
+#define LTW_DIVIDER_POWER 30 /* LTW - time elapsed since last write(ns) */
+#define LTW_COEFF_POWER 1
+
+/*
+ * AVR/AVW cold unit = 2^X ns of average delta
+ * AVR/AVW heat unit = HEAT_MAX_VALUE - cold unit
+ */
+#define AVR_DIVIDER_POWER 40 /* AVR - average delta between recent reads(ns) */
+#define AVR_COEFF_POWER 0
+#define AVW_DIVIDER_POWER 40 /* AVW - average delta between recent writes(ns) */
+#define AVW_COEFF_POWER 0
+
 #endif /* __HOT_TRACKING__ */
-- 
1.7.6.5

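The arithmetic above reduces to one helper, hot_raw_shift(), plus per-criterion scaling and clamping. Below is a userspace model of the shift helper and of the LTR/LTW "time since last access" term, which decays from 2^32 down to 0 as the access ages; `recency_heat` is our name for the clamped term, not the kernel's.

```c
#include <stdbool.h>
#include <stdint.h>

/* Direction-flagged shift, as in the patch: true shifts left. */
uint64_t hot_raw_shift(uint64_t counter, uint32_t bits, bool dir)
{
	return dir ? counter << bits : counter >> bits;
}

/* Model of the LTR/LTW term: age (ns) scaled down by 2^divider_power,
 * then inverted so that younger data scores higher; clamps to 0 once
 * the scaled age no longer fits in 32 bits. */
uint64_t recency_heat(uint64_t age_ns, uint32_t divider_power)
{
	uint64_t scaled = hot_raw_shift(age_ns, divider_power, false);
	uint64_t ceiling = hot_raw_shift(1, 32, true);   /* 2^32 */

	return scaled >= ceiling ? 0 : ceiling - scaled;
}
```

With LTR_DIVIDER_POWER = 30 the term bottoms out once the last read is about 2^62 ns in the past; a just-read item scores the full 2^32 before the final coefficient shifts.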

* [PATCH v1 hot_track 07/16] vfs: add map info update function
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c |   67 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 67 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 3cb14e2..446fbd4 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -386,6 +386,73 @@ static u32 hot_temp_calc(struct hot_freq_data *freq_data)
 }
 
 /*
+ * Calculate a new temperature and, if necessary,
+ * move the list_head corresponding to this inode or range
+ * to the proper list with the new temperature
+ */
+static void hot_map_update(struct hot_freq_data *freq_data,
+				struct hot_info *root)
+{
+	struct hot_map_head *buckets, *cur_bucket;
+	struct hot_comm_item *comm_item;
+	struct hot_inode_item *he;
+	struct hot_range_item *hr;
+	u32 temp = hot_temp_calc(freq_data);
+	u8 a_temp = (u8)hot_raw_shift((u64)temp, (32 - HEAT_MAP_BITS), false);
+	u8 b_temp = (u8)hot_raw_shift((u64)freq_data->last_temp,
+					(32 - HEAT_MAP_BITS), false);
+
+	comm_item = container_of(freq_data,
+			struct hot_comm_item, hot_freq_data);
+
+	if (freq_data->flags & FREQ_DATA_TYPE_INODE) {
+		he = container_of(comm_item,
+			struct hot_inode_item, hot_inode);
+		buckets = root->heat_inode_map;
+
+		spin_lock(&he->hot_inode.lock);
+		if (list_empty(&he->hot_inode.n_list) || (a_temp != b_temp)) {
+			if (!list_empty(&he->hot_inode.n_list)) {
+				list_del_init(&he->hot_inode.n_list);
+				root->hot_map_nr--;
+			}
+
+			cur_bucket = buckets + a_temp;
+			list_add_tail(&he->hot_inode.n_list,
+					&cur_bucket->node_list);
+			root->hot_map_nr++;
+			freq_data->last_temp = temp;
+		}
+		spin_unlock(&he->hot_inode.lock);
+	} else if (freq_data->flags & FREQ_DATA_TYPE_RANGE) {
+		hr = container_of(comm_item,
+			struct hot_range_item, hot_range);
+		buckets = root->heat_range_map;
+
+		spin_lock(&hr->hot_range.lock);
+		if (list_empty(&hr->hot_range.n_list) || (a_temp != b_temp)) {
+			if (!list_empty(&hr->hot_range.n_list)) {
+				list_del_init(&hr->hot_range.n_list);
+				root->hot_map_nr--;
+			}
+
+			cur_bucket = buckets + a_temp;
+			list_add_tail(&hr->hot_range.n_list,
+					&cur_bucket->node_list);
+			root->hot_map_nr++;
+			freq_data->last_temp = temp;
+		}
+		spin_unlock(&hr->hot_range.lock);
+	}
+}
+
+/*
  * Initialize inode and range map info.
  */
 static void hot_map_init(struct hot_info *root)
-- 
1.7.6.5

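The decision hot_map_update() makes can be isolated from the locking and list surgery: an item is (re)queued only when it is not yet on any list, or when its fresh temperature lands in a different bucket than last_temp did. A userspace sketch (helper names `temp_bucket` and `needs_requeue` are ours):

```c
#include <stdbool.h>
#include <stdint.h>

#define HEAT_MAP_BITS 8

/* Bucket index: top HEAT_MAP_BITS bits of a 32-bit temperature. */
uint8_t temp_bucket(uint32_t temp)
{
	return (uint8_t)(temp >> (32 - HEAT_MAP_BITS));
}

/* hot_map_update() only touches the lists when the item is not yet
 * on one, or when the fresh temperature lands in a different bucket
 * than last_temp did; otherwise the item stays where it is. */
bool needs_requeue(bool on_list, uint32_t temp, uint32_t last_temp)
{
	return !on_list || temp_bucket(temp) != temp_bucket(last_temp);
}
```

Since only the top 8 bits matter, small temperature jitter within a bucket causes no list movement at all.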

* [PATCH v1 hot_track 08/16] vfs: add aging function
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c |   49 +++++++++++++++++++++++++++++++++++++++++++++++++
 fs/hot_tracking.h |    6 ++++++
 2 files changed, 55 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 446fbd4..304028d 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -385,6 +385,24 @@ static u32 hot_temp_calc(struct hot_freq_data *freq_data)
 	return result;
 }
 
+static bool hot_is_obsolete(struct hot_freq_data *freq_data)
+{
+	struct timespec ckt = current_kernel_time();
+
+	u64 cur_time = timespec_to_ns(&ckt);
+	u64 last_read_ns =
+		(cur_time - timespec_to_ns(&freq_data->last_read_time));
+	u64 last_write_ns =
+		(cur_time - timespec_to_ns(&freq_data->last_write_time));
+	u64 kick_ns = TIME_TO_KICK * NSEC_PER_SEC;
+
+	return (last_read_ns > kick_ns) && (last_write_ns > kick_ns);
+}
+
 /*
  * Calculate a new temperature and, if necessary,
  * move the list_head corresponding to this inode or range
@@ -452,6 +470,37 @@ static void hot_map_update(struct hot_freq_data *freq_data,
 	}
 }
 
+/* Update temperatures for each range item for aging purposes */
+static void hot_range_update(struct hot_inode_item *he,
+					struct hot_info *root)
+{
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+	struct hot_range_item *hr;
+	bool obsolete;
+
+	spin_lock(&he->lock);
+	node = rb_first(&he->hot_range_tree.map);
+	while (node) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		hr = container_of(ci, struct hot_range_item, hot_range);
+		kref_get(&hr->hot_range.refs);
+		hot_map_update(&hr->hot_range.hot_freq_data, root);
+
+		spin_lock(&hr->hot_range.lock);
+		obsolete = hot_is_obsolete(
+				&hr->hot_range.hot_freq_data);
+		spin_unlock(&hr->hot_range.lock);
+
+		node = rb_next(node);
+
+		hot_range_item_put(hr);
+		if (obsolete)
+			hot_range_item_put(hr);
+	}
+	spin_unlock(&he->lock);
+}
+
 /*
  * Initialize inode and range map info.
  */
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index f33066f..46d068a 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -24,6 +24,12 @@
 #define RANGE_SIZE (1 << RANGE_BITS)
 #define FREQ_POWER 4
 
+/*
+ * Time (in seconds) after which an item's tracking
+ * data is considered obsolete and may be dropped
+ */
+#define TIME_TO_KICK 300
+
 /* NRR/NRW heat unit = 2^X accesses */
 #define NRR_MULTIPLIER_POWER 20 /* NRR - number of reads since mount */
 #define NRR_COEFF_POWER 0
-- 
1.7.6.5

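The aging predicate itself is simple: an item ages out once both its last read and its last write are older than TIME_TO_KICK. A userspace model of hot_is_obsolete(), taking the two ages directly (the name `is_obsolete` and the age-based signature are ours):

```c
#include <stdbool.h>
#include <stdint.h>

#define TIME_TO_KICK 300ULL            /* seconds, as in hot_tracking.h */
#define NSEC_PER_SEC 1000000000ULL

/* Userspace model of hot_is_obsolete(): an item ages out once both
 * its last read and its last write are older than TIME_TO_KICK. */
bool is_obsolete(uint64_t read_age_ns, uint64_t write_age_ns)
{
	uint64_t kick_ns = TIME_TO_KICK * NSEC_PER_SEC;

	return read_age_ns > kick_ns && write_age_ns > kick_ns;
}
```

Either kind of recent access keeps the item alive: a file written five minutes ago but read constantly never ages out.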

* [PATCH v1 hot_track 09/16] vfs: add one work queue
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add a per-superblock workqueue and a delayed work item
that runs periodically to update the map info on each superblock.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |   81 ++++++++++++++++++++++++++++++++++++++++++
 fs/hot_tracking.h            |    3 ++
 include/linux/hot_tracking.h |    3 ++
 3 files changed, 87 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 304028d..873d234 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -15,9 +15,12 @@
 #include <linux/module.h>
 #include <linux/spinlock.h>
 #include <linux/hardirq.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/fs.h>
 #include <linux/blkdev.h>
 #include <linux/types.h>
+#include <linux/list_sort.h>
 #include <linux/limits.h>
 #include "hot_tracking.h"
 
@@ -539,6 +542,63 @@ static void hot_map_exit(struct hot_info *root)
 	}
 }
 
+/* Temperature comparison function */
+static int hot_temp_cmp(void *priv, struct list_head *a,
+				struct list_head *b)
+{
+	struct hot_comm_item *ap =
+			container_of(a, struct hot_comm_item, n_list);
+	struct hot_comm_item *bp =
+			container_of(b, struct hot_comm_item, n_list);
+
+	int diff = ap->hot_freq_data.last_temp
+				- bp->hot_freq_data.last_temp;
+	if (diff > 0)
+		return -1;
+	if (diff < 0)
+		return 1;
+	return 0;
+}
+
+/*
+ * Every sync period we update temperatures for
+ * each hot inode item and hot range item for aging
+ * purposes.
+ */
+static void hot_update_worker(struct work_struct *work)
+{
+	struct hot_info *root = container_of(to_delayed_work(work),
+					struct hot_info, update_work);
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *he;
+	int i;
+
+	node = rb_first(&root->hot_inode_tree.map);
+	while (node) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		he = container_of(ci, struct hot_inode_item, hot_inode);
+		kref_get(&he->hot_inode.refs);
+		hot_map_update(
+			&he->hot_inode.hot_freq_data, root);
+		hot_range_update(he, root);
+		node = rb_next(node);
+		hot_inode_item_put(he);
+	}
+
+	/* Sort temperature map info */
+	for (i = 0; i < HEAT_MAP_SIZE; i++) {
+		list_sort(NULL, &root->heat_inode_map[i].node_list,
+			hot_temp_cmp);
+		list_sort(NULL, &root->heat_range_map[i].node_list,
+			hot_temp_cmp);
+	}
+
+	/* Queue the next delayed work */
+	queue_delayed_work(root->update_wq, &root->update_work,
+		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
+}
+
 /*
  * Initialize kmem cache for hot_inode_item and hot_range_item.
  */
@@ -632,11 +692,30 @@ int hot_track_init(struct super_block *sb)
 	hot_inode_tree_init(root);
 	hot_map_init(root);
 
+	root->update_wq = alloc_workqueue(
+		"hot_update_wq", WQ_NON_REENTRANT, 0);
+	if (!root->update_wq) {
+		printk(KERN_ERR "%s: Failed to create "
+				"hot update workqueue\n", __func__);
+		goto failed_wq;
+	}
+
+	/* Initialize hot tracking wq and arm one delayed work */
+	INIT_DELAYED_WORK(&root->update_work, hot_update_worker);
+	queue_delayed_work(root->update_wq, &root->update_work,
+		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
+
 	sb->s_hot_root = root;
 
 	printk(KERN_INFO "VFS: Turning on hot data tracking\n");
 
 	return 0;
+
+failed_wq:
+	hot_map_exit(root);
+	hot_inode_tree_exit(root);
+	kfree(root);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(hot_track_init);
 
@@ -644,6 +723,8 @@ void hot_track_exit(struct super_block *sb)
 {
 	struct hot_info *root = sb->s_hot_root;
 
+	cancel_delayed_work_sync(&root->update_work);
+	destroy_workqueue(root->update_wq);
 	hot_map_exit(root);
 	hot_inode_tree_exit(root);
 	sb->s_hot_root = NULL;
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index 46d068a..96379a6 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -30,6 +30,9 @@
  */
 #define TIME_TO_KICK 300
 
+/* set how often to update temperatures (seconds) */
+#define HEAT_UPDATE_DELAY 300
+
 /* NRR/NRW heat unit = 2^X accesses */
 #define NRR_MULTIPLIER_POWER 20 /* NRR - number of reads since mount */
 #define NRR_COEFF_POWER 0
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index 34a0530..ef12748 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -88,6 +88,9 @@ struct hot_info {
 	/* map of range temperature */
 	struct hot_map_head heat_range_map[HEAT_MAP_SIZE];
 	unsigned int hot_map_nr;
+
+	struct workqueue_struct *update_wq;
+	struct delayed_work update_work;
 };
 
 extern void __init hot_cache_init(void);
-- 
1.7.6.5

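The list_sort() comparator orders each bucket hottest-first. A userspace sketch of the same ordering, usable with qsort(); note that it compares the two values directly, whereas the patch subtracts two u32 temperatures into an int, which can misorder items whose temperatures differ by more than INT_MAX (the wrapper `cmp_vals` is ours, added only to make the comparator easy to exercise on plain values):

```c
#include <stdint.h>

/* Descending comparator for temperatures (hotter items sort first).
 * Comparing directly, rather than subtracting into an int, stays
 * correct for arbitrarily large u32 differences. */
int temp_cmp_desc(const void *a, const void *b)
{
	uint32_t ta = *(const uint32_t *)a;
	uint32_t tb = *(const uint32_t *)b;

	if (ta > tb)
		return -1;
	if (ta < tb)
		return 1;
	return 0;
}

/* Convenience wrapper for comparing two plain values. */
int cmp_vals(uint32_t a, uint32_t b)
{
	return temp_cmp_desc(&a, &b);
}
```

An array of temperatures can then be ordered hottest-first with qsort(temps, n, sizeof(uint32_t), temp_cmp_desc), the userspace analogue of the per-bucket list_sort() calls above.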

* [PATCH v1 hot_track 10/16] vfs: add FS hot type support
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Introduce a framework that allows a specific FS to
inject its own hot tracking type, overriding the default
temperature calculation and aging operations.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |   43 +++++++++++++++++++++++++++++++----------
 fs/hot_tracking.h            |    1 -
 include/linux/fs.h           |    1 +
 include/linux/hot_tracking.h |   19 ++++++++++++++++++
 4 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 873d234..81fb084 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -64,8 +64,11 @@ void hot_range_tree_init(struct hot_inode_item *he)
 static void hot_range_item_init(struct hot_range_item *hr, loff_t start,
 				struct hot_inode_item *he)
 {
+	struct hot_info *root = container_of(he->hot_inode_tree,
+				struct hot_info, hot_inode_tree);
+
 	hr->start = start;
-	hr->len = RANGE_SIZE;
+	hr->len = hot_raw_shift(1, root->hot_type->range_bits, true);
 	hr->hot_inode = he;
 	kref_init(&hr->hot_range.refs);
 	spin_lock_init(&hr->hot_range.lock);
@@ -302,19 +305,21 @@ static void hot_rw_freq_calc(struct timespec old_atime,
 	*avg = *avg >> FREQ_POWER;
 }
 
-static void hot_freq_data_update(struct hot_freq_data *freq_data, bool write)
+static void hot_freq_data_update(struct hot_info *root,
+		struct hot_freq_data *freq_data, bool write)
 {
 	struct timespec cur_time = current_kernel_time();
 
 	if (write) {
 		freq_data->nr_writes += 1;
-		hot_rw_freq_calc(freq_data->last_write_time,
+		root->hot_type->ops.hot_rw_freq_calc_fn(
+				freq_data->last_write_time,
 				cur_time,
 				&freq_data->avg_delta_writes);
 		freq_data->last_write_time = cur_time;
 	} else {
 		freq_data->nr_reads += 1;
-		hot_rw_freq_calc(freq_data->last_read_time,
+		root->hot_type->ops.hot_rw_freq_calc_fn(
 				freq_data->last_read_time,
 				cur_time,
 				&freq_data->avg_delta_reads);
@@ -418,7 +423,7 @@ static void hot_map_update(struct hot_freq_data *freq_data,
 	struct hot_comm_item *comm_item;
 	struct hot_inode_item *he;
 	struct hot_range_item *hr;
-	u32 temp = hot_temp_calc(freq_data);
+	u32 temp = root->hot_type->ops.hot_temp_calc_fn(freq_data);
 	u8 a_temp = (u8)hot_raw_shift((u64)temp, (32 - HEAT_MAP_BITS), false);
 	u8 b_temp = (u8)hot_raw_shift((u64)freq_data->last_temp,
 					(32 - HEAT_MAP_BITS), false);
@@ -491,7 +496,7 @@ static void hot_range_update(struct hot_inode_item *he,
 		hot_map_update(&hr->hot_range.hot_freq_data, root);
 
 		spin_lock(&hr->hot_range.lock);
-		obsolete = hot_is_obsolete(
+		obsolete = root->hot_type->ops.hot_is_obsolete_fn(
 				&hr->hot_range.hot_freq_data);
 		spin_unlock(&hr->hot_range.lock);
 
@@ -634,6 +639,7 @@ void hot_update_freqs(struct inode *inode, loff_t start,
 	struct hot_info *root = inode->i_sb->s_hot_root;
 	struct hot_inode_item *he;
 	struct hot_range_item *hr;
+	u64 range_size;
 	loff_t cur, end;
 
 	if (!root || (len == 0))
@@ -646,15 +652,19 @@ void hot_update_freqs(struct inode *inode, loff_t start,
 	}
 
 	spin_lock(&he->hot_inode.lock);
-	hot_freq_data_update(&he->hot_inode.hot_freq_data, rw);
+	hot_freq_data_update(root, &he->hot_inode.hot_freq_data, rw);
 	spin_unlock(&he->hot_inode.lock);
 
 	/*
-	 * Align ranges on RANGE_SIZE boundary
+	 * Align ranges on range size boundary
 	 * to prevent proliferation of range structs
 	 */
-	end = (start + len + RANGE_SIZE - 1) >> RANGE_BITS;
-	for (cur = (start >> RANGE_BITS); cur < end; cur++) {
+	range_size = hot_raw_shift(1,
+			root->hot_type->range_bits, true);
+	end = hot_raw_shift((start + len + range_size - 1),
+			root->hot_type->range_bits, false);
+	cur = hot_raw_shift(start, root->hot_type->range_bits, false);
+	for (; cur < end; cur++) {
 		hr = hot_range_item_lookup(he, cur);
 		if (IS_ERR(hr)) {
 			WARN(1, "hot_range_item_lookup returns %ld\n",
@@ -664,7 +674,7 @@ void hot_update_freqs(struct inode *inode, loff_t start,
 		}
 
 		spin_lock(&hr->hot_range.lock);
-		hot_freq_data_update(&hr->hot_range.hot_freq_data, rw);
+		hot_freq_data_update(root, &hr->hot_range.hot_freq_data, rw);
 		spin_unlock(&hr->hot_range.lock);
 
 		hot_range_item_put(hr);
@@ -692,6 +702,17 @@ int hot_track_init(struct super_block *sb)
 	hot_inode_tree_init(root);
 	hot_map_init(root);
 
+	/* Get hot type for specific FS */
+	root->hot_type = &sb->s_type->hot_type;
+	if (!root->hot_type->ops.hot_rw_freq_calc_fn)
+		root->hot_type->ops.hot_rw_freq_calc_fn = hot_rw_freq_calc;
+	if (!root->hot_type->ops.hot_temp_calc_fn)
+		root->hot_type->ops.hot_temp_calc_fn = hot_temp_calc;
+	if (!root->hot_type->ops.hot_is_obsolete_fn)
+		root->hot_type->ops.hot_is_obsolete_fn = hot_is_obsolete;
+	if (root->hot_type->range_bits == 0)
+		root->hot_type->range_bits = RANGE_BITS;
+
 	root->update_wq = alloc_workqueue(
 		"hot_update_wq", WQ_NON_REENTRANT, 0);
 	if (!root->update_wq) {
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index 96379a6..73d2a3e 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -21,7 +21,6 @@
 
 /* size of sub-file ranges */
 #define RANGE_BITS 20
-#define RANGE_SIZE (1 << RANGE_BITS)
 #define FREQ_POWER 4
 
 /*
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c541ae7..4e2607d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1821,6 +1821,7 @@ struct file_system_type {
 	struct dentry *(*mount) (struct file_system_type *, int,
 		       const char *, void *);
 	void (*kill_sb) (struct super_block *);
+	struct hot_type hot_type;
 	struct module *owner;
 	struct file_system_type * next;
 	struct hlist_head fs_supers;
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index ef12748..f73111e 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -79,6 +79,24 @@ struct hot_range_item {
 	size_t len; /* length in bytes */
 };
 
+typedef void (hot_rw_freq_calc_fn) (struct timespec old_atime,
+			struct timespec cur_time, u64 *avg);
+typedef u32 (hot_temp_calc_fn) (struct hot_freq_data *freq_data);
+typedef bool (hot_is_obsolete_fn) (struct hot_freq_data *freq_data);
+
+struct hot_func_ops {
+	hot_rw_freq_calc_fn *hot_rw_freq_calc_fn;
+	hot_temp_calc_fn *hot_temp_calc_fn;
+	hot_is_obsolete_fn *hot_is_obsolete_fn;
+};
+
+/* identifies a hot type */
+struct hot_type {
+	u64 range_bits;
+	/* fields provided by specific FS */
+	struct hot_func_ops ops;
+};
+
 struct hot_info {
 	struct hot_rb_tree hot_inode_tree;
 	spinlock_t lock; /*protect inode tree */
@@ -91,6 +109,7 @@ struct hot_info {
 
 	struct workqueue_struct *update_wq;
 	struct delayed_work update_work;
+	struct hot_type *hot_type;
 };
 
 extern void __init hot_cache_init(void);
-- 
1.7.6.5

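With per-FS range_bits, the hot_update_freqs() loop visits range indices [start >> range_bits, (start + len + range_size - 1) >> range_bits). That alignment arithmetic can be checked in userspace (function names `first_range`/`end_range` are ours):

```c
#include <stdint.h>

/* First range index touched by an access starting at `start`. */
uint64_t first_range(uint64_t start, uint32_t range_bits)
{
	return start >> range_bits;
}

/* One past the last range index touched by [start, start + len):
 * the end offset is rounded up to the next range boundary, as in
 * the hot_update_freqs() loop. */
uint64_t end_range(uint64_t start, uint64_t len, uint32_t range_bits)
{
	uint64_t range_size = 1ULL << range_bits;

	return (start + len + range_size - 1) >> range_bits;
}
```

With the default RANGE_BITS of 20 (1 MiB ranges), a 2-byte access straddling the first boundary touches exactly ranges 0 and 1.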

* [PATCH v1 hot_track 11/16] vfs: register one shrinker
@ 2012-11-16  9:51 ` zwu.kernel
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Register a shrinker to control the amount of
memory that is used in tracking hot regions - if we are throwing
inodes out of memory due to memory pressure, we most definitely are
going to need to reduce the amount of memory the tracking code is
using, even if it means losing useful information (i.e. the shrinker
accelerates the aging process).

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |   61 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/hot_tracking.h |    1 +
 2 files changed, 62 insertions(+), 0 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 81fb084..8144200 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -630,6 +630,61 @@ err:
 }
 EXPORT_SYMBOL_GPL(hot_cache_init);
 
+static int hot_track_prune_map(struct hot_map_head *map_head,
+				bool type, int nr)
+{
+	struct hot_comm_item *node;
+	int i;
+
+	for (i = 0; i < HEAT_MAP_SIZE; i++) {
+		while (!list_empty(&(map_head + i)->node_list)) {
+			if (nr-- <= 0)
+				break;
+
+			node = list_first_entry(&(map_head + i)->node_list,
+					struct hot_comm_item, n_list);
+			if (type) {
+				struct hot_inode_item *hot_inode =
+					container_of(node,
+					struct hot_inode_item, hot_inode);
+				hot_inode_item_put(hot_inode);
+			} else {
+				struct hot_range_item *hot_range =
+					container_of(node,
+					struct hot_range_item, hot_range);
+				hot_range_item_put(hot_range);
+			}
+		}
+	}
+
+	return nr;
+}
+
+/* The shrinker callback function */
+static int hot_track_prune(struct shrinker *shrink,
+			struct shrink_control *sc)
+{
+	struct hot_info *root =
+		container_of(shrink, struct hot_info, hot_shrink);
+	int ret;
+
+	if (sc->nr_to_scan == 0)
+		return root->hot_map_nr;
+
+	if (!(sc->gfp_mask & __GFP_FS))
+		return -1;
+
+	ret = hot_track_prune_map(root->heat_range_map,
+				false, sc->nr_to_scan);
+	if (ret > 0)
+		ret = hot_track_prune_map(root->heat_inode_map,
+					true, ret);
+	if (ret > 0)
+		root->hot_map_nr -= (sc->nr_to_scan - ret);
+
+	return root->hot_map_nr;
+}
+
 /*
  * Main function to update access frequency from read/writepage(s) hooks
  */
@@ -726,6 +781,11 @@ int hot_track_init(struct super_block *sb)
 	queue_delayed_work(root->update_wq, &root->update_work,
 		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
 
+	/* Register a shrinker callback */
+	root->hot_shrink.shrink = hot_track_prune;
+	root->hot_shrink.seeks = DEFAULT_SEEKS;
+	register_shrinker(&root->hot_shrink);
+
 	sb->s_hot_root = root;
 
 	printk(KERN_INFO "VFS: Turning on hot data tracking\n");
@@ -744,6 +804,7 @@ void hot_track_exit(struct super_block *sb)
 {
 	struct hot_info *root = sb->s_hot_root;
 
+	unregister_shrinker(&root->hot_shrink);
 	cancel_delayed_work_sync(&root->update_work);
 	destroy_workqueue(root->update_wq);
 	hot_map_exit(root);
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index f73111e..24e91ff 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -110,6 +110,7 @@ struct hot_info {
 	struct workqueue_struct *update_wq;
 	struct delayed_work update_work;
 	struct hot_type *hot_type;
+	struct shrinker hot_shrink;
 };
 
 extern void __init hot_cache_init(void);
-- 
1.7.6.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v1 hot_track 12/16] vfs: add one ioctl interface
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (10 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 11/16] vfs: register one shrinker zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 13/16] vfs: add debugfs support zwu.kernel
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  FS_IOC_GET_HEAT_INFO: return a struct containing the various
metrics collected in hot_freq_data structs, and also return a
calculated data temperature based on those metrics. Optionally, retrieve
the temperature from the hot data hash list instead of recalculating it.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/compat_ioctl.c            |    5 +++
 fs/ioctl.c                   |   74 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/hot_tracking.h |   19 +++++++++++
 3 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
index 4c6285f..ad1d603 100644
--- a/fs/compat_ioctl.c
+++ b/fs/compat_ioctl.c
@@ -57,6 +57,7 @@
 #include <linux/i2c-dev.h>
 #include <linux/atalk.h>
 #include <linux/gfp.h>
+#include <linux/hot_tracking.h>
 
 #include <net/bluetooth/bluetooth.h>
 #include <net/bluetooth/hci.h>
@@ -1400,6 +1401,9 @@ COMPATIBLE_IOCTL(TIOCSTART)
 COMPATIBLE_IOCTL(TIOCSTOP)
 #endif
 
+/* Hot data tracking */
+COMPATIBLE_IOCTL(FS_IOC_GET_HEAT_INFO)
+
 /* fat 'r' ioctls. These are handled by fat with ->compat_ioctl,
    but we don't want warnings on other file systems. So declare
    them as compatible here. */
@@ -1579,6 +1583,7 @@ asmlinkage long compat_sys_ioctl(unsigned int fd, unsigned int cmd,
 	case FIBMAP:
 	case FIGETBSZ:
 	case FIONREAD:
+	case FS_IOC_GET_HEAT_INFO:
 		if (S_ISREG(f.file->f_path.dentry->d_inode->i_mode))
 			break;
 		/*FALL THROUGH*/
diff --git a/fs/ioctl.c b/fs/ioctl.c
index 3bdad6d..79fe81f 100644
--- a/fs/ioctl.c
+++ b/fs/ioctl.c
@@ -15,6 +15,7 @@
 #include <linux/writeback.h>
 #include <linux/buffer_head.h>
 #include <linux/falloc.h>
+#include <linux/hot_tracking.h>
 
 #include <asm/ioctls.h>
 
@@ -537,6 +538,76 @@ static int ioctl_fsthaw(struct file *filp)
 }
 
 /*
+ * Retrieve information about access frequency for the given file. Return it in
+ * a userspace-friendly struct for btrfsctl (or another tool) to parse.
+ *
+ * The temperature that is returned can be "live" -- that is, recalculated when
+ * the ioctl is called -- or it can be returned from the hashtable, reflecting
+ * the (possibly old) value that the system will use when considering files
+ * for migration. This behavior is determined by hot_heat_info->live.
+ */
+static int ioctl_heat_info(struct file *file, void __user *argp)
+{
+	struct inode *inode = file->f_dentry->d_inode;
+	struct hot_heat_info heat_info;
+	struct hot_inode_item *he;
+	int ret = 0;
+
+	if (copy_from_user((void *)&heat_info,
+			argp,
+			sizeof(struct hot_heat_info)) != 0) {
+		ret = -EFAULT;
+		goto err;
+	}
+
+	he = hot_inode_item_lookup(inode->i_sb->s_hot_root, inode->i_ino);
+	if (!he) {
+		/* we don't have any info on this file yet */
+		ret = -ENODATA;
+		goto err;
+	}
+
+	spin_lock(&he->hot_inode.lock);
+	heat_info.avg_delta_reads =
+		(__u64) he->hot_inode.hot_freq_data.avg_delta_reads;
+	heat_info.avg_delta_writes =
+		(__u64) he->hot_inode.hot_freq_data.avg_delta_writes;
+	heat_info.last_read_time =
+	(__u64) timespec_to_ns(&he->hot_inode.hot_freq_data.last_read_time);
+	heat_info.last_write_time =
+	(__u64) timespec_to_ns(&he->hot_inode.hot_freq_data.last_write_time);
+	heat_info.num_reads =
+		(__u32) he->hot_inode.hot_freq_data.nr_reads;
+	heat_info.num_writes =
+		(__u32) he->hot_inode.hot_freq_data.nr_writes;
+
+	if (heat_info.live > 0) {
+		/*
+		 * got a request for live temperature,
+		 * call hot_hash_calc_temperature to recalculate
+		 */
+		heat_info.temp =
+		inode->i_sb->s_hot_root->hot_type->ops.hot_temp_calc_fn(
+					&he->hot_inode.hot_freq_data);
+	} else {
+		/* not live temperature, get it from the hashlist */
+		heat_info.temp = he->hot_inode.hot_freq_data.last_temp;
+	}
+	spin_unlock(&he->hot_inode.lock);
+
+	hot_inode_item_put(he);
+
+	if (copy_to_user(argp, (void *)&heat_info,
+			sizeof(struct hot_heat_info))) {
+		ret = -EFAULT;
+		goto err;
+	}
+
+err:
+	return ret;
+}
+
+/*
  * When you add any new common ioctls to the switches above and below
  * please update compat_sys_ioctl() too.
  *
@@ -591,6 +662,9 @@ int do_vfs_ioctl(struct file *filp, unsigned int fd, unsigned int cmd,
 	case FIGETBSZ:
 		return put_user(inode->i_sb->s_blocksize, argp);
 
+	case FS_IOC_GET_HEAT_INFO:
+		return ioctl_heat_info(filp, argp);
+
 	default:
 		if (S_ISREG(inode->i_mode))
 			error = file_ioctl(filp, cmd, arg);
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index 24e91ff..97283b3 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -44,6 +44,17 @@ struct hot_freq_data {
 	u32 last_temp;
 };
 
+struct hot_heat_info {
+	__u64 avg_delta_reads;
+	__u64 avg_delta_writes;
+	__u64 last_read_time;
+	__u64 last_write_time;
+	__u32 num_reads;
+	__u32 num_writes;
+	__u32 temp;
+	__u8 live;
+};
+
 /* List heads in hot map array */
 struct hot_map_head {
 	struct list_head node_list;
@@ -113,6 +124,14 @@ struct hot_info {
 	struct shrinker hot_shrink;
 };
 
+/*
+ * Hot data tracking ioctls:
+ *
+ * FS_IOC_GET_HEAT_INFO - retrieve info on frequency of access
+ */
+#define FS_IOC_GET_HEAT_INFO _IOR('f', 17, \
+			struct hot_heat_info)
+
 extern void __init hot_cache_init(void);
 extern int hot_track_init(struct super_block *sb);
 extern void hot_track_exit(struct super_block *sb);
-- 
1.7.6.5



* [PATCH v1 hot_track 13/16] vfs: add debugfs support
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (11 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 12/16] vfs: add one ioctl interface zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 14/16] proc: add two hot_track proc files zwu.kernel
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add a /sys/kernel/debug/hot_track/<device_name>/ directory for each
volume. It contains four files: `rt_stats_inode' and `rt_stats_range',
which dump the heat information for the inodes and sub-file ranges that
have been brought into the hot data map structures, and
`hot_spots_inode' and `hot_spots_range', which list the same items
sorted by temperature.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |  489 +++++++++++++++++++++++++++++++++++++++++-
 fs/hot_tracking.h            |    5 +
 include/linux/hot_tracking.h |    1 +
 3 files changed, 493 insertions(+), 2 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index 8144200..a98bfe6 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -21,9 +21,12 @@
 #include <linux/blkdev.h>
 #include <linux/types.h>
 #include <linux/list_sort.h>
+#include <linux/debugfs.h>
 #include <linux/limits.h>
 #include "hot_tracking.h"
 
+static struct dentry *hot_debugfs_root;
+
 /* kmem_cache pointers for slab caches */
 static struct kmem_cache *hot_inode_item_cachep __read_mostly;
 static struct kmem_cache *hot_range_item_cachep __read_mostly;
@@ -215,8 +218,8 @@ struct hot_inode_item
 		else if (ino > entry->i_ino)
 			p = &(*p)->rb_right;
 		else {
-			spin_unlock(&root->lock);
 			kref_get(&entry->hot_inode.refs);
+			spin_unlock(&root->lock);
 			return entry;
 		}
 	}
@@ -266,8 +269,8 @@ static struct hot_range_item
 		else if (start > hot_range_end(entry))
 			p = &(*p)->rb_right;
 		else {
-			spin_unlock(&he->lock);
 			kref_get(&entry->hot_range.refs);
+			spin_unlock(&he->lock);
 			return entry;
 		}
 	}
@@ -604,6 +607,475 @@ static void hot_update_worker(struct work_struct *work)
 		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
 }
 
+static void *hot_range_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct rb_node *node, *node2;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *he;
+	struct hot_range_item *hr;
+	loff_t l = *pos;
+
+	spin_lock(&root->lock);
+	node = rb_first(&root->hot_inode_tree.map);
+	while (node) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		he = container_of(ci, struct hot_inode_item, hot_inode);
+		spin_lock(&he->lock);
+		node2 = rb_first(&he->hot_range_tree.map);
+		while (node2) {
+			if (!l--) {
+				ci = rb_entry(node2,
+					struct hot_comm_item, rb_node);
+				hr = container_of(ci,
+					struct hot_range_item, hot_range);
+				kref_get(&hr->hot_range.refs);
+				spin_unlock(&he->lock);
+				spin_unlock(&root->lock);
+				return hr;
+			}
+			node2 = rb_next(node2);
+		}
+		node = rb_next(node);
+		spin_unlock(&he->lock);
+	}
+	spin_unlock(&root->lock);
+	return NULL;
+}
+
+static void *hot_range_seq_next(struct seq_file *seq,
+				void *v, loff_t *pos)
+{
+	struct rb_node *node, *node2;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *he;
+	struct hot_range_item *hr_next = NULL, *hr = v;
+
+	spin_lock(&hr->hot_range.lock);
+	(*pos)++;
+	node2 = rb_next(&hr->hot_range.rb_node);
+	if (node2)
+		goto next;
+
+	node = rb_next(&hr->hot_inode->hot_inode.rb_node);
+	if (node) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		he = container_of(ci, struct hot_inode_item, hot_inode);
+		node2 = rb_first(&he->hot_range_tree.map);
+		if (node2) {
+next:
+			ci = rb_entry(node2,
+				struct hot_comm_item, rb_node);
+			hr_next = container_of(ci,
+				struct hot_range_item, hot_range);
+			kref_get(&hr_next->hot_range.refs);
+		}
+	}
+	spin_unlock(&hr->hot_range.lock);
+
+	hot_range_item_put(hr);
+	return hr_next;
+}
+
+static void hot_range_seq_stop(struct seq_file *seq, void *v)
+{
+	struct hot_range_item *hr = v;
+
+	if (hr)
+		hot_range_item_put(hr);
+}
+
+static int hot_range_seq_show(struct seq_file *seq, void *v)
+{
+	struct hot_range_item *hr = v;
+	struct hot_inode_item *he = hr->hot_inode;
+	struct hot_freq_data *freq_data = &hr->hot_range.hot_freq_data;
+	struct hot_info *root = container_of(he->hot_inode_tree,
+		struct hot_info, hot_inode_tree);
+	loff_t start = hr->start * hot_raw_shift(1,
+			root->hot_type->range_bits, true);
+
+	/* Always lock hot_inode_item first */
+	spin_lock(&he->hot_inode.lock);
+	spin_lock(&hr->hot_range.lock);
+	seq_printf(seq, "inode %llu, range "
+			"%llu+%llu, reads %u, writes %u, temp %u\n",
+			he->i_ino, (unsigned long long)start,
+			(unsigned long long)hr->len,
+			freq_data->nr_reads,
+			freq_data->nr_writes,
+			(u8)hot_raw_shift((u64)freq_data->last_temp,
+					(32 - HEAT_MAP_BITS), false));
+	spin_unlock(&hr->hot_range.lock);
+	spin_unlock(&he->hot_inode.lock);
+
+	return 0;
+}
+
+static void *hot_inode_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+	struct hot_inode_item *he = NULL;
+	loff_t l = *pos;
+
+	spin_lock(&root->lock);
+	node = rb_first(&root->hot_inode_tree.map);
+	while (node) {
+		if (!l--) {
+			ci = rb_entry(node, struct hot_comm_item, rb_node);
+			he = container_of(ci,
+				struct hot_inode_item, hot_inode);
+			kref_get(&he->hot_inode.refs);
+			break;
+		}
+		node = rb_next(node);
+	}
+	spin_unlock(&root->lock);
+
+	return he;
+}
+
+static void *hot_inode_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct hot_inode_item *he_next = NULL, *he = v;
+	struct rb_node *node;
+	struct hot_comm_item *ci;
+
+	spin_lock(&he->hot_inode.lock);
+	(*pos)++;
+	node = rb_next(&he->hot_inode.rb_node);
+	if (node) {
+		ci = rb_entry(node, struct hot_comm_item, rb_node);
+		he_next = container_of(ci,
+			struct hot_inode_item, hot_inode);
+		kref_get(&he_next->hot_inode.refs);
+	}
+	spin_unlock(&he->hot_inode.lock);
+
+	hot_inode_item_put(he);
+
+	return he_next;
+}
+
+static void hot_inode_seq_stop(struct seq_file *seq, void *v)
+{
+	struct hot_inode_item *he = v;
+
+	if (he)
+		hot_inode_item_put(he);
+}
+
+static int hot_inode_seq_show(struct seq_file *seq, void *v)
+{
+	struct hot_inode_item *he = v;
+	struct hot_freq_data *freq_data = &he->hot_inode.hot_freq_data;
+
+	spin_lock(&he->hot_inode.lock);
+	seq_printf(seq, "inode %llu, reads %u, writes %u, temp %u\n",
+		he->i_ino,
+		freq_data->nr_reads,
+		freq_data->nr_writes,
+		(u8)hot_raw_shift((u64)freq_data->last_temp,
+				(32 - HEAT_MAP_BITS), false));
+	spin_unlock(&he->hot_inode.lock);
+
+	return 0;
+}
+
+static void *hot_spot_range_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct hot_range_item *hr;
+	struct hot_comm_item *comm_item;
+	struct list_head *n_list;
+	int i;
+
+	for (i = HEAT_MAP_SIZE - 1; i >= 0; i--) {
+		n_list = seq_list_start(
+			&root->heat_range_map[i].node_list, *pos);
+		if (n_list) {
+			comm_item = container_of(n_list,
+				struct hot_comm_item, n_list);
+			hr = container_of(comm_item,
+				struct hot_range_item, hot_range);
+			kref_get(&hr->hot_range.refs);
+			return hr;
+		}
+	}
+
+	return NULL;
+}
+
+static void *hot_spot_range_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct hot_range_item *hr_next, *hr = v;
+	struct hot_comm_item *comm_item;
+	struct list_head *n_list;
+	int i = (int)hot_raw_shift(hr->hot_range.hot_freq_data.last_temp,
+				(32 - HEAT_MAP_BITS), false);
+
+	n_list = seq_list_next(&hr->hot_range.n_list,
+		&root->heat_range_map[i].node_list, pos);
+	hot_range_item_put(hr);
+next:
+	if (n_list) {
+		comm_item = container_of(n_list,
+			struct hot_comm_item, n_list);
+		hr_next = container_of(comm_item,
+			struct hot_range_item, hot_range);
+		kref_get(&hr_next->hot_range.refs);
+		return hr_next;
+	} else if (--i >= 0) {
+		n_list = seq_list_next(&root->heat_range_map[i].node_list,
+				&root->heat_range_map[i].node_list, pos);
+		goto next;
+	}
+
+	return NULL;
+}
+
+static void *hot_spot_inode_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct hot_inode_item *he;
+	struct hot_comm_item *comm_item;
+	struct list_head *n_list;
+	int i;
+
+	for (i = HEAT_MAP_SIZE - 1; i >= 0; i--) {
+		n_list = seq_list_start(
+			&root->heat_inode_map[i].node_list, *pos);
+		if (n_list) {
+			comm_item = container_of(n_list,
+				struct hot_comm_item, n_list);
+			he = container_of(comm_item,
+				struct hot_inode_item, hot_inode);
+			kref_get(&he->hot_inode.refs);
+			return he;
+		}
+	}
+
+	return NULL;
+}
+
+static void *hot_spot_inode_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct hot_info *root = seq->private;
+	struct hot_inode_item *he_next, *he = v;
+	struct hot_comm_item *comm_item;
+	struct list_head *n_list;
+	int i = (int)hot_raw_shift(he->hot_inode.hot_freq_data.last_temp,
+				(32 - HEAT_MAP_BITS), false);
+
+	n_list = seq_list_next(&he->hot_inode.n_list,
+			&root->heat_inode_map[i].node_list, pos);
+	hot_inode_item_put(he);
+next:
+	if (n_list) {
+		comm_item = container_of(n_list,
+			struct hot_comm_item, n_list);
+		he_next = container_of(comm_item,
+			struct hot_inode_item, hot_inode);
+		kref_get(&he_next->hot_inode.refs);
+		return he_next;
+	} else if (--i >= 0) {
+		n_list = seq_list_next(&root->heat_inode_map[i].node_list,
+				&root->heat_inode_map[i].node_list, pos);
+		goto next;
+	}
+
+	return NULL;
+}
+
+static const struct seq_operations hot_range_seq_ops = {
+	.start = hot_range_seq_start,
+	.next = hot_range_seq_next,
+	.stop = hot_range_seq_stop,
+	.show = hot_range_seq_show
+};
+
+static const struct seq_operations hot_inode_seq_ops = {
+	.start = hot_inode_seq_start,
+	.next = hot_inode_seq_next,
+	.stop = hot_inode_seq_stop,
+	.show = hot_inode_seq_show
+};
+
+static const struct seq_operations hot_spot_range_seq_ops = {
+	.start = hot_spot_range_seq_start,
+	.next = hot_spot_range_seq_next,
+	.stop = hot_range_seq_stop,
+	.show = hot_range_seq_show
+};
+
+static const struct seq_operations hot_spot_inode_seq_ops = {
+	.start = hot_spot_inode_seq_start,
+	.next = hot_spot_inode_seq_next,
+	.stop = hot_inode_seq_stop,
+	.show = hot_inode_seq_show
+};
+
+static int hot_range_seq_open(struct inode *inode, struct file *file)
+{
+	int ret = seq_open_private(file, &hot_range_seq_ops, 0);
+	if (ret == 0) {
+		struct seq_file *seq = file->private_data;
+		seq->private = inode->i_private;
+	}
+	return ret;
+}
+
+static int hot_inode_seq_open(struct inode *inode, struct file *file)
+{
+	int ret = seq_open_private(file, &hot_inode_seq_ops, 0);
+	if (ret == 0) {
+		struct seq_file *seq = file->private_data;
+		seq->private = inode->i_private;
+	}
+	return ret;
+}
+
+static int hot_spot_range_seq_open(struct inode *inode, struct file *file)
+{
+	int ret = seq_open_private(file, &hot_spot_range_seq_ops, 0);
+	if (ret == 0) {
+		struct seq_file *seq = file->private_data;
+		seq->private = inode->i_private;
+	}
+	return ret;
+}
+
+static int hot_spot_inode_seq_open(struct inode *inode, struct file *file)
+{
+	int ret = seq_open_private(file, &hot_spot_inode_seq_ops, 0);
+	if (ret == 0) {
+		struct seq_file *seq = file->private_data;
+		seq->private = inode->i_private;
+	}
+	return ret;
+}
+
+/* fops to override for printing range data */
+static const struct file_operations hot_debugfs_range_fops = {
+	.open = hot_range_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
+/* fops to override for printing inode data */
+static const struct file_operations hot_debugfs_inode_fops = {
+	.open = hot_inode_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
+/* fops to override for printing temperature data */
+static const struct file_operations hot_debugfs_spot_range_fops = {
+	.open = hot_spot_range_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
+static const struct file_operations hot_debugfs_spot_inode_fops = {
+	.open = hot_spot_inode_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
+static const struct hot_debugfs hot_debugfs[] = {
+	{
+		.name = "rt_stats_range",
+		.fops  = &hot_debugfs_range_fops,
+	},
+	{
+		.name = "rt_stats_inode",
+		.fops  = &hot_debugfs_inode_fops,
+	},
+	{
+		.name = "hot_spots_range",
+		.fops  = &hot_debugfs_spot_range_fops,
+	},
+	{
+		.name = "hot_spots_inode",
+		.fops  = &hot_debugfs_spot_inode_fops,
+	},
+};
+
+/* initialize debugfs */
+static int hot_debugfs_init(struct super_block *sb)
+{
+	static const char hot_name[] = "hot_track";
+	struct dentry *dentry;
+	int i, ret = 0;
+
+	/* Create the debugfs root directory if it does not already exist */
+	if (!hot_debugfs_root) {
+		hot_debugfs_root = debugfs_create_dir(hot_name, NULL);
+		if (IS_ERR(hot_debugfs_root)) {
+			ret = PTR_ERR(hot_debugfs_root);
+			return ret;
+		}
+	}
+
+	if (!S_ISDIR(hot_debugfs_root->d_inode->i_mode))
+		return -ENOTDIR;
+
+	/* create a debugfs dir for this volume, named after the mounted device */
+	sb->s_hot_root->vol_dentry =
+			debugfs_create_dir(sb->s_id, hot_debugfs_root);
+	if (IS_ERR(sb->s_hot_root->vol_dentry)) {
+		ret = PTR_ERR(sb->s_hot_root->vol_dentry);
+		goto err;
+	}
+
+	/* create debugfs hot data files */
+	for (i = 0; i < ARRAY_SIZE(hot_debugfs); i++) {
+		dentry = debugfs_create_file(hot_debugfs[i].name,
+					S_IFREG | S_IRUSR | S_IWUSR,
+					sb->s_hot_root->vol_dentry,
+					sb->s_hot_root,
+					hot_debugfs[i].fops);
+		if (IS_ERR(dentry)) {
+			ret = PTR_ERR(dentry);
+			goto err;
+		}
+	}
+
+	return 0;
+
+err:
+	debugfs_remove_recursive(sb->s_hot_root->vol_dentry);
+
+	if (list_empty(&hot_debugfs_root->d_subdirs)) {
+		debugfs_remove(hot_debugfs_root);
+		hot_debugfs_root = NULL;
+	}
+
+	return ret;
+}
+
+/* remove debugfs dentries */
+static void hot_debugfs_exit(struct super_block *sb)
+{
+	/* remove all debugfs entries recursively from the volume root */
+	if (sb->s_hot_root->vol_dentry)
+		debugfs_remove_recursive(sb->s_hot_root->vol_dentry);
+	else
+		BUG();
+
+	if (list_empty(&hot_debugfs_root->d_subdirs)) {
+		debugfs_remove(hot_debugfs_root);
+		hot_debugfs_root = NULL;
+	}
+}
+
 /*
  * Initialize kmem cache for hot_inode_item and hot_range_item.
  */
@@ -788,10 +1260,22 @@ int hot_track_init(struct super_block *sb)
 
 	sb->s_hot_root = root;
 
+	ret = hot_debugfs_init(sb);
+	if (ret) {
+		printk(KERN_ERR "%s: hot_debugfs_init error: %d\n",
+				__func__, ret);
+		goto failed_debugfs;
+	}
+
 	printk(KERN_INFO "VFS: Turning on hot data tracking\n");
 
 	return 0;
 
+failed_debugfs:
+	unregister_shrinker(&root->hot_shrink);
+	cancel_delayed_work_sync(&root->update_work);
+	destroy_workqueue(root->update_wq);
+	sb->s_hot_root = NULL;
 failed_wq:
 	hot_map_exit(root);
 	hot_inode_tree_exit(root);
@@ -804,6 +1288,7 @@ void hot_track_exit(struct super_block *sb)
 {
 	struct hot_info *root = sb->s_hot_root;
 
+	hot_debugfs_exit(sb);
 	unregister_shrinker(&root->hot_shrink);
 	cancel_delayed_work_sync(&root->update_work);
 	destroy_workqueue(root->update_wq);
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index 73d2a3e..a969940 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -53,4 +53,9 @@
 #define AVW_DIVIDER_POWER 40 /* AVW - average delta between recent writes(ns) */
 #define AVW_COEFF_POWER 0
 
+struct hot_debugfs {
+	const char *name;
+	const struct file_operations *fops;
+};
+
 #endif /* __HOT_TRACKING__ */
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index 97283b3..afb2952 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -122,6 +122,7 @@ struct hot_info {
 	struct delayed_work update_work;
 	struct hot_type *hot_type;
 	struct shrinker hot_shrink;
+	struct dentry *vol_dentry;
 };
 
 /*
-- 
1.7.6.5



* [PATCH v1 hot_track 14/16] proc: add two hot_track proc files
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (12 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 13/16] vfs: add debugfs support zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 15/16] btrfs: add hot tracking support zwu.kernel
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add two proc files, hot-kick-time and hot-update-delay,
under /proc/sys/fs/ in order to make
TIME_TO_KICK and HEAT_UPDATE_DELAY tunable.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/hot_tracking.c            |   12 +++++++++---
 fs/hot_tracking.h            |    9 ---------
 include/linux/hot_tracking.h |    7 +++++++
 kernel/sysctl.c              |   14 ++++++++++++++
 4 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/fs/hot_tracking.c b/fs/hot_tracking.c
index a98bfe6..69a6d33 100644
--- a/fs/hot_tracking.c
+++ b/fs/hot_tracking.c
@@ -27,6 +27,12 @@
 
 static struct dentry *hot_debugfs_root;
 
+int sysctl_hot_kick_time __read_mostly = 300;
+EXPORT_SYMBOL_GPL(sysctl_hot_kick_time);
+
+int sysctl_hot_update_delay __read_mostly = 300;
+EXPORT_SYMBOL_GPL(sysctl_hot_update_delay);
+
 /* kmem_cache pointers for slab caches */
 static struct kmem_cache *hot_inode_item_cachep __read_mostly;
 static struct kmem_cache *hot_range_item_cachep __read_mostly;
@@ -406,7 +412,7 @@ static bool hot_is_obsolete(struct hot_freq_data *freq_data)
 		(cur_time - timespec_to_ns(&freq_data->last_read_time));
 	u64 last_write_ns =
 		(cur_time - timespec_to_ns(&freq_data->last_write_time));
-	u64 kick_ns =  TIME_TO_KICK * NSEC_PER_SEC;
+	u64 kick_ns =  sysctl_hot_kick_time * NSEC_PER_SEC;
 
 	if ((last_read_ns > kick_ns) && (last_write_ns > kick_ns))
 		ret = 1;
@@ -604,7 +610,7 @@ static void hot_update_worker(struct work_struct *work)
 
 	/* Insert next delayed work */
 	queue_delayed_work(root->update_wq, &root->update_work,
-		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
+		msecs_to_jiffies(sysctl_hot_update_delay * MSEC_PER_SEC));
 }
 
 static void *hot_range_seq_start(struct seq_file *seq, loff_t *pos)
@@ -1251,7 +1257,7 @@ int hot_track_init(struct super_block *sb)
 	/* Initialize hot tracking wq and arm one delayed work */
 	INIT_DELAYED_WORK(&root->update_work, hot_update_worker);
 	queue_delayed_work(root->update_wq, &root->update_work,
-		msecs_to_jiffies(HEAT_UPDATE_DELAY * MSEC_PER_SEC));
+		msecs_to_jiffies(sysctl_hot_update_delay * MSEC_PER_SEC));
 
 	/* Register a shrinker callback */
 	root->hot_shrink.shrink = hot_track_prune;
diff --git a/fs/hot_tracking.h b/fs/hot_tracking.h
index a969940..ab6d603 100644
--- a/fs/hot_tracking.h
+++ b/fs/hot_tracking.h
@@ -23,15 +23,6 @@
 #define RANGE_BITS 20
 #define FREQ_POWER 4
 
-/*
- * time to quit keeping track of
- * tracking data (seconds)
- */
-#define TIME_TO_KICK 300
-
-/* set how often to update temperatures (seconds) */
-#define HEAT_UPDATE_DELAY 300
-
 /* NRR/NRW heat unit = 2^X accesses */
 #define NRR_MULTIPLIER_POWER 20 /* NRR - number of reads since mount */
 #define NRR_COEFF_POWER 0
diff --git a/include/linux/hot_tracking.h b/include/linux/hot_tracking.h
index afb2952..f764730 100644
--- a/include/linux/hot_tracking.h
+++ b/include/linux/hot_tracking.h
@@ -126,6 +126,13 @@ struct hot_info {
 };
 
 /*
+ * Two variables have meanings as below:
+ * 1. time to quit keeping track of tracking data (seconds)
+ * 2. set how often to update temperatures (seconds)
+ */
+extern int sysctl_hot_kick_time, sysctl_hot_update_delay;
+
+/*
  * Hot data tracking ioctls:
  *
  * HOT_INFO - retrieve info on frequency of access
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 26f65ea..37624fb 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1545,6 +1545,20 @@ static struct ctl_table fs_table[] = {
 		.proc_handler	= &pipe_proc_fn,
 		.extra1		= &pipe_min_size,
 	},
+	{
+		.procname	= "hot-kick-time",
+		.data		= &sysctl_hot_kick_time,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "hot-update-delay",
+		.data		= &sysctl_hot_update_delay,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 	{ }
 };
 
-- 
1.7.6.5


* [PATCH v1 hot_track 15/16] btrfs: add hot tracking support
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (13 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 14/16] proc: add two hot_track proc files zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-11-16  9:51 ` [PATCH v1 hot_track 16/16] vfs: add documentation zwu.kernel
  2012-12-06  3:28 ` [PATCH v1 resend hot_track 00/16] vfs: hot data tracking Zhi Yong Wu
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Introduce a new mount option, '-o hot_track',
and add parsing support for it.
  Usage examples:
   mount -o hot_track
   mount -o nouser,hot_track
   mount -o nouser,hot_track,loop
   mount -o hot_track,nouser

Reviewed-by:   David Sterba <dsterba@suse.cz>
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 fs/btrfs/ctree.h |    1 +
 fs/btrfs/super.c |   22 +++++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index c72ead8..4703178 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1756,6 +1756,7 @@ struct btrfs_ioctl_defrag_range_args {
 #define BTRFS_MOUNT_CHECK_INTEGRITY	(1 << 20)
 #define BTRFS_MOUNT_CHECK_INTEGRITY_INCLUDING_EXTENT_DATA (1 << 21)
 #define BTRFS_MOUNT_PANIC_ON_FATAL_ERROR	(1 << 22)
+#define BTRFS_MOUNT_HOT_TRACK		(1 << 23)
 
 #define btrfs_clear_opt(o, opt)		((o) &= ~BTRFS_MOUNT_##opt)
 #define btrfs_set_opt(o, opt)		((o) |= BTRFS_MOUNT_##opt)
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 915ac14..0bcc62b 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -41,6 +41,7 @@
 #include <linux/slab.h>
 #include <linux/cleancache.h>
 #include <linux/ratelimit.h>
+#include <linux/hot_tracking.h>
 #include "compat.h"
 #include "delayed-inode.h"
 #include "ctree.h"
@@ -299,6 +300,10 @@ static void btrfs_put_super(struct super_block *sb)
 	 * last process that kept it busy.  Or segfault in the aforementioned
 	 * process...  Whom would you report that to?
 	 */
+
+	/* Hot data tracking */
+	if (btrfs_test_opt(btrfs_sb(sb)->tree_root, HOT_TRACK))
+		hot_track_exit(sb);
 }
 
 enum {
@@ -311,7 +316,7 @@ enum {
 	Opt_enospc_debug, Opt_subvolrootid, Opt_defrag, Opt_inode_cache,
 	Opt_no_space_cache, Opt_recovery, Opt_skip_balance,
 	Opt_check_integrity, Opt_check_integrity_including_extent_data,
-	Opt_check_integrity_print_mask, Opt_fatal_errors,
+	Opt_check_integrity_print_mask, Opt_fatal_errors, Opt_hot_track,
 	Opt_err,
 };
 
@@ -352,6 +357,7 @@ static match_table_t tokens = {
 	{Opt_check_integrity_including_extent_data, "check_int_data"},
 	{Opt_check_integrity_print_mask, "check_int_print_mask=%d"},
 	{Opt_fatal_errors, "fatal_errors=%s"},
+	{Opt_hot_track, "hot_track"},
 	{Opt_err, NULL},
 };
 
@@ -614,6 +620,9 @@ int btrfs_parse_options(struct btrfs_root *root, char *options)
 				goto out;
 			}
 			break;
+		case Opt_hot_track:
+			btrfs_set_opt(info->mount_opt, HOT_TRACK);
+			break;
 		case Opt_err:
 			printk(KERN_INFO "btrfs: unrecognized mount option "
 			       "'%s'\n", p);
@@ -841,11 +850,20 @@ static int btrfs_fill_super(struct super_block *sb,
 		goto fail_close;
 	}
 
+	if (btrfs_test_opt(fs_info->tree_root, HOT_TRACK)) {
+		err = hot_track_init(sb);
+		if (err)
+			goto fail_hot;
+	}
+
 	save_mount_options(sb, data);
 	cleancache_init_fs(sb);
 	sb->s_flags |= MS_ACTIVE;
 	return 0;
 
+fail_hot:
+	dput(sb->s_root);
+	sb->s_root = NULL;
 fail_close:
 	close_ctree(fs_info->tree_root);
 	return err;
@@ -941,6 +959,8 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
 		seq_puts(seq, ",skip_balance");
 	if (btrfs_test_opt(root, PANIC_ON_FATAL_ERROR))
 		seq_puts(seq, ",fatal_errors=panic");
+	if (btrfs_test_opt(root, HOT_TRACK))
+		seq_puts(seq, ",hot_track");
 	return 0;
 }
 
-- 
1.7.6.5


* [PATCH v1 hot_track 16/16] vfs: add documentation
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (14 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 15/16] btrfs: add hot tracking support zwu.kernel
@ 2012-11-16  9:51 ` zwu.kernel
  2012-12-06  3:28 ` [PATCH v1 resend hot_track 00/16] vfs: hot data tracking Zhi Yong Wu
  16 siblings, 0 replies; 22+ messages in thread
From: zwu.kernel @ 2012-11-16  9:51 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: linux-kernel, viro, Zhi Yong Wu

From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>

  Add a document describing the VFS hot tracking feature.

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
---
 Documentation/filesystems/00-INDEX         |    2 +
 Documentation/filesystems/hot_tracking.txt |  263 ++++++++++++++++++++++++++++
 2 files changed, 265 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/filesystems/hot_tracking.txt

diff --git a/Documentation/filesystems/00-INDEX b/Documentation/filesystems/00-INDEX
index 8c624a1..b68bdff 100644
--- a/Documentation/filesystems/00-INDEX
+++ b/Documentation/filesystems/00-INDEX
@@ -118,3 +118,5 @@ xfs.txt
 	- info and mount options for the XFS filesystem.
 xip.txt
 	- info on execute-in-place for file mappings.
+hot_tracking.txt
+	- info on hot data tracking in VFS layer
diff --git a/Documentation/filesystems/hot_tracking.txt b/Documentation/filesystems/hot_tracking.txt
new file mode 100644
index 0000000..0adc524
--- /dev/null
+++ b/Documentation/filesystems/hot_tracking.txt
@@ -0,0 +1,263 @@
+Hot Data Tracking
+
+September, 2012		Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
+
+CONTENTS
+
+1. Introduction
+2. Motivation
+3. The Design
+4. How to Calculate Read/Write Frequency & Temperature
+5. Git Development Tree
+6. Usage Example
+
+
+1. Introduction
+
+  This feature adds experimental support for tracking data temperature
+information in the VFS layer.  Essentially, this means maintaining some
+key stats (like the number of reads/writes, the last read/write time,
+and the frequency of reads/writes), then distilling those numbers down
+to a single "temperature" value that reflects what data is "hot," and
+using that temperature to move data to SSDs.
+
+  The long-term goal of the feature is to allow some FSs, e.g. Btrfs,
+to intelligently utilize SSDs in a heterogeneous volume. Incidentally,
+this project has been motivated by the Project Ideas page on the Btrfs
+wiki.
+
+  Of course, users are warned not to run this code outside of
+development environments. These patches are EXPERIMENTAL, and as such
+they might eat your data and/or memory. That said, the code should be
+relatively safe when the hot_track mount option is disabled.
+
+
+2. Motivation
+
+  The overall goal of enabling hot data relocation to SSD has been
+motivated by the Project Ideas page on the Btrfs wiki at
+<https://btrfs.wiki.kernel.org/index.php/Project_ideas>.
+The work divides into two steps: the VFS provides the hot data
+tracking function, while each specific FS provides the hot data
+relocation function. As the first step toward this goal, it is hoped
+that the hot data tracking patchset will eventually mature into VFS.
+
+  This is essentially the traditional cache argument: SSD is fast and
+expensive; HDD is cheap but slow. ZFS, for example, can already take
+advantage of SSD caching. Btrfs should also be able to take advantage of
+hybrid storage without many broad, sweeping changes to existing code.
+
+
+3. The Design
+
+The design includes the following parts:
+
+    * Hooks in existing vfs functions to track data access frequency
+
+    * New radix-trees for tracking access frequency of inodes and
+sub-file ranges
+    The relationship between the super_block and the radix-trees is as
+below: each FS instance can find its hot tracking info via s_hotinfo,
+a hot_info which stores tracking state such as the hot_inode_tree and
+the inode and range lists.
+
+    * A list for indexing data by its temperature
+
+    * A debugfs interface for dumping data from the radix-trees
+
+    * A background kthread for updating inode heat info
+
+    * A mount option for enabling temperature tracking (-o hot_track,
+disabled by default)
+
+    * An ioctl to retrieve the frequency information collected for a
+certain file
+
+    * Ioctls to enable/disable frequency tracking per inode
+
+Let us look at their relationships:
+
+    * hot_info.hot_inode_tree indexes hot_inode_items, one per inode
+
+    * hot_inode_item contains access frequency data for that inode
+
+    * hot_inode_item holds a heat list node to index the access
+frequency data for that inode
+
+    * hot_inode_item.hot_range_tree indexes hot_range_items for that inode
+
+    * hot_range_item contains access frequency data for that range
+
+    * hot_range_item holds a heat list node to index the access
+frequency data for that range
+
+    * hot_info.heat_inode_map indexes per-inode heat list nodes
+
+    * hot_info.heat_range_map indexes per-range heat list nodes
+
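The relationships above can be sketched in C. This is purely an illustrative sketch: the field and type names below are assumptions modeled on the description (the real definitions live in the patchset's fs/hot_tracking.[ch]), and plain pointers stand in for the kernel's radix-tree and list_head types.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's list_head used by the heat maps. */
struct heat_node {
	struct heat_node *next, *prev;
};

/* Per sub-file range: access frequency data plus a heat list node. */
struct hot_range_item {
	unsigned long start;            /* offset of the tracked range */
	unsigned long len;              /* length of the tracked range */
	unsigned long nr_reads, nr_writes;
	struct heat_node heat;          /* indexed by hot_info.heat_range_map */
};

/* Per inode: access frequency data, a heat list node, and a tree of
 * hot_range_items for that inode. */
struct hot_inode_item {
	unsigned long ino;
	unsigned long nr_reads, nr_writes;
	struct heat_node heat;          /* indexed by hot_info.heat_inode_map */
	void *hot_range_tree;           /* radix tree of hot_range_items */
};

/* One per mounted super_block (reachable via s_hotinfo). */
struct hot_info {
	void *hot_inode_tree;                  /* radix tree of hot_inode_items */
	struct heat_node *heat_inode_map[256]; /* per-inode heat list heads */
	struct heat_node *heat_range_map[256]; /* per-range heat list heads */
};
```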
+  How about some ascii art? :) Just looking at the hot inode item case
+(the range item case is the same pattern, though), we have:
+
+heat_inode_map           hot_inode_tree
+    |                         |
+    |                         V
+    |           +-------hot_comm_item--------+
+    |           |       frequency data       |
++---+           |        list_head           |
+|               V            ^ |             V
+| ...<--hot_comm_item-->...  | |  ...<--hot_comm_item-->...
+|       frequency data       | |        frequency data
++-------->list_head----------+ +--------->list_head--->.....
+       hot_range_tree                  hot_range_tree
+                                             |
+             heat_range_map                  V
+                   |           +-------hot_comm_item--------+
+                   |           |       frequency data       |
+               +---+           |        list_head           |
+               |               V            ^ |             V
+               | ...<--hot_comm_item-->...  | |  ...<--hot_comm_item-->...
+               |       frequency data       | |        frequency data
+               +-------->list_head----------+ +--------->list_head--->.....
+
+
+4. How to Calculate Read/Write Frequency & Temperature
+
+1.) hot_rw_freq_calc()
+
+  This function does the actual work of updating the frequency numbers,
+whatever they turn out to be. FREQ_POWER determines how many atime
+deltas we keep track of (as a power of 2). So, setting it to anything above
+16ish is probably overkill. Also, the higher the power, the more bits get
+right shifted out of the timestamp, reducing precision, so take note of that
+as well.
+
+  The caller should have already locked freq_data's parent's spinlock.
+
+  FREQ_POWER, defined immediately below, determines how heavily to weight
+the current frequency numbers against the newest access. For example, a value
+of 4 means that the new access information will be weighted 1/16th (ie 2^-4)
+as heavily as the existing frequency info. In essence, this is a kludged-
+together version of a weighted average, since we can't afford to keep all of
+the information that it would take to get a _real_ weighted average.
+
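As a sketch of the weighted average described above (a minimal model, not the kernel's actual hot_rw_freq_calc(); the function name and the FREQ_POWER value here are assumptions):

```c
#include <assert.h>
#include <stdint.h>

#define FREQ_POWER 4	/* new access weighted 2^-4 = 1/16th */

/* Fold a new access-time delta into the running average using only
 * shifts: avg = avg - avg/2^FREQ_POWER + new/2^FREQ_POWER, i.e. the
 * old average keeps 15/16 of its weight and the new sample gets 1/16. */
static uint64_t freq_avg_update(uint64_t avg_delta, uint64_t new_delta)
{
	return avg_delta - (avg_delta >> FREQ_POWER)
			 + (new_delta >> FREQ_POWER);
}
```

A steady stream of identical deltas leaves the average unchanged, while an outlier only nudges it by 1/16th of the difference, which is the "kludged-together weighted average" the text describes.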
+2.) Explanation of some macros
+
+  The following explains what exactly comprises a unit of heat. Six
+heat values are calculated and combined in order to form an overall
+temperature for the data:
+
+    * NRR - number of reads since mount
+    * NRW - number of writes since mount
+    * LTR - time elapsed since last read (ns)
+    * LTW - time elapsed since last write (ns)
+    * AVR - average delta between recent reads (ns)
+    * AVW - average delta between recent writes (ns)
+
+  These values are divided (right-shifted) according to the *_DIVIDER_POWER
+values defined below to bring the numbers into a reasonable range. You can
+modify these values to fit your needs. However, each heat unit is a u32 and
+thus maxes out at 2^32 - 1. Therefore, you must choose your dividers quite
+carefully or else they could max out or be stuck at zero quite easily.
+(E.g., if you chose AVR_DIVIDER_POWER = 0, nothing less than 4s of atime
+delta would bring the temperature above zero, ever.)
+
+  Finally, each value is added to the overall temperature between 0 and 8
+times, depending on its *_COEFF_POWER value. Note that the coefficients are
+also actually implemented with shifts, so take care to treat these values
+as powers of 2. (I.e., 0 means we'll add it to the temp once; 1 = 2x, etc.)
+
+    * AVR/AVW cold unit = 2^X ns of average delta
+    * AVR/AVW heat unit = HEAT_MAX_VALUE - cold unit
+
+  E.g., data with an average delta between 0 and 2^X ns will have a cold
+value of 0, which means a heat value equal to HEAT_MAX_VALUE.
+
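A minimal sketch of how one raw stat becomes a heat contribution under this scheme (the HEAT_MAX_VALUE definition and the helper name are illustrative assumptions, not the patchset's actual definitions):

```c
#include <assert.h>
#include <stdint.h>

#define HEAT_MAX_VALUE ((uint32_t)65535)	/* illustrative cap */

/* Right-shift the raw value into range by its *_DIVIDER_POWER, clamp
 * it so the result stays in range, then apply the *_COEFF_POWER weight
 * (also a shift, so a coeff of 1 means the value counts twice). */
static uint32_t heat_contrib(uint64_t raw, int divider_power,
			     int coeff_power)
{
	uint64_t unit = raw >> divider_power;

	if (unit > HEAT_MAX_VALUE)
		unit = HEAT_MAX_VALUE;
	return (uint32_t)(unit << coeff_power);
}
```

The clamp is what makes the divider choice matter: too small a divider saturates every value at the cap, too large a divider shifts everything to zero.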
+3.) hot_temp_calc()
+
+  This function is responsible for distilling the six heat criteria
+(described in detail in hot_tracking.h) down into a single temperature
+value for the data, which is an integer between 0 and HEAT_MAX_VALUE.
+
+  To accomplish this, the raw values from the hot_freq_data structure
+are shifted various ways in order to make the temperature calculation more
+or less sensitive to each value.
+
+  Once this calibration has happened, we do some additional normalization and
+make sure that everything fits nicely in a u32. From there, we take a very
+rudimentary kind of "average" of each of the values, where the *_COEFF_POWER
+values act as weights for the average.
+
+  Finally, we use the HEAT_HASH_BITS value, which determines the size of the
+heat list array, to normalize the temperature to the proper granularity.
+
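Putting it together, the final step hot_temp_calc() performs might look like this sketch (assumed names and a HEAT_HASH_BITS of 8; the real function's weighting and normalization details differ):

```c
#include <assert.h>
#include <stdint.h>

#define HEAT_HASH_BITS 8	/* illustrative heat-list granularity */

/* Take the rudimentary average of the six already-shifted and weighted
 * heat values, then normalize it down to HEAT_HASH_BITS of temperature
 * granularity so it indexes the heat list array. */
static uint32_t temp_normalize(const uint32_t heat[6])
{
	uint64_t sum = 0;
	int i;

	for (i = 0; i < 6; i++)
		sum += heat[i];
	sum /= 6;
	return (uint32_t)(sum >> (32 - HEAT_HASH_BITS));
}
```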
+
+5. Git Development Tree
+
+  This feature is still under development and review, so if you're
+interested, you can pull from the git repository at the following
+locations:
+
+  https://github.com/wuzhy/kernel.git hot_tracking
+  git://github.com/wuzhy/kernel.git hot_tracking
+
+
+6. Usage Example
+
+1.) To use hot tracking, you should mount like this:
+
+$ mount -o hot_track /dev/sdb /mnt
+[ 1505.894078] device label test devid 1 transid 29 /dev/sdb
+[ 1505.952977] btrfs: disk space caching is enabled
+[ 1506.069678] vfs: turning on hot data tracking
+
+2.) Mount debugfs at first:
+
+$ mount -t debugfs none /sys/kernel/debug
+$ ls -l /sys/kernel/debug/hot_track/
+total 0
+drwxr-xr-x 2 root root 0 Aug  8 04:40 sdb
+$ ls -l /sys/kernel/debug/hot_track/sdb
+total 0
+-rw-r--r-- 1 root root 0 Aug  8 04:40 rt_stats_inode
+-rw-r--r-- 1 root root 0 Aug  8 04:40 rt_stats_range
+
+3.) View information about hot tracking from debugfs:
+
+$ echo "hot tracking test" > /mnt/file
+$ cat /sys/kernel/debug/hot_track/sdb/rt_stats_inode
+inode #279, reads 0, writes 1, avg read time 18446744073709551615,
+avg write time 5251566408153596, temp 109
+$ cat /sys/kernel/debug/hot_track/sdb/rt_stats_range
+inode #279, range start 0 (range len 1048576) reads 0, writes 1,
+avg read time 18446744073709551615, avg write time 1128690176623144209, temp 64
+
+$ echo "hot data tracking test" >> /mnt/file
+$ cat /sys/kernel/debug/hot_track/sdb/rt_stats_inode
+inode #279, reads 0, writes 2, avg read time 18446744073709551615,
+avg write time 4923343766042451, temp 109
+$ cat /sys/kernel/debug/hot_track/sdb/rt_stats_range
+inode #279, range start 0 (range len 1048576) reads 0, writes 2,
+avg read time 18446744073709551615, avg write time 1058147040842596150, temp 64
+
+4.) Check temp sorting result of some nodes:
+
+$ cat /sys/kernel/debug/hot_track/loop0/hot_spots_inode
+inode #5248773, reads 0, writes 244,
+avg read time 18446744073709, avg write time 822, temp 111
+inode #878523, reads 0, writes 1,
+avg read time 18446744073709, avg write time 5278036898, temp 109
+inode #878524, reads 0, writes 1,
+avg read time 18446744073709, avg write time 5278036898, temp 109
+
+5.) Tune some hot tracking parameters as below:
+
+$ cat /proc/sys/fs/hot-kick-time
+300
+$ echo 360 > /proc/sys/fs/hot-kick-time
+$ cat /proc/sys/fs/hot-kick-time
+360
+$ cat /proc/sys/fs/hot-update-delay
+300
+$ echo 360 > /proc/sys/fs/hot-update-delay
+$ cat /proc/sys/fs/hot-update-delay
+360
+
-- 
1.7.6.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
  2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
                   ` (15 preceding siblings ...)
  2012-11-16  9:51 ` [PATCH v1 hot_track 16/16] vfs: add documentation zwu.kernel
@ 2012-12-06  3:28 ` Zhi Yong Wu
  2012-12-10  3:30   ` Zhi Yong Wu
  16 siblings, 1 reply; 22+ messages in thread
From: Zhi Yong Wu @ 2012-12-06  3:28 UTC (permalink / raw)
  To: zwu.kernel
  Cc: linux-fsdevel, linux-kernel, viro, linuxram, david, swhiteho,
	dave, darrick.wong, andi, northrup.james

HI, guys

The perf testing was done separately with fs_mark, fio, ffsb and
compilebench in one KVM guest.

Below is the performance testing report for hot tracking; no obvious
performance regression was found.

Note: "original kernel" means its source code is unchanged;
      "kernel with enabled hot tracking" means its source code carries
the hot tracking patchset.

The test env is set up as below:

root@debian-i386:/home/zwu# uname -a
Linux debian-i386 3.7.0-rc8+ #266 SMP Tue Dec 4 12:17:55 CST 2012 x86_64
GNU/Linux

root@debian-i386:/home/zwu# mkfs.xfs -f -l size=1310b,sunit=8 /home/zwu/bdev.img
meta-data=/home/zwu/bdev.img     isize=256    agcount=4, agsize=128000 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=512000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1310, version=2
         =                       sectsz=512   sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

1.) original kernel

root@debian-i386:/home/zwu# mount -o loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
[ 1197.421616] XFS (loop0): Mounting Filesystem
[ 1197.567399] XFS (loop0): Ending clean mount
root@debian-i386:/home/zwu# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
none on /selinux type selinuxfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
/dev/loop0 on /mnt/scratch type xfs (rw,logbsize=256k)
root@debian-i386:/home/zwu# free -m
             total       used       free     shared    buffers     cached
Mem:           112        109          2          0          4         53
-/+ buffers/cache:         51         60
Swap:          713         29        684

2.) kernel with enabled hot tracking

root@debian-i386:/home/zwu# mount -o hot_track,loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
[  364.648470] XFS (loop0): Mounting Filesystem
[  364.910035] XFS (loop0): Ending clean mount
[  364.921063] VFS: Turning on hot data tracking
root@debian-i386:/home/zwu# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
none on /selinux type selinuxfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
/dev/loop0 on /mnt/scratch type xfs (rw,hot_track,logbsize=256k)
root@debian-i386:/home/zwu# free -m
             total       used       free     shared    buffers     cached
Mem:           112        107          4          0          2         34
-/+ buffers/cache:         70         41
Swap:          713          2        711

1. fs_mark test

1.) original kernel

#  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0 \
   -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3 \
   -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6 \
   -d  /mnt/scratch/7
#	Version 3.3, 8 thread(s) starting at Wed Dec  5 03:20:58 2012
#	Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
#	Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
#	File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
#	Files info: size 1 bytes, written with an IO size of 16384 bytes per write
#	App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
     2         8000            1        375.6         27175895
     3        16000            1        375.6         27478079
     4        24000            1        346.0         27819607
     4        32000            1        316.9         25863385
     5        40000            1        335.2         25460605
     6        48000            1        312.3         25889196
     7        56000            1        327.3         25000611
     8        64000            1        304.4         28126698
     9        72000            1        361.7         26652172
     9        80000            1        370.1         27075875
    10        88000            1        347.8         31093106
    11        96000            1        387.1         26877324
    12       104000            1        352.3         26635853
    13       112000            1        379.3         26400198
    14       120000            1        367.4         27228178
    14       128000            1        359.2         27627871
    15       136000            1        358.4         27089821
    16       144000            1        385.5         27804852
    17       152000            1        322.9         26221907
    18       160000            1        393.2         26760040
    18       168000            1        351.9         29210327
    20       176000            1        395.2         24610548
    20       184000            1        376.7         27518650
    21       192000            1        340.1         27512874
    22       200000            1        389.0         27109104
    23       208000            1        389.7         29288594
    24       216000            1        352.6         29948820
    25       224000            1        380.4         26370958
    26       232000            1        332.9         27770518
    26       240000            1        333.6         25176691

2.) kernel with enabled hot tracking

#  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0 \
   -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3 \
   -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6 \
   -d  /mnt/scratch/7
#	Version 3.3, 8 thread(s) starting at Tue Dec  4 04:28:48 2012
#	Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
#	Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
#	File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
#	Files info: size 1 bytes, written with an IO size of 16384 bytes per write
#	App overhead is time in microseconds spent in the test not doing file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
     4         8000            1        323.0         25104879
     6        16000            1        351.4         25372919
     8        24000            1        345.9         24107987
     9        32000            1        313.2         26249533
    10        40000            1        323.0         20312267
    12        48000            1        303.2         22178040
    14        56000            1        307.6         22775058
    15        64000            1        317.9         25178845
    17        72000            1        351.8         22020260
    19        80000            1        369.3         23546708
    21        88000            1        324.1         29068297
    22        96000            1        355.3         25212333
    24       104000            1        346.4         26622613
    26       112000            1        360.4         25477193
    28       120000            1        362.9         21774508
    29       128000            1        329.0         25760109
    31       136000            1        369.5         24540577
    32       144000            1        330.2         26013559
    34       152000            1        365.5         25643279
    36       160000            1        366.2         24393130
    38       168000            1        348.3         25248940
    39       176000            1        357.3         24080574
    40       184000            1        316.8         23011921
    43       192000            1        351.7         27468060
    44       200000            1        362.2         27540349
    46       208000            1        340.9         26135445
    48       216000            1        339.2         20926743
    50       224000            1        316.5         21399871
    52       232000            1        346.3         24669604
    53       240000            1        320.5         22204449


2. FFSB test

1.) original kernel

FFSB version 6.0-RC2 started

benchmark time = 10
ThreadGroup 0
================
	 num_threads      = 4
	
	 read_random      = off
	 read_size        = 40960	(40KB)
	 read_blocksize   = 4096	(4KB)
	 read_skip        = off
	 read_skipsize    = 0	(0B)
	
	 write_random     = off
	 write_size       = 40960	(40KB)
	 fsync_file       = 0
	 write_blocksize  = 4096	(4KB)
	 wait time        = 0
	
	 op weights
	                 read = 0 (0.00%)
	              readall = 1 (10.00%)
	                write = 0 (0.00%)
	               create = 1 (10.00%)
	               append = 1 (10.00%)
	               delete = 1 (10.00%)
	               metaop = 0 (0.00%)
	            createdir = 0 (0.00%)
	                 stat = 1 (10.00%)
	             writeall = 1 (10.00%)
	       writeall_fsync = 1 (10.00%)
	           open_close = 1 (10.00%)
	          write_fsync = 0 (0.00%)
	         create_fsync = 1 (10.00%)
	         append_fsync = 1 (10.00%)
	
FileSystem /mnt/scratch/test1
==========
	 num_dirs         = 100
	 starting files   = 0
	
	 Fileset weight:
		     33554432 (  32MB) -> 1 (1.00%)
		      8388608 (   8MB) -> 2 (2.00%)
		       524288 ( 512KB) -> 3 (3.00%)
		       262144 ( 256KB) -> 4 (4.00%)
		       131072 ( 128KB) -> 5 (5.00%)
		        65536 (  64KB) -> 8 (8.00%)
		        32768 (  32KB) -> 10 (10.00%)
		        16384 (  16KB) -> 13 (13.00%)
		         8192 (   8KB) -> 21 (21.00%)
		         4096 (   4KB) -> 33 (33.00%)
	 directio         = off
	 alignedio        = off
	 bufferedio       = off
	
	 aging is off
	 current utilization = 26.19%
	
creating new fileset /mnt/scratch/test1
fs setup took 87 secs
Syncing()...1 sec
Starting Actual Benchmark At: Wed Dec  5 03:38:06 2012

Syncing()...0 sec
FFSB benchmark finished   at: Wed Dec  5 03:38:18 2012

Results:
Benchmark took 11.44 sec

Total Results
===============
             Op Name   Transactions   Trans/sec   % Trans   % Op Weight   Throughput
             =======   ============   =========   =======   ===========   ==========
             readall :           93        8.13    0.880%       21.053%   32.5KB/sec
              create :           20        1.75    0.189%        5.263%   6.99KB/sec
              append :           10        0.87    0.095%        2.632%   3.5KB/sec
              delete :            4        0.35    0.038%       10.526%   NA
                stat :            3        0.26    0.028%        7.895%   NA
            writeall :         2178      190.39   20.600%       10.526%   762KB/sec
      writeall_fsync :            5        0.44    0.047%        5.263%   1.75KB/sec
          open_close :            6        0.52    0.057%       15.789%   NA
        create_fsync :         8234      719.78   77.878%       15.789%   2.81MB/sec
        append_fsync :           20        1.75    0.189%        5.263%   6.99KB/sec
-
924.24 Transactions per Second

Throughput Results
===================
Read Throughput: 32.5KB/sec
Write Throughput: 3.57MB/sec

System Call Latency statistics in millisecs
=====
		Min		Avg		Max		Total Calls
		========	========	========	============
[   open]	0.050000	3.980161	41.840000	          31
[   read]	0.017000	71.442215	1286.122000	          93
[  write]	0.052000	1.034817	2201.956000	       10467
[ unlink]	1.118000	185.398750	730.807000	           4
[  close]	0.019000	1.968968	39.679000	          31
[   stat]	0.043000	2.173667	6.428000	           3

0.8% User   Time
9.2% System Time
10.0% CPU Utilization

2.) kernel with enabled hot tracking

FFSB version 6.0-RC2 started

benchmark time = 10
ThreadGroup 0
================
	 num_threads      = 4
	
	 read_random      = off
	 read_size        = 40960	(40KB)
	 read_blocksize   = 4096	(4KB)
	 read_skip        = off
	 read_skipsize    = 0	(0B)
	
	 write_random     = off
	 write_size       = 40960	(40KB)
	 fsync_file       = 0
	 write_blocksize  = 4096	(4KB)
	 wait time        = 0
	
	 op weights
	                 read = 0 (0.00%)
	              readall = 1 (10.00%)
	                write = 0 (0.00%)
	               create = 1 (10.00%)
	               append = 1 (10.00%)
	               delete = 1 (10.00%)
	               metaop = 0 (0.00%)
	            createdir = 0 (0.00%)
	                 stat = 1 (10.00%)
	             writeall = 1 (10.00%)
	       writeall_fsync = 1 (10.00%)
	           open_close = 1 (10.00%)
	          write_fsync = 0 (0.00%)
	         create_fsync = 1 (10.00%)
	         append_fsync = 1 (10.00%)
	
FileSystem /mnt/scratch/test1
==========
	 num_dirs         = 100
	 starting files   = 0
	
	 Fileset weight:
		     33554432 (  32MB) -> 1 (1.00%)
		      8388608 (   8MB) -> 2 (2.00%)
		       524288 ( 512KB) -> 3 (3.00%)
		       262144 ( 256KB) -> 4 (4.00%)
		       131072 ( 128KB) -> 5 (5.00%)
		        65536 (  64KB) -> 8 (8.00%)
		        32768 (  32KB) -> 10 (10.00%)
		        16384 (  16KB) -> 13 (13.00%)
		         8192 (   8KB) -> 21 (21.00%)
		         4096 (   4KB) -> 33 (33.00%)
	 directio         = off
	 alignedio        = off
	 bufferedio       = off
	
	 aging is off
	 current utilization = 52.46%
	
creating new fileset /mnt/scratch/test1
fs setup took 42 secs
Syncing()...1 sec
Starting Actual Benchmark At: Tue Dec  4 06:41:54 2012

Syncing()...0 sec
FFSB benchmark finished   at: Tue Dec  4 06:42:53 2012

Results:
Benchmark took 59.42 sec

Total Results
===============
             Op Name   Transactions   Trans/sec   % Trans   % Op Weight   Throughput
             =======   ============   =========   =======   ===========   ==========
             readall :        10510      176.87   54.808%       10.959%   707KB/sec
              create :           48        0.81    0.250%        9.589%   3.23KB/sec
              append :          100        1.68    0.521%       13.699%   6.73KB/sec
              delete :            5        0.08    0.026%        6.849%   NA
                stat :            5        0.08    0.026%        6.849%   NA
            writeall :          130        2.19    0.678%       12.329%   8.75KB/sec
      writeall_fsync :           19        0.32    0.099%        8.219%   1.28KB/sec
          open_close :            9        0.15    0.047%       12.329%   NA
        create_fsync :         8300      139.67   43.283%       12.329%   559KB/sec
        append_fsync :           50        0.84    0.261%        6.849%   3.37KB/sec
-
322.70 Transactions per Second

Throughput Results
===================
Read Throughput: 707KB/sec
Write Throughput: 582KB/sec

System Call Latency statistics in millisecs
=====
		Min		Avg		Max		Total Calls
		========	========	========	============
[   open]	0.061000	0.750540	10.721000	          63
[   read]	0.017000	11.058425	28555.394000	       10510
[  write]	0.034000	6.705286	26812.076000	        8647
[ unlink]	0.922000	7.679800	25.364000	           5
[  close]	0.019000	0.996635	34.723000	          63
[   stat]	0.046000	0.942800	4.489000	           5

0.2% User   Time
2.6% System Time
2.8% CPU Utilization


3. fio test

1.) original kernel

seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
Starting 16 threads

seq-read: (groupid=0, jobs=4): err= 0: pid=1646
  read : io=2,835MB, bw=24,192KB/s, iops=3,023, runt=120021msec
    slat (usec): min=0, max=999K, avg=1202.67, stdev=3145.84
    clat (usec): min=0, max=1,536K, avg=9186.07, stdev=11344.56
    bw (KB/s) : min=   39, max=21301, per=26.11%, avg=6315.41, stdev=1082.63
  cpu          : usr=10.89%, sys=33.14%, ctx=1488108, majf=13, minf=2238
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=362940/0, short=0/0
     lat (usec): 2=3.53%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
     lat (usec): 750=0.03%, 1000=0.03%
     lat (msec): 2=1.75%, 4=1.08%, 10=68.93%, 20=22.39%, 50=2.02%
     lat (msec): 100=0.16%, 250=0.04%, 1000=0.01%, 2000=0.03%
seq-write: (groupid=1, jobs=4): err= 0: pid=1646
  write: io=1,721MB, bw=14,652KB/s, iops=1,831, runt=120277msec
    slat (usec): min=0, max=1,004K, avg=1744.41, stdev=3144.06
    clat (usec): min=0, max=1,014K, avg=15699.65, stdev=19751.69
    bw (KB/s) : min=  285, max=18032, per=26.41%, avg=3869.67, stdev=762.96
  cpu          : usr=6.29%, sys=22.61%, ctx=880380, majf=36, minf=3222
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/220282, short=0/0
     lat (usec): 2=2.43%, 500=0.01%, 750=0.12%, 1000=0.14%
     lat (msec): 2=0.86%, 4=1.72%, 10=39.03%, 20=42.20%, 50=11.87%
     lat (msec): 100=1.15%, 250=0.17%, 500=0.06%, 750=0.14%, 1000=0.09%
     lat (msec): 2000=0.02%
rnd-read: (groupid=2, jobs=4): err= 0: pid=1646
  read : io=65,128KB, bw=541KB/s, iops=67, runt=120381msec
    slat (usec): min=48, max=55,230, avg=167.95, stdev=248.50
    clat (msec): min=74, max=4,229, avg=472.23, stdev=129.50
    bw (KB/s) : min=    0, max=  203, per=25.34%, avg=137.08, stdev=21.73
  cpu          : usr=0.85%, sys=2.19%, ctx=44001, majf=30, minf=3726
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=8141/0, short=0/0

     lat (msec): 100=0.04%, 250=0.09%, 500=81.72%, 750=13.09%, 1000=2.97%
     lat (msec): 2000=1.50%, >=2000=0.59%
rnd-write: (groupid=3, jobs=4): err= 0: pid=1646
  write: io=200MB, bw=1,698KB/s, iops=212, runt=120331msec
    slat (usec): min=48, max=215K, avg=2272.24, stdev=2283.09
    clat (usec): min=762, max=14,617K, avg=147521.66, stdev=444146.36
    bw (KB/s) : min=    1, max= 3960, per=56.86%, avg=964.90, stdev=514.63
  cpu          : usr=1.25%, sys=4.20%, ctx=135229, majf=0, minf=10194
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/25536, short=0/0
     lat (usec): 1000=0.26%
     lat (msec): 2=0.13%, 4=2.01%, 10=3.77%, 20=42.78%, 50=20.95%
     lat (msec): 100=12.83%, 250=12.50%, 500=2.49%, 750=0.33%, 1000=0.12%
     lat (msec): 2000=0.53%, >=2000=1.30%

Run status group 0 (all jobs):
   READ: io=2,835MB, aggrb=24,191KB/s, minb=24,772KB/s, maxb=24,772KB/s,
mint=120021msec, maxt=120021msec

Run status group 1 (all jobs):
  WRITE: io=1,721MB, aggrb=14,651KB/s, minb=15,003KB/s, maxb=15,003KB/s, mint=120277msec, maxt=120277msec

Run status group 2 (all jobs):
   READ: io=65,128KB, aggrb=541KB/s, minb=553KB/s, maxb=553KB/s, mint=120381msec, maxt=120381msec

Run status group 3 (all jobs):
  WRITE: io=200MB, aggrb=1,697KB/s, minb=1,738KB/s, maxb=1,738KB/s, mint=120331msec, maxt=120331msec

Disk stats (read/write):
  loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
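The four job groups echoed at the start of each run (4 jobs per group, 8 KB blocks, libaio, iodepth 8, ~120 s per group, 16 threads total) correspond to a fio job file along the lines of the sketch below. This is a reconstruction from the printed job lines only; the `size`, `runtime`/`time_based` settings and `thread` flag are assumptions, not the actual job file used:

```ini
; Reconstructed fio job file -- a sketch, not the file actually used.
[global]
directory=/mnt/scratch
ioengine=libaio
iodepth=8
bs=8k
numjobs=4
thread
size=1g          ; guess: the scratch fs is only ~2 GB
runtime=120
time_based

[seq-read]
rw=read
stonewall

[seq-write]
rw=write
stonewall

[rnd-read]
rw=randread
stonewall

[rnd-write]
rw=randwrite
stonewall
```

The `stonewall` markers make each group run to completion before the next starts, matching the four separate "Run status group" summaries in the output.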

2.) kernel with enabled hot tracking

seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
...
rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
Starting 16 threads

seq-read: (groupid=0, jobs=4): err= 0: pid=2163
  read : io=3,047MB, bw=26,001KB/s, iops=3,250, runt=120003msec
    slat (usec): min=0, max=1,000K, avg=1141.34, stdev=2175.25
    clat (usec): min=0, max=1,002K, avg=8610.96, stdev=6184.67
    bw (KB/s) : min=   12, max=18896, per=25.28%, avg=6572.50, stdev=713.22
  cpu          : usr=10.38%, sys=35.02%, ctx=1601418, majf=12, minf=2235
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=390029/0, short=0/0
     lat (usec): 2=1.49%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.02%
     lat (msec): 2=1.53%, 4=0.86%, 10=79.60%, 20=14.93%, 50=1.43%
     lat (msec): 100=0.09%, 250=0.02%, 500=0.01%, 1000=0.01%, 2000=0.01%
seq-write: (groupid=1, jobs=4): err= 0: pid=2163
  write: io=1,752MB, bw=14,950KB/s, iops=1,868, runt=120003msec
    slat (usec): min=0, max=1,002K, avg=1697.47, stdev=3568.70
    clat (usec): min=0, max=1,019K, avg=15630.94, stdev=21109.46
    bw (KB/s) : min=  123, max=14693, per=26.31%, avg=3933.46, stdev=779.57
  cpu          : usr=6.31%, sys=21.85%, ctx=894177, majf=4, minf=3407
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/224253, short=0/0
     lat (usec): 2=2.44%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.06%
     lat (usec): 1000=0.23%
     lat (msec): 2=0.73%, 4=2.00%, 10=40.15%, 20=42.68%, 50=10.25%
     lat (msec): 100=0.95%, 250=0.14%, 500=0.10%, 750=0.12%, 1000=0.11%
     lat (msec): 2000=0.03%
rnd-read: (groupid=2, jobs=4): err= 0: pid=2163
  read : io=85,208KB, bw=709KB/s, iops=88, runt=120252msec
    slat (usec): min=52, max=48,325, avg=204.43, stdev=596.50
    clat (msec): min=1, max=2,754, avg=359.99, stdev=78.96
    bw (KB/s) : min=    0, max=  249, per=25.17%, avg=178.20, stdev=23.79
  cpu          : usr=1.00%, sys=2.64%, ctx=55704, majf=28, minf=2971
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=10651/0, short=0/0

     lat (msec): 2=0.01%, 50=0.02%, 100=0.04%, 250=1.61%, 500=92.60%
     lat (msec): 750=4.24%, 1000=0.68%, 2000=0.59%, >=2000=0.22%
rnd-write: (groupid=3, jobs=4): err= 0: pid=2163
  write: io=247MB, bw=2,019KB/s, iops=252, runt=125287msec
    slat (usec): min=51, max=286K, avg=2576.23, stdev=2882.30
    clat (usec): min=698, max=8,156K, avg=123274.05, stdev=355311.20
    bw (KB/s) : min=    1, max= 4848, per=57.62%, avg=1162.77, stdev=560.79
  cpu          : usr=1.33%, sys=4.24%, ctx=163334, majf=0, minf=8588
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/31616, short=0/0
     lat (usec): 750=0.03%, 1000=0.15%
     lat (msec): 2=0.06%, 4=2.15%, 10=3.57%, 20=48.23%, 50=22.43%
     lat (msec): 100=11.48%, 250=9.14%, 500=1.04%, 750=0.16%, 1000=0.05%
     lat (msec): 2000=0.09%, >=2000=1.42%

Run status group 0 (all jobs):
   READ: io=3,047MB, aggrb=26,001KB/s, minb=26,625KB/s, maxb=26,625KB/s, mint=120003msec, maxt=120003msec

Run status group 1 (all jobs):
  WRITE: io=1,752MB, aggrb=14,949KB/s, minb=15,308KB/s, maxb=15,308KB/s, mint=120003msec, maxt=120003msec

Run status group 2 (all jobs):
   READ: io=85,208KB, aggrb=708KB/s, minb=725KB/s, maxb=725KB/s, mint=120252msec, maxt=120252msec

Run status group 3 (all jobs):
  WRITE: io=247MB, aggrb=2,018KB/s, minb=2,067KB/s, maxb=2,067KB/s, mint=125287msec, maxt=125287msec

Disk stats (read/write):
  loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
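For a quick side-by-side view of the two fio runs, the per-group aggregate bandwidths can be compared directly. The numbers below are the `aggrb` values copied from the "Run status group" lines of the two runs (original kernel vs. hot-tracking kernel, in KB/s as reported by fio); the script itself is just an editorial summary:

```python
# aggrb (KB/s) from the "Run status group" lines of each fio run.
orig = {"seq-read": 24191, "seq-write": 14651, "rnd-read": 541, "rnd-write": 1697}
hot  = {"seq-read": 26001, "seq-write": 14949, "rnd-read": 708, "rnd-write": 2018}

for group in orig:
    # Percentage change with hot tracking enabled, relative to the original kernel.
    delta = (hot[group] - orig[group]) / orig[group] * 100
    print(f"{group:10s} {orig[group]:6d} -> {hot[group]:6d} KB/s ({delta:+.1f}%)")
```

In this particular run every group is at least as fast with hot tracking enabled, with random read improving most (about +31%). Since the test runs on a loop device inside a 112 MB KVM guest, the run-to-run noise is large; the safe reading is "no obvious regression" rather than a genuine speedup.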


4. compilebench test

1.) original kernel

using working directory /mnt/scratch/, 30 initial dirs 100 runs

native unpatched native-0 222MB in 87.48 seconds (2.54 MB/s)
native patched native-0 109MB in 24.89 seconds (4.41 MB/s)
native patched compiled native-0 691MB in 35.54 seconds (19.46 MB/s)
create dir kernel-0 222MB in 91.22 seconds (2.44 MB/s)
create dir kernel-1 222MB in 91.55 seconds (2.43 MB/s)
create dir kernel-2 222MB in 97.00 seconds (2.29 MB/s)
create dir kernel-3 222MB in 87.64 seconds (2.54 MB/s)
create dir kernel-4 222MB in 86.18 seconds (2.58 MB/s)
create dir kernel-5 222MB in 84.68 seconds (2.63 MB/s)
create dir kernel-6 222MB in 85.02 seconds (2.62 MB/s)
create dir kernel-7 222MB in 87.74 seconds (2.53 MB/s)
create dir kernel-8 222MB in 86.79 seconds (2.56 MB/s)
create dir kernel-9 222MB in 87.85 seconds (2.53 MB/s)
create dir kernel-10 222MB in 86.88 seconds (2.56 MB/s)
create dir kernel-11 222MB in 86.47 seconds (2.57 MB/s)
create dir kernel-12 222MB in 84.26 seconds (2.64 MB/s)
create dir kernel-13 222MB in 87.40 seconds (2.54 MB/s)
create dir kernel-14 222MB in 85.44 seconds (2.60 MB/s)
create dir kernel-15 222MB in 86.80 seconds (2.56 MB/s)
create dir kernel-16 222MB in 88.57 seconds (2.51 MB/s)
create dir kernel-17 222MB in 85.66 seconds (2.60 MB/s)
create dir kernel-18 222MB in 87.40 seconds (2.54 MB/s)
create dir kernel-19 222MB in 85.47 seconds (2.60 MB/s)
create dir kernel-20 222MB in 89.29 seconds (2.49 MB/s)
create dir kernel-21 222MB in 88.53 seconds (2.51 MB/s)
create dir kernel-22 222MB in 86.25 seconds (2.58 MB/s)
create dir kernel-23 222MB in 85.34 seconds (2.61 MB/s)
create dir kernel-24 222MB in 84.61 seconds (2.63 MB/s)
create dir kernel-25 222MB in 88.13 seconds (2.52 MB/s)
create dir kernel-26 222MB in 85.57 seconds (2.60 MB/s)
create dir kernel-27 222MB in 87.26 seconds (2.55 MB/s)
create dir kernel-28 222MB in 83.68 seconds (2.66 MB/s)
create dir kernel-29 222MB in 86.33 seconds (2.58 MB/s)
=== sdb ===
  CPU  0:              9366376 events,   439049 KiB data
  Total:               9366376 events (dropped 0),   439049 KiB data
patch dir kernel-7 109MB in 55.00 seconds (1.99 MB/s)
compile dir kernel-7 691MB in 37.15 seconds (18.62 MB/s)
compile dir kernel-14 680MB in 38.48 seconds (17.69 MB/s)
patch dir kernel-14 691MB in 93.31 seconds (7.41 MB/s)
read dir kernel-7 in 93.36 9.85 MB/s
read dir kernel-10 in 58.25 3.82 MB/s
create dir kernel-3116 222MB in 91.96 seconds (2.42 MB/s)
clean kernel-7 691MB in 5.16 seconds (134.03 MB/s)
read dir kernel-6 in 56.98 3.90 MB/s
stat dir kernel-2 in 19.42 seconds
compile dir kernel-2 680MB in 43.11 seconds (15.79 MB/s)
clean kernel-14 691MB in 6.27 seconds (110.30 MB/s)
clean kernel-2 680MB in 5.79 seconds (117.55 MB/s)
patch dir kernel-2 109MB in 71.22 seconds (1.54 MB/s)
stat dir kernel-2 in 16.06 seconds
create dir kernel-6231 222MB in 96.20 seconds (2.31 MB/s)
delete kernel-8 in 45.20 seconds
compile dir kernel-2 691MB in 38.58 seconds (17.93 MB/s)
create dir kernel-70151 222MB in 93.41 seconds (2.38 MB/s)
clean kernel-2 691MB in 5.09 seconds (135.87 MB/s)
create dir kernel-78184 222MB in 86.04 seconds (2.58 MB/s)
compile dir kernel-7 691MB in 37.60 seconds (18.39 MB/s)
create dir kernel-64250 222MB in 80.33 seconds (2.77 MB/s)
delete kernel-12 in 43.00 seconds
stat dir kernel-2 in 16.43 seconds
patch dir kernel-70151 109MB in 77.42 seconds (1.42 MB/s)
stat dir kernel-7 in 18.48 seconds
stat dir kernel-78184 in 18.62 seconds
compile dir kernel-2 691MB in 43.31 seconds (15.97 MB/s)
compile dir kernel-26 680MB in 50.37 seconds (13.51 MB/s)
stat dir kernel-7 in 21.52 seconds
create dir kernel-2717 222MB in 89.86 seconds (2.47 MB/s)
delete kernel-26 in 47.81 seconds
stat dir kernel-2 in 18.61 seconds
compile dir kernel-14 691MB in 46.66 seconds (14.82 MB/s)
compile dir kernel-70151 691MB in 39.19 seconds (17.65 MB/s)
create dir kernel-55376 222MB in 88.91 seconds (2.50 MB/s)
stat dir kernel-22 in 18.66 seconds
delete kernel-55376 in 37.71 seconds
patch dir kernel-27 109MB in 74.82 seconds (1.47 MB/s)
patch dir kernel-64250 109MB in 81.08 seconds (1.35 MB/s)
read dir kernel-6231 in 82.15 2.71 MB/s
patch dir kernel-9 109MB in 96.02 seconds (1.14 MB/s)
stat dir kernel-14 in 22.46 seconds
read dir kernel-29 in 58.10 3.83 MB/s
create dir kernel-57327 222MB in 93.92 seconds (2.37 MB/s)
stat dir kernel-14 in 21.92 seconds
compile dir kernel-27 691MB in 41.43 seconds (16.69 MB/s)
create dir kernel-64334 222MB in 89.31 seconds (2.49 MB/s)
patch dir kernel-1 109MB in 84.37 seconds (1.30 MB/s)
create dir kernel-16056 222MB in 94.93 seconds (2.34 MB/s)
clean kernel-7 691MB in 7.27 seconds (95.13 MB/s)
delete kernel-27 in 46.32 seconds
create dir kernel-51614 222MB in 88.91 seconds (2.50 MB/s)
clean kernel-14 691MB in 6.71 seconds (103.07 MB/s)
delete kernel-64250 in 43.60 seconds
stat dir kernel-2 in 24.25 seconds
clean kernel-70151 691MB in 6.20 seconds (111.55 MB/s)
delete kernel-14 in 40.74 seconds
read dir kernel-2 in 118.45 7.76 MB/s
create dir kernel-24150 222MB in 88.99 seconds (2.50 MB/s)
read dir kernel-9 in 83.70 2.73 MB/s
patch dir kernel-19 109MB in 76.06 seconds (1.44 MB/s)
clean kernel-2 691MB in 6.64 seconds (104.16 MB/s)
compile dir kernel-18 680MB in 47.33 seconds (14.38 MB/s)
compile dir kernel-2 691MB in 44.63 seconds (15.50 MB/s)
delete kernel-2 in 51.03 seconds
delete kernel-70151 in 45.96 seconds
stat dir kernel-1 in 17.56 seconds
read dir kernel-18 in 121.08 7.46 MB/s
clean kernel-18 680MB in 6.47 seconds (105.20 MB/s)
compile dir kernel-17 680MB in 52.10 seconds (13.06 MB/s)
read dir kernel-17 in 114.66 7.88 MB/s
stat dir kernel-18 in 30.36 seconds
stat dir kernel-64334 in 44.78 seconds
delete kernel-24150 in 44.79 seconds
delete kernel-17 in 47.64 seconds
stat dir kernel-1 in 19.87 seconds
compile dir kernel-7 691MB in 47.65 seconds (14.51 MB/s)
patch dir kernel-16 109MB in 100.96 seconds (1.09 MB/s)
stat dir kernel-7 in 21.35 seconds
create dir kernel-82195 222MB in 111.17 seconds (2.00 MB/s)
delete kernel-82195 in 40.79 seconds
stat dir kernel-3 in 19.51 seconds
patch dir kernel-2717 109MB in 94.55 seconds (1.16 MB/s)
patch dir kernel-5 109MB in 60.21 seconds (1.82 MB/s)
read dir kernel-2717 in 94.85 2.41 MB/s
delete kernel-29 in 40.51 seconds
clean kernel-7 691MB in 5.84 seconds (118.42 MB/s)
read dir kernel-4 in 57.91 3.84 MB/s
stat dir kernel-78184 in 19.65 seconds
patch dir kernel-0 109MB in 90.61 seconds (1.21 MB/s)
patch dir kernel-3 109MB in 75.67 seconds (1.45 MB/s)
create dir kernel-30226 222MB in 106.72 seconds (2.08 MB/s)
read dir kernel-19 in 83.79 2.72 MB/s
read dir kernel-9 in 82.64 2.76 MB/s
delete kernel-5 in 38.89 seconds
read dir kernel-7 in 59.70 3.82 MB/s
patch dir kernel-57327 109MB in 101.71 seconds (1.08 MB/s)
read dir kernel-11 in 59.83 3.72 MB/s

run complete:
==========================================================================
initial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
create total runs 14 avg 2.41 MB/s (user 13.83s sys 34.39s)
patch total runs 15 avg 1.79 MB/s (user 6.55s sys 34.71s)
compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
read tree total runs 11 avg 3.30 MB/s (user 11.68s sys 24.50s)
read compiled tree total runs 4 avg 8.24 MB/s (user 13.67s sys 35.85s)
delete tree total runs 10 avg 42.12 seconds (user 6.76s sys 24.50s)
delete compiled tree total runs 4 avg 48.20 seconds (user 7.65s sys 28.60s)
stat tree total runs 11 avg 21.90 seconds (user 6.87s sys 6.34s)
stat compiled tree total runs 7 avg 21.23 seconds (user 7.65s sys 7.15s)

2.) kernel with enabled hot tracking

using working directory /mnt/scratch/, 30 initial dirs 100 runs
native unpatched native-0 222MB in 112.82 seconds (1.97 MB/s)
native patched native-0 109MB in 27.38 seconds (4.01 MB/s)
native patched compiled native-0 691MB in 40.42 seconds (17.11 MB/s)
create dir kernel-0 222MB in 92.88 seconds (2.39 MB/s)
create dir kernel-1 222MB in 98.56 seconds (2.26 MB/s)
create dir kernel-2 222MB in 107.27 seconds (2.07 MB/s)
create dir kernel-3 222MB in 92.81 seconds (2.40 MB/s)
create dir kernel-4 222MB in 90.30 seconds (2.46 MB/s)
create dir kernel-5 222MB in 91.57 seconds (2.43 MB/s)
create dir kernel-6 222MB in 91.92 seconds (2.42 MB/s)
create dir kernel-7 222MB in 90.16 seconds (2.47 MB/s)
create dir kernel-8 222MB in 94.71 seconds (2.35 MB/s)
create dir kernel-9 222MB in 91.79 seconds (2.42 MB/s)
create dir kernel-10 222MB in 92.14 seconds (2.41 MB/s)
create dir kernel-11 222MB in 88.59 seconds (2.51 MB/s)
create dir kernel-12 222MB in 92.15 seconds (2.41 MB/s)
create dir kernel-13 222MB in 91.54 seconds (2.43 MB/s)
create dir kernel-14 222MB in 91.15 seconds (2.44 MB/s)
create dir kernel-15 222MB in 90.54 seconds (2.46 MB/s)
create dir kernel-16 222MB in 92.23 seconds (2.41 MB/s)
create dir kernel-17 222MB in 89.88 seconds (2.47 MB/s)
create dir kernel-18 222MB in 94.65 seconds (2.35 MB/s)
create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)
create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)
create dir kernel-21 222MB in 90.92 seconds (2.45 MB/s)
create dir kernel-22 222MB in 90.76 seconds (2.45 MB/s)
create dir kernel-23 222MB in 90.04 seconds (2.47 MB/s)
create dir kernel-24 222MB in 89.60 seconds (2.48 MB/s)
create dir kernel-25 222MB in 91.52 seconds (2.43 MB/s)
create dir kernel-26 222MB in 90.45 seconds (2.46 MB/s)
create dir kernel-27 222MB in 92.72 seconds (2.40 MB/s)
create dir kernel-28 222MB in 90.37 seconds (2.46 MB/s)
create dir kernel-29 222MB in 89.60 seconds (2.48 MB/s)
=== sdb ===
  CPU  0:              8878754 events,   416192 KiB data
  Total:               8878754 events (dropped 0),   416192 KiB data
patch dir kernel-7 109MB in 61.00 seconds (1.80 MB/s)
compile dir kernel-7 691MB in 40.21 seconds (17.20 MB/s)
compile dir kernel-14 680MB in 45.97 seconds (14.81 MB/s)
patch dir kernel-14 691MB in 83.73 seconds (8.26 MB/s)
read dir kernel-7 in 88.66 10.37 MB/s
read dir kernel-10 in 56.44 3.94 MB/s
create dir kernel-3116 222MB in 91.58 seconds (2.43 MB/s)
clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)
read dir kernel-6 in 61.07 3.64 MB/s
stat dir kernel-2 in 21.42 seconds
compile dir kernel-2 680MB in 44.55 seconds (15.28 MB/s)
clean kernel-14 691MB in 6.98 seconds (99.08 MB/s)
clean kernel-2 680MB in 6.12 seconds (111.21 MB/s)
patch dir kernel-2 109MB in 73.95 seconds (1.48 MB/s)
stat dir kernel-2 in 18.61 seconds
create dir kernel-6231 222MB in 100.84 seconds (2.21 MB/s)
delete kernel-8 in 40.38 seconds
compile dir kernel-2 691MB in 42.18 seconds (16.40 MB/s)
create dir kernel-70151 222MB in 96.34 seconds (2.31 MB/s)
clean kernel-2 691MB in 4.54 seconds (152.33 MB/s)
create dir kernel-78184 222MB in 94.71 seconds (2.35 MB/s)
compile dir kernel-7 691MB in 43.64 seconds (15.85 MB/s)
create dir kernel-64250 222MB in 87.65 seconds (2.54 MB/s)
delete kernel-12 in 38.58 seconds
stat dir kernel-2 in 17.48 seconds
patch dir kernel-70151 109MB in 79.82 seconds (1.37 MB/s)
stat dir kernel-7 in 25.76 seconds
stat dir kernel-78184 in 20.30 seconds
compile dir kernel-2 691MB in 40.93 seconds (16.90 MB/s)
compile dir kernel-26 680MB in 48.86 seconds (13.93 MB/s)
stat dir kernel-7 in 23.87 seconds
create dir kernel-2717 222MB in 98.71 seconds (2.25 MB/s)
delete kernel-26 in 45.60 seconds
stat dir kernel-2 in 22.62 seconds
compile dir kernel-14 691MB in 51.12 seconds (13.53 MB/s)
compile dir kernel-70151 691MB in 41.40 seconds (16.71 MB/s)
create dir kernel-55376 222MB in 94.61 seconds (2.35 MB/s)
stat dir kernel-22 in 22.11 seconds
delete kernel-55376 in 36.47 seconds
patch dir kernel-27 109MB in 76.74 seconds (1.43 MB/s)
patch dir kernel-64250 109MB in 86.43 seconds (1.27 MB/s)
read dir kernel-6231 in 85.10 2.61 MB/s
patch dir kernel-9 109MB in 97.67 seconds (1.12 MB/s)
stat dir kernel-14 in 24.80 seconds
read dir kernel-29 in 61.00 3.65 MB/s
create dir kernel-57327 222MB in 101.42 seconds (2.19 MB/s)
stat dir kernel-14 in 22.45 seconds
compile dir kernel-27 691MB in 48.19 seconds (14.35 MB/s)
create dir kernel-64334 222MB in 96.65 seconds (2.30 MB/s)
patch dir kernel-1 109MB in 88.32 seconds (1.24 MB/s)
create dir kernel-16056 222MB in 100.60 seconds (2.21 MB/s)
clean kernel-7 691MB in 8.20 seconds (84.34 MB/s)
delete kernel-27 in 48.53 seconds
create dir kernel-51614 222MB in 98.07 seconds (2.27 MB/s)
clean kernel-14 691MB in 6.82 seconds (101.41 MB/s)
delete kernel-64250 in 44.01 seconds
stat dir kernel-2 in 26.37 seconds
clean kernel-70151 691MB in 6.21 seconds (111.37 MB/s)
delete kernel-14 in 41.74 seconds
read dir kernel-2 in 122.71 7.50 MB/s
create dir kernel-24150 222MB in 99.01 seconds (2.25 MB/s)
read dir kernel-9 in 78.29 2.91 MB/s
patch dir kernel-19 109MB in 77.45 seconds (1.42 MB/s)
clean kernel-2 691MB in 5.94 seconds (116.43 MB/s)
compile dir kernel-18 680MB in 49.17 seconds (13.84 MB/s)
compile dir kernel-2 691MB in 47.20 seconds (14.65 MB/s)
delete kernel-2 in 48.01 seconds
delete kernel-70151 in 47.60 seconds
stat dir kernel-1 in 21.80 seconds
read dir kernel-18 in 109.98 8.21 MB/s
clean kernel-18 680MB in 7.78 seconds (87.49 MB/s)
compile dir kernel-17 680MB in 54.39 seconds (12.51 MB/s)
read dir kernel-17 in 108.52 8.32 MB/s
stat dir kernel-18 in 19.48 seconds
stat dir kernel-64334 in 22.04 seconds
delete kernel-24150 in 44.36 seconds
delete kernel-17 in 49.09 seconds
stat dir kernel-1 in 18.16 seconds
compile dir kernel-7 691MB in 48.90 seconds (14.14 MB/s)
patch dir kernel-16 109MB in 103.71 seconds (1.06 MB/s)
stat dir kernel-7 in 21.94 seconds
create dir kernel-82195 222MB in 110.82 seconds (2.01 MB/s)
delete kernel-82195 in 38.64 seconds
stat dir kernel-3 in 22.88 seconds
patch dir kernel-2717 109MB in 92.23 seconds (1.19 MB/s)
patch dir kernel-5 109MB in 64.95 seconds (1.69 MB/s)
read dir kernel-2717 in 97.88 2.33 MB/s
delete kernel-29 in 40.59 seconds
clean kernel-7 691MB in 5.09 seconds (135.87 MB/s)
read dir kernel-4 in 59.42 3.74 MB/s
stat dir kernel-78184 in 20.24 seconds
patch dir kernel-0 109MB in 95.95 seconds (1.14 MB/s)
patch dir kernel-3 109MB in 62.86 seconds (1.74 MB/s)
create dir kernel-30226 222MB in 106.81 seconds (2.08 MB/s)
read dir kernel-19 in 81.32 2.81 MB/s
read dir kernel-9 in 74.65 3.06 MB/s
delete kernel-5 in 42.04 seconds
read dir kernel-7 in 61.95 3.68 MB/s
patch dir kernel-57327 109MB in 97.85 seconds (1.12 MB/s)
read dir kernel-11 in 58.85 3.78 MB/s

run complete:
==========================================================================
initial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
create total runs 14 avg 2.27 MB/s (user 13.66s sys 36.94s)
patch total runs 15 avg 1.82 MB/s (user 6.62s sys 36.93s)
compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
read tree total runs 11 avg 3.29 MB/s (user 11.04s sys 28.65s)
read compiled tree total runs 4 avg 8.60 MB/s (user 13.16s sys 41.32s)
delete tree total runs 10 avg 41.44 seconds (user 6.43s sys 25.19s)
delete compiled tree total runs 4 avg 47.81 seconds (user 7.18s sys 29.27s)
stat tree total runs 11 avg 20.41 seconds (user 6.39s sys 7.45s)
stat compiled tree total runs 7 avg 23.97 seconds (user 7.24s sys 8.74s)
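The two compilebench "run complete" summaries can be condensed into per-phase deltas. The tuples below are the averages transcribed from the two summaries (original kernel first, hot-tracking kernel second); throughput rows are MB/s (higher is better) and the delete/stat rows are seconds (lower is better):

```python
# Averages from the two compilebench summaries above.
throughput = {  # MB/s: higher is better -- (original, hot tracking)
    "initial create": (2.55, 2.42), "create": (2.41, 2.27),
    "patch": (1.79, 1.82), "compile": (16.04, 15.01),
    "clean": (113.53, 110.29), "read tree": (3.30, 3.29),
    "read compiled tree": (8.24, 8.60),
}
latency = {  # seconds: lower is better -- (original, hot tracking)
    "delete tree": (42.12, 41.44), "delete compiled tree": (48.20, 47.81),
    "stat tree": (21.90, 20.41), "stat compiled tree": (21.23, 23.97),
}

for name, (orig, hot) in {**throughput, **latency}.items():
    change = (hot - orig) / orig * 100
    print(f"{name:22s} {orig:8.2f} -> {hot:8.2f} ({change:+.1f}%)")
```

Throughput drops a few percent in the write-heavy phases (about -5% on initial create, -6% on compile) while patch and read-compiled-tree come out slightly ahead; given single-run variance on a loop device, this is consistent with the "no obvious performance degradation" conclusion rather than a measured cost.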

On Fri, 2012-11-16 at 17:51 +0800, zwu.kernel@gmail.com wrote:
> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> 
> Hi, guys,
> 
>   Any comments or ideas are appreciated, thanks.
> 
> NOTE:
> 
>   The patchset can be obtained via my kernel dev git on github:
> git://github.com/wuzhy/kernel.git hot_tracking
>   If you're interested, you can also review them via
> https://github.com/wuzhy/kernel/commits/hot_tracking
> 
>   For more info, please check hot_tracking.txt in Documentation
> 
> TODO List:
> 
>  1.) Need to do scalability or performance tests. - Required
>  2.) Need a simpler but efficient temperature calculation function
>  3.) How to save the file temperature across umount so that it can be
>      preserved after reboot - Optional
> 
> Changelog:
> 
>  - Solved 64-bit inode number issue. [David Sterba]
>  - Embedded struct hot_type in struct file_system_type [Darrick J. Wong]
>  - Cleaned up some issues [David Sterba]
>  - Used a static hot debugfs root [Greg KH]
>  - Rewrote debugfs support based on seq_file operations. [Dave Chinner]
>  - Refactored workqueue support. [Dave Chinner]
>  - Turned some macros into tunables [Zhiyong, Zheng Liu]
>        (TIME_TO_KICK and HEAT_UPDATE_DELAY)
>  - Introduced hot func registering framework [Zhiyong]
>  - Removed global variable for hot tracking [Zhiyong]
>  - Added xfs hot tracking support [Dave Chinner]
>  - Added ext4 hot tracking support [Zheng Liu]
>  - Cleaned up a lot of other issues [Dave Chinner]
>  - Added memory shrinker [Dave Chinner]
>  - Converted to one workqueue to update map info periodically [Dave Chinner]
>  - Cleaned up a lot of other issues [Dave Chinner]
>  - Reduced new files and put everything in fs/hot_tracking.[ch] [Dave Chinner]
>  - Add btrfs hot tracking support [Zhiyong]
>  - The first three patches can probably just be flattened into one.
>                                         [Marco Stornelli, Dave Chinner]
> 
> Zhi Yong Wu (16):
>   vfs: introduce some data structures
>   vfs: add init and cleanup functions
>   vfs: add I/O frequency update function
>   vfs: add two map arrays
>   vfs: add hooks to enable hot tracking
>   vfs: add temp calculation function
>   vfs: add map info update function
>   vfs: add aging function
>   vfs: add one work queue
>   vfs: add FS hot type support
>   vfs: register one shrinker
>   vfs: add one ioctl interface
>   vfs: add debugfs support
>   proc: add two hot_track proc files
>   btrfs: add hot tracking support
>   vfs: add documentation
> 
>  Documentation/filesystems/00-INDEX         |    2 +
>  Documentation/filesystems/hot_tracking.txt |  263 ++++++
>  fs/Makefile                                |    2 +-
>  fs/btrfs/ctree.h                           |    1 +
>  fs/btrfs/super.c                           |   22 +-
>  fs/compat_ioctl.c                          |    5 +
>  fs/dcache.c                                |    2 +
>  fs/direct-io.c                             |    6 +
>  fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
>  fs/hot_tracking.h                          |   52 ++
>  fs/ioctl.c                                 |   74 ++
>  include/linux/fs.h                         |    5 +
>  include/linux/hot_tracking.h               |  152 ++++
>  kernel/sysctl.c                            |   14 +
>  mm/filemap.c                               |    6 +
>  mm/page-writeback.c                        |   12 +
>  mm/readahead.c                             |    7 +
>  17 files changed, 1929 insertions(+), 2 deletions(-)
>  create mode 100644 Documentation/filesystems/hot_tracking.txt
>  create mode 100644 fs/hot_tracking.c
>  create mode 100644 fs/hot_tracking.h
>  create mode 100644 include/linux/hot_tracking.h
> 

-- 
Regards,

Zhi Yong Wu


* Re: [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
  2012-12-06  3:28 ` [PATCH v1 resend hot_track 00/16] vfs: hot data tracking Zhi Yong Wu
@ 2012-12-10  3:30   ` Zhi Yong Wu
  2012-12-12 19:50     ` Darrick J. Wong
  0 siblings, 1 reply; 22+ messages in thread
From: Zhi Yong Wu @ 2012-12-10  3:30 UTC (permalink / raw)
  To: wuzhy
  Cc: linux-fsdevel, linux-kernel, viro, linuxram, david, swhiteho,
	dave, darrick.wong, andi, northrup.james

Hi, all,

Any comments or suggestions?

On Thu, Dec 6, 2012 at 11:28 AM, Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> wrote:
> Hi, guys,
>
> The performance testing was done separately with fs_mark, fio, FFSB and
> compilebench in one KVM guest.
>
> Below is the performance testing report for hot tracking; no obvious
> performance degradation was found.
>
> Note: "original kernel" means an unmodified kernel; "kernel with enabled
> hot tracking" means a kernel with the hot tracking patchset applied.
>
> The test env is set up as below:
>
> root@debian-i386:/home/zwu# uname -a
> Linux debian-i386 3.7.0-rc8+ #266 SMP Tue Dec 4 12:17:55 CST 2012 x86_64 GNU/Linux
>
> root@debian-i386:/home/zwu# mkfs.xfs -f -l size=1310b,sunit=8 /home/zwu/bdev.img
> meta-data=/home/zwu/bdev.img     isize=256    agcount=4, agsize=128000 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=512000, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=1310, version=2
>          =                       sectsz=512   sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> 1.) original kernel
>
> root@debian-i386:/home/zwu# mount -o loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> [ 1197.421616] XFS (loop0): Mounting Filesystem
> [ 1197.567399] XFS (loop0): Ending clean mount
> root@debian-i386:/home/zwu# mount
> /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> proc on /proc type proc (rw,noexec,nosuid,nodev)
> sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> udev on /dev type tmpfs (rw,mode=0755)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> none on /selinux type selinuxfs (rw,relatime)
> debugfs on /sys/kernel/debug type debugfs (rw)
> binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
> /dev/loop0 on /mnt/scratch type xfs (rw,logbsize=256k)
> root@debian-i386:/home/zwu# free -m
>              total       used       free     shared    buffers     cached
> Mem:           112        109          2          0          4         53
> -/+ buffers/cache:         51         60
> Swap:          713         29        684
>
> 2.) kernel with enabled hot tracking
>
> root@debian-i386:/home/zwu# mount -o hot_track,loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> [  364.648470] XFS (loop0): Mounting Filesystem
> [  364.910035] XFS (loop0): Ending clean mount
> [  364.921063] VFS: Turning on hot data tracking
> root@debian-i386:/home/zwu# mount
> /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> proc on /proc type proc (rw,noexec,nosuid,nodev)
> sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> udev on /dev type tmpfs (rw,mode=0755)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> none on /selinux type selinuxfs (rw,relatime)
> binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
> /dev/loop0 on /mnt/scratch type xfs (rw,hot_track,logbsize=256k)
> root@debian-i386:/home/zwu# free -m
>              total       used       free     shared    buffers     cached
> Mem:           112        107          4          0          2         34
> -/+ buffers/cache:         70         41
> Swap:          713          2        711
>
> 1. fs_mark test
>
> 1.) original kernel
>
> #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0  -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3  -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6  -d  /mnt/scratch/7
> #       Version 3.3, 8 thread(s) starting at Wed Dec  5 03:20:58 2012
> #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> #       Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
> #       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
> #       Files info: size 1 bytes, written with an IO size of 16384 bytes per write
> #       App overhead is time in microseconds spent in the test not doing file writing related system calls.
>
> FSUse%        Count         Size    Files/sec     App Overhead
>      2         8000            1        375.6         27175895
>      3        16000            1        375.6         27478079
>      4        24000            1        346.0         27819607
>      4        32000            1        316.9         25863385
>      5        40000            1        335.2         25460605
>      6        48000            1        312.3         25889196
>      7        56000            1        327.3         25000611
>      8        64000            1        304.4         28126698
>      9        72000            1        361.7         26652172
>      9        80000            1        370.1         27075875
>     10        88000            1        347.8         31093106
>     11        96000            1        387.1         26877324
>     12       104000            1        352.3         26635853
>     13       112000            1        379.3         26400198
>     14       120000            1        367.4         27228178
>     14       128000            1        359.2         27627871
>     15       136000            1        358.4         27089821
>     16       144000            1        385.5         27804852
>     17       152000            1        322.9         26221907
>     18       160000            1        393.2         26760040
>     18       168000            1        351.9         29210327
>     20       176000            1        395.2         24610548
>     20       184000            1        376.7         27518650
>     21       192000            1        340.1         27512874
>     22       200000            1        389.0         27109104
>     23       208000            1        389.7         29288594
>     24       216000            1        352.6         29948820
>     25       224000            1        380.4         26370958
>     26       232000            1        332.9         27770518
>     26       240000            1        333.6         25176691
>
> 2.) kernel with enabled hot tracking
>
> #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0  -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3  -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6  -d  /mnt/scratch/7
> #       Version 3.3, 8 thread(s) starting at Tue Dec  4 04:28:48 2012
> #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> #       Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
> #       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
> #       Files info: size 1 bytes, written with an IO size of 16384 bytes per write
> #       App overhead is time in microseconds spent in the test not doing file writing related system calls.
>
> FSUse%        Count         Size    Files/sec     App Overhead
>      4         8000            1        323.0         25104879
>      6        16000            1        351.4         25372919
>      8        24000            1        345.9         24107987
>      9        32000            1        313.2         26249533
>     10        40000            1        323.0         20312267
>     12        48000            1        303.2         22178040
>     14        56000            1        307.6         22775058
>     15        64000            1        317.9         25178845
>     17        72000            1        351.8         22020260
>     19        80000            1        369.3         23546708
>     21        88000            1        324.1         29068297
>     22        96000            1        355.3         25212333
>     24       104000            1        346.4         26622613
>     26       112000            1        360.4         25477193
>     28       120000            1        362.9         21774508
>     29       128000            1        329.0         25760109
>     31       136000            1        369.5         24540577
>     32       144000            1        330.2         26013559
>     34       152000            1        365.5         25643279
>     36       160000            1        366.2         24393130
>     38       168000            1        348.3         25248940
>     39       176000            1        357.3         24080574
>     40       184000            1        316.8         23011921
>     43       192000            1        351.7         27468060
>     44       200000            1        362.2         27540349
>     46       208000            1        340.9         26135445
>     48       216000            1        339.2         20926743
>     50       224000            1        316.5         21399871
>     52       232000            1        346.3         24669604
>     53       240000            1        320.5         22204449
>
>
> 2. FFSB test
>
> 1.) original kernel
>
> FFSB version 6.0-RC2 started
>
> benchmark time = 10
> ThreadGroup 0
> ================
>          num_threads      = 4
>
>          read_random      = off
>          read_size        = 40960       (40KB)
>          read_blocksize   = 4096        (4KB)
>          read_skip        = off
>          read_skipsize    = 0   (0B)
>
>          write_random     = off
>          write_size       = 40960       (40KB)
>          fsync_file       = 0
>          write_blocksize  = 4096        (4KB)
>          wait time        = 0
>
>          op weights
>                          read = 0 (0.00%)
>                       readall = 1 (10.00%)
>                         write = 0 (0.00%)
>                        create = 1 (10.00%)
>                        append = 1 (10.00%)
>                        delete = 1 (10.00%)
>                        metaop = 0 (0.00%)
>                     createdir = 0 (0.00%)
>                          stat = 1 (10.00%)
>                      writeall = 1 (10.00%)
>                writeall_fsync = 1 (10.00%)
>                    open_close = 1 (10.00%)
>                   write_fsync = 0 (0.00%)
>                  create_fsync = 1 (10.00%)
>                  append_fsync = 1 (10.00%)
>
> FileSystem /mnt/scratch/test1
> ==========
>          num_dirs         = 100
>          starting files   = 0
>
>          Fileset weight:
>                      33554432 (  32MB) -> 1 (1.00%)
>                       8388608 (   8MB) -> 2 (2.00%)
>                        524288 ( 512KB) -> 3 (3.00%)
>                        262144 ( 256KB) -> 4 (4.00%)
>                        131072 ( 128KB) -> 5 (5.00%)
>                         65536 (  64KB) -> 8 (8.00%)
>                         32768 (  32KB) -> 10 (10.00%)
>                         16384 (  16KB) -> 13 (13.00%)
>                          8192 (   8KB) -> 21 (21.00%)
>                          4096 (   4KB) -> 33 (33.00%)
>          directio         = off
>          alignedio        = off
>          bufferedio       = off
>
>          aging is off
>          current utilization = 26.19%
>
> creating new fileset /mnt/scratch/test1
> fs setup took 87 secs
> Syncing()...1 sec
> Starting Actual Benchmark At: Wed Dec  5 03:38:06 2012
>
> Syncing()...0 sec
> FFSB benchmark finished   at: Wed Dec  5 03:38:18 2012
>
> Results:
> Benchmark took 11.44 sec
>
> Total Results
> ===============
>              Op Name   Transactions   Trans/sec   % Trans   % Op Weight   Throughput
>              =======   ============   =========   =======   ===========   ==========
>              readall :           93        8.13    0.880%       21.053%   32.5KB/sec
>               create :           20        1.75    0.189%        5.263%   6.99KB/sec
>               append :           10        0.87    0.095%        2.632%   3.5KB/sec
>               delete :            4        0.35    0.038%       10.526%   NA
>                 stat :            3        0.26    0.028%        7.895%   NA
>             writeall :         2178      190.39   20.600%       10.526%   762KB/sec
>       writeall_fsync :            5        0.44    0.047%        5.263%   1.75KB/sec
>           open_close :            6        0.52    0.057%       15.789%   NA
>         create_fsync :         8234      719.78   77.878%       15.789%   2.81MB/sec
>         append_fsync :           20        1.75    0.189%        5.263%   6.99KB/sec
> -
> 924.24 Transactions per Second
>
> Throughput Results
> ===================
> Read Throughput: 32.5KB/sec
> Write Throughput: 3.57MB/sec
>
> System Call Latency statistics in millisecs
> =====
>                 Min             Avg             Max             Total Calls
>                 ========        ========        ========        ============
> [   open]       0.050000        3.980161        41.840000                 31
>    -
> [   read]       0.017000        71.442215       1286.122000               93
>    -
> [  write]       0.052000        1.034817        2201.956000            10467
>    -
> [ unlink]       1.118000        185.398750      730.807000                 4
>    -
> [  close]       0.019000        1.968968        39.679000                 31
>    -
> [   stat]       0.043000        2.173667        6.428000                   3
>    -
>
> 0.8% User   Time
> 9.2% System Time
> 10.0% CPU Utilization
>
> 2.) kernel with enabled hot tracking
>
> FFSB version 6.0-RC2 started
>
> benchmark time = 10
> ThreadGroup 0
> ================
>          num_threads      = 4
>
>          read_random      = off
>          read_size        = 40960       (40KB)
>          read_blocksize   = 4096        (4KB)
>          read_skip        = off
>          read_skipsize    = 0   (0B)
>
>          write_random     = off
>          write_size       = 40960       (40KB)
>          fsync_file       = 0
>          write_blocksize  = 4096        (4KB)
>          wait time        = 0
>
>          op weights
>                          read = 0 (0.00%)
>                       readall = 1 (10.00%)
>                         write = 0 (0.00%)
>                        create = 1 (10.00%)
>                        append = 1 (10.00%)
>                        delete = 1 (10.00%)
>                        metaop = 0 (0.00%)
>                     createdir = 0 (0.00%)
>                          stat = 1 (10.00%)
>                      writeall = 1 (10.00%)
>                writeall_fsync = 1 (10.00%)
>                    open_close = 1 (10.00%)
>                   write_fsync = 0 (0.00%)
>                  create_fsync = 1 (10.00%)
>                  append_fsync = 1 (10.00%)
>
> FileSystem /mnt/scratch/test1
> ==========
>          num_dirs         = 100
>          starting files   = 0
>
>          Fileset weight:
>                      33554432 (  32MB) -> 1 (1.00%)
>                       8388608 (   8MB) -> 2 (2.00%)
>                        524288 ( 512KB) -> 3 (3.00%)
>                        262144 ( 256KB) -> 4 (4.00%)
>                        131072 ( 128KB) -> 5 (5.00%)
>                         65536 (  64KB) -> 8 (8.00%)
>                         32768 (  32KB) -> 10 (10.00%)
>                         16384 (  16KB) -> 13 (13.00%)
>                          8192 (   8KB) -> 21 (21.00%)
>                          4096 (   4KB) -> 33 (33.00%)
>          directio         = off
>          alignedio        = off
>          bufferedio       = off
>
>          aging is off
>          current utilization = 52.46%
>
> creating new fileset /mnt/scratch/test1
> fs setup took 42 secs
> Syncing()...1 sec
> Starting Actual Benchmark At: Tue Dec  4 06:41:54 2012
>
> Syncing()...0 sec
> FFSB benchmark finished   at: Tue Dec  4 06:42:53 2012
>
> Results:
> Benchmark took 59.42 sec
>
> Total Results
> ===============
>              Op Name   Transactions   Trans/sec   % Trans   % Op Weight   Throughput
>              =======   ============   =========   =======   ===========   ==========
>              readall :        10510      176.87   54.808%       10.959%   707KB/sec
>               create :           48        0.81    0.250%        9.589%   3.23KB/sec
>               append :          100        1.68    0.521%       13.699%   6.73KB/sec
>               delete :            5        0.08    0.026%        6.849%   NA
>                 stat :            5        0.08    0.026%        6.849%   NA
>             writeall :          130        2.19    0.678%       12.329%   8.75KB/sec
>       writeall_fsync :           19        0.32    0.099%        8.219%   1.28KB/sec
>           open_close :            9        0.15    0.047%       12.329%   NA
>         create_fsync :         8300      139.67   43.283%       12.329%   559KB/sec
>         append_fsync :           50        0.84    0.261%        6.849%   3.37KB/sec
> -
> 322.70 Transactions per Second
>
> Throughput Results
> ===================
> Read Throughput: 707KB/sec
> Write Throughput: 582KB/sec
>
> System Call Latency statistics in millisecs
> =====
>                 Min             Avg             Max             Total Calls
>                 ========        ========        ========        ============
> [   open]       0.061000        0.750540        10.721000                 63
>    -
> [   read]       0.017000        11.058425       28555.394000           10510
>    -
> [  write]       0.034000        6.705286        26812.076000            8647
>    -
> [ unlink]       0.922000        7.679800        25.364000                  5
>    -
> [  close]       0.019000        0.996635        34.723000                 63
>    -
> [   stat]       0.046000        0.942800        4.489000                   5
>    -
>
> 0.2% User   Time
> 2.6% System Time
> 2.8% CPU Utilization
>
>
> 3. fio test
>
> 1.) original kernel
>
> seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> Starting 16 threads
>
> seq-read: (groupid=0, jobs=4): err= 0: pid=1646
>   read : io=2,835MB, bw=24,192KB/s, iops=3,023, runt=120021msec
>     slat (usec): min=0, max=999K, avg=1202.67, stdev=3145.84
>     clat (usec): min=0, max=1,536K, avg=9186.07, stdev=11344.56
>     bw (KB/s) : min=   39, max=21301, per=26.11%, avg=6315.41, stdev=1082.63
>   cpu          : usr=10.89%, sys=33.14%, ctx=1488108, majf=13, minf=2238
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=362940/0, short=0/0
>      lat (usec): 2=3.53%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
>      lat (usec): 750=0.03%, 1000=0.03%
>      lat (msec): 2=1.75%, 4=1.08%, 10=68.93%, 20=22.39%, 50=2.02%
>      lat (msec): 100=0.16%, 250=0.04%, 1000=0.01%, 2000=0.03%
> seq-write: (groupid=1, jobs=4): err= 0: pid=1646
>   write: io=1,721MB, bw=14,652KB/s, iops=1,831, runt=120277msec
>     slat (usec): min=0, max=1,004K, avg=1744.41, stdev=3144.06
>     clat (usec): min=0, max=1,014K, avg=15699.65, stdev=19751.69
>     bw (KB/s) : min=  285, max=18032, per=26.41%, avg=3869.67, stdev=762.96
>   cpu          : usr=6.29%, sys=22.61%, ctx=880380, majf=36, minf=3222
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=0/220282, short=0/0
>      lat (usec): 2=2.43%, 500=0.01%, 750=0.12%, 1000=0.14%
>      lat (msec): 2=0.86%, 4=1.72%, 10=39.03%, 20=42.20%, 50=11.87%
>      lat (msec): 100=1.15%, 250=0.17%, 500=0.06%, 750=0.14%, 1000=0.09%
>      lat (msec): 2000=0.02%
> rnd-read: (groupid=2, jobs=4): err= 0: pid=1646
>   read : io=65,128KB, bw=541KB/s, iops=67, runt=120381msec
>     slat (usec): min=48, max=55,230, avg=167.95, stdev=248.50
>     clat (msec): min=74, max=4,229, avg=472.23, stdev=129.50
>     bw (KB/s) : min=    0, max=  203, per=25.34%, avg=137.08, stdev=21.73
>   cpu          : usr=0.85%, sys=2.19%, ctx=44001, majf=30, minf=3726
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=8141/0, short=0/0
>
>      lat (msec): 100=0.04%, 250=0.09%, 500=81.72%, 750=13.09%, 1000=2.97%
>      lat (msec): 2000=1.50%, >=2000=0.59%
> rnd-write: (groupid=3, jobs=4): err= 0: pid=1646
>   write: io=200MB, bw=1,698KB/s, iops=212, runt=120331msec
>     slat (usec): min=48, max=215K, avg=2272.24, stdev=2283.09
>     clat (usec): min=762, max=14,617K, avg=147521.66, stdev=444146.36
>     bw (KB/s) : min=    1, max= 3960, per=56.86%, avg=964.90, stdev=514.63
>   cpu          : usr=1.25%, sys=4.20%, ctx=135229, majf=0, minf=10194
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=0/25536, short=0/0
>      lat (usec): 1000=0.26%
>      lat (msec): 2=0.13%, 4=2.01%, 10=3.77%, 20=42.78%, 50=20.95%
>      lat (msec): 100=12.83%, 250=12.50%, 500=2.49%, 750=0.33%, 1000=0.12%
>      lat (msec): 2000=0.53%, >=2000=1.30%
>
> Run status group 0 (all jobs):
>    READ: io=2,835MB, aggrb=24,191KB/s, minb=24,772KB/s, maxb=24,772KB/s, mint=120021msec, maxt=120021msec
>
> Run status group 1 (all jobs):
>   WRITE: io=1,721MB, aggrb=14,651KB/s, minb=15,003KB/s, maxb=15,003KB/s, mint=120277msec, maxt=120277msec
>
> Run status group 2 (all jobs):
>    READ: io=65,128KB, aggrb=541KB/s, minb=553KB/s, maxb=553KB/s, mint=120381msec, maxt=120381msec
>
> Run status group 3 (all jobs):
>   WRITE: io=200MB, aggrb=1,697KB/s, minb=1,738KB/s, maxb=1,738KB/s, mint=120331msec, maxt=120331msec
>
> Disk stats (read/write):
>   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
>
> 2.) kernel with enabled hot tracking
>
> seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> ...
> rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> Starting 16 threads
>
> seq-read: (groupid=0, jobs=4): err= 0: pid=2163
>   read : io=3,047MB, bw=26,001KB/s, iops=3,250, runt=120003msec
>     slat (usec): min=0, max=1,000K, avg=1141.34, stdev=2175.25
>     clat (usec): min=0, max=1,002K, avg=8610.96, stdev=6184.67
>     bw (KB/s) : min=   12, max=18896, per=25.28%, avg=6572.50, stdev=713.22
>   cpu          : usr=10.38%, sys=35.02%, ctx=1601418, majf=12, minf=2235
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=390029/0, short=0/0
>      lat (usec): 2=1.49%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
>      lat (usec): 750=0.01%, 1000=0.02%
>      lat (msec): 2=1.53%, 4=0.86%, 10=79.60%, 20=14.93%, 50=1.43%
>      lat (msec): 100=0.09%, 250=0.02%, 500=0.01%, 1000=0.01%, 2000=0.01%
> seq-write: (groupid=1, jobs=4): err= 0: pid=2163
>   write: io=1,752MB, bw=14,950KB/s, iops=1,868, runt=120003msec
>     slat (usec): min=0, max=1,002K, avg=1697.47, stdev=3568.70
>     clat (usec): min=0, max=1,019K, avg=15630.94, stdev=21109.46
>     bw (KB/s) : min=  123, max=14693, per=26.31%, avg=3933.46, stdev=779.57
>   cpu          : usr=6.31%, sys=21.85%, ctx=894177, majf=4, minf=3407
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=0/224253, short=0/0
>      lat (usec): 2=2.44%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.06%
>      lat (usec): 1000=0.23%
>      lat (msec): 2=0.73%, 4=2.00%, 10=40.15%, 20=42.68%, 50=10.25%
>      lat (msec): 100=0.95%, 250=0.14%, 500=0.10%, 750=0.12%, 1000=0.11%
>      lat (msec): 2000=0.03%
> rnd-read: (groupid=2, jobs=4): err= 0: pid=2163
>   read : io=85,208KB, bw=709KB/s, iops=88, runt=120252msec
>     slat (usec): min=52, max=48,325, avg=204.43, stdev=596.50
>     clat (msec): min=1, max=2,754, avg=359.99, stdev=78.96
>     bw (KB/s) : min=    0, max=  249, per=25.17%, avg=178.20, stdev=23.79
>   cpu          : usr=1.00%, sys=2.64%, ctx=55704, majf=28, minf=2971
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=10651/0, short=0/0
>
>      lat (msec): 2=0.01%, 50=0.02%, 100=0.04%, 250=1.61%, 500=92.60%
>      lat (msec): 750=4.24%, 1000=0.68%, 2000=0.59%, >=2000=0.22%
> rnd-write: (groupid=3, jobs=4): err= 0: pid=2163
>   write: io=247MB, bw=2,019KB/s, iops=252, runt=125287msec
>     slat (usec): min=51, max=286K, avg=2576.23, stdev=2882.30
>     clat (usec): min=698, max=8,156K, avg=123274.05, stdev=355311.20
>     bw (KB/s) : min=    1, max= 4848, per=57.62%, avg=1162.77, stdev=560.79
>   cpu          : usr=1.33%, sys=4.24%, ctx=163334, majf=0, minf=8588
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w: total=0/31616, short=0/0
>      lat (usec): 750=0.03%, 1000=0.15%
>      lat (msec): 2=0.06%, 4=2.15%, 10=3.57%, 20=48.23%, 50=22.43%
>      lat (msec): 100=11.48%, 250=9.14%, 500=1.04%, 750=0.16%, 1000=0.05%
>      lat (msec): 2000=0.09%, >=2000=1.42%
>
> Run status group 0 (all jobs):
>    READ: io=3,047MB, aggrb=26,001KB/s, minb=26,625KB/s, maxb=26,625KB/s, mint=120003msec, maxt=120003msec
>
> Run status group 1 (all jobs):
>   WRITE: io=1,752MB, aggrb=14,949KB/s, minb=15,308KB/s, maxb=15,308KB/s, mint=120003msec, maxt=120003msec
>
> Run status group 2 (all jobs):
>    READ: io=85,208KB, aggrb=708KB/s, minb=725KB/s, maxb=725KB/s, mint=120252msec, maxt=120252msec
>
> Run status group 3 (all jobs):
>   WRITE: io=247MB, aggrb=2,018KB/s, minb=2,067KB/s, maxb=2,067KB/s, mint=125287msec, maxt=125287msec
>
> Disk stats (read/write):
>   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
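For readers wanting to reproduce the fio numbers above: the logged parameters (rw=read/write/randread/randwrite, bs=8K, ioengine=libaio, iodepth=8, 4 jobs per group, 16 threads total, ~120 s per group) are consistent with a job file roughly like the sketch below. The `directory`, `size`, and exact `runtime` values are assumptions, not taken from the log.

```ini
; Approximate reconstruction of the fio job file behind the results above.
; directory= and size= are guesses; ioengine/iodepth/bs/numjobs come from the log.
[global]
ioengine=libaio
iodepth=8
bs=8k
numjobs=4
thread
directory=/mnt/scratch
size=1g
runtime=120
time_based
group_reporting

[seq-read]
rw=read
stonewall

[seq-write]
rw=write
stonewall

[rnd-read]
rw=randread
stonewall

[rnd-write]
rw=randwrite
stonewall
```

The `stonewall` directives make each job a separate group run back-to-back, matching the four "Run status group" summaries in the output.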
>
>
> 4. compilebench test
>
> 1.) original kernel
>
> using working directory /mnt/scratch/, 30 intial dirs 100 runs
>
> native unpatched native-0 222MB in 87.48 seconds (2.54 MB/s)
> native patched native-0 109MB in 24.89 seconds (4.41 MB/s)
> native patched compiled native-0 691MB in 35.54 seconds (19.46 MB/s)
> create dir kernel-0 222MB in 91.22 seconds (2.44 MB/s)
> create dir kernel-1 222MB in 91.55 seconds (2.43 MB/s)
> create dir kernel-2 222MB in 97.00 seconds (2.29 MB/s)
> create dir kernel-3 222MB in 87.64 seconds (2.54 MB/s)
> create dir kernel-4 222MB in 86.18 seconds (2.58 MB/s)
> create dir kernel-5 222MB in 84.68 seconds (2.63 MB/s)
> create dir kernel-6 222MB in 85.02 seconds (2.62 MB/s)
> create dir kernel-7 222MB in 87.74 seconds (2.53 MB/s)
> create dir kernel-8 222MB in 86.79 seconds (2.56 MB/s)
> create dir kernel-9 222MB in 87.85 seconds (2.53 MB/s)
> create dir kernel-10 222MB in 86.88 seconds (2.56 MB/s)
> create dir kernel-11 222MB in 86.47 seconds (2.57 MB/s)
> create dir kernel-12 222MB in 84.26 seconds (2.64 MB/s)
> create dir kernel-13 222MB in 87.40 seconds (2.54 MB/s)
> create dir kernel-14 222MB in 85.44 seconds (2.60 MB/s)
> create dir kernel-15 222MB in 86.80 seconds (2.56 MB/s)
> create dir kernel-16 222MB in 88.57 seconds (2.51 MB/s)
> create dir kernel-17 222MB in 85.66 seconds (2.60 MB/s)
> create dir kernel-18 222MB in 87.40 seconds (2.54 MB/s)
> create dir kernel-19 222MB in 85.47 seconds (2.60 MB/s)
> create dir kernel-20 222MB in 89.29 seconds (2.49 MB/s)
> create dir kernel-21 222MB in 88.53 seconds (2.51 MB/s)
> create dir kernel-22 222MB in 86.25 seconds (2.58 MB/s)
> create dir kernel-23 222MB in 85.34 seconds (2.61 MB/s)
> create dir kernel-24 222MB in 84.61 seconds (2.63 MB/s)
> create dir kernel-25 222MB in 88.13 seconds (2.52 MB/s)
> create dir kernel-26 222MB in 85.57 seconds (2.60 MB/s)
> create dir kernel-27 222MB in 87.26 seconds (2.55 MB/s)
> create dir kernel-28 222MB in 83.68 seconds (2.66 MB/s)
> create dir kernel-29 222MB in 86.33 seconds (2.58 MB/s)
> === sdb ===
>   CPU  0:              9366376 events,   439049 KiB data
>   Total:               9366376 events (dropped 0),   439049 KiB data
> patch dir kernel-7 109MB in 55.00 seconds (1.99 MB/s)
> compile dir kernel-7 691MB in 37.15 seconds (18.62 MB/s)
> compile dir kernel-14 680MB in 38.48 seconds (17.69 MB/s)
> patch dir kernel-14 691MB in 93.31 seconds (7.41 MB/s)
> read dir kernel-7 in 93.36 9.85 MB/s
> read dir kernel-10 in 58.25 3.82 MB/s
> create dir kernel-3116 222MB in 91.96 seconds (2.42 MB/s)
> clean kernel-7 691MB in 5.16 seconds (134.03 MB/s)
> read dir kernel-6 in 56.98 3.90 MB/s
> stat dir kernel-2 in 19.42 seconds
> compile dir kernel-2 680MB in 43.11 seconds (15.79 MB/s)
> clean kernel-14 691MB in 6.27 seconds (110.30 MB/s)
> clean kernel-2 680MB in 5.79 seconds (117.55 MB/s)
> patch dir kernel-2 109MB in 71.22 seconds (1.54 MB/s)
> stat dir kernel-2 in 16.06 seconds
> create dir kernel-6231 222MB in 96.20 seconds (2.31 MB/s)
> delete kernel-8 in 45.20 seconds
> compile dir kernel-2 691MB in 38.58 seconds (17.93 MB/s)
> create dir kernel-70151 222MB in 93.41 seconds (2.38 MB/s)
> clean kernel-2 691MB in 5.09 seconds (135.87 MB/s)
> create dir kernel-78184 222MB in 86.04 seconds (2.58 MB/s)
> compile dir kernel-7 691MB in 37.60 seconds (18.39 MB/s)
> create dir kernel-64250 222MB in 80.33 seconds (2.77 MB/s)
> delete kernel-12 in 43.00 seconds
> stat dir kernel-2 in 16.43 seconds
> patch dir kernel-70151 109MB in 77.42 seconds (1.42 MB/s)
> stat dir kernel-7 in 18.48 seconds
> stat dir kernel-78184 in 18.62 seconds
> compile dir kernel-2 691MB in 43.31 seconds (15.97 MB/s)
> compile dir kernel-26 680MB in 50.37 seconds (13.51 MB/s)
> stat dir kernel-7 in 21.52 seconds
> create dir kernel-2717 222MB in 89.86 seconds (2.47 MB/s)
> delete kernel-26 in 47.81 seconds
> stat dir kernel-2 in 18.61 seconds
> compile dir kernel-14 691MB in 46.66 seconds (14.82 MB/s)
> compile dir kernel-70151 691MB in 39.19 seconds (17.65 MB/s)
> create dir kernel-55376 222MB in 88.91 seconds (2.50 MB/s)
> stat dir kernel-22 in 18.66 seconds
> delete kernel-55376 in 37.71 seconds
> patch dir kernel-27 109MB in 74.82 seconds (1.47 MB/s)
> patch dir kernel-64250 109MB in 81.08 seconds (1.35 MB/s)
> read dir kernel-6231 in 82.15 2.71 MB/s
> patch dir kernel-9 109MB in 96.02 seconds (1.14 MB/s)
> stat dir kernel-14 in 22.46 seconds
> read dir kernel-29 in 58.10 3.83 MB/s
> create dir kernel-57327 222MB in 93.92 seconds (2.37 MB/s)
> stat dir kernel-14 in 21.92 seconds
> compile dir kernel-27 691MB in 41.43 seconds (16.69 MB/s)
> create dir kernel-64334 222MB in 89.31 seconds (2.49 MB/s)
> patch dir kernel-1 109MB in 84.37 seconds (1.30 MB/s)
> create dir kernel-16056 222MB in 94.93 seconds (2.34 MB/s)
> clean kernel-7 691MB in 7.27 seconds (95.13 MB/s)
> delete kernel-27 in 46.32 seconds
> create dir kernel-51614 222MB in 88.91 seconds (2.50 MB/s)
> clean kernel-14 691MB in 6.71 seconds (103.07 MB/s)
> delete kernel-64250 in 43.60 seconds
> stat dir kernel-2 in 24.25 seconds
> clean kernel-70151 691MB in 6.20 seconds (111.55 MB/s)
> delete kernel-14 in 40.74 seconds
> read dir kernel-2 in 118.45 7.76 MB/s
> create dir kernel-24150 222MB in 88.99 seconds (2.50 MB/s)
> read dir kernel-9 in 83.70 2.73 MB/s
> patch dir kernel-19 109MB in 76.06 seconds (1.44 MB/s)
> clean kernel-2 691MB in 6.64 seconds (104.16 MB/s)
> compile dir kernel-18 680MB in 47.33 seconds (14.38 MB/s)
> compile dir kernel-2 691MB in 44.63 seconds (15.50 MB/s)
> delete kernel-2 in 51.03 seconds
> delete kernel-70151 in 45.96 seconds
> stat dir kernel-1 in 17.56 seconds
> read dir kernel-18 in 121.08 7.46 MB/s
> clean kernel-18 680MB in 6.47 seconds (105.20 MB/s)
> compile dir kernel-17 680MB in 52.10 seconds (13.06 MB/s)
> read dir kernel-17 in 114.66 7.88 MB/s
> stat dir kernel-18 in 30.36 seconds
> stat dir kernel-64334 in 44.78 seconds
> delete kernel-24150 in 44.79 seconds
> delete kernel-17 in 47.64 seconds
> stat dir kernel-1 in 19.87 seconds
> compile dir kernel-7 691MB in 47.65 seconds (14.51 MB/s)
> patch dir kernel-16 109MB in 100.96 seconds (1.09 MB/s)
> stat dir kernel-7 in 21.35 seconds
> create dir kernel-82195 222MB in 111.17 seconds (2.00 MB/s)
> delete kernel-82195 in 40.79 seconds
> stat dir kernel-3 in 19.51 seconds
> patch dir kernel-2717 109MB in 94.55 seconds (1.16 MB/s)
> patch dir kernel-5 109MB in 60.21 seconds (1.82 MB/s)
> read dir kernel-2717 in 94.85 2.41 MB/s
> delete kernel-29 in 40.51 seconds
> clean kernel-7 691MB in 5.84 seconds (118.42 MB/s)
> read dir kernel-4 in 57.91 3.84 MB/s
> stat dir kernel-78184 in 19.65 seconds
> patch dir kernel-0 109MB in 90.61 seconds (1.21 MB/s)
> patch dir kernel-3 109MB in 75.67 seconds (1.45 MB/s)
> create dir kernel-30226 222MB in 106.72 seconds (2.08 MB/s)
> read dir kernel-19 in 83.79 2.72 MB/s
> read dir kernel-9 in 82.64 2.76 MB/s
> delete kernel-5 in 38.89 seconds
> read dir kernel-7 in 59.70 3.82 MB/s
> patch dir kernel-57327 109MB in 101.71 seconds (1.08 MB/s)
> read dir kernel-11 in 59.83 3.72 MB/s
>
> run complete:
> ==========================================================================
> intial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
> create total runs 14 avg 2.41 MB/s (user 13.83s sys 34.39s)
> patch total runs 15 avg 1.79 MB/s (user 6.55s sys 34.71s)
> compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
> clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
> read tree total runs 11 avg 3.30 MB/s (user 11.68s sys 24.50s)
> read compiled tree total runs 4 avg 8.24 MB/s (user 13.67s sys 35.85s)
> delete tree total runs 10 avg 42.12 seconds (user 6.76s sys 24.50s)
> delete compiled tree total runs 4 avg 48.20 seconds (user 7.65s sys 28.60s)
> stat tree total runs 11 avg 21.90 seconds (user 6.87s sys 6.34s)
> stat compiled tree total runs 7 avg 21.23 seconds (user 7.65s sys 7.15s)
>
> 2.) kernel with enabled hot tracking
>
> using working directory /mnt/scratch/, 30 intial dirs 100 runs
> native unpatched native-0 222MB in 112.82 seconds (1.97 MB/s)
> native patched native-0 109MB in 27.38 seconds (4.01 MB/s)
> native patched compiled native-0 691MB in 40.42 seconds (17.11 MB/s)
> create dir kernel-0 222MB in 92.88 seconds (2.39 MB/s)
> create dir kernel-1 222MB in 98.56 seconds (2.26 MB/s)
> create dir kernel-2 222MB in 107.27 seconds (2.07 MB/s)
> create dir kernel-3 222MB in 92.81 seconds (2.40 MB/s)
> create dir kernel-4 222MB in 90.30 seconds (2.46 MB/s)
> create dir kernel-5 222MB in 91.57 seconds (2.43 MB/s)
> create dir kernel-6 222MB in 91.92 seconds (2.42 MB/s)
> create dir kernel-7 222MB in 90.16 seconds (2.47 MB/s)
> create dir kernel-8 222MB in 94.71 seconds (2.35 MB/s)
> create dir kernel-9 222MB in 91.79 seconds (2.42 MB/s)
> create dir kernel-10 222MB in 92.14 seconds (2.41 MB/s)
> create dir kernel-11 222MB in 88.59 seconds (2.51 MB/s)
> create dir kernel-12 222MB in 92.15 seconds (2.41 MB/s)
> create dir kernel-13 222MB in 91.54 seconds (2.43 MB/s)
> create dir kernel-14 222MB in 91.15 seconds (2.44 MB/s)
> create dir kernel-15 222MB in 90.54 seconds (2.46 MB/s)
> create dir kernel-16 222MB in 92.23 seconds (2.41 MB/s)
> create dir kernel-17 222MB in 89.88 seconds (2.47 MB/s)
> create dir kernel-18 222MB in 94.65 seconds (2.35 MB/s)
> create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)
> create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)
> create dir kernel-21 222MB in 90.92 seconds (2.45 MB/s)
> create dir kernel-22 222MB in 90.76 seconds (2.45 MB/s)
> create dir kernel-23 222MB in 90.04 seconds (2.47 MB/s)
> create dir kernel-24 222MB in 89.60 seconds (2.48 MB/s)
> create dir kernel-25 222MB in 91.52 seconds (2.43 MB/s)
> create dir kernel-26 222MB in 90.45 seconds (2.46 MB/s)
> create dir kernel-27 222MB in 92.72 seconds (2.40 MB/s)
> create dir kernel-28 222MB in 90.37 seconds (2.46 MB/s)
> create dir kernel-29 222MB in 89.60 seconds (2.48 MB/s)
> === sdb ===
>   CPU  0:              8878754 events,   416192 KiB data
>   Total:               8878754 events (dropped 0),   416192 KiB data
> patch dir kernel-7 109MB in 61.00 seconds (1.80 MB/s)
> compile dir kernel-7 691MB in 40.21 seconds (17.20 MB/s)
> compile dir kernel-14 680MB in 45.97 seconds (14.81 MB/s)
> patch dir kernel-14 691MB in 83.73 seconds (8.26 MB/s)
> read dir kernel-7 in 88.66 10.37 MB/s
> read dir kernel-10 in 56.44 3.94 MB/s
> create dir kernel-3116 222MB in 91.58 seconds (2.43 MB/s)
> clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)
> read dir kernel-6 in 61.07 3.64 MB/s
> stat dir kernel-2 in 21.42 seconds
> compile dir kernel-2 680MB in 44.55 seconds (15.28 MB/s)
> clean kernel-14 691MB in 6.98 seconds (99.08 MB/s)
> clean kernel-2 680MB in 6.12 seconds (111.21 MB/s)
> patch dir kernel-2 109MB in 73.95 seconds (1.48 MB/s)
> stat dir kernel-2 in 18.61 seconds
> create dir kernel-6231 222MB in 100.84 seconds (2.21 MB/s)
> delete kernel-8 in 40.38 seconds
> compile dir kernel-2 691MB in 42.18 seconds (16.40 MB/s)
> create dir kernel-70151 222MB in 96.34 seconds (2.31 MB/s)
> clean kernel-2 691MB in 4.54 seconds (152.33 MB/s)
> create dir kernel-78184 222MB in 94.71 seconds (2.35 MB/s)
> compile dir kernel-7 691MB in 43.64 seconds (15.85 MB/s)
> create dir kernel-64250 222MB in 87.65 seconds (2.54 MB/s)
> delete kernel-12 in 38.58 seconds
> stat dir kernel-2 in 17.48 seconds
> patch dir kernel-70151 109MB in 79.82 seconds (1.37 MB/s)
> stat dir kernel-7 in 25.76 seconds
> stat dir kernel-78184 in 20.30 seconds
> compile dir kernel-2 691MB in 40.93 seconds (16.90 MB/s)
> compile dir kernel-26 680MB in 48.86 seconds (13.93 MB/s)
> stat dir kernel-7 in 23.87 seconds
> create dir kernel-2717 222MB in 98.71 seconds (2.25 MB/s)
> delete kernel-26 in 45.60 seconds
> stat dir kernel-2 in 22.62 seconds
> compile dir kernel-14 691MB in 51.12 seconds (13.53 MB/s)
> compile dir kernel-70151 691MB in 41.40 seconds (16.71 MB/s)
> create dir kernel-55376 222MB in 94.61 seconds (2.35 MB/s)
> stat dir kernel-22 in 22.11 seconds
> delete kernel-55376 in 36.47 seconds
> patch dir kernel-27 109MB in 76.74 seconds (1.43 MB/s)
> patch dir kernel-64250 109MB in 86.43 seconds (1.27 MB/s)
> read dir kernel-6231 in 85.10 2.61 MB/s
> patch dir kernel-9 109MB in 97.67 seconds (1.12 MB/s)
> stat dir kernel-14 in 24.80 seconds
> read dir kernel-29 in 61.00 3.65 MB/s
> create dir kernel-57327 222MB in 101.42 seconds (2.19 MB/s)
> stat dir kernel-14 in 22.45 seconds
> compile dir kernel-27 691MB in 48.19 seconds (14.35 MB/s)
> create dir kernel-64334 222MB in 96.65 seconds (2.30 MB/s)
> patch dir kernel-1 109MB in 88.32 seconds (1.24 MB/s)
> create dir kernel-16056 222MB in 100.60 seconds (2.21 MB/s)
> clean kernel-7 691MB in 8.20 seconds (84.34 MB/s)
> delete kernel-27 in 48.53 seconds
> create dir kernel-51614 222MB in 98.07 seconds (2.27 MB/s)
> clean kernel-14 691MB in 6.82 seconds (101.41 MB/s)
> delete kernel-64250 in 44.01 seconds
> stat dir kernel-2 in 26.37 seconds
> clean kernel-70151 691MB in 6.21 seconds (111.37 MB/s)
> delete kernel-14 in 41.74 seconds
> read dir kernel-2 in 122.71 7.50 MB/s
> create dir kernel-24150 222MB in 99.01 seconds (2.25 MB/s)
> read dir kernel-9 in 78.29 2.91 MB/s
> patch dir kernel-19 109MB in 77.45 seconds (1.42 MB/s)
> clean kernel-2 691MB in 5.94 seconds (116.43 MB/s)
> compile dir kernel-18 680MB in 49.17 seconds (13.84 MB/s)
> compile dir kernel-2 691MB in 47.20 seconds (14.65 MB/s)
> delete kernel-2 in 48.01 seconds
> delete kernel-70151 in 47.60 seconds
> stat dir kernel-1 in 21.80 seconds
> read dir kernel-18 in 109.98 8.21 MB/s
> clean kernel-18 680MB in 7.78 seconds (87.49 MB/s)
> compile dir kernel-17 680MB in 54.39 seconds (12.51 MB/s)
> read dir kernel-17 in 108.52 8.32 MB/s
> stat dir kernel-18 in 19.48 seconds
> stat dir kernel-64334 in 22.04 seconds
> delete kernel-24150 in 44.36 seconds
> delete kernel-17 in 49.09 seconds
> stat dir kernel-1 in 18.16 seconds
> compile dir kernel-7 691MB in 48.90 seconds (14.14 MB/s)
> patch dir kernel-16 109MB in 103.71 seconds (1.06 MB/s)
> stat dir kernel-7 in 21.94 seconds
> create dir kernel-82195 222MB in 110.82 seconds (2.01 MB/s)
> delete kernel-82195 in 38.64 seconds
> stat dir kernel-3 in 22.88 seconds
> patch dir kernel-2717 109MB in 92.23 seconds (1.19 MB/s)
> patch dir kernel-5 109MB in 64.95 seconds (1.69 MB/s)
> read dir kernel-2717 in 97.88 2.33 MB/s
> delete kernel-29 in 40.59 seconds
> clean kernel-7 691MB in 5.09 seconds (135.87 MB/s)
> read dir kernel-4 in 59.42 3.74 MB/s
> stat dir kernel-78184 in 20.24 seconds
> patch dir kernel-0 109MB in 95.95 seconds (1.14 MB/s)
> patch dir kernel-3 109MB in 62.86 seconds (1.74 MB/s)
> create dir kernel-30226 222MB in 106.81 seconds (2.08 MB/s)
> read dir kernel-19 in 81.32 2.81 MB/s
> read dir kernel-9 in 74.65 3.06 MB/s
> delete kernel-5 in 42.04 seconds
> read dir kernel-7 in 61.95 3.68 MB/s
> patch dir kernel-57327 109MB in 97.85 seconds (1.12 MB/s)
> read dir kernel-11 in 58.85 3.78 MB/s
>
> run complete:
> ==========================================================================
> initial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
> create total runs 14 avg 2.27 MB/s (user 13.66s sys 36.94s)
> patch total runs 15 avg 1.82 MB/s (user 6.62s sys 36.93s)
> compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
> clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
> read tree total runs 11 avg 3.29 MB/s (user 11.04s sys 28.65s)
> read compiled tree total runs 4 avg 8.60 MB/s (user 13.16s sys 41.32s)
> delete tree total runs 10 avg 41.44 seconds (user 6.43s sys 25.19s)
> delete compiled tree total runs 4 avg 47.81 seconds (user 7.18s sys 29.27s)
> stat tree total runs 11 avg 20.41 seconds (user 6.39s sys 7.45s)
> stat compiled tree total runs 7 avg 23.97 seconds (user 7.24s sys 8.74s)
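The per-phase averages above can be sanity-checked by parsing the raw compilebench lines directly. A small standalone sketch (the regex and helper are illustrative, not part of compilebench):

```python
import re

# Matches compilebench result lines of the form seen above, e.g.
#   "create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)"
LINE_RE = re.compile(r"(?P<op>create|patch|compile|clean) .*\((?P<mbps>[\d.]+) MB/s\)")

def average_throughput(log_lines, op):
    """Average the MB/s figures reported for one operation type."""
    rates = []
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and m.group("op") == op:
            rates.append(float(m.group("mbps")))
    return sum(rates) / len(rates) if rates else 0.0

sample = [
    "create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)",
    "create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)",
    "clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)",
]
print(average_throughput(sample, "create"))   # average of 2.47 and 2.46
```

Note that "read dir" lines use a different format (no parentheses), so they would need a second pattern.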
>
> On Fri, 2012-11-16 at 17:51 +0800, zwu.kernel@gmail.com wrote:
>> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>>
>> Hi, guys,
>>
>>   Any comments or ideas are appreciated, thanks.
>>
>> NOTE:
>>
>>   The patchset can be obtained via my kernel dev git on github:
>> git://github.com/wuzhy/kernel.git hot_tracking
>>   If you're interested, you can also review them via
>> https://github.com/wuzhy/kernel/commits/hot_tracking
>>
>>   For more info, please check hot_tracking.txt in Documentation
>>
>> TODO List:
>>
>>  1.) Need to do scalability or performance tests. - Required
>>  2.) Need a simpler but efficient temperature calculation function
>>  3.) Figure out how to save the file temperature across umount so that
>>      it is preserved after reboot - Optional
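(As an illustration of TODO item 2: one purely hypothetical shape for a "simpler but efficient" temperature function would combine a weighted access count with exponential decay by age. The weights and half-life below are made-up illustration values, not constants or formulas from the patchset.)

```python
def temperature(reads, writes, age_seconds,
                read_weight=4, write_weight=1, half_life=300.0):
    """Toy 'temperature': weighted access frequency with exponential decay.

    All weights and the half-life are hypothetical illustration values,
    not taken from the hot-tracking patchset.
    """
    freq = read_weight * reads + write_weight * writes
    return freq * 0.5 ** (age_seconds / half_life)

# A file read 10 times just now is "hotter" than one whose last
# access was a full half-life (300 s) ago:
print(temperature(10, 0, 0))    # 40.0
print(temperature(10, 0, 300))  # 20.0
```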
>>
>> Changelog:
>>
>>  - Solved the 64-bit inode number issue. [David Sterba]
>>  - Embedded struct hot_type in struct file_system_type [Darrick J. Wong]
>>  - Cleaned up some issues [David Sterba]
>>  - Used a static hot debugfs root [Greg KH]
>>  - Rewrote debugfs support based on seq_file operations. [Dave Chinner]
>>  - Refactored workqueue support. [Dave Chinner]
>>  - Turned some macros (TIME_TO_KICK, HEAT_UPDATE_DELAY) into tunables
>>                                         [Zhiyong, Zheng Liu]
>>  - Introduced a hot func registering framework [Zhiyong]
>>  - Removed the global variable for hot tracking [Zhiyong]
>>  - Added xfs hot tracking support [Dave Chinner]
>>  - Added ext4 hot tracking support [Zheng Liu]
>>  - Cleaned up a lot of other issues [Dave Chinner]
>>  - Added a memory shrinker [Dave Chinner]
>>  - Converted to one workqueue to update map info periodically [Dave Chinner]
>>  - Cleaned up a lot of other issues [Dave Chinner]
>>  - Reduced new files and put everything in fs/hot_tracking.[ch] [Dave Chinner]
>>  - Added btrfs hot tracking support [Zhiyong]
>>  - The first three patches can probably just be flattened into one.
>>                                         [Marco Stornelli, Dave Chinner]
>>
>> Zhi Yong Wu (16):
>>   vfs: introduce some data structures
>>   vfs: add init and cleanup functions
>>   vfs: add I/O frequency update function
>>   vfs: add two map arrays
>>   vfs: add hooks to enable hot tracking
>>   vfs: add temp calculation function
>>   vfs: add map info update function
>>   vfs: add aging function
>>   vfs: add one work queue
>>   vfs: add FS hot type support
>>   vfs: register one shrinker
>>   vfs: add one ioctl interface
>>   vfs: add debugfs support
>>   proc: add two hot_track proc files
>>   btrfs: add hot tracking support
>>   vfs: add documentation
>>
>>  Documentation/filesystems/00-INDEX         |    2 +
>>  Documentation/filesystems/hot_tracking.txt |  263 ++++++
>>  fs/Makefile                                |    2 +-
>>  fs/btrfs/ctree.h                           |    1 +
>>  fs/btrfs/super.c                           |   22 +-
>>  fs/compat_ioctl.c                          |    5 +
>>  fs/dcache.c                                |    2 +
>>  fs/direct-io.c                             |    6 +
>>  fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
>>  fs/hot_tracking.h                          |   52 ++
>>  fs/ioctl.c                                 |   74 ++
>>  include/linux/fs.h                         |    5 +
>>  include/linux/hot_tracking.h               |  152 ++++
>>  kernel/sysctl.c                            |   14 +
>>  mm/filemap.c                               |    6 +
>>  mm/page-writeback.c                        |   12 +
>>  mm/readahead.c                             |    7 +
>>  17 files changed, 1929 insertions(+), 2 deletions(-)
>>  create mode 100644 Documentation/filesystems/hot_tracking.txt
>>  create mode 100644 fs/hot_tracking.c
>>  create mode 100644 fs/hot_tracking.h
>>  create mode 100644 include/linux/hot_tracking.h
>>
>
> --
> Regards,
>
> Zhi Yong Wu
>



-- 
Regards,

Zhi Yong Wu

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
  2012-12-10  3:30   ` Zhi Yong Wu
@ 2012-12-12 19:50     ` Darrick J. Wong
  2012-12-13 12:17       ` Zhi Yong Wu
  0 siblings, 1 reply; 22+ messages in thread
From: Darrick J. Wong @ 2012-12-12 19:50 UTC (permalink / raw)
  To: Zhi Yong Wu
  Cc: wuzhy, linux-fsdevel, linux-kernel, viro, linuxram, david,
	swhiteho, dave, andi, northrup.james

On Mon, Dec 10, 2012 at 11:30:03AM +0800, Zhi Yong Wu wrote:
> Hi, all.
> 
> Any comments or suggestions?

Why did ffsb drop from 924 transactions/sec to 322?

--D
> 
> On Thu, Dec 6, 2012 at 11:28 AM, Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> wrote:
> > Hi, guys,
> >
> > The perf testing was done separately with fs_mark, fio, ffsb, and
> > compilebench in one KVM guest.
> >
> > Below is the performance testing report for hot tracking; no obvious
> > performance regression was found.
> >
> > Note: "original kernel" means its source code is unchanged;
> >       "kernel with enabled hot tracking" means its source code carries the
> > hot tracking patchset.
> >
> > The test env is set up as below:
> >
> > root@debian-i386:/home/zwu# uname -a
> > Linux debian-i386 3.7.0-rc8+ #266 SMP Tue Dec 4 12:17:55 CST 2012 x86_64
> > GNU/Linux
> >
> > root@debian-i386:/home/zwu# mkfs.xfs -f -l size=1310b,sunit=8 /home/zwu/bdev.img
> > meta-data=/home/zwu/bdev.img     isize=256    agcount=4, agsize=128000 blks
> >          =                       sectsz=512   attr=2, projid32bit=0
> > data     =                       bsize=4096   blocks=512000, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0
> > log      =internal log           bsize=4096   blocks=1310, version=2
> >          =                       sectsz=512   sunit=1 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> >
> > 1.) original kernel
> >
> > root@debian-i386:/home/zwu# mount -o loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> > [ 1197.421616] XFS (loop0): Mounting Filesystem
> > [ 1197.567399] XFS (loop0): Ending clean mount
> > root@debian-i386:/home/zwu# mount
> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> > proc on /proc type proc (rw,noexec,nosuid,nodev)
> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> > udev on /dev type tmpfs (rw,mode=0755)
> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> > none on /selinux type selinuxfs (rw,relatime)
> > debugfs on /sys/kernel/debug type debugfs (rw)
> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
> > /dev/loop0 on /mnt/scratch type xfs (rw,logbsize=256k)
> > root@debian-i386:/home/zwu# free -m
> >              total       used       free     shared    buffers     cached
> > Mem:           112        109          2          0          4         53
> > -/+ buffers/cache:         51         60
> > Swap:          713         29        684
> >
> > 2.) kernel with enabled hot tracking
> >
> > root@debian-i386:/home/zwu# mount -o hot_track,loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> > [  364.648470] XFS (loop0): Mounting Filesystem
> > [  364.910035] XFS (loop0): Ending clean mount
> > [  364.921063] VFS: Turning on hot data tracking
> > root@debian-i386:/home/zwu# mount
> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> > proc on /proc type proc (rw,noexec,nosuid,nodev)
> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> > udev on /dev type tmpfs (rw,mode=0755)
> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> > none on /selinux type selinuxfs (rw,relatime)
> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
> > /dev/loop0 on /mnt/scratch type xfs (rw,hot_track,logbsize=256k)
> > root@debian-i386:/home/zwu# free -m
> >              total       used       free     shared    buffers     cached
> > Mem:           112        107          4          0          2         34
> > -/+ buffers/cache:         70         41
> > Swap:          713          2        711
> >
> > 1. fs_mark test
> >
> > 1.) original kernel
> >
> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
> > -d  /mnt/scratch/7
> > #       Version 3.3, 8 thread(s) starting at Wed Dec  5 03:20:58 2012
> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> > #       Directories:  Time based hash between directories across 100
> > subdirectories with 180 seconds per subdirectory.
> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24
> > random bytes at end of name)
> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per
> > write
> > #       App overhead is time in microseconds spent in the test not doing file
> > writing related system calls.
> >
> > FSUse%        Count         Size    Files/sec     App Overhead
> >      2         8000            1        375.6         27175895
> >      3        16000            1        375.6         27478079
> >      4        24000            1        346.0         27819607
> >      4        32000            1        316.9         25863385
> >      5        40000            1        335.2         25460605
> >      6        48000            1        312.3         25889196
> >      7        56000            1        327.3         25000611
> >      8        64000            1        304.4         28126698
> >      9        72000            1        361.7         26652172
> >      9        80000            1        370.1         27075875
> >     10        88000            1        347.8         31093106
> >     11        96000            1        387.1         26877324
> >     12       104000            1        352.3         26635853
> >     13       112000            1        379.3         26400198
> >     14       120000            1        367.4         27228178
> >     14       128000            1        359.2         27627871
> >     15       136000            1        358.4         27089821
> >     16       144000            1        385.5         27804852
> >     17       152000            1        322.9         26221907
> >     18       160000            1        393.2         26760040
> >     18       168000            1        351.9         29210327
> >     20       176000            1        395.2         24610548
> >     20       184000            1        376.7         27518650
> >     21       192000            1        340.1         27512874
> >     22       200000            1        389.0         27109104
> >     23       208000            1        389.7         29288594
> >     24       216000            1        352.6         29948820
> >     25       224000            1        380.4         26370958
> >     26       232000            1        332.9         27770518
> >     26       240000            1        333.6         25176691
> >
> > 2.) kernel with enabled hot tracking
> >
> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
> > -d  /mnt/scratch/7
> > #       Version 3.3, 8 thread(s) starting at Tue Dec  4 04:28:48 2012
> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> > #       Directories:  Time based hash between directories across 100
> > subdirectories with 180 seconds per subdirectory.
> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24
> > random bytes at end of name)
> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per
> > write
> > #       App overhead is time in microseconds spent in the test not doing file
> > writing related system calls.
> >
> > FSUse%        Count         Size    Files/sec     App Overhead
> >      4         8000            1        323.0         25104879
> >      6        16000            1        351.4         25372919
> >      8        24000            1        345.9         24107987
> >      9        32000            1        313.2         26249533
> >     10        40000            1        323.0         20312267
> >     12        48000            1        303.2         22178040
> >     14        56000            1        307.6         22775058
> >     15        64000            1        317.9         25178845
> >     17        72000            1        351.8         22020260
> >     19        80000            1        369.3         23546708
> >     21        88000            1        324.1         29068297
> >     22        96000            1        355.3         25212333
> >     24       104000            1        346.4         26622613
> >     26       112000            1        360.4         25477193
> >     28       120000            1        362.9         21774508
> >     29       128000            1        329.0         25760109
> >     31       136000            1        369.5         24540577
> >     32       144000            1        330.2         26013559
> >     34       152000            1        365.5         25643279
> >     36       160000            1        366.2         24393130
> >     38       168000            1        348.3         25248940
> >     39       176000            1        357.3         24080574
> >     40       184000            1        316.8         23011921
> >     43       192000            1        351.7         27468060
> >     44       200000            1        362.2         27540349
> >     46       208000            1        340.9         26135445
> >     48       216000            1        339.2         20926743
> >     50       224000            1        316.5         21399871
> >     52       232000            1        346.3         24669604
> >     53       240000            1        320.5         22204449
> >
> >
> > 2. FFSB test
> >
> > 1.) original kernel
> >
> > FFSB version 6.0-RC2 started
> >
> > benchmark time = 10
> > ThreadGroup 0
> > ================
> >          num_threads      = 4
> >
> >          read_random      = off
> >          read_size        = 40960       (40KB)
> >          read_blocksize   = 4096        (4KB)
> >          read_skip        = off
> >          read_skipsize    = 0   (0B)
> >
> >          write_random     = off
> >          write_size       = 40960       (40KB)
> >          fsync_file       = 0
> >          write_blocksize  = 4096        (4KB)
> >          wait time        = 0
> >
> >          op weights
> >                          read = 0 (0.00%)
> >                       readall = 1 (10.00%)
> >                         write = 0 (0.00%)
> >                        create = 1 (10.00%)
> >                        append = 1 (10.00%)
> >                        delete = 1 (10.00%)
> >                        metaop = 0 (0.00%)
> >                     createdir = 0 (0.00%)
> >                          stat = 1 (10.00%)
> >                      writeall = 1 (10.00%)
> >                writeall_fsync = 1 (10.00%)
> >                    open_close = 1 (10.00%)
> >                   write_fsync = 0 (0.00%)
> >                  create_fsync = 1 (10.00%)
> >                  append_fsync = 1 (10.00%)
> >
> > FileSystem /mnt/scratch/test1
> > ==========
> >          num_dirs         = 100
> >          starting files   = 0
> >
> >          Fileset weight:
> >                      33554432 (  32MB) -> 1 (1.00%)
> >                       8388608 (   8MB) -> 2 (2.00%)
> >                        524288 ( 512KB) -> 3 (3.00%)
> >                        262144 ( 256KB) -> 4 (4.00%)
> >                        131072 ( 128KB) -> 5 (5.00%)
> >                         65536 (  64KB) -> 8 (8.00%)
> >                         32768 (  32KB) -> 10 (10.00%)
> >                         16384 (  16KB) -> 13 (13.00%)
> >                          8192 (   8KB) -> 21 (21.00%)
> >                          4096 (   4KB) -> 33 (33.00%)
> >          directio         = off
> >          alignedio        = off
> >          bufferedio       = off
> >
> >          aging is off
> >          current utilization = 26.19%
> >
> > creating new fileset /mnt/scratch/test1
> > fs setup took 87 secs
> > Syncing()...1 sec
> > Starting Actual Benchmark At: Wed Dec  5 03:38:06 2012
> >
> > Syncing()...0 sec
> > FFSB benchmark finished   at: Wed Dec  5 03:38:18 2012
> >
> > Results:
> > Benchmark took 11.44 sec
> >
> > Total Results
> > ===============
> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight     Throughput
> >              =======   ============      =========      =======     ===========     ==========
> >              readall :           93           8.13       0.880%         21.053%     32.5KB/sec
> >               create :           20           1.75       0.189%          5.263%     6.99KB/sec
> >               append :           10           0.87       0.095%          2.632%     3.5KB/sec
> >               delete :            4           0.35       0.038%         10.526%     NA
> >                 stat :            3           0.26       0.028%          7.895%     NA
> >             writeall :         2178         190.39      20.600%         10.526%     762KB/sec
> >       writeall_fsync :            5           0.44       0.047%          5.263%     1.75KB/sec
> >           open_close :            6           0.52       0.057%         15.789%     NA
> >         create_fsync :         8234         719.78      77.878%         15.789%     2.81MB/sec
> >         append_fsync :           20           1.75       0.189%          5.263%     6.99KB/sec
> > -
> > 924.24 Transactions per Second
> >
> > Throughput Results
> > ===================
> > Read Throughput: 32.5KB/sec
> > Write Throughput: 3.57MB/sec
> >
> > System Call Latency statistics in millisecs
> > =====
> >                 Min             Avg             Max             Total Calls
> >                 ========        ========        ========        ============
> > [   open]       0.050000        3.980161        41.840000                 31
> >    -
> > [   read]       0.017000        71.442215       1286.122000               93
> >    -
> > [  write]       0.052000        1.034817        2201.956000            10467
> >    -
> > [ unlink]       1.118000        185.398750      730.807000                 4
> >    -
> > [  close]       0.019000        1.968968        39.679000                 31
> >    -
> > [   stat]       0.043000        2.173667        6.428000                   3
> >    -
> >
> > 0.8% User   Time
> > 9.2% System Time
> > 10.0% CPU Utilization
> >
> > 2.) kernel with enabled hot tracking
> >
> > FFSB version 6.0-RC2 started
> >
> > benchmark time = 10
> > ThreadGroup 0
> > ================
> >          num_threads      = 4
> >
> >          read_random      = off
> >          read_size        = 40960       (40KB)
> >          read_blocksize   = 4096        (4KB)
> >          read_skip        = off
> >          read_skipsize    = 0   (0B)
> >
> >          write_random     = off
> >          write_size       = 40960       (40KB)
> >          fsync_file       = 0
> >          write_blocksize  = 4096        (4KB)
> >          wait time        = 0
> >
> >          op weights
> >                          read = 0 (0.00%)
> >                       readall = 1 (10.00%)
> >                         write = 0 (0.00%)
> >                        create = 1 (10.00%)
> >                        append = 1 (10.00%)
> >                        delete = 1 (10.00%)
> >                        metaop = 0 (0.00%)
> >                     createdir = 0 (0.00%)
> >                          stat = 1 (10.00%)
> >                      writeall = 1 (10.00%)
> >                writeall_fsync = 1 (10.00%)
> >                    open_close = 1 (10.00%)
> >                   write_fsync = 0 (0.00%)
> >                  create_fsync = 1 (10.00%)
> >                  append_fsync = 1 (10.00%)
> >
> > FileSystem /mnt/scratch/test1
> > ==========
> >          num_dirs         = 100
> >          starting files   = 0
> >
> >          Fileset weight:
> >                      33554432 (  32MB) -> 1 (1.00%)
> >                       8388608 (   8MB) -> 2 (2.00%)
> >                        524288 ( 512KB) -> 3 (3.00%)
> >                        262144 ( 256KB) -> 4 (4.00%)
> >                        131072 ( 128KB) -> 5 (5.00%)
> >                         65536 (  64KB) -> 8 (8.00%)
> >                         32768 (  32KB) -> 10 (10.00%)
> >                         16384 (  16KB) -> 13 (13.00%)
> >                          8192 (   8KB) -> 21 (21.00%)
> >                          4096 (   4KB) -> 33 (33.00%)
> >          directio         = off
> >          alignedio        = off
> >          bufferedio       = off
> >
> >          aging is off
> >          current utilization = 52.46%
> >
> > creating new fileset /mnt/scratch/test1
> > fs setup took 42 secs
> > Syncing()...1 sec
> > Starting Actual Benchmark At: Tue Dec  4 06:41:54 2012
> >
> > Syncing()...0 sec
> > FFSB benchmark finished   at: Tue Dec  4 06:42:53 2012
> >
> > Results:
> > Benchmark took 59.42 sec
> >
> > Total Results
> > ===============
> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight     Throughput
> >              =======   ============      =========      =======     ===========     ==========
> >              readall :        10510         176.87      54.808%         10.959%     707KB/sec
> >               create :           48           0.81       0.250%          9.589%     3.23KB/sec
> >               append :          100           1.68       0.521%         13.699%     6.73KB/sec
> >               delete :            5           0.08       0.026%          6.849%     NA
> >                 stat :            5           0.08       0.026%          6.849%     NA
> >             writeall :          130           2.19       0.678%         12.329%     8.75KB/sec
> >       writeall_fsync :           19           0.32       0.099%          8.219%     1.28KB/sec
> >           open_close :            9           0.15       0.047%         12.329%     NA
> >         create_fsync :         8300         139.67      43.283%         12.329%     559KB/sec
> >         append_fsync :           50           0.84       0.261%          6.849%     3.37KB/sec
> > -
> > 322.70 Transactions per Second
> >
> > Throughput Results
> > ===================
> > Read Throughput: 707KB/sec
> > Write Throughput: 582KB/sec
> >
> > System Call Latency statistics in millisecs
> > =====
> >                 Min             Avg             Max             Total Calls
> >                 ========        ========        ========        ============
> > [   open]       0.061000        0.750540        10.721000                 63
> >    -
> > [   read]       0.017000        11.058425       28555.394000           10510
> >    -
> > [  write]       0.034000        6.705286        26812.076000            8647
> >    -
> > [ unlink]       0.922000        7.679800        25.364000                  5
> >    -
> > [  close]       0.019000        0.996635        34.723000                 63
> >    -
> > [   stat]       0.046000        0.942800        4.489000                   5
> >    -
> >
> > 0.2% User   Time
> > 2.6% System Time
> > 2.8% CPU Utilization
> >
> >
> > 3. fio test
> >
> > 1.) original kernel
> >
> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > Starting 16 threads
> >
> > seq-read: (groupid=0, jobs=4): err= 0: pid=1646
> >   read : io=2,835MB, bw=24,192KB/s, iops=3,023, runt=120021msec
> >     slat (usec): min=0, max=999K, avg=1202.67, stdev=3145.84
> >     clat (usec): min=0, max=1,536K, avg=9186.07, stdev=11344.56
> >     bw (KB/s) : min=   39, max=21301, per=26.11%, avg=6315.41, stdev=1082.63
> >   cpu          : usr=10.89%, sys=33.14%, ctx=1488108, majf=13, minf=2238
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      issued r/w: total=362940/0, short=0/0
> >      lat (usec): 2=3.53%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
> >      lat (usec): 750=0.03%, 1000=0.03%
> >      lat (msec): 2=1.75%, 4=1.08%, 10=68.93%, 20=22.39%, 50=2.02%
> >      lat (msec): 100=0.16%, 250=0.04%, 1000=0.01%, 2000=0.03%
> > seq-write: (groupid=1, jobs=4): err= 0: pid=1646
> >   write: io=1,721MB, bw=14,652KB/s, iops=1,831, runt=120277msec
> >     slat (usec): min=0, max=1,004K, avg=1744.41, stdev=3144.06
> >     clat (usec): min=0, max=1,014K, avg=15699.65, stdev=19751.69
> >     bw (KB/s) : min=  285, max=18032, per=26.41%, avg=3869.67, stdev=762.96
> >   cpu          : usr=6.29%, sys=22.61%, ctx=880380, majf=36, minf=3222
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      issued r/w: total=0/220282, short=0/0
> >      lat (usec): 2=2.43%, 500=0.01%, 750=0.12%, 1000=0.14%
> >      lat (msec): 2=0.86%, 4=1.72%, 10=39.03%, 20=42.20%, 50=11.87%
> >      lat (msec): 100=1.15%, 250=0.17%, 500=0.06%, 750=0.14%, 1000=0.09%
> >      lat (msec): 2000=0.02%
> > rnd-read: (groupid=2, jobs=4): err= 0: pid=1646
> >   read : io=65,128KB, bw=541KB/s, iops=67, runt=120381msec
> >     slat (usec): min=48, max=55,230, avg=167.95, stdev=248.50
> >     clat (msec): min=74, max=4,229, avg=472.23, stdev=129.50
> >     bw (KB/s) : min=    0, max=  203, per=25.34%, avg=137.08, stdev=21.73
> >   cpu          : usr=0.85%, sys=2.19%, ctx=44001, majf=30, minf=3726
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      issued r/w: total=8141/0, short=0/0
> >
> >      lat (msec): 100=0.04%, 250=0.09%, 500=81.72%, 750=13.09%, 1000=2.97%
> >      lat (msec): 2000=1.50%, >=2000=0.59%
> > rnd-write: (groupid=3, jobs=4): err= 0: pid=1646
> >   write: io=200MB, bw=1,698KB/s, iops=212, runt=120331msec
> >     slat (usec): min=48, max=215K, avg=2272.24, stdev=2283.09
> >     clat (usec): min=762, max=14,617K, avg=147521.66, stdev=444146.36
> >     bw (KB/s) : min=    1, max= 3960, per=56.86%, avg=964.90, stdev=514.63
> >   cpu          : usr=1.25%, sys=4.20%, ctx=135229, majf=0, minf=10194
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%,
> >>=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      issued r/w: total=0/25536, short=0/0
> >      lat (usec): 1000=0.26%
> >      lat (msec): 2=0.13%, 4=2.01%, 10=3.77%, 20=42.78%, 50=20.95%
> >      lat (msec): 100=12.83%, 250=12.50%, 500=2.49%, 750=0.33%,
> > 1000=0.12%
> >      lat (msec): 2000=0.53%, >=2000=1.30%
> >
> > Run status group 0 (all jobs):
> >    READ: io=2,835MB, aggrb=24,191KB/s, minb=24,772KB/s, maxb=24,772KB/s,
> > mint=120021msec, maxt=120021msec
> >
> > Run status group 1 (all jobs):
> >   WRITE: io=1,721MB, aggrb=14,651KB/s, minb=15,003KB/s, maxb=15,003KB/s,
> > mint=120277msec, maxt=120277msec
> >
> > Run status group 2 (all jobs):
> >    READ: io=65,128KB, aggrb=541KB/s, minb=553KB/s, maxb=553KB/s,
> > mint=120381msec, maxt=120381msec
> >
> > Run status group 3 (all jobs):
> >   WRITE: io=200MB, aggrb=1,697KB/s, minb=1,738KB/s, maxb=1,738KB/s,
> > mint=120331msec, maxt=120331msec
> >
> > Disk stats (read/write):
> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
> >
> > 2.) kernel with enabled hot tracking
> >
> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > ...
> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio,
> > iodepth=8
> > ...
> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio,
> > iodepth=8
> > Starting 16 threads
> >
> > seq-read: (groupid=0, jobs=4): err= 0: pid=2163
> >   read : io=3,047MB, bw=26,001KB/s, iops=3,250, runt=120003msec
> >     slat (usec): min=0, max=1,000K, avg=1141.34, stdev=2175.25
> >     clat (usec): min=0, max=1,002K, avg=8610.96, stdev=6184.67
> >     bw (KB/s) : min=   12, max=18896, per=25.28%, avg=6572.50,
> > stdev=713.22
> >   cpu          : usr=10.38%, sys=35.02%, ctx=1601418, majf=12, minf=2235
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%,
> >>=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      issued r/w: total=390029/0, short=0/0
> >      lat (usec): 2=1.49%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
> >      lat (usec): 750=0.01%, 1000=0.02%
> >      lat (msec): 2=1.53%, 4=0.86%, 10=79.60%, 20=14.93%, 50=1.43%
> >      lat (msec): 100=0.09%, 250=0.02%, 500=0.01%, 1000=0.01%, 2000=0.01%
> > seq-write: (groupid=1, jobs=4): err= 0: pid=2163
> >   write: io=1,752MB, bw=14,950KB/s, iops=1,868, runt=120003msec
> >     slat (usec): min=0, max=1,002K, avg=1697.47, stdev=3568.70
> >     clat (usec): min=0, max=1,019K, avg=15630.94, stdev=21109.46
> >     bw (KB/s) : min=  123, max=14693, per=26.31%, avg=3933.46,
> > stdev=779.57
> >   cpu          : usr=6.31%, sys=21.85%, ctx=894177, majf=4, minf=3407
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%,
> >>=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      issued r/w: total=0/224253, short=0/0
> >      lat (usec): 2=2.44%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.06%
> >      lat (usec): 1000=0.23%
> >      lat (msec): 2=0.73%, 4=2.00%, 10=40.15%, 20=42.68%, 50=10.25%
> >      lat (msec): 100=0.95%, 250=0.14%, 500=0.10%, 750=0.12%, 1000=0.11%
> >      lat (msec): 2000=0.03%
> > rnd-read: (groupid=2, jobs=4): err= 0: pid=2163
> >   read : io=85,208KB, bw=709KB/s, iops=88, runt=120252msec
> >     slat (usec): min=52, max=48,325, avg=204.43, stdev=596.50
> >     clat (msec): min=1, max=2,754, avg=359.99, stdev=78.96
> >     bw (KB/s) : min=    0, max=  249, per=25.17%, avg=178.20,
> > stdev=23.79
> >   cpu          : usr=1.00%, sys=2.64%, ctx=55704, majf=28, minf=2971
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%,
> >>=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      issued r/w: total=10651/0, short=0/0
> >
> >      lat (msec): 2=0.01%, 50=0.02%, 100=0.04%, 250=1.61%, 500=92.60%
> >      lat (msec): 750=4.24%, 1000=0.68%, 2000=0.59%, >=2000=0.22%
> > rnd-write: (groupid=3, jobs=4): err= 0: pid=2163
> >   write: io=247MB, bw=2,019KB/s, iops=252, runt=125287msec
> >     slat (usec): min=51, max=286K, avg=2576.23, stdev=2882.30
> >     clat (usec): min=698, max=8,156K, avg=123274.05, stdev=355311.20
> >     bw (KB/s) : min=    1, max= 4848, per=57.62%, avg=1162.77,
> > stdev=560.79
> >   cpu          : usr=1.33%, sys=4.24%, ctx=163334, majf=0, minf=8588
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%,
> >>=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%,
> >>=64=0.0%
> >      issued r/w: total=0/31616, short=0/0
> >      lat (usec): 750=0.03%, 1000=0.15%
> >      lat (msec): 2=0.06%, 4=2.15%, 10=3.57%, 20=48.23%, 50=22.43%
> >      lat (msec): 100=11.48%, 250=9.14%, 500=1.04%, 750=0.16%, 1000=0.05%
> >      lat (msec): 2000=0.09%, >=2000=1.42%
> >
> > Run status group 0 (all jobs):
> >    READ: io=3,047MB, aggrb=26,001KB/s, minb=26,625KB/s, maxb=26,625KB/s,
> > mint=120003msec, maxt=120003msec
> >
> > Run status group 1 (all jobs):
> >   WRITE: io=1,752MB, aggrb=14,949KB/s, minb=15,308KB/s, maxb=15,308KB/s,
> > mint=120003msec, maxt=120003msec
> >
> > Run status group 2 (all jobs):
> >    READ: io=85,208KB, aggrb=708KB/s, minb=725KB/s, maxb=725KB/s,
> > mint=120252msec, maxt=120252msec
> >
> > Run status group 3 (all jobs):
> >   WRITE: io=247MB, aggrb=2,018KB/s, minb=2,067KB/s, maxb=2,067KB/s,
> > mint=125287msec, maxt=125287msec
> >
> > Disk stats (read/write):
> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
> >
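[Editor's note: the four job groups reported in both runs are consistent with a fio job file along the following lines. This is a reconstruction from the group headers alone (4 jobs per group, 8 KiB blocks, libaio, iodepth 8, ~120 s runtimes, files under /mnt/scratch); the original job file is not shown in the thread, so the file name, size, and exact runtime values are assumptions.]

```ini
; Hypothetical reconstruction of the fio job file behind the
; seq-read/seq-write/rnd-read/rnd-write groups above.
[global]
directory=/mnt/scratch
ioengine=libaio
iodepth=8
bs=8k
size=1g            ; assumed; per-job file size is not stated in the post
runtime=120
time_based
numjobs=4
thread
group_reporting    ; matches the "(groupid=N, jobs=4)" aggregate lines

[seq-read]
rw=read

[seq-write]
stonewall          ; start each group only after the previous one finishes
rw=write

[rnd-read]
stonewall
rw=randread

[rnd-write]
stonewall
rw=randwrite
```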
> >
> > 4. compilebench test
> >
> > 1.) original kernel
> >
> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
> >
> > native unpatched native-0 222MB in 87.48 seconds (2.54 MB/s)
> > native patched native-0 109MB in 24.89 seconds (4.41 MB/s)
> > native patched compiled native-0 691MB in 35.54 seconds (19.46 MB/s)
> > create dir kernel-0 222MB in 91.22 seconds (2.44 MB/s)
> > create dir kernel-1 222MB in 91.55 seconds (2.43 MB/s)
> > create dir kernel-2 222MB in 97.00 seconds (2.29 MB/s)
> > create dir kernel-3 222MB in 87.64 seconds (2.54 MB/s)
> > create dir kernel-4 222MB in 86.18 seconds (2.58 MB/s)
> > create dir kernel-5 222MB in 84.68 seconds (2.63 MB/s)
> > create dir kernel-6 222MB in 85.02 seconds (2.62 MB/s)
> > create dir kernel-7 222MB in 87.74 seconds (2.53 MB/s)
> > create dir kernel-8 222MB in 86.79 seconds (2.56 MB/s)
> > create dir kernel-9 222MB in 87.85 seconds (2.53 MB/s)
> > create dir kernel-10 222MB in 86.88 seconds (2.56 MB/s)
> > create dir kernel-11 222MB in 86.47 seconds (2.57 MB/s)
> > create dir kernel-12 222MB in 84.26 seconds (2.64 MB/s)
> > create dir kernel-13 222MB in 87.40 seconds (2.54 MB/s)
> > create dir kernel-14 222MB in 85.44 seconds (2.60 MB/s)
> > create dir kernel-15 222MB in 86.80 seconds (2.56 MB/s)
> > create dir kernel-16 222MB in 88.57 seconds (2.51 MB/s)
> > create dir kernel-17 222MB in 85.66 seconds (2.60 MB/s)
> > create dir kernel-18 222MB in 87.40 seconds (2.54 MB/s)
> > create dir kernel-19 222MB in 85.47 seconds (2.60 MB/s)
> > create dir kernel-20 222MB in 89.29 seconds (2.49 MB/s)
> > create dir kernel-21 222MB in 88.53 seconds (2.51 MB/s)
> > create dir kernel-22 222MB in 86.25 seconds (2.58 MB/s)
> > create dir kernel-23 222MB in 85.34 seconds (2.61 MB/s)
> > create dir kernel-24 222MB in 84.61 seconds (2.63 MB/s)
> > create dir kernel-25 222MB in 88.13 seconds (2.52 MB/s)
> > create dir kernel-26 222MB in 85.57 seconds (2.60 MB/s)
> > create dir kernel-27 222MB in 87.26 seconds (2.55 MB/s)
> > create dir kernel-28 222MB in 83.68 seconds (2.66 MB/s)
> > create dir kernel-29 222MB in 86.33 seconds (2.58 MB/s)
> > === sdb ===
> >   CPU  0:              9366376 events,   439049 KiB data
> >   Total:               9366376 events (dropped 0),   439049 KiB data
> > patch dir kernel-7 109MB in 55.00 seconds (1.99 MB/s)
> > compile dir kernel-7 691MB in 37.15 seconds (18.62 MB/s)
> > compile dir kernel-14 680MB in 38.48 seconds (17.69 MB/s)
> > patch dir kernel-14 691MB in 93.31 seconds (7.41 MB/s)
> > read dir kernel-7 in 93.36 9.85 MB/s
> > read dir kernel-10 in 58.25 3.82 MB/s
> > create dir kernel-3116 222MB in 91.96 seconds (2.42 MB/s)
> > clean kernel-7 691MB in 5.16 seconds (134.03 MB/s)
> > read dir kernel-6 in 56.98 3.90 MB/s
> > stat dir kernel-2 in 19.42 seconds
> > compile dir kernel-2 680MB in 43.11 seconds (15.79 MB/s)
> > clean kernel-14 691MB in 6.27 seconds (110.30 MB/s)
> > clean kernel-2 680MB in 5.79 seconds (117.55 MB/s)
> > patch dir kernel-2 109MB in 71.22 seconds (1.54 MB/s)
> > stat dir kernel-2 in 16.06 seconds
> > create dir kernel-6231 222MB in 96.20 seconds (2.31 MB/s)
> > delete kernel-8 in 45.20 seconds
> > compile dir kernel-2 691MB in 38.58 seconds (17.93 MB/s)
> > create dir kernel-70151 222MB in 93.41 seconds (2.38 MB/s)
> > clean kernel-2 691MB in 5.09 seconds (135.87 MB/s)
> > create dir kernel-78184 222MB in 86.04 seconds (2.58 MB/s)
> > compile dir kernel-7 691MB in 37.60 seconds (18.39 MB/s)
> > create dir kernel-64250 222MB in 80.33 seconds (2.77 MB/s)
> > delete kernel-12 in 43.00 seconds
> > stat dir kernel-2 in 16.43 seconds
> > patch dir kernel-70151 109MB in 77.42 seconds (1.42 MB/s)
> > stat dir kernel-7 in 18.48 seconds
> > stat dir kernel-78184 in 18.62 seconds
> > compile dir kernel-2 691MB in 43.31 seconds (15.97 MB/s)
> > compile dir kernel-26 680MB in 50.37 seconds (13.51 MB/s)
> > stat dir kernel-7 in 21.52 seconds
> > create dir kernel-2717 222MB in 89.86 seconds (2.47 MB/s)
> > delete kernel-26 in 47.81 seconds
> > stat dir kernel-2 in 18.61 seconds
> > compile dir kernel-14 691MB in 46.66 seconds (14.82 MB/s)
> > compile dir kernel-70151 691MB in 39.19 seconds (17.65 MB/s)
> > create dir kernel-55376 222MB in 88.91 seconds (2.50 MB/s)
> > stat dir kernel-22 in 18.66 seconds
> > delete kernel-55376 in 37.71 seconds
> > patch dir kernel-27 109MB in 74.82 seconds (1.47 MB/s)
> > patch dir kernel-64250 109MB in 81.08 seconds (1.35 MB/s)
> > read dir kernel-6231 in 82.15 2.71 MB/s
> > patch dir kernel-9 109MB in 96.02 seconds (1.14 MB/s)
> > stat dir kernel-14 in 22.46 seconds
> > read dir kernel-29 in 58.10 3.83 MB/s
> > create dir kernel-57327 222MB in 93.92 seconds (2.37 MB/s)
> > stat dir kernel-14 in 21.92 seconds
> > compile dir kernel-27 691MB in 41.43 seconds (16.69 MB/s)
> > create dir kernel-64334 222MB in 89.31 seconds (2.49 MB/s)
> > patch dir kernel-1 109MB in 84.37 seconds (1.30 MB/s)
> > create dir kernel-16056 222MB in 94.93 seconds (2.34 MB/s)
> > clean kernel-7 691MB in 7.27 seconds (95.13 MB/s)
> > delete kernel-27 in 46.32 seconds
> > create dir kernel-51614 222MB in 88.91 seconds (2.50 MB/s)
> > clean kernel-14 691MB in 6.71 seconds (103.07 MB/s)
> > delete kernel-64250 in 43.60 seconds
> > stat dir kernel-2 in 24.25 seconds
> > clean kernel-70151 691MB in 6.20 seconds (111.55 MB/s)
> > delete kernel-14 in 40.74 seconds
> > read dir kernel-2 in 118.45 7.76 MB/s
> > create dir kernel-24150 222MB in 88.99 seconds (2.50 MB/s)
> > read dir kernel-9 in 83.70 2.73 MB/s
> > patch dir kernel-19 109MB in 76.06 seconds (1.44 MB/s)
> > clean kernel-2 691MB in 6.64 seconds (104.16 MB/s)
> > compile dir kernel-18 680MB in 47.33 seconds (14.38 MB/s)
> > compile dir kernel-2 691MB in 44.63 seconds (15.50 MB/s)
> > delete kernel-2 in 51.03 seconds
> > delete kernel-70151 in 45.96 seconds
> > stat dir kernel-1 in 17.56 seconds
> > read dir kernel-18 in 121.08 7.46 MB/s
> > clean kernel-18 680MB in 6.47 seconds (105.20 MB/s)
> > compile dir kernel-17 680MB in 52.10 seconds (13.06 MB/s)
> > read dir kernel-17 in 114.66 7.88 MB/s
> > stat dir kernel-18 in 30.36 seconds
> > stat dir kernel-64334 in 44.78 seconds
> > delete kernel-24150 in 44.79 seconds
> > delete kernel-17 in 47.64 seconds
> > stat dir kernel-1 in 19.87 seconds
> > compile dir kernel-7 691MB in 47.65 seconds (14.51 MB/s)
> > patch dir kernel-16 109MB in 100.96 seconds (1.09 MB/s)
> > stat dir kernel-7 in 21.35 seconds
> > create dir kernel-82195 222MB in 111.17 seconds (2.00 MB/s)
> > delete kernel-82195 in 40.79 seconds
> > stat dir kernel-3 in 19.51 seconds
> > patch dir kernel-2717 109MB in 94.55 seconds (1.16 MB/s)
> > patch dir kernel-5 109MB in 60.21 seconds (1.82 MB/s)
> > read dir kernel-2717 in 94.85 2.41 MB/s
> > delete kernel-29 in 40.51 seconds
> > clean kernel-7 691MB in 5.84 seconds (118.42 MB/s)
> > read dir kernel-4 in 57.91 3.84 MB/s
> > stat dir kernel-78184 in 19.65 seconds
> > patch dir kernel-0 109MB in 90.61 seconds (1.21 MB/s)
> > patch dir kernel-3 109MB in 75.67 seconds (1.45 MB/s)
> > create dir kernel-30226 222MB in 106.72 seconds (2.08 MB/s)
> > read dir kernel-19 in 83.79 2.72 MB/s
> > read dir kernel-9 in 82.64 2.76 MB/s
> > delete kernel-5 in 38.89 seconds
> > read dir kernel-7 in 59.70 3.82 MB/s
> > patch dir kernel-57327 109MB in 101.71 seconds (1.08 MB/s)
> > read dir kernel-11 in 59.83 3.72 MB/s
> >
> > run complete:
> > ==========================================================================
> > intial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
> > create total runs 14 avg 2.41 MB/s (user 13.83s sys 34.39s)
> > patch total runs 15 avg 1.79 MB/s (user 6.55s sys 34.71s)
> > compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
> > clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
> > read tree total runs 11 avg 3.30 MB/s (user 11.68s sys 24.50s)
> > read compiled tree total runs 4 avg 8.24 MB/s (user 13.67s sys 35.85s)
> > delete tree total runs 10 avg 42.12 seconds (user 6.76s sys 24.50s)
> > delete compiled tree total runs 4 avg 48.20 seconds (user 7.65s sys
> > 28.60s)
> > stat tree total runs 11 avg 21.90 seconds (user 6.87s sys 6.34s)
> > stat compiled tree total runs 7 avg 21.23 seconds (user 7.65s sys 7.15s)
> >
> > 2.) kernel with enabled hot tracking
> >
> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
> > native unpatched native-0 222MB in 112.82 seconds (1.97 MB/s)
> > native patched native-0 109MB in 27.38 seconds (4.01 MB/s)
> > native patched compiled native-0 691MB in 40.42 seconds (17.11 MB/s)
> > create dir kernel-0 222MB in 92.88 seconds (2.39 MB/s)
> > create dir kernel-1 222MB in 98.56 seconds (2.26 MB/s)
> > create dir kernel-2 222MB in 107.27 seconds (2.07 MB/s)
> > create dir kernel-3 222MB in 92.81 seconds (2.40 MB/s)
> > create dir kernel-4 222MB in 90.30 seconds (2.46 MB/s)
> > create dir kernel-5 222MB in 91.57 seconds (2.43 MB/s)
> > create dir kernel-6 222MB in 91.92 seconds (2.42 MB/s)
> > create dir kernel-7 222MB in 90.16 seconds (2.47 MB/s)
> > create dir kernel-8 222MB in 94.71 seconds (2.35 MB/s)
> > create dir kernel-9 222MB in 91.79 seconds (2.42 MB/s)
> > create dir kernel-10 222MB in 92.14 seconds (2.41 MB/s)
> > create dir kernel-11 222MB in 88.59 seconds (2.51 MB/s)
> > create dir kernel-12 222MB in 92.15 seconds (2.41 MB/s)
> > create dir kernel-13 222MB in 91.54 seconds (2.43 MB/s)
> > create dir kernel-14 222MB in 91.15 seconds (2.44 MB/s)
> > create dir kernel-15 222MB in 90.54 seconds (2.46 MB/s)
> > create dir kernel-16 222MB in 92.23 seconds (2.41 MB/s)
> > create dir kernel-17 222MB in 89.88 seconds (2.47 MB/s)
> > create dir kernel-18 222MB in 94.65 seconds (2.35 MB/s)
> > create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)
> > create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)
> > create dir kernel-21 222MB in 90.92 seconds (2.45 MB/s)
> > create dir kernel-22 222MB in 90.76 seconds (2.45 MB/s)
> > create dir kernel-23 222MB in 90.04 seconds (2.47 MB/s)
> > create dir kernel-24 222MB in 89.60 seconds (2.48 MB/s)
> > create dir kernel-25 222MB in 91.52 seconds (2.43 MB/s)
> > create dir kernel-26 222MB in 90.45 seconds (2.46 MB/s)
> > create dir kernel-27 222MB in 92.72 seconds (2.40 MB/s)
> > create dir kernel-28 222MB in 90.37 seconds (2.46 MB/s)
> > create dir kernel-29 222MB in 89.60 seconds (2.48 MB/s)
> > === sdb ===
> >   CPU  0:              8878754 events,   416192 KiB data
> >   Total:               8878754 events (dropped 0),   416192 KiB data
> > patch dir kernel-7 109MB in 61.00 seconds (1.80 MB/s)
> > compile dir kernel-7 691MB in 40.21 seconds (17.20 MB/s)
> > compile dir kernel-14 680MB in 45.97 seconds (14.81 MB/s)
> > patch dir kernel-14 691MB in 83.73 seconds (8.26 MB/s)
> > read dir kernel-7 in 88.66 10.37 MB/s
> > read dir kernel-10 in 56.44 3.94 MB/s
> > create dir kernel-3116 222MB in 91.58 seconds (2.43 MB/s)
> > clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)
> > read dir kernel-6 in 61.07 3.64 MB/s
> > stat dir kernel-2 in 21.42 seconds
> > compile dir kernel-2 680MB in 44.55 seconds (15.28 MB/s)
> > clean kernel-14 691MB in 6.98 seconds (99.08 MB/s)
> > clean kernel-2 680MB in 6.12 seconds (111.21 MB/s)
> > patch dir kernel-2 109MB in 73.95 seconds (1.48 MB/s)
> > stat dir kernel-2 in 18.61 seconds
> > create dir kernel-6231 222MB in 100.84 seconds (2.21 MB/s)
> > delete kernel-8 in 40.38 seconds
> > compile dir kernel-2 691MB in 42.18 seconds (16.40 MB/s)
> > create dir kernel-70151 222MB in 96.34 seconds (2.31 MB/s)
> > clean kernel-2 691MB in 4.54 seconds (152.33 MB/s)
> > create dir kernel-78184 222MB in 94.71 seconds (2.35 MB/s)
> > compile dir kernel-7 691MB in 43.64 seconds (15.85 MB/s)
> > create dir kernel-64250 222MB in 87.65 seconds (2.54 MB/s)
> > delete kernel-12 in 38.58 seconds
> > stat dir kernel-2 in 17.48 seconds
> > patch dir kernel-70151 109MB in 79.82 seconds (1.37 MB/s)
> > stat dir kernel-7 in 25.76 seconds
> > stat dir kernel-78184 in 20.30 seconds
> > compile dir kernel-2 691MB in 40.93 seconds (16.90 MB/s)
> > compile dir kernel-26 680MB in 48.86 seconds (13.93 MB/s)
> > stat dir kernel-7 in 23.87 seconds
> > create dir kernel-2717 222MB in 98.71 seconds (2.25 MB/s)
> > delete kernel-26 in 45.60 seconds
> > stat dir kernel-2 in 22.62 seconds
> > compile dir kernel-14 691MB in 51.12 seconds (13.53 MB/s)
> > compile dir kernel-70151 691MB in 41.40 seconds (16.71 MB/s)
> > create dir kernel-55376 222MB in 94.61 seconds (2.35 MB/s)
> > stat dir kernel-22 in 22.11 seconds
> > delete kernel-55376 in 36.47 seconds
> > patch dir kernel-27 109MB in 76.74 seconds (1.43 MB/s)
> > patch dir kernel-64250 109MB in 86.43 seconds (1.27 MB/s)
> > read dir kernel-6231 in 85.10 2.61 MB/s
> > patch dir kernel-9 109MB in 97.67 seconds (1.12 MB/s)
> > stat dir kernel-14 in 24.80 seconds
> > read dir kernel-29 in 61.00 3.65 MB/s
> > create dir kernel-57327 222MB in 101.42 seconds (2.19 MB/s)
> > stat dir kernel-14 in 22.45 seconds
> > compile dir kernel-27 691MB in 48.19 seconds (14.35 MB/s)
> > create dir kernel-64334 222MB in 96.65 seconds (2.30 MB/s)
> > patch dir kernel-1 109MB in 88.32 seconds (1.24 MB/s)
> > create dir kernel-16056 222MB in 100.60 seconds (2.21 MB/s)
> > clean kernel-7 691MB in 8.20 seconds (84.34 MB/s)
> > delete kernel-27 in 48.53 seconds
> > create dir kernel-51614 222MB in 98.07 seconds (2.27 MB/s)
> > clean kernel-14 691MB in 6.82 seconds (101.41 MB/s)
> > delete kernel-64250 in 44.01 seconds
> > stat dir kernel-2 in 26.37 seconds
> > clean kernel-70151 691MB in 6.21 seconds (111.37 MB/s)
> > delete kernel-14 in 41.74 seconds
> > read dir kernel-2 in 122.71 7.50 MB/s
> > create dir kernel-24150 222MB in 99.01 seconds (2.25 MB/s)
> > read dir kernel-9 in 78.29 2.91 MB/s
> > patch dir kernel-19 109MB in 77.45 seconds (1.42 MB/s)
> > clean kernel-2 691MB in 5.94 seconds (116.43 MB/s)
> > compile dir kernel-18 680MB in 49.17 seconds (13.84 MB/s)
> > compile dir kernel-2 691MB in 47.20 seconds (14.65 MB/s)
> > delete kernel-2 in 48.01 seconds
> > delete kernel-70151 in 47.60 seconds
> > stat dir kernel-1 in 21.80 seconds
> > read dir kernel-18 in 109.98 8.21 MB/s
> > clean kernel-18 680MB in 7.78 seconds (87.49 MB/s)
> > compile dir kernel-17 680MB in 54.39 seconds (12.51 MB/s)
> > read dir kernel-17 in 108.52 8.32 MB/s
> > stat dir kernel-18 in 19.48 seconds
> > stat dir kernel-64334 in 22.04 seconds
> > delete kernel-24150 in 44.36 seconds
> > delete kernel-17 in 49.09 seconds
> > stat dir kernel-1 in 18.16 seconds
> > compile dir kernel-7 691MB in 48.90 seconds (14.14 MB/s)
> > patch dir kernel-16 109MB in 103.71 seconds (1.06 MB/s)
> > stat dir kernel-7 in 21.94 seconds
> > create dir kernel-82195 222MB in 110.82 seconds (2.01 MB/s)
> > delete kernel-82195 in 38.64 seconds
> > stat dir kernel-3 in 22.88 seconds
> > patch dir kernel-2717 109MB in 92.23 seconds (1.19 MB/s)
> > patch dir kernel-5 109MB in 64.95 seconds (1.69 MB/s)
> > read dir kernel-2717 in 97.88 2.33 MB/s
> > delete kernel-29 in 40.59 seconds
> > clean kernel-7 691MB in 5.09 seconds (135.87 MB/s)
> > read dir kernel-4 in 59.42 3.74 MB/s
> > stat dir kernel-78184 in 20.24 seconds
> > patch dir kernel-0 109MB in 95.95 seconds (1.14 MB/s)
> > patch dir kernel-3 109MB in 62.86 seconds (1.74 MB/s)
> > create dir kernel-30226 222MB in 106.81 seconds (2.08 MB/s)
> > read dir kernel-19 in 81.32 2.81 MB/s
> > read dir kernel-9 in 74.65 3.06 MB/s
> > delete kernel-5 in 42.04 seconds
> > read dir kernel-7 in 61.95 3.68 MB/s
> > patch dir kernel-57327 109MB in 97.85 seconds (1.12 MB/s)
> > read dir kernel-11 in 58.85 3.78 MB/s
> >
> > run complete:
> > ==========================================================================
> > intial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
> > create total runs 14 avg 2.27 MB/s (user 13.66s sys 36.94s)
> > patch total runs 15 avg 1.82 MB/s (user 6.62s sys 36.93s)
> > compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
> > clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
> > read tree total runs 11 avg 3.29 MB/s (user 11.04s sys 28.65s)
> > read compiled tree total runs 4 avg 8.60 MB/s (user 13.16s sys 41.32s)
> > delete tree total runs 10 avg 41.44 seconds (user 6.43s sys 25.19s)
> > delete compiled tree total runs 4 avg 47.81 seconds (user 7.18s sys
> > 29.27s)
> > stat tree total runs 11 avg 20.41 seconds (user 6.39s sys 7.45s)
> > stat compiled tree total runs 7 avg 23.97 seconds (user 7.24s sys 8.74s)
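[Editor's note: as a quick sanity check, the averages from the two "run complete" summaries can be compared directly. The sketch below just computes the relative change between the original kernel and the hot-tracking kernel; the numbers are transcribed from the two summary blocks above.]

```python
# Averages transcribed from the two compilebench "run complete" summaries:
# (original kernel, kernel with hot tracking enabled)
results = {
    "initial create (MB/s)": (2.55, 2.42),
    "create (MB/s)":         (2.41, 2.27),
    "patch (MB/s)":          (1.79, 1.82),
    "compile (MB/s)":        (16.04, 15.01),
    "clean (MB/s)":          (113.53, 110.29),
    "read tree (MB/s)":      (3.30, 3.29),
}

def pct_change(base: float, new: float) -> float:
    """Relative change of `new` versus `base`, in percent."""
    return (new - base) / base * 100.0

deltas = {name: round(pct_change(base, new), 1)
          for name, (base, new) in results.items()}

for name, delta in deltas.items():
    print(f"{name}: {delta:+.1f}%")
```

By this measure the largest regressions are in compile and create throughput, each within a few percent, which is consistent with the thread's claim that no obvious degradation was found.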
> >
> > On Fri, 2012-11-16 at 17:51 +0800, zwu.kernel@gmail.com wrote:
> >> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> >>
> >> Hi, guys,
> >>
> >>   Any comments or ideas are appreciated, thanks.
> >>
> >> NOTE:
> >>
> >>   The patchset can be obtained via my kernel dev git on github:
> >> git://github.com/wuzhy/kernel.git hot_tracking
> >>   If you're interested, you can also review them via
> >> https://github.com/wuzhy/kernel/commits/hot_tracking
> >>
> >>   For more info, please check hot_tracking.txt in Documentation
> >>
> >> TODO List:
> >>
> >>  1.) Need to do scalability or performance tests. - Required
> >>  2.) Need a simpler but efficient temperature calculation function
> >>  3.) How to save the file temperature across umount so that the file
> >>      temperature can be preserved after reboot - Optional
> >>
> >> Changelog:
> >>
> >>  - Solved 64 bits inode number issue. [David Sterba]
> >>  - Embed struct hot_type in struct file_system_type [Darrick J. Wong]
> >>  - Cleaned up some issues [David Sterba]
> >>  - Use a static hot debugfs root [Greg KH]
> >>  - Rewritten debugfs support based on seq_file operation. [Dave Chinner]
> >>  - Refactored workqueue support. [Dave Chinner]
> >>  - Turned some macros into tunables   [Zhiyong, Zheng Liu]
> >>        TIME_TO_KICK, and HEAT_UPDATE_DELAY
> >>  - Introduce hot func registering framework [Zhiyong]
> >>  - Remove global variable for hot tracking [Zhiyong]
> >>  - Add xfs hot tracking support [Dave Chinner]
> >>  - Add ext4 hot tracking support [Zheng Liu]
> >>  - Cleaned up a lot of other issues [Dave Chinner]
> >>  - Added memory shrinker [Dave Chinner]
> >>  - Converted to one workqueue to update map info periodically [Dave Chinner]
> >>  - Cleaned up a lot of other issues [Dave Chinner]
> >>  - Reduce new files and put all in fs/hot_tracking.[ch] [Dave Chinner]
> >>  - Add btrfs hot tracking support [Zhiyong]
> >>  - The first three patches can probably just be flattened into one.
> >>                                         [Marco Stornelli , Dave Chinner]
> >>
> >> Zhi Yong Wu (16):
> >>   vfs: introduce some data structures
> >>   vfs: add init and cleanup functions
> >>   vfs: add I/O frequency update function
> >>   vfs: add two map arrays
> >>   vfs: add hooks to enable hot tracking
> >>   vfs: add temp calculation function
> >>   vfs: add map info update function
> >>   vfs: add aging function
> >>   vfs: add one work queue
> >>   vfs: add FS hot type support
> >>   vfs: register one shrinker
> >>   vfs: add one ioctl interface
> >>   vfs: add debugfs support
> >>   proc: add two hot_track proc files
> >>   btrfs: add hot tracking support
> >>   vfs: add documentation
> >>
> >>  Documentation/filesystems/00-INDEX         |    2 +
> >>  Documentation/filesystems/hot_tracking.txt |  263 ++++++
> >>  fs/Makefile                                |    2 +-
> >>  fs/btrfs/ctree.h                           |    1 +
> >>  fs/btrfs/super.c                           |   22 +-
> >>  fs/compat_ioctl.c                          |    5 +
> >>  fs/dcache.c                                |    2 +
> >>  fs/direct-io.c                             |    6 +
> >>  fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
> >>  fs/hot_tracking.h                          |   52 ++
> >>  fs/ioctl.c                                 |   74 ++
> >>  include/linux/fs.h                         |    5 +
> >>  include/linux/hot_tracking.h               |  152 ++++
> >>  kernel/sysctl.c                            |   14 +
> >>  mm/filemap.c                               |    6 +
> >>  mm/page-writeback.c                        |   12 +
> >>  mm/readahead.c                             |    7 +
> >>  17 files changed, 1929 insertions(+), 2 deletions(-)
> >>  create mode 100644 Documentation/filesystems/hot_tracking.txt
> >>  create mode 100644 fs/hot_tracking.c
> >>  create mode 100644 fs/hot_tracking.h
> >>  create mode 100644 include/linux/hot_tracking.h
> >>
> >
> > --
> > Regards,
> >
> > Zhi Yong Wu
> >
> 
> 
> 
> -- 
> Regards,
> 
> Zhi Yong Wu

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
  2012-12-12 19:50     ` Darrick J. Wong
@ 2012-12-13 12:17       ` Zhi Yong Wu
  2012-12-14  2:46         ` Darrick J. Wong
  0 siblings, 1 reply; 22+ messages in thread
From: Zhi Yong Wu @ 2012-12-13 12:17 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: wuzhy, linux-fsdevel, linux-kernel, viro, linuxram, david,
	swhiteho, dave, andi, northrup.james

On Thu, Dec 13, 2012 at 3:50 AM, Darrick J. Wong
<darrick.wong@oracle.com> wrote:
> On Mon, Dec 10, 2012 at 11:30:03AM +0800, Zhi Yong Wu wrote:
>> Hi, all,
>>
>> any comments or suggestions?
>
> Why did ffsb drop from 924 transactions/sec to 322?
It may be that some background noise affected the result. I am running a
larger-scale perf test in a cleaner environment, and I want to see
whether its ffsb results differ from these.

>
> --D
>>
>> On Thu, Dec 6, 2012 at 11:28 AM, Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> wrote:
>> > Hi, guys
>> >
>> > The perf testing was done separately with fs_mark, fio, ffsb and
>> > compilebench in one KVM guest.
>> >
>> > Below is the performance testing report for hot tracking; no obvious
>> > performance degradation was found.
>> >
>> > Note: "original kernel" means an unmodified source tree;
>> >       "kernel with enabled hot tracking" means a tree with the hot
>> > tracking patchset applied.
>> >
>> > The test env is set up as below:
>> >
>> > root@debian-i386:/home/zwu# uname -a
>> > Linux debian-i386 3.7.0-rc8+ #266 SMP Tue Dec 4 12:17:55 CST 2012 x86_64
>> > GNU/Linux
>> >
>> > root@debian-i386:/home/zwu# mkfs.xfs -f -l
>> > size=1310b,sunit=8 /home/zwu/bdev.img
>> > meta-data=/home/zwu/bdev.img     isize=256    agcount=4, agsize=128000
>> > blks
>> >          =                       sectsz=512   attr=2, projid32bit=0
>> > data     =                       bsize=4096   blocks=512000, imaxpct=25
>> >          =                       sunit=0      swidth=0 blks
>> > naming   =version 2              bsize=4096   ascii-ci=0
>> > log      =internal log           bsize=4096   blocks=1310, version=2
>> >          =                       sectsz=512   sunit=1 blks, lazy-count=1
>> > realtime =none                   extsz=4096   blocks=0, rtextents=0
>> >
>> > 1.) original kernel
>> >
>> > root@debian-i386:/home/zwu# mount -o
>> > loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
>> > [ 1197.421616] XFS (loop0): Mounting Filesystem
>> > [ 1197.567399] XFS (loop0): Ending clean mount
>> > root@debian-i386:/home/zwu# mount
>> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
>> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
>> > proc on /proc type proc (rw,noexec,nosuid,nodev)
>> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
>> > udev on /dev type tmpfs (rw,mode=0755)
>> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
>> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
>> > none on /selinux type selinuxfs (rw,relatime)
>> > debugfs on /sys/kernel/debug type debugfs (rw)
>> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
>> > (rw,noexec,nosuid,nodev)
>> > /dev/loop0 on /mnt/scratch type xfs (rw,logbsize=256k)
>> > root@debian-i386:/home/zwu# free -m
>> >              total       used       free     shared    buffers
>> > cached
>> > Mem:           112        109          2          0          4
>> > 53
>> > -/+ buffers/cache:         51         60
>> > Swap:          713         29        684
>> >
>> > 2.) kernel with enabled hot tracking
>> >
>> > root@debian-i386:/home/zwu# mount -o
>> > hot_track,loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
>> > [  364.648470] XFS (loop0): Mounting Filesystem
>> > [  364.910035] XFS (loop0): Ending clean mount
>> > [  364.921063] VFS: Turning on hot data tracking
>> > root@debian-i386:/home/zwu# mount
>> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
>> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
>> > proc on /proc type proc (rw,noexec,nosuid,nodev)
>> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
>> > udev on /dev type tmpfs (rw,mode=0755)
>> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
>> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
>> > none on /selinux type selinuxfs (rw,relatime)
>> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
>> > (rw,noexec,nosuid,nodev)
>> > /dev/loop0 on /mnt/scratch type xfs (rw,hot_track,logbsize=256k)
>> > root@debian-i386:/home/zwu# free -m
>> >              total       used       free     shared    buffers
>> > cached
>> > Mem:           112        107          4          0          2
>> > 34
>> > -/+ buffers/cache:         70         41
>> > Swap:          713          2        711
>> >
>> > 1. fs_mark test
>> >
>> > 1.) original kernel
>> >
>> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
>> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
>> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
>> > -d  /mnt/scratch/7
>> > #       Version 3.3, 8 thread(s) starting at Wed Dec  5 03:20:58 2012
>> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
>> > #       Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
>> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
>> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per write
>> > #       App overhead is time in microseconds spent in the test not doing file writing related system calls.
>> >
>> > FSUse%        Count         Size    Files/sec     App Overhead
>> >      2         8000            1        375.6         27175895
>> >      3        16000            1        375.6         27478079
>> >      4        24000            1        346.0         27819607
>> >      4        32000            1        316.9         25863385
>> >      5        40000            1        335.2         25460605
>> >      6        48000            1        312.3         25889196
>> >      7        56000            1        327.3         25000611
>> >      8        64000            1        304.4         28126698
>> >      9        72000            1        361.7         26652172
>> >      9        80000            1        370.1         27075875
>> >     10        88000            1        347.8         31093106
>> >     11        96000            1        387.1         26877324
>> >     12       104000            1        352.3         26635853
>> >     13       112000            1        379.3         26400198
>> >     14       120000            1        367.4         27228178
>> >     14       128000            1        359.2         27627871
>> >     15       136000            1        358.4         27089821
>> >     16       144000            1        385.5         27804852
>> >     17       152000            1        322.9         26221907
>> >     18       160000            1        393.2         26760040
>> >     18       168000            1        351.9         29210327
>> >     20       176000            1        395.2         24610548
>> >     20       184000            1        376.7         27518650
>> >     21       192000            1        340.1         27512874
>> >     22       200000            1        389.0         27109104
>> >     23       208000            1        389.7         29288594
>> >     24       216000            1        352.6         29948820
>> >     25       224000            1        380.4         26370958
>> >     26       232000            1        332.9         27770518
>> >     26       240000            1        333.6         25176691
>> >
>> > 2.) kernel with hot tracking enabled
>> >
>> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
>> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
>> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
>> > -d  /mnt/scratch/7
>> > #       Version 3.3, 8 thread(s) starting at Tue Dec  4 04:28:48 2012
>> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
>> > #       Directories:  Time based hash between directories across 100 subdirectories with 180 seconds per subdirectory.
>> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
>> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per write
>> > #       App overhead is time in microseconds spent in the test not doing file writing related system calls.
>> >
>> > FSUse%        Count         Size    Files/sec     App Overhead
>> >      4         8000            1        323.0         25104879
>> >      6        16000            1        351.4         25372919
>> >      8        24000            1        345.9         24107987
>> >      9        32000            1        313.2         26249533
>> >     10        40000            1        323.0         20312267
>> >     12        48000            1        303.2         22178040
>> >     14        56000            1        307.6         22775058
>> >     15        64000            1        317.9         25178845
>> >     17        72000            1        351.8         22020260
>> >     19        80000            1        369.3         23546708
>> >     21        88000            1        324.1         29068297
>> >     22        96000            1        355.3         25212333
>> >     24       104000            1        346.4         26622613
>> >     26       112000            1        360.4         25477193
>> >     28       120000            1        362.9         21774508
>> >     29       128000            1        329.0         25760109
>> >     31       136000            1        369.5         24540577
>> >     32       144000            1        330.2         26013559
>> >     34       152000            1        365.5         25643279
>> >     36       160000            1        366.2         24393130
>> >     38       168000            1        348.3         25248940
>> >     39       176000            1        357.3         24080574
>> >     40       184000            1        316.8         23011921
>> >     43       192000            1        351.7         27468060
>> >     44       200000            1        362.2         27540349
>> >     46       208000            1        340.9         26135445
>> >     48       216000            1        339.2         20926743
>> >     50       224000            1        316.5         21399871
>> >     52       232000            1        346.3         24669604
>> >     53       240000            1        320.5         22204449
>> >
>> >
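The per-iteration Files/sec figures above are fairly noisy, so averages are easier to compare. A minimal sketch of such a summary (my own helper, not part of the patch set or of fs_mark), shown here on just the first three samples of each run:

```python
# Hypothetical helper for summarizing the fs_mark output above:
# average the Files/sec column to compare the two kernels.
def files_per_sec_mean(rows):
    """rows: (FSUse%, Count, Size, Files/sec, App Overhead) tuples."""
    return sum(r[3] for r in rows) / len(rows)

# First three samples from each run above, for illustration only.
baseline = [(2, 8000, 1, 375.6, 27175895),
            (3, 16000, 1, 375.6, 27478079),
            (4, 24000, 1, 346.0, 27819607)]
hot_track = [(4, 8000, 1, 323.0, 25104879),
             (6, 16000, 1, 351.4, 25372919),
             (8, 24000, 1, 345.9, 24107987)]

print(round(files_per_sec_mean(baseline), 1))   # 365.7
print(round(files_per_sec_mean(hot_track), 1))  # 340.1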
>> > 2. FFSB test
>> >
>> > 1.) original kernel
>> >
>> > FFSB version 6.0-RC2 started
>> >
>> > benchmark time = 10
>> > ThreadGroup 0
>> > ================
>> >          num_threads      = 4
>> >
>> >          read_random      = off
>> >          read_size        = 40960       (40KB)
>> >          read_blocksize   = 4096        (4KB)
>> >          read_skip        = off
>> >          read_skipsize    = 0   (0B)
>> >
>> >          write_random     = off
>> >          write_size       = 40960       (40KB)
>> >          fsync_file       = 0
>> >          write_blocksize  = 4096        (4KB)
>> >          wait time        = 0
>> >
>> >          op weights
>> >                          read = 0 (0.00%)
>> >                       readall = 1 (10.00%)
>> >                         write = 0 (0.00%)
>> >                        create = 1 (10.00%)
>> >                        append = 1 (10.00%)
>> >                        delete = 1 (10.00%)
>> >                        metaop = 0 (0.00%)
>> >                     createdir = 0 (0.00%)
>> >                          stat = 1 (10.00%)
>> >                      writeall = 1 (10.00%)
>> >                writeall_fsync = 1 (10.00%)
>> >                    open_close = 1 (10.00%)
>> >                   write_fsync = 0 (0.00%)
>> >                  create_fsync = 1 (10.00%)
>> >                  append_fsync = 1 (10.00%)
>> >
>> > FileSystem /mnt/scratch/test1
>> > ==========
>> >          num_dirs         = 100
>> >          starting files   = 0
>> >
>> >          Fileset weight:
>> >                      33554432 (  32MB) -> 1 (1.00%)
>> >                       8388608 (   8MB) -> 2 (2.00%)
>> >                        524288 ( 512KB) -> 3 (3.00%)
>> >                        262144 ( 256KB) -> 4 (4.00%)
>> >                        131072 ( 128KB) -> 5 (5.00%)
>> >                         65536 (  64KB) -> 8 (8.00%)
>> >                         32768 (  32KB) -> 10 (10.00%)
>> >                         16384 (  16KB) -> 13 (13.00%)
>> >                          8192 (   8KB) -> 21 (21.00%)
>> >                          4096 (   4KB) -> 33 (33.00%)
>> >          directio         = off
>> >          alignedio        = off
>> >          bufferedio       = off
>> >
>> >          aging is off
>> >          current utilization = 26.19%
>> >
>> > creating new fileset /mnt/scratch/test1
>> > fs setup took 87 secs
>> > Syncing()...1 sec
>> > Starting Actual Benchmark At: Wed Dec  5 03:38:06 2012
>> >
>> > Syncing()...0 sec
>> > FFSB benchmark finished   at: Wed Dec  5 03:38:18 2012
>> >
>> > Results:
>> > Benchmark took 11.44 sec
>> >
>> > Total Results
>> > ===============
>> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight     Throughput
>> >              =======   ============      =========      =======     ===========     ==========
>> >              readall :           93           8.13       0.880%         21.053%     32.5KB/sec
>> >               create :           20           1.75       0.189%          5.263%     6.99KB/sec
>> >               append :           10           0.87       0.095%          2.632%     3.5KB/sec
>> >               delete :            4           0.35       0.038%         10.526%     NA
>> >                 stat :            3           0.26       0.028%          7.895%     NA
>> >             writeall :         2178         190.39      20.600%         10.526%     762KB/sec
>> >       writeall_fsync :            5           0.44       0.047%          5.263%     1.75KB/sec
>> >           open_close :            6           0.52       0.057%         15.789%     NA
>> >         create_fsync :         8234         719.78      77.878%         15.789%     2.81MB/sec
>> >         append_fsync :           20           1.75       0.189%          5.263%     6.99KB/sec
>> > -
>> > 924.24 Transactions per Second
>> >
>> > Throughput Results
>> > ===================
>> > Read Throughput: 32.5KB/sec
>> > Write Throughput: 3.57MB/sec
>> >
>> > System Call Latency statistics in millisecs
>> > =====
>> >                 Min             Avg             Max             Total Calls
>> >                 ========        ========        ========        ============
>> > [   open]       0.050000        3.980161        41.840000                 31
>> >    -
>> > [   read]       0.017000        71.442215       1286.122000               93
>> >    -
>> > [  write]       0.052000        1.034817        2201.956000            10467
>> >    -
>> > [ unlink]       1.118000        185.398750      730.807000                 4
>> >    -
>> > [  close]       0.019000        1.968968        39.679000                 31
>> >    -
>> > [   stat]       0.043000        2.173667        6.428000                   3
>> >    -
>> >
>> > 0.8% User   Time
>> > 9.2% System Time
>> > 10.0% CPU Utilization
>> >
>> > 2.) kernel with hot tracking enabled
>> >
>> > FFSB version 6.0-RC2 started
>> >
>> > benchmark time = 10
>> > ThreadGroup 0
>> > ================
>> >          num_threads      = 4
>> >
>> >          read_random      = off
>> >          read_size        = 40960       (40KB)
>> >          read_blocksize   = 4096        (4KB)
>> >          read_skip        = off
>> >          read_skipsize    = 0   (0B)
>> >
>> >          write_random     = off
>> >          write_size       = 40960       (40KB)
>> >          fsync_file       = 0
>> >          write_blocksize  = 4096        (4KB)
>> >          wait time        = 0
>> >
>> >          op weights
>> >                          read = 0 (0.00%)
>> >                       readall = 1 (10.00%)
>> >                         write = 0 (0.00%)
>> >                        create = 1 (10.00%)
>> >                        append = 1 (10.00%)
>> >                        delete = 1 (10.00%)
>> >                        metaop = 0 (0.00%)
>> >                     createdir = 0 (0.00%)
>> >                          stat = 1 (10.00%)
>> >                      writeall = 1 (10.00%)
>> >                writeall_fsync = 1 (10.00%)
>> >                    open_close = 1 (10.00%)
>> >                   write_fsync = 0 (0.00%)
>> >                  create_fsync = 1 (10.00%)
>> >                  append_fsync = 1 (10.00%)
>> >
>> > FileSystem /mnt/scratch/test1
>> > ==========
>> >          num_dirs         = 100
>> >          starting files   = 0
>> >
>> >          Fileset weight:
>> >                      33554432 (  32MB) -> 1 (1.00%)
>> >                       8388608 (   8MB) -> 2 (2.00%)
>> >                        524288 ( 512KB) -> 3 (3.00%)
>> >                        262144 ( 256KB) -> 4 (4.00%)
>> >                        131072 ( 128KB) -> 5 (5.00%)
>> >                         65536 (  64KB) -> 8 (8.00%)
>> >                         32768 (  32KB) -> 10 (10.00%)
>> >                         16384 (  16KB) -> 13 (13.00%)
>> >                          8192 (   8KB) -> 21 (21.00%)
>> >                          4096 (   4KB) -> 33 (33.00%)
>> >          directio         = off
>> >          alignedio        = off
>> >          bufferedio       = off
>> >
>> >          aging is off
>> >          current utilization = 52.46%
>> >
>> > creating new fileset /mnt/scratch/test1
>> > fs setup took 42 secs
>> > Syncing()...1 sec
>> > Starting Actual Benchmark At: Tue Dec  4 06:41:54 2012
>> >
>> > Syncing()...0 sec
>> > FFSB benchmark finished   at: Tue Dec  4 06:42:53 2012
>> >
>> > Results:
>> > Benchmark took 59.42 sec
>> >
>> > Total Results
>> > ===============
>> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight     Throughput
>> >              =======   ============      =========      =======     ===========     ==========
>> >              readall :        10510         176.87      54.808%         10.959%     707KB/sec
>> >               create :           48           0.81       0.250%          9.589%     3.23KB/sec
>> >               append :          100           1.68       0.521%         13.699%     6.73KB/sec
>> >               delete :            5           0.08       0.026%          6.849%     NA
>> >                 stat :            5           0.08       0.026%          6.849%     NA
>> >             writeall :          130           2.19       0.678%         12.329%     8.75KB/sec
>> >       writeall_fsync :           19           0.32       0.099%          8.219%     1.28KB/sec
>> >           open_close :            9           0.15       0.047%         12.329%     NA
>> >         create_fsync :         8300         139.67      43.283%         12.329%     559KB/sec
>> >         append_fsync :           50           0.84       0.261%          6.849%     3.37KB/sec
>> > -
>> > 322.70 Transactions per Second
>> >
>> > Throughput Results
>> > ===================
>> > Read Throughput: 707KB/sec
>> > Write Throughput: 582KB/sec
>> >
>> > System Call Latency statistics in millisecs
>> > =====
>> >                 Min             Avg             Max             Total Calls
>> >                 ========        ========        ========        ============
>> > [   open]       0.061000        0.750540        10.721000                 63
>> >    -
>> > [   read]       0.017000        11.058425       28555.394000           10510
>> >    -
>> > [  write]       0.034000        6.705286        26812.076000            8647
>> >    -
>> > [ unlink]       0.922000        7.679800        25.364000                  5
>> >    -
>> > [  close]       0.019000        0.996635        34.723000                 63
>> >    -
>> > [   stat]       0.046000        0.942800        4.489000                   5
>> >    -
>> >
>> > 0.2% User   Time
>> > 2.6% System Time
>> > 2.8% CPU Utilization
>> >
>> >
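One caveat when comparing the two headline FFSB numbers above: the runs started from different fileset utilizations (26.19% vs 52.46%), so the ratio below is only a rough indicator, not a controlled measurement.

```python
# Rough comparison of the two FFSB runs above; not a controlled
# measurement, since the filesets were at different utilizations.
baseline_tps = 924.24   # Transactions per Second, original kernel
hot_track_tps = 322.70  # Transactions per Second, hot tracking enabled

slowdown = baseline_tps / hot_track_tps
print(f"hot tracking run was {slowdown:.2f}x slower in overall TPS")
```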
>> > 3. fio test
>> >
>> > 1.) original kernel
>> >
>> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > Starting 16 threads
>> >
>> > seq-read: (groupid=0, jobs=4): err= 0: pid=1646
>> >   read : io=2,835MB, bw=24,192KB/s, iops=3,023, runt=120021msec
>> >     slat (usec): min=0, max=999K, avg=1202.67, stdev=3145.84
>> >     clat (usec): min=0, max=1,536K, avg=9186.07, stdev=11344.56
>> >     bw (KB/s) : min=   39, max=21301, per=26.11%, avg=6315.41, stdev=1082.63
>> >   cpu          : usr=10.89%, sys=33.14%, ctx=1488108, majf=13, minf=2238
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=362940/0, short=0/0
>> >      lat (usec): 2=3.53%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
>> >      lat (usec): 750=0.03%, 1000=0.03%
>> >      lat (msec): 2=1.75%, 4=1.08%, 10=68.93%, 20=22.39%, 50=2.02%
>> >      lat (msec): 100=0.16%, 250=0.04%, 1000=0.01%, 2000=0.03%
>> > seq-write: (groupid=1, jobs=4): err= 0: pid=1646
>> >   write: io=1,721MB, bw=14,652KB/s, iops=1,831, runt=120277msec
>> >     slat (usec): min=0, max=1,004K, avg=1744.41, stdev=3144.06
>> >     clat (usec): min=0, max=1,014K, avg=15699.65, stdev=19751.69
>> >     bw (KB/s) : min=  285, max=18032, per=26.41%, avg=3869.67, stdev=762.96
>> >   cpu          : usr=6.29%, sys=22.61%, ctx=880380, majf=36, minf=3222
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=0/220282, short=0/0
>> >      lat (usec): 2=2.43%, 500=0.01%, 750=0.12%, 1000=0.14%
>> >      lat (msec): 2=0.86%, 4=1.72%, 10=39.03%, 20=42.20%, 50=11.87%
>> >      lat (msec): 100=1.15%, 250=0.17%, 500=0.06%, 750=0.14%, 1000=0.09%
>> >      lat (msec): 2000=0.02%
>> > rnd-read: (groupid=2, jobs=4): err= 0: pid=1646
>> >   read : io=65,128KB, bw=541KB/s, iops=67, runt=120381msec
>> >     slat (usec): min=48, max=55,230, avg=167.95, stdev=248.50
>> >     clat (msec): min=74, max=4,229, avg=472.23, stdev=129.50
>> >     bw (KB/s) : min=    0, max=  203, per=25.34%, avg=137.08, stdev=21.73
>> >   cpu          : usr=0.85%, sys=2.19%, ctx=44001, majf=30, minf=3726
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=8141/0, short=0/0
>> >
>> >      lat (msec): 100=0.04%, 250=0.09%, 500=81.72%, 750=13.09%, 1000=2.97%
>> >      lat (msec): 2000=1.50%, >=2000=0.59%
>> > rnd-write: (groupid=3, jobs=4): err= 0: pid=1646
>> >   write: io=200MB, bw=1,698KB/s, iops=212, runt=120331msec
>> >     slat (usec): min=48, max=215K, avg=2272.24, stdev=2283.09
>> >     clat (usec): min=762, max=14,617K, avg=147521.66, stdev=444146.36
>> >     bw (KB/s) : min=    1, max= 3960, per=56.86%, avg=964.90, stdev=514.63
>> >   cpu          : usr=1.25%, sys=4.20%, ctx=135229, majf=0, minf=10194
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=0/25536, short=0/0
>> >      lat (usec): 1000=0.26%
>> >      lat (msec): 2=0.13%, 4=2.01%, 10=3.77%, 20=42.78%, 50=20.95%
>> >      lat (msec): 100=12.83%, 250=12.50%, 500=2.49%, 750=0.33%, 1000=0.12%
>> >      lat (msec): 2000=0.53%, >=2000=1.30%
>> >
>> > Run status group 0 (all jobs):
>> >    READ: io=2,835MB, aggrb=24,191KB/s, minb=24,772KB/s, maxb=24,772KB/s, mint=120021msec, maxt=120021msec
>> >
>> > Run status group 1 (all jobs):
>> >   WRITE: io=1,721MB, aggrb=14,651KB/s, minb=15,003KB/s, maxb=15,003KB/s, mint=120277msec, maxt=120277msec
>> >
>> > Run status group 2 (all jobs):
>> >    READ: io=65,128KB, aggrb=541KB/s, minb=553KB/s, maxb=553KB/s, mint=120381msec, maxt=120381msec
>> >
>> > Run status group 3 (all jobs):
>> >   WRITE: io=200MB, aggrb=1,697KB/s, minb=1,738KB/s, maxb=1,738KB/s, mint=120331msec, maxt=120331msec
>> >
>> > Disk stats (read/write):
>> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
>> >
>> > 2.) kernel with hot tracking enabled
>> >
>> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > ...
>> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
>> > Starting 16 threads
>> >
>> > seq-read: (groupid=0, jobs=4): err= 0: pid=2163
>> >   read : io=3,047MB, bw=26,001KB/s, iops=3,250, runt=120003msec
>> >     slat (usec): min=0, max=1,000K, avg=1141.34, stdev=2175.25
>> >     clat (usec): min=0, max=1,002K, avg=8610.96, stdev=6184.67
>> >     bw (KB/s) : min=   12, max=18896, per=25.28%, avg=6572.50, stdev=713.22
>> >   cpu          : usr=10.38%, sys=35.02%, ctx=1601418, majf=12, minf=2235
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=390029/0, short=0/0
>> >      lat (usec): 2=1.49%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
>> >      lat (usec): 750=0.01%, 1000=0.02%
>> >      lat (msec): 2=1.53%, 4=0.86%, 10=79.60%, 20=14.93%, 50=1.43%
>> >      lat (msec): 100=0.09%, 250=0.02%, 500=0.01%, 1000=0.01%, 2000=0.01%
>> > seq-write: (groupid=1, jobs=4): err= 0: pid=2163
>> >   write: io=1,752MB, bw=14,950KB/s, iops=1,868, runt=120003msec
>> >     slat (usec): min=0, max=1,002K, avg=1697.47, stdev=3568.70
>> >     clat (usec): min=0, max=1,019K, avg=15630.94, stdev=21109.46
>> >     bw (KB/s) : min=  123, max=14693, per=26.31%, avg=3933.46, stdev=779.57
>> >   cpu          : usr=6.31%, sys=21.85%, ctx=894177, majf=4, minf=3407
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=0/224253, short=0/0
>> >      lat (usec): 2=2.44%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.06%
>> >      lat (usec): 1000=0.23%
>> >      lat (msec): 2=0.73%, 4=2.00%, 10=40.15%, 20=42.68%, 50=10.25%
>> >      lat (msec): 100=0.95%, 250=0.14%, 500=0.10%, 750=0.12%, 1000=0.11%
>> >      lat (msec): 2000=0.03%
>> > rnd-read: (groupid=2, jobs=4): err= 0: pid=2163
>> >   read : io=85,208KB, bw=709KB/s, iops=88, runt=120252msec
>> >     slat (usec): min=52, max=48,325, avg=204.43, stdev=596.50
>> >     clat (msec): min=1, max=2,754, avg=359.99, stdev=78.96
>> >     bw (KB/s) : min=    0, max=  249, per=25.17%, avg=178.20, stdev=23.79
>> >   cpu          : usr=1.00%, sys=2.64%, ctx=55704, majf=28, minf=2971
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=10651/0, short=0/0
>> >
>> >      lat (msec): 2=0.01%, 50=0.02%, 100=0.04%, 250=1.61%, 500=92.60%
>> >      lat (msec): 750=4.24%, 1000=0.68%, 2000=0.59%, >=2000=0.22%
>> > rnd-write: (groupid=3, jobs=4): err= 0: pid=2163
>> >   write: io=247MB, bw=2,019KB/s, iops=252, runt=125287msec
>> >     slat (usec): min=51, max=286K, avg=2576.23, stdev=2882.30
>> >     clat (usec): min=698, max=8,156K, avg=123274.05, stdev=355311.20
>> >     bw (KB/s) : min=    1, max= 4848, per=57.62%, avg=1162.77, stdev=560.79
>> >   cpu          : usr=1.33%, sys=4.24%, ctx=163334, majf=0, minf=8588
>> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
>> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>> >      issued r/w: total=0/31616, short=0/0
>> >      lat (usec): 750=0.03%, 1000=0.15%
>> >      lat (msec): 2=0.06%, 4=2.15%, 10=3.57%, 20=48.23%, 50=22.43%
>> >      lat (msec): 100=11.48%, 250=9.14%, 500=1.04%, 750=0.16%, 1000=0.05%
>> >      lat (msec): 2000=0.09%, >=2000=1.42%
>> >
>> > Run status group 0 (all jobs):
>> >    READ: io=3,047MB, aggrb=26,001KB/s, minb=26,625KB/s, maxb=26,625KB/s, mint=120003msec, maxt=120003msec
>> >
>> > Run status group 1 (all jobs):
>> >   WRITE: io=1,752MB, aggrb=14,949KB/s, minb=15,308KB/s, maxb=15,308KB/s, mint=120003msec, maxt=120003msec
>> >
>> > Run status group 2 (all jobs):
>> >    READ: io=85,208KB, aggrb=708KB/s, minb=725KB/s, maxb=725KB/s, mint=120252msec, maxt=120252msec
>> >
>> > Run status group 3 (all jobs):
>> >   WRITE: io=247MB, aggrb=2,018KB/s, minb=2,067KB/s, maxb=2,067KB/s, mint=125287msec, maxt=125287msec
>> >
>> > Disk stats (read/write):
>> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
>> >
>> >
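The fio job file itself was not posted; what follows is my approximate reconstruction from the reported parameters (bs=8K, libaio, iodepth=8, four groups of four threads, ~120 s per group). The directory, per-job numjobs, and the use of thread mode are inferred; the file size/name settings are not recoverable from the output.

```ini
; Approximate reconstruction of the fio job above - NOT the original.
; directory, numjobs and thread are inferred; size parameters omitted.
[global]
directory=/mnt/scratch
ioengine=libaio
iodepth=8
bs=8k
runtime=120
time_based
thread
numjobs=4
group_reporting

[seq-read]
rw=read

[seq-write]
stonewall
rw=write

[rnd-read]
stonewall
rw=randread

[rnd-write]
stonewall
rw=randwrite
```

The stonewall flags are what would produce the four separate reporting groups (g=0 through g=3) seen above.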
>> > 4. compilebench test
>> >
>> > 1.) original kernel
>> >
>> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
>> >
>> > native unpatched native-0 222MB in 87.48 seconds (2.54 MB/s)
>> > native patched native-0 109MB in 24.89 seconds (4.41 MB/s)
>> > native patched compiled native-0 691MB in 35.54 seconds (19.46 MB/s)
>> > create dir kernel-0 222MB in 91.22 seconds (2.44 MB/s)
>> > create dir kernel-1 222MB in 91.55 seconds (2.43 MB/s)
>> > create dir kernel-2 222MB in 97.00 seconds (2.29 MB/s)
>> > create dir kernel-3 222MB in 87.64 seconds (2.54 MB/s)
>> > create dir kernel-4 222MB in 86.18 seconds (2.58 MB/s)
>> > create dir kernel-5 222MB in 84.68 seconds (2.63 MB/s)
>> > create dir kernel-6 222MB in 85.02 seconds (2.62 MB/s)
>> > create dir kernel-7 222MB in 87.74 seconds (2.53 MB/s)
>> > create dir kernel-8 222MB in 86.79 seconds (2.56 MB/s)
>> > create dir kernel-9 222MB in 87.85 seconds (2.53 MB/s)
>> > create dir kernel-10 222MB in 86.88 seconds (2.56 MB/s)
>> > create dir kernel-11 222MB in 86.47 seconds (2.57 MB/s)
>> > create dir kernel-12 222MB in 84.26 seconds (2.64 MB/s)
>> > create dir kernel-13 222MB in 87.40 seconds (2.54 MB/s)
>> > create dir kernel-14 222MB in 85.44 seconds (2.60 MB/s)
>> > create dir kernel-15 222MB in 86.80 seconds (2.56 MB/s)
>> > create dir kernel-16 222MB in 88.57 seconds (2.51 MB/s)
>> > create dir kernel-17 222MB in 85.66 seconds (2.60 MB/s)
>> > create dir kernel-18 222MB in 87.40 seconds (2.54 MB/s)
>> > create dir kernel-19 222MB in 85.47 seconds (2.60 MB/s)
>> > create dir kernel-20 222MB in 89.29 seconds (2.49 MB/s)
>> > create dir kernel-21 222MB in 88.53 seconds (2.51 MB/s)
>> > create dir kernel-22 222MB in 86.25 seconds (2.58 MB/s)
>> > create dir kernel-23 222MB in 85.34 seconds (2.61 MB/s)
>> > create dir kernel-24 222MB in 84.61 seconds (2.63 MB/s)
>> > create dir kernel-25 222MB in 88.13 seconds (2.52 MB/s)
>> > create dir kernel-26 222MB in 85.57 seconds (2.60 MB/s)
>> > create dir kernel-27 222MB in 87.26 seconds (2.55 MB/s)
>> > create dir kernel-28 222MB in 83.68 seconds (2.66 MB/s)
>> > create dir kernel-29 222MB in 86.33 seconds (2.58 MB/s)
>> > === sdb ===
>> >   CPU  0:              9366376 events,   439049 KiB data
>> >   Total:               9366376 events (dropped 0),   439049 KiB data
>> > patch dir kernel-7 109MB in 55.00 seconds (1.99 MB/s)
>> > compile dir kernel-7 691MB in 37.15 seconds (18.62 MB/s)
>> > compile dir kernel-14 680MB in 38.48 seconds (17.69 MB/s)
>> > patch dir kernel-14 691MB in 93.31 seconds (7.41 MB/s)
>> > read dir kernel-7 in 93.36 9.85 MB/s
>> > read dir kernel-10 in 58.25 3.82 MB/s
>> > create dir kernel-3116 222MB in 91.96 seconds (2.42 MB/s)
>> > clean kernel-7 691MB in 5.16 seconds (134.03 MB/s)
>> > read dir kernel-6 in 56.98 3.90 MB/s
>> > stat dir kernel-2 in 19.42 seconds
>> > compile dir kernel-2 680MB in 43.11 seconds (15.79 MB/s)
>> > clean kernel-14 691MB in 6.27 seconds (110.30 MB/s)
>> > clean kernel-2 680MB in 5.79 seconds (117.55 MB/s)
>> > patch dir kernel-2 109MB in 71.22 seconds (1.54 MB/s)
>> > stat dir kernel-2 in 16.06 seconds
>> > create dir kernel-6231 222MB in 96.20 seconds (2.31 MB/s)
>> > delete kernel-8 in 45.20 seconds
>> > compile dir kernel-2 691MB in 38.58 seconds (17.93 MB/s)
>> > create dir kernel-70151 222MB in 93.41 seconds (2.38 MB/s)
>> > clean kernel-2 691MB in 5.09 seconds (135.87 MB/s)
>> > create dir kernel-78184 222MB in 86.04 seconds (2.58 MB/s)
>> > compile dir kernel-7 691MB in 37.60 seconds (18.39 MB/s)
>> > create dir kernel-64250 222MB in 80.33 seconds (2.77 MB/s)
>> > delete kernel-12 in 43.00 seconds
>> > stat dir kernel-2 in 16.43 seconds
>> > patch dir kernel-70151 109MB in 77.42 seconds (1.42 MB/s)
>> > stat dir kernel-7 in 18.48 seconds
>> > stat dir kernel-78184 in 18.62 seconds
>> > compile dir kernel-2 691MB in 43.31 seconds (15.97 MB/s)
>> > compile dir kernel-26 680MB in 50.37 seconds (13.51 MB/s)
>> > stat dir kernel-7 in 21.52 seconds
>> > create dir kernel-2717 222MB in 89.86 seconds (2.47 MB/s)
>> > delete kernel-26 in 47.81 seconds
>> > stat dir kernel-2 in 18.61 seconds
>> > compile dir kernel-14 691MB in 46.66 seconds (14.82 MB/s)
>> > compile dir kernel-70151 691MB in 39.19 seconds (17.65 MB/s)
>> > create dir kernel-55376 222MB in 88.91 seconds (2.50 MB/s)
>> > stat dir kernel-22 in 18.66 seconds
>> > delete kernel-55376 in 37.71 seconds
>> > patch dir kernel-27 109MB in 74.82 seconds (1.47 MB/s)
>> > patch dir kernel-64250 109MB in 81.08 seconds (1.35 MB/s)
>> > read dir kernel-6231 in 82.15 2.71 MB/s
>> > patch dir kernel-9 109MB in 96.02 seconds (1.14 MB/s)
>> > stat dir kernel-14 in 22.46 seconds
>> > read dir kernel-29 in 58.10 3.83 MB/s
>> > create dir kernel-57327 222MB in 93.92 seconds (2.37 MB/s)
>> > stat dir kernel-14 in 21.92 seconds
>> > compile dir kernel-27 691MB in 41.43 seconds (16.69 MB/s)
>> > create dir kernel-64334 222MB in 89.31 seconds (2.49 MB/s)
>> > patch dir kernel-1 109MB in 84.37 seconds (1.30 MB/s)
>> > create dir kernel-16056 222MB in 94.93 seconds (2.34 MB/s)
>> > clean kernel-7 691MB in 7.27 seconds (95.13 MB/s)
>> > delete kernel-27 in 46.32 seconds
>> > create dir kernel-51614 222MB in 88.91 seconds (2.50 MB/s)
>> > clean kernel-14 691MB in 6.71 seconds (103.07 MB/s)
>> > delete kernel-64250 in 43.60 seconds
>> > stat dir kernel-2 in 24.25 seconds
>> > clean kernel-70151 691MB in 6.20 seconds (111.55 MB/s)
>> > delete kernel-14 in 40.74 seconds
>> > read dir kernel-2 in 118.45 7.76 MB/s
>> > create dir kernel-24150 222MB in 88.99 seconds (2.50 MB/s)
>> > read dir kernel-9 in 83.70 2.73 MB/s
>> > patch dir kernel-19 109MB in 76.06 seconds (1.44 MB/s)
>> > clean kernel-2 691MB in 6.64 seconds (104.16 MB/s)
>> > compile dir kernel-18 680MB in 47.33 seconds (14.38 MB/s)
>> > compile dir kernel-2 691MB in 44.63 seconds (15.50 MB/s)
>> > delete kernel-2 in 51.03 seconds
>> > delete kernel-70151 in 45.96 seconds
>> > stat dir kernel-1 in 17.56 seconds
>> > read dir kernel-18 in 121.08 7.46 MB/s
>> > clean kernel-18 680MB in 6.47 seconds (105.20 MB/s)
>> > compile dir kernel-17 680MB in 52.10 seconds (13.06 MB/s)
>> > read dir kernel-17 in 114.66 7.88 MB/s
>> > stat dir kernel-18 in 30.36 seconds
>> > stat dir kernel-64334 in 44.78 seconds
>> > delete kernel-24150 in 44.79 seconds
>> > delete kernel-17 in 47.64 seconds
>> > stat dir kernel-1 in 19.87 seconds
>> > compile dir kernel-7 691MB in 47.65 seconds (14.51 MB/s)
>> > patch dir kernel-16 109MB in 100.96 seconds (1.09 MB/s)
>> > stat dir kernel-7 in 21.35 seconds
>> > create dir kernel-82195 222MB in 111.17 seconds (2.00 MB/s)
>> > delete kernel-82195 in 40.79 seconds
>> > stat dir kernel-3 in 19.51 seconds
>> > patch dir kernel-2717 109MB in 94.55 seconds (1.16 MB/s)
>> > patch dir kernel-5 109MB in 60.21 seconds (1.82 MB/s)
>> > read dir kernel-2717 in 94.85 2.41 MB/s
>> > delete kernel-29 in 40.51 seconds
>> > clean kernel-7 691MB in 5.84 seconds (118.42 MB/s)
>> > read dir kernel-4 in 57.91 3.84 MB/s
>> > stat dir kernel-78184 in 19.65 seconds
>> > patch dir kernel-0 109MB in 90.61 seconds (1.21 MB/s)
>> > patch dir kernel-3 109MB in 75.67 seconds (1.45 MB/s)
>> > create dir kernel-30226 222MB in 106.72 seconds (2.08 MB/s)
>> > read dir kernel-19 in 83.79 2.72 MB/s
>> > read dir kernel-9 in 82.64 2.76 MB/s
>> > delete kernel-5 in 38.89 seconds
>> > read dir kernel-7 in 59.70 3.82 MB/s
>> > patch dir kernel-57327 109MB in 101.71 seconds (1.08 MB/s)
>> > read dir kernel-11 in 59.83 3.72 MB/s
>> >
>> > run complete:
>> > ==========================================================================
>> > intial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
>> > create total runs 14 avg 2.41 MB/s (user 13.83s sys 34.39s)
>> > patch total runs 15 avg 1.79 MB/s (user 6.55s sys 34.71s)
>> > compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
>> > clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
>> > read tree total runs 11 avg 3.30 MB/s (user 11.68s sys 24.50s)
>> > read compiled tree total runs 4 avg 8.24 MB/s (user 13.67s sys 35.85s)
>> > delete tree total runs 10 avg 42.12 seconds (user 6.76s sys 24.50s)
>> > delete compiled tree total runs 4 avg 48.20 seconds (user 7.65s sys
>> > 28.60s)
>> > stat tree total runs 11 avg 21.90 seconds (user 6.87s sys 6.34s)
>> > stat compiled tree total runs 7 avg 21.23 seconds (user 7.65s sys 7.15s)
>> >
>> > 2.) kernel with enabled hot tracking
>> >
>> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
>> > native unpatched native-0 222MB in 112.82 seconds (1.97 MB/s)
>> > native patched native-0 109MB in 27.38 seconds (4.01 MB/s)
>> > native patched compiled native-0 691MB in 40.42 seconds (17.11 MB/s)
>> > create dir kernel-0 222MB in 92.88 seconds (2.39 MB/s)
>> > create dir kernel-1 222MB in 98.56 seconds (2.26 MB/s)
>> > create dir kernel-2 222MB in 107.27 seconds (2.07 MB/s)
>> > create dir kernel-3 222MB in 92.81 seconds (2.40 MB/s)
>> > create dir kernel-4 222MB in 90.30 seconds (2.46 MB/s)
>> > create dir kernel-5 222MB in 91.57 seconds (2.43 MB/s)
>> > create dir kernel-6 222MB in 91.92 seconds (2.42 MB/s)
>> > create dir kernel-7 222MB in 90.16 seconds (2.47 MB/s)
>> > create dir kernel-8 222MB in 94.71 seconds (2.35 MB/s)
>> > create dir kernel-9 222MB in 91.79 seconds (2.42 MB/s)
>> > create dir kernel-10 222MB in 92.14 seconds (2.41 MB/s)
>> > create dir kernel-11 222MB in 88.59 seconds (2.51 MB/s)
>> > create dir kernel-12 222MB in 92.15 seconds (2.41 MB/s)
>> > create dir kernel-13 222MB in 91.54 seconds (2.43 MB/s)
>> > create dir kernel-14 222MB in 91.15 seconds (2.44 MB/s)
>> > create dir kernel-15 222MB in 90.54 seconds (2.46 MB/s)
>> > create dir kernel-16 222MB in 92.23 seconds (2.41 MB/s)
>> > create dir kernel-17 222MB in 89.88 seconds (2.47 MB/s)
>> > create dir kernel-18 222MB in 94.65 seconds (2.35 MB/s)
>> > create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)
>> > create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)
>> > create dir kernel-21 222MB in 90.92 seconds (2.45 MB/s)
>> > create dir kernel-22 222MB in 90.76 seconds (2.45 MB/s)
>> > create dir kernel-23 222MB in 90.04 seconds (2.47 MB/s)
>> > create dir kernel-24 222MB in 89.60 seconds (2.48 MB/s)
>> > create dir kernel-25 222MB in 91.52 seconds (2.43 MB/s)
>> > create dir kernel-26 222MB in 90.45 seconds (2.46 MB/s)
>> > create dir kernel-27 222MB in 92.72 seconds (2.40 MB/s)
>> > create dir kernel-28 222MB in 90.37 seconds (2.46 MB/s)
>> > create dir kernel-29 222MB in 89.60 seconds (2.48 MB/s)
>> > === sdb ===
>> >   CPU  0:              8878754 events,   416192 KiB data
>> >   Total:               8878754 events (dropped 0),   416192 KiB data
>> > patch dir kernel-7 109MB in 61.00 seconds (1.80 MB/s)
>> > compile dir kernel-7 691MB in 40.21 seconds (17.20 MB/s)
>> > compile dir kernel-14 680MB in 45.97 seconds (14.81 MB/s)
>> > patch dir kernel-14 691MB in 83.73 seconds (8.26 MB/s)
>> > read dir kernel-7 in 88.66 10.37 MB/s
>> > read dir kernel-10 in 56.44 3.94 MB/s
>> > create dir kernel-3116 222MB in 91.58 seconds (2.43 MB/s)
>> > clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)
>> > read dir kernel-6 in 61.07 3.64 MB/s
>> > stat dir kernel-2 in 21.42 seconds
>> > compile dir kernel-2 680MB in 44.55 seconds (15.28 MB/s)
>> > clean kernel-14 691MB in 6.98 seconds (99.08 MB/s)
>> > clean kernel-2 680MB in 6.12 seconds (111.21 MB/s)
>> > patch dir kernel-2 109MB in 73.95 seconds (1.48 MB/s)
>> > stat dir kernel-2 in 18.61 seconds
>> > create dir kernel-6231 222MB in 100.84 seconds (2.21 MB/s)
>> > delete kernel-8 in 40.38 seconds
>> > compile dir kernel-2 691MB in 42.18 seconds (16.40 MB/s)
>> > create dir kernel-70151 222MB in 96.34 seconds (2.31 MB/s)
>> > clean kernel-2 691MB in 4.54 seconds (152.33 MB/s)
>> > create dir kernel-78184 222MB in 94.71 seconds (2.35 MB/s)
>> > compile dir kernel-7 691MB in 43.64 seconds (15.85 MB/s)
>> > create dir kernel-64250 222MB in 87.65 seconds (2.54 MB/s)
>> > delete kernel-12 in 38.58 seconds
>> > stat dir kernel-2 in 17.48 seconds
>> > patch dir kernel-70151 109MB in 79.82 seconds (1.37 MB/s)
>> > stat dir kernel-7 in 25.76 seconds
>> > stat dir kernel-78184 in 20.30 seconds
>> > compile dir kernel-2 691MB in 40.93 seconds (16.90 MB/s)
>> > compile dir kernel-26 680MB in 48.86 seconds (13.93 MB/s)
>> > stat dir kernel-7 in 23.87 seconds
>> > create dir kernel-2717 222MB in 98.71 seconds (2.25 MB/s)
>> > delete kernel-26 in 45.60 seconds
>> > stat dir kernel-2 in 22.62 seconds
>> > compile dir kernel-14 691MB in 51.12 seconds (13.53 MB/s)
>> > compile dir kernel-70151 691MB in 41.40 seconds (16.71 MB/s)
>> > create dir kernel-55376 222MB in 94.61 seconds (2.35 MB/s)
>> > stat dir kernel-22 in 22.11 seconds
>> > delete kernel-55376 in 36.47 seconds
>> > patch dir kernel-27 109MB in 76.74 seconds (1.43 MB/s)
>> > patch dir kernel-64250 109MB in 86.43 seconds (1.27 MB/s)
>> > read dir kernel-6231 in 85.10 2.61 MB/s
>> > patch dir kernel-9 109MB in 97.67 seconds (1.12 MB/s)
>> > stat dir kernel-14 in 24.80 seconds
>> > read dir kernel-29 in 61.00 3.65 MB/s
>> > create dir kernel-57327 222MB in 101.42 seconds (2.19 MB/s)
>> > stat dir kernel-14 in 22.45 seconds
>> > compile dir kernel-27 691MB in 48.19 seconds (14.35 MB/s)
>> > create dir kernel-64334 222MB in 96.65 seconds (2.30 MB/s)
>> > patch dir kernel-1 109MB in 88.32 seconds (1.24 MB/s)
>> > create dir kernel-16056 222MB in 100.60 seconds (2.21 MB/s)
>> > clean kernel-7 691MB in 8.20 seconds (84.34 MB/s)
>> > delete kernel-27 in 48.53 seconds
>> > create dir kernel-51614 222MB in 98.07 seconds (2.27 MB/s)
>> > clean kernel-14 691MB in 6.82 seconds (101.41 MB/s)
>> > delete kernel-64250 in 44.01 seconds
>> > stat dir kernel-2 in 26.37 seconds
>> > clean kernel-70151 691MB in 6.21 seconds (111.37 MB/s)
>> > delete kernel-14 in 41.74 seconds
>> > read dir kernel-2 in 122.71 7.50 MB/s
>> > create dir kernel-24150 222MB in 99.01 seconds (2.25 MB/s)
>> > read dir kernel-9 in 78.29 2.91 MB/s
>> > patch dir kernel-19 109MB in 77.45 seconds (1.42 MB/s)
>> > clean kernel-2 691MB in 5.94 seconds (116.43 MB/s)
>> > compile dir kernel-18 680MB in 49.17 seconds (13.84 MB/s)
>> > compile dir kernel-2 691MB in 47.20 seconds (14.65 MB/s)
>> > delete kernel-2 in 48.01 seconds
>> > delete kernel-70151 in 47.60 seconds
>> > stat dir kernel-1 in 21.80 seconds
>> > read dir kernel-18 in 109.98 8.21 MB/s
>> > clean kernel-18 680MB in 7.78 seconds (87.49 MB/s)
>> > compile dir kernel-17 680MB in 54.39 seconds (12.51 MB/s)
>> > read dir kernel-17 in 108.52 8.32 MB/s
>> > stat dir kernel-18 in 19.48 seconds
>> > stat dir kernel-64334 in 22.04 seconds
>> > delete kernel-24150 in 44.36 seconds
>> > delete kernel-17 in 49.09 seconds
>> > stat dir kernel-1 in 18.16 seconds
>> > compile dir kernel-7 691MB in 48.90 seconds (14.14 MB/s)
>> > patch dir kernel-16 109MB in 103.71 seconds (1.06 MB/s)
>> > stat dir kernel-7 in 21.94 seconds
>> > create dir kernel-82195 222MB in 110.82 seconds (2.01 MB/s)
>> > delete kernel-82195 in 38.64 seconds
>> > stat dir kernel-3 in 22.88 seconds
>> > patch dir kernel-2717 109MB in 92.23 seconds (1.19 MB/s)
>> > patch dir kernel-5 109MB in 64.95 seconds (1.69 MB/s)
>> > read dir kernel-2717 in 97.88 2.33 MB/s
>> > delete kernel-29 in 40.59 seconds
>> > clean kernel-7 691MB in 5.09 seconds (135.87 MB/s)
>> > read dir kernel-4 in 59.42 3.74 MB/s
>> > stat dir kernel-78184 in 20.24 seconds
>> > patch dir kernel-0 109MB in 95.95 seconds (1.14 MB/s)
>> > patch dir kernel-3 109MB in 62.86 seconds (1.74 MB/s)
>> > create dir kernel-30226 222MB in 106.81 seconds (2.08 MB/s)
>> > read dir kernel-19 in 81.32 2.81 MB/s
>> > read dir kernel-9 in 74.65 3.06 MB/s
>> > delete kernel-5 in 42.04 seconds
>> > read dir kernel-7 in 61.95 3.68 MB/s
>> > patch dir kernel-57327 109MB in 97.85 seconds (1.12 MB/s)
>> > read dir kernel-11 in 58.85 3.78 MB/s
>> >
>> > run complete:
>> > ==========================================================================
>> > intial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
>> > create total runs 14 avg 2.27 MB/s (user 13.66s sys 36.94s)
>> > patch total runs 15 avg 1.82 MB/s (user 6.62s sys 36.93s)
>> > compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
>> > clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
>> > read tree total runs 11 avg 3.29 MB/s (user 11.04s sys 28.65s)
>> > read compiled tree total runs 4 avg 8.60 MB/s (user 13.16s sys 41.32s)
>> > delete tree total runs 10 avg 41.44 seconds (user 6.43s sys 25.19s)
>> > delete compiled tree total runs 4 avg 47.81 seconds (user 7.18s sys
>> > 29.27s)
>> > stat tree total runs 11 avg 20.41 seconds (user 6.39s sys 7.45s)
>> > stat compiled tree total runs 7 avg 23.97 seconds (user 7.24s sys 8.74s)
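[Editor's note: to compare the two compilebench summaries quoted above, here is a throwaway Python sketch — not part of the thread; the helper names and regex are my own. It parses the quoted "total runs ... avg ... MB/s" lines (a few phases copied verbatim from both runs) and reports the per-phase relative change with hot tracking enabled.]

```python
# Throwaway sketch: parse compilebench "run complete" summary lines from
# the two runs quoted above and compute the relative change per phase.
# The summary lines below are copied verbatim from the logs.
import re

baseline = """\
intial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
"""

hot_track = """\
intial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
"""

# Matches "<phase> total runs <N> avg <X> MB/s"
LINE = re.compile(r"(.+?) total runs \d+ avg ([\d.]+) MB/s")

def parse(text):
    return {m.group(1): float(m.group(2))
            for m in map(LINE.match, text.splitlines()) if m}

def deltas(base, patched):
    # Negative percentage = slowdown with hot tracking enabled.
    return {phase: 100.0 * (patched[phase] - base[phase]) / base[phase]
            for phase in base}

for phase, pct in deltas(parse(baseline), parse(hot_track)).items():
    print(f"{phase:15s} {pct:+6.2f}%")
```

On the throughput phases quoted above this works out to roughly a 3-6% drop with hot tracking enabled.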
>> >
>> > On Fri, 2012-11-16 at 17:51 +0800, zwu.kernel@gmail.com wrote:
>> >> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
>> >>
>> >> Hi, guys,
>> >>
>> >>   Any comments or ideas are appreciated, thanks.
>> >>
>> >> NOTE:
>> >>
>> >>   The patchset can be obtained via my kernel dev git on github:
>> >> git://github.com/wuzhy/kernel.git hot_tracking
>> >>   If you're interested, you can also review them via
>> >> https://github.com/wuzhy/kernel/commits/hot_tracking
>> >>
>> >>   For more info, please check hot_tracking.txt in Documentation
>> >>
>> >> TODO List:
>> >>
>> >>  1.) Need to do scalability or performance tests. - Required
>> >>  2.) Need a simpler but still efficient temperature calculation function
>> >>  3.) How to save file temperatures across umount so that they are
>> >>      preserved after reboot - Optional
>> >>
>> >> Changelog:
>> >>
>> >>  - Solved 64 bits inode number issue. [David Sterba]
>> >>  - Embed struct hot_type in struct file_system_type [Darrick J. Wong]
>> >>  - Cleaned up some issues [David Sterba]
>> >>  - Use a static hot debugfs root [Greg KH]
>> >>  - Rewritten debugfs support based on seq_file operation. [Dave Chinner]
>> >>  - Refactored workqueue support. [Dave Chinner]
>> >>  - Made some macros tunable [Zhiyong, Zheng Liu]
>> >>        TIME_TO_KICK and HEAT_UPDATE_DELAY
>> >>  - Introduce hot func registering framework [Zhiyong]
>> >>  - Remove global variable for hot tracking [Zhiyong]
>> >>  - Add xfs hot tracking support [Dave Chinner]
>> >>  - Add ext4 hot tracking support [Zheng Liu]
>> >>  - Cleaned up many other issues [Dave Chinner]
>> >>  - Added memory shrinker [Dave Chinner]
>> >>  - Converted to one workqueue to update map info periodically [Dave Chinner]
>> >>  - Cleaned up many other issues [Dave Chinner]
>> >>  - Reduce new files and put all in fs/hot_tracking.[ch] [Dave Chinner]
>> >>  - Add btrfs hot tracking support [Zhiyong]
>> >>  - The first three patches can probably just be flattened into one.
>> >>                                         [Marco Stornelli, Dave Chinner]
>> >>
>> >> Zhi Yong Wu (16):
>> >>   vfs: introduce some data structures
>> >>   vfs: add init and cleanup functions
>> >>   vfs: add I/O frequency update function
>> >>   vfs: add two map arrays
>> >>   vfs: add hooks to enable hot tracking
>> >>   vfs: add temp calculation function
>> >>   vfs: add map info update function
>> >>   vfs: add aging function
>> >>   vfs: add one work queue
>> >>   vfs: add FS hot type support
>> >>   vfs: register one shrinker
>> >>   vfs: add one ioctl interface
>> >>   vfs: add debugfs support
>> >>   proc: add two hot_track proc files
>> >>   btrfs: add hot tracking support
>> >>   vfs: add documentation
>> >>
>> >>  Documentation/filesystems/00-INDEX         |    2 +
>> >>  Documentation/filesystems/hot_tracking.txt |  263 ++++++
>> >>  fs/Makefile                                |    2 +-
>> >>  fs/btrfs/ctree.h                           |    1 +
>> >>  fs/btrfs/super.c                           |   22 +-
>> >>  fs/compat_ioctl.c                          |    5 +
>> >>  fs/dcache.c                                |    2 +
>> >>  fs/direct-io.c                             |    6 +
>> >>  fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
>> >>  fs/hot_tracking.h                          |   52 ++
>> >>  fs/ioctl.c                                 |   74 ++
>> >>  include/linux/fs.h                         |    5 +
>> >>  include/linux/hot_tracking.h               |  152 ++++
>> >>  kernel/sysctl.c                            |   14 +
>> >>  mm/filemap.c                               |    6 +
>> >>  mm/page-writeback.c                        |   12 +
>> >>  mm/readahead.c                             |    7 +
>> >>  17 files changed, 1929 insertions(+), 2 deletions(-)
>> >>  create mode 100644 Documentation/filesystems/hot_tracking.txt
>> >>  create mode 100644 fs/hot_tracking.c
>> >>  create mode 100644 fs/hot_tracking.h
>> >>  create mode 100644 include/linux/hot_tracking.h
>> >>
>> >
>> > --
>> > Regards,
>> >
>> > Zhi Yong Wu
>> >
>>
>>
>>
>> --
>> Regards,
>>
>> Zhi Yong Wu



-- 
Regards,

Zhi Yong Wu

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 resend hot_track 00/16] vfs: hot data tracking
  2012-12-13 12:17       ` Zhi Yong Wu
@ 2012-12-14  2:46         ` Darrick J. Wong
  0 siblings, 0 replies; 22+ messages in thread
From: Darrick J. Wong @ 2012-12-14  2:46 UTC (permalink / raw)
  To: Zhi Yong Wu
  Cc: wuzhy, linux-fsdevel, linux-kernel, viro, linuxram, david,
	swhiteho, dave, andi, northrup.james

On Thu, Dec 13, 2012 at 08:17:26PM +0800, Zhi Yong Wu wrote:
> On Thu, Dec 13, 2012 at 3:50 AM, Darrick J. Wong
> <darrick.wong@oracle.com> wrote:
> > On Mon, Dec 10, 2012 at 11:30:03AM +0800, Zhi Yong Wu wrote:
> >> HI, all guys.
> >>
> >> any comments or suggestions?
> >
> > Why did ffsb drop from 924 transactions/sec to 322?
> It may be that some noisy operations affected the result. I am running
> a larger-scale perf test in a cleaner environment, and I want to see
> whether its ffsb results differ from this run.

That's quite a big noise there...

--D
> 
> >
> > --D
> >>
> >> On Thu, Dec 6, 2012 at 11:28 AM, Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> wrote:
> >> > HI, guys
> >> >
> >> > The perf testing was done separately with fs_mark, fio, ffsb, and
> >> > compilebench in one KVM guest.
> >> >
> >> > Below is the performance test report for hot tracking; no obvious
> >> > performance regression was found.
> >> >
> >> > Note: "original kernel" means the source code is unmodified;
> >> >       "kernel with enabled hot tracking" means the source has the hot
> >> > tracking patchset applied.
> >> >
> >> > The test env is set up as below:
> >> >
> >> > root@debian-i386:/home/zwu# uname -a
> >> > Linux debian-i386 3.7.0-rc8+ #266 SMP Tue Dec 4 12:17:55 CST 2012 x86_64
> >> > GNU/Linux
> >> >
> >> > root@debian-i386:/home/zwu# mkfs.xfs -f -l size=1310b,sunit=8 /home/zwu/bdev.img
> >> > meta-data=/home/zwu/bdev.img     isize=256    agcount=4, agsize=128000 blks
> >> >          =                       sectsz=512   attr=2, projid32bit=0
> >> > data     =                       bsize=4096   blocks=512000, imaxpct=25
> >> >          =                       sunit=0      swidth=0 blks
> >> > naming   =version 2              bsize=4096   ascii-ci=0
> >> > log      =internal log           bsize=4096   blocks=1310, version=2
> >> >          =                       sectsz=512   sunit=1 blks, lazy-count=1
> >> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> >> >
> >> > 1.) original kernel
> >> >
> >> > root@debian-i386:/home/zwu# mount -o loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> >> > [ 1197.421616] XFS (loop0): Mounting Filesystem
> >> > [ 1197.567399] XFS (loop0): Ending clean mount
> >> > root@debian-i386:/home/zwu# mount
> >> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> >> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> >> > proc on /proc type proc (rw,noexec,nosuid,nodev)
> >> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> >> > udev on /dev type tmpfs (rw,mode=0755)
> >> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> >> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> >> > none on /selinux type selinuxfs (rw,relatime)
> >> > debugfs on /sys/kernel/debug type debugfs (rw)
> >> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
> >> > (rw,noexec,nosuid,nodev)
> >> > /dev/loop0 on /mnt/scratch type xfs (rw,logbsize=256k)
> >> > root@debian-i386:/home/zwu# free -m
> >> >              total       used       free     shared    buffers     cached
> >> > Mem:           112        109          2          0          4         53
> >> > -/+ buffers/cache:         51         60
> >> > Swap:          713         29        684
> >> >
> >> > 2.) kernel with enabled hot tracking
> >> >
> >> > root@debian-i386:/home/zwu# mount -o hot_track,loop,logbsize=256k /home/zwu/bdev.img /mnt/scratch
> >> > [  364.648470] XFS (loop0): Mounting Filesystem
> >> > [  364.910035] XFS (loop0): Ending clean mount
> >> > [  364.921063] VFS: Turning on hot data tracking
> >> > root@debian-i386:/home/zwu# mount
> >> > /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> >> > tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
> >> > proc on /proc type proc (rw,noexec,nosuid,nodev)
> >> > sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> >> > udev on /dev type tmpfs (rw,mode=0755)
> >> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> >> > devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
> >> > none on /selinux type selinuxfs (rw,relatime)
> >> > binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
> >> > (rw,noexec,nosuid,nodev)
> >> > /dev/loop0 on /mnt/scratch type xfs (rw,hot_track,logbsize=256k)
> >> > root@debian-i386:/home/zwu# free -m
> >> >              total       used       free     shared    buffers     cached
> >> > Mem:           112        107          4          0          2         34
> >> > -/+ buffers/cache:         70         41
> >> > Swap:          713          2        711
> >> >
> >> > 1. fs_mark test
> >> >
> >> > 1.) orginal kernel
> >> >
> >> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
> >> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
> >> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
> >> > -d  /mnt/scratch/7
> >> > #       Version 3.3, 8 thread(s) starting at Wed Dec  5 03:20:58 2012
> >> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> >> > #       Directories:  Time based hash between directories across 100
> >> > subdirectories with 180 seconds per subdirectory.
> >> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24
> >> > random bytes at end of name)
> >> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per
> >> > write
> >> > #       App overhead is time in microseconds spent in the test not doing file
> >> > writing related system calls.
> >> >
> >> > FSUse%        Count         Size    Files/sec     App Overhead
> >> >      2         8000            1        375.6         27175895
> >> >      3        16000            1        375.6         27478079
> >> >      4        24000            1        346.0         27819607
> >> >      4        32000            1        316.9         25863385
> >> >      5        40000            1        335.2         25460605
> >> >      6        48000            1        312.3         25889196
> >> >      7        56000            1        327.3         25000611
> >> >      8        64000            1        304.4         28126698
> >> >      9        72000            1        361.7         26652172
> >> >      9        80000            1        370.1         27075875
> >> >     10        88000            1        347.8         31093106
> >> >     11        96000            1        387.1         26877324
> >> >     12       104000            1        352.3         26635853
> >> >     13       112000            1        379.3         26400198
> >> >     14       120000            1        367.4         27228178
> >> >     14       128000            1        359.2         27627871
> >> >     15       136000            1        358.4         27089821
> >> >     16       144000            1        385.5         27804852
> >> >     17       152000            1        322.9         26221907
> >> >     18       160000            1        393.2         26760040
> >> >     18       168000            1        351.9         29210327
> >> >     20       176000            1        395.2         24610548
> >> >     20       184000            1        376.7         27518650
> >> >     21       192000            1        340.1         27512874
> >> >     22       200000            1        389.0         27109104
> >> >     23       208000            1        389.7         29288594
> >> >     24       216000            1        352.6         29948820
> >> >     25       224000            1        380.4         26370958
> >> >     26       232000            1        332.9         27770518
> >> >     26       240000            1        333.6         25176691
> >> >
> >> > 2.) kernel with enabled hot tracking
> >> >
> >> > #  ./fs_mark  -D  100  -S0  -n  1000  -s  1  -L  30  -d  /mnt/scratch/0
> >> > -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d  /mnt/scratch/3
> >> > -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d  /mnt/scratch/6
> >> > -d  /mnt/scratch/7
> >> > #       Version 3.3, 8 thread(s) starting at Tue Dec  4 04:28:48 2012
> >> > #       Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> >> > #       Directories:  Time based hash between directories across 100
> >> > subdirectories with 180 seconds per subdirectory.
> >> > #       File names: 40 bytes long, (16 initial bytes of time stamp with 24
> >> > random bytes at end of name)
> >> > #       Files info: size 1 bytes, written with an IO size of 16384 bytes per
> >> > write
> >> > #       App overhead is time in microseconds spent in the test not doing file
> >> > writing related system calls.
> >> >
> >> > FSUse%        Count         Size    Files/sec     App Overhead
> >> >      4         8000            1        323.0         25104879
> >> >      6        16000            1        351.4         25372919
> >> >      8        24000            1        345.9         24107987
> >> >      9        32000            1        313.2         26249533
> >> >     10        40000            1        323.0         20312267
> >> >     12        48000            1        303.2         22178040
> >> >     14        56000            1        307.6         22775058
> >> >     15        64000            1        317.9         25178845
> >> >     17        72000            1        351.8         22020260
> >> >     19        80000            1        369.3         23546708
> >> >     21        88000            1        324.1         29068297
> >> >     22        96000            1        355.3         25212333
> >> >     24       104000            1        346.4         26622613
> >> >     26       112000            1        360.4         25477193
> >> >     28       120000            1        362.9         21774508
> >> >     29       128000            1        329.0         25760109
> >> >     31       136000            1        369.5         24540577
> >> >     32       144000            1        330.2         26013559
> >> >     34       152000            1        365.5         25643279
> >> >     36       160000            1        366.2         24393130
> >> >     38       168000            1        348.3         25248940
> >> >     39       176000            1        357.3         24080574
> >> >     40       184000            1        316.8         23011921
> >> >     43       192000            1        351.7         27468060
> >> >     44       200000            1        362.2         27540349
> >> >     46       208000            1        340.9         26135445
> >> >     48       216000            1        339.2         20926743
> >> >     50       224000            1        316.5         21399871
> >> >     52       232000            1        346.3         24669604
> >> >     53       240000            1        320.5         22204449
> >> >
> >> >
> >> > 2. FFSB test
> >> >
> >> > 1.) original kernel
> >> >
> >> > FFSB version 6.0-RC2 started
> >> >
> >> > benchmark time = 10
> >> > ThreadGroup 0
> >> > ================
> >> >          num_threads      = 4
> >> >
> >> >          read_random      = off
> >> >          read_size        = 40960       (40KB)
> >> >          read_blocksize   = 4096        (4KB)
> >> >          read_skip        = off
> >> >          read_skipsize    = 0   (0B)
> >> >
> >> >          write_random     = off
> >> >          write_size       = 40960       (40KB)
> >> >          fsync_file       = 0
> >> >          write_blocksize  = 4096        (4KB)
> >> >          wait time        = 0
> >> >
> >> >          op weights
> >> >                          read = 0 (0.00%)
> >> >                       readall = 1 (10.00%)
> >> >                         write = 0 (0.00%)
> >> >                        create = 1 (10.00%)
> >> >                        append = 1 (10.00%)
> >> >                        delete = 1 (10.00%)
> >> >                        metaop = 0 (0.00%)
> >> >                     createdir = 0 (0.00%)
> >> >                          stat = 1 (10.00%)
> >> >                      writeall = 1 (10.00%)
> >> >                writeall_fsync = 1 (10.00%)
> >> >                    open_close = 1 (10.00%)
> >> >                   write_fsync = 0 (0.00%)
> >> >                  create_fsync = 1 (10.00%)
> >> >                  append_fsync = 1 (10.00%)
> >> >
> >> > FileSystem /mnt/scratch/test1
> >> > ==========
> >> >          num_dirs         = 100
> >> >          starting files   = 0
> >> >
> >> >          Fileset weight:
> >> >                      33554432 (  32MB) -> 1 (1.00%)
> >> >                       8388608 (   8MB) -> 2 (2.00%)
> >> >                        524288 ( 512KB) -> 3 (3.00%)
> >> >                        262144 ( 256KB) -> 4 (4.00%)
> >> >                        131072 ( 128KB) -> 5 (5.00%)
> >> >                         65536 (  64KB) -> 8 (8.00%)
> >> >                         32768 (  32KB) -> 10 (10.00%)
> >> >                         16384 (  16KB) -> 13 (13.00%)
> >> >                          8192 (   8KB) -> 21 (21.00%)
> >> >                          4096 (   4KB) -> 33 (33.00%)
> >> >          directio         = off
> >> >          alignedio        = off
> >> >          bufferedio       = off
> >> >
> >> >          aging is off
> >> >          current utilization = 26.19%
> >> >
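[Editor's note: the fileset weights above define a discrete file-size distribution; this small sketch — mine, not FFSB output — computes the expected (weighted mean) file size from the quoted table.]

```python
# The FFSB fileset weights above form a discrete size distribution;
# compute the expected (weighted mean) file size.
sizes_and_weights = [  # (bytes, weight %) copied from the config above
    (33554432, 1), (8388608, 2), (524288, 3), (262144, 4), (131072, 5),
    (65536, 8), (32768, 10), (16384, 13), (8192, 21), (4096, 33),
]

total_weight = sum(w for _, w in sizes_and_weights)  # weights sum to 100
mean_bytes = sum(s * w for s, w in sizes_and_weights) / total_weight
print(f"expected file size: {mean_bytes / 1024:.1f} KB")
```

So despite the 33% weight on 4KB files, the few large files dominate: the expected file size is roughly 537 KB.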
> >> > creating new fileset /mnt/scratch/test1
> >> > fs setup took 87 secs
> >> > Syncing()...1 sec
> >> > Starting Actual Benchmark At: Wed Dec  5 03:38:06 2012
> >> >
> >> > Syncing()...0 sec
> >> > FFSB benchmark finished   at: Wed Dec  5 03:38:18 2012
> >> >
> >> > Results:
> >> > Benchmark took 11.44 sec
> >> >
> >> > Total Results
> >> > ===============
> >> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight     Throughput
> >> >              =======   ============      =========      =======     ===========     ==========
> >> >              readall :           93           8.13       0.880%         21.053%     32.5KB/sec
> >> >               create :           20           1.75       0.189%          5.263%     6.99KB/sec
> >> >               append :           10           0.87       0.095%          2.632%     3.5KB/sec
> >> >               delete :            4           0.35       0.038%         10.526%     NA
> >> >                 stat :            3           0.26       0.028%          7.895%     NA
> >> >             writeall :         2178         190.39      20.600%         10.526%     762KB/sec
> >> >       writeall_fsync :            5           0.44       0.047%          5.263%     1.75KB/sec
> >> >           open_close :            6           0.52       0.057%         15.789%     NA
> >> >         create_fsync :         8234         719.78      77.878%         15.789%     2.81MB/sec
> >> >         append_fsync :           20           1.75       0.189%          5.263%     6.99KB/sec
> >> > -
> >> > 924.24 Transactions per Second
> >> >
> >> > Throughput Results
> >> > ===================
> >> > Read Throughput: 32.5KB/sec
> >> > Write Throughput: 3.57MB/sec
> >> >
> >> > System Call Latency statistics in millisecs
> >> > =====
> >> >                 Min             Avg             Max             Total Calls
> >> >                 ========        ========        ========        ============
> >> > [   open]       0.050000        3.980161        41.840000                 31
> >> > [   read]       0.017000        71.442215       1286.122000               93
> >> > [  write]       0.052000        1.034817        2201.956000            10467
> >> > [ unlink]       1.118000        185.398750      730.807000                 4
> >> > [  close]       0.019000        1.968968        39.679000                 31
> >> > [   stat]       0.043000        2.173667        6.428000                   3
> >> >
> >> > 0.8% User   Time
> >> > 9.2% System Time
> >> > 10.0% CPU Utilization
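[Editor's note: a quick arithmetic cross-check — mine, not part of the report — that the per-op transaction counts above reproduce the reported 924.24 transactions per second over the 11.44 s run.]

```python
# Cross-check: sum the per-op transaction counts quoted above and divide
# by the reported elapsed time ("Benchmark took 11.44 sec").
transactions = {
    "readall": 93, "create": 20, "append": 10, "delete": 4, "stat": 3,
    "writeall": 2178, "writeall_fsync": 5, "open_close": 6,
    "create_fsync": 8234, "append_fsync": 20,
}
elapsed = 11.44  # seconds

tps = sum(transactions.values()) / elapsed
print(f"{sum(transactions.values())} transactions / {elapsed} s = {tps:.2f} tps")
```

This matches the reported 924.24 tps to within rounding, which is the baseline figure the later 322 tps hot-tracking result is being compared against.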
> >> >
> >> > 2.) kernel with enabled hot tracking
> >> >
> >> > FFSB version 6.0-RC2 started
> >> >
> >> > benchmark time = 10
> >> > ThreadGroup 0
> >> > ================
> >> >          num_threads      = 4
> >> >
> >> >          read_random      = off
> >> >          read_size        = 40960       (40KB)
> >> >          read_blocksize   = 4096        (4KB)
> >> >          read_skip        = off
> >> >          read_skipsize    = 0   (0B)
> >> >
> >> >          write_random     = off
> >> >          write_size       = 40960       (40KB)
> >> >          fsync_file       = 0
> >> >          write_blocksize  = 4096        (4KB)
> >> >          wait time        = 0
> >> >
> >> >          op weights
> >> >                          read = 0 (0.00%)
> >> >                       readall = 1 (10.00%)
> >> >                         write = 0 (0.00%)
> >> >                        create = 1 (10.00%)
> >> >                        append = 1 (10.00%)
> >> >                        delete = 1 (10.00%)
> >> >                        metaop = 0 (0.00%)
> >> >                     createdir = 0 (0.00%)
> >> >                          stat = 1 (10.00%)
> >> >                      writeall = 1 (10.00%)
> >> >                writeall_fsync = 1 (10.00%)
> >> >                    open_close = 1 (10.00%)
> >> >                   write_fsync = 0 (0.00%)
> >> >                  create_fsync = 1 (10.00%)
> >> >                  append_fsync = 1 (10.00%)
> >> >
> >> > FileSystem /mnt/scratch/test1
> >> > ==========
> >> >          num_dirs         = 100
> >> >          starting files   = 0
> >> >
> >> >          Fileset weight:
> >> >                      33554432 (  32MB) -> 1 (1.00%)
> >> >                       8388608 (   8MB) -> 2 (2.00%)
> >> >                        524288 ( 512KB) -> 3 (3.00%)
> >> >                        262144 ( 256KB) -> 4 (4.00%)
> >> >                        131072 ( 128KB) -> 5 (5.00%)
> >> >                         65536 (  64KB) -> 8 (8.00%)
> >> >                         32768 (  32KB) -> 10 (10.00%)
> >> >                         16384 (  16KB) -> 13 (13.00%)
> >> >                          8192 (   8KB) -> 21 (21.00%)
> >> >                          4096 (   4KB) -> 33 (33.00%)
> >> >          directio         = off
> >> >          alignedio        = off
> >> >          bufferedio       = off
> >> >
> >> >          aging is off
> >> >          current utilization = 52.46%
> >> >
> >> > creating new fileset /mnt/scratch/test1
> >> > fs setup took 42 secs
> >> > Syncing()...1 sec
> >> > Starting Actual Benchmark At: Tue Dec  4 06:41:54 2012
> >> >
> >> > Syncing()...0 sec
> >> > FFSB benchmark finished   at: Tue Dec  4 06:42:53 2012
> >> >
> >> > Results:
> >> > Benchmark took 59.42 sec
> >> >
> >> > Total Results
> >> > ===============
> >> >              Op Name   Transactions      Trans/sec      % Trans     % Op Weight   Throughput
> >> >              =======   ============      =========      =======     ===========   ==========
> >> >              readall :        10510         176.87      54.808%         10.959%   707KB/sec
> >> >               create :           48           0.81       0.250%          9.589%   3.23KB/sec
> >> >               append :          100           1.68       0.521%         13.699%   6.73KB/sec
> >> >               delete :            5           0.08       0.026%          6.849%   NA
> >> >                 stat :            5           0.08       0.026%          6.849%   NA
> >> >             writeall :          130           2.19       0.678%         12.329%   8.75KB/sec
> >> >       writeall_fsync :           19           0.32       0.099%          8.219%   1.28KB/sec
> >> >           open_close :            9           0.15       0.047%         12.329%   NA
> >> >         create_fsync :         8300         139.67      43.283%         12.329%   559KB/sec
> >> >         append_fsync :           50           0.84       0.261%          6.849%   3.37KB/sec
> >> > -
> >> > 322.70 Transactions per Second
> >> >
> >> > Throughput Results
> >> > ===================
> >> > Read Throughput: 707KB/sec
> >> > Write Throughput: 582KB/sec
> >> >
> >> > System Call Latency statistics in millisecs
> >> > =====
> >> >                 Min             Avg             Max             Total Calls
> >> >                 ========        ========        ========        ============
> >> > [   open]       0.061000        0.750540        10.721000                 63
> >> >    -
> >> > [   read]       0.017000        11.058425       28555.394000           10510
> >> >    -
> >> > [  write]       0.034000        6.705286        26812.076000            8647
> >> >    -
> >> > [ unlink]       0.922000        7.679800        25.364000                  5
> >> >    -
> >> > [  close]       0.019000        0.996635        34.723000                 63
> >> >    -
> >> > [   stat]       0.046000        0.942800        4.489000                   5
> >> >    -
> >> >
> >> > 0.2% User   Time
> >> > 2.6% System Time
> >> > 2.8% CPU Utilization
> >> >
> >> >
> >> > 3. fio test
> >> >
> >> > 1.) original kernel
> >> >
> >> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > Starting 16 threads
> >> >
> >> > seq-read: (groupid=0, jobs=4): err= 0: pid=1646
> >> >   read : io=2,835MB, bw=24,192KB/s, iops=3,023, runt=120021msec
> >> >     slat (usec): min=0, max=999K, avg=1202.67, stdev=3145.84
> >> >     clat (usec): min=0, max=1,536K, avg=9186.07, stdev=11344.56
> >> >     bw (KB/s) : min=   39, max=21301, per=26.11%, avg=6315.41, stdev=1082.63
> >> >   cpu          : usr=10.89%, sys=33.14%, ctx=1488108, majf=13, minf=2238
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=362940/0, short=0/0
> >> >      lat (usec): 2=3.53%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
> >> >      lat (usec): 750=0.03%, 1000=0.03%
> >> >      lat (msec): 2=1.75%, 4=1.08%, 10=68.93%, 20=22.39%, 50=2.02%
> >> >      lat (msec): 100=0.16%, 250=0.04%, 1000=0.01%, 2000=0.03%
> >> > seq-write: (groupid=1, jobs=4): err= 0: pid=1646
> >> >   write: io=1,721MB, bw=14,652KB/s, iops=1,831, runt=120277msec
> >> >     slat (usec): min=0, max=1,004K, avg=1744.41, stdev=3144.06
> >> >     clat (usec): min=0, max=1,014K, avg=15699.65, stdev=19751.69
> >> >     bw (KB/s) : min=  285, max=18032, per=26.41%, avg=3869.67, stdev=762.96
> >> >   cpu          : usr=6.29%, sys=22.61%, ctx=880380, majf=36, minf=3222
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=0/220282, short=0/0
> >> >      lat (usec): 2=2.43%, 500=0.01%, 750=0.12%, 1000=0.14%
> >> >      lat (msec): 2=0.86%, 4=1.72%, 10=39.03%, 20=42.20%, 50=11.87%
> >> >      lat (msec): 100=1.15%, 250=0.17%, 500=0.06%, 750=0.14%, 1000=0.09%
> >> >      lat (msec): 2000=0.02%
> >> > rnd-read: (groupid=2, jobs=4): err= 0: pid=1646
> >> >   read : io=65,128KB, bw=541KB/s, iops=67, runt=120381msec
> >> >     slat (usec): min=48, max=55,230, avg=167.95, stdev=248.50
> >> >     clat (msec): min=74, max=4,229, avg=472.23, stdev=129.50
> >> >     bw (KB/s) : min=    0, max=  203, per=25.34%, avg=137.08, stdev=21.73
> >> >   cpu          : usr=0.85%, sys=2.19%, ctx=44001, majf=30, minf=3726
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=8141/0, short=0/0
> >> >
> >> >      lat (msec): 100=0.04%, 250=0.09%, 500=81.72%, 750=13.09%, 1000=2.97%
> >> >      lat (msec): 2000=1.50%, >=2000=0.59%
> >> > rnd-write: (groupid=3, jobs=4): err= 0: pid=1646
> >> >   write: io=200MB, bw=1,698KB/s, iops=212, runt=120331msec
> >> >     slat (usec): min=48, max=215K, avg=2272.24, stdev=2283.09
> >> >     clat (usec): min=762, max=14,617K, avg=147521.66, stdev=444146.36
> >> >     bw (KB/s) : min=    1, max= 3960, per=56.86%, avg=964.90, stdev=514.63
> >> >   cpu          : usr=1.25%, sys=4.20%, ctx=135229, majf=0, minf=10194
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=0/25536, short=0/0
> >> >      lat (usec): 1000=0.26%
> >> >      lat (msec): 2=0.13%, 4=2.01%, 10=3.77%, 20=42.78%, 50=20.95%
> >> >      lat (msec): 100=12.83%, 250=12.50%, 500=2.49%, 750=0.33%, 1000=0.12%
> >> >      lat (msec): 2000=0.53%, >=2000=1.30%
> >> >
> >> > Run status group 0 (all jobs):
> >> >    READ: io=2,835MB, aggrb=24,191KB/s, minb=24,772KB/s, maxb=24,772KB/s, mint=120021msec, maxt=120021msec
> >> >
> >> > Run status group 1 (all jobs):
> >> >   WRITE: io=1,721MB, aggrb=14,651KB/s, minb=15,003KB/s, maxb=15,003KB/s, mint=120277msec, maxt=120277msec
> >> >
> >> > Run status group 2 (all jobs):
> >> >    READ: io=65,128KB, aggrb=541KB/s, minb=553KB/s, maxb=553KB/s, mint=120381msec, maxt=120381msec
> >> >
> >> > Run status group 3 (all jobs):
> >> >   WRITE: io=200MB, aggrb=1,697KB/s, minb=1,738KB/s, maxb=1,738KB/s, mint=120331msec, maxt=120331msec
> >> >
> >> > Disk stats (read/write):
> >> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
> >> >
> >> > 2.) kernel with enabled hot tracking
> >> >
> >> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > seq-read: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > seq-write: (g=1): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > rnd-read: (g=2): rw=randread, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > ...
> >> > rnd-write: (g=3): rw=randwrite, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=8
> >> > Starting 16 threads
> >> >
> >> > seq-read: (groupid=0, jobs=4): err= 0: pid=2163
> >> >   read : io=3,047MB, bw=26,001KB/s, iops=3,250, runt=120003msec
> >> >     slat (usec): min=0, max=1,000K, avg=1141.34, stdev=2175.25
> >> >     clat (usec): min=0, max=1,002K, avg=8610.96, stdev=6184.67
> >> >     bw (KB/s) : min=   12, max=18896, per=25.28%, avg=6572.50, stdev=713.22
> >> >   cpu          : usr=10.38%, sys=35.02%, ctx=1601418, majf=12, minf=2235
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=390029/0, short=0/0
> >> >      lat (usec): 2=1.49%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
> >> >      lat (usec): 750=0.01%, 1000=0.02%
> >> >      lat (msec): 2=1.53%, 4=0.86%, 10=79.60%, 20=14.93%, 50=1.43%
> >> >      lat (msec): 100=0.09%, 250=0.02%, 500=0.01%, 1000=0.01%, 2000=0.01%
> >> > seq-write: (groupid=1, jobs=4): err= 0: pid=2163
> >> >   write: io=1,752MB, bw=14,950KB/s, iops=1,868, runt=120003msec
> >> >     slat (usec): min=0, max=1,002K, avg=1697.47, stdev=3568.70
> >> >     clat (usec): min=0, max=1,019K, avg=15630.94, stdev=21109.46
> >> >     bw (KB/s) : min=  123, max=14693, per=26.31%, avg=3933.46, stdev=779.57
> >> >   cpu          : usr=6.31%, sys=21.85%, ctx=894177, majf=4, minf=3407
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=0/224253, short=0/0
> >> >      lat (usec): 2=2.44%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.06%
> >> >      lat (usec): 1000=0.23%
> >> >      lat (msec): 2=0.73%, 4=2.00%, 10=40.15%, 20=42.68%, 50=10.25%
> >> >      lat (msec): 100=0.95%, 250=0.14%, 500=0.10%, 750=0.12%, 1000=0.11%
> >> >      lat (msec): 2000=0.03%
> >> > rnd-read: (groupid=2, jobs=4): err= 0: pid=2163
> >> >   read : io=85,208KB, bw=709KB/s, iops=88, runt=120252msec
> >> >     slat (usec): min=52, max=48,325, avg=204.43, stdev=596.50
> >> >     clat (msec): min=1, max=2,754, avg=359.99, stdev=78.96
> >> >     bw (KB/s) : min=    0, max=  249, per=25.17%, avg=178.20, stdev=23.79
> >> >   cpu          : usr=1.00%, sys=2.64%, ctx=55704, majf=28, minf=2971
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=99.7%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=10651/0, short=0/0
> >> >
> >> >      lat (msec): 2=0.01%, 50=0.02%, 100=0.04%, 250=1.61%, 500=92.60%
> >> >      lat (msec): 750=4.24%, 1000=0.68%, 2000=0.59%, >=2000=0.22%
> >> > rnd-write: (groupid=3, jobs=4): err= 0: pid=2163
> >> >   write: io=247MB, bw=2,019KB/s, iops=252, runt=125287msec
> >> >     slat (usec): min=51, max=286K, avg=2576.23, stdev=2882.30
> >> >     clat (usec): min=698, max=8,156K, avg=123274.05, stdev=355311.20
> >> >     bw (KB/s) : min=    1, max= 4848, per=57.62%, avg=1162.77, stdev=560.79
> >> >   cpu          : usr=1.33%, sys=4.24%, ctx=163334, majf=0, minf=8588
> >> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
> >> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >> >      issued r/w: total=0/31616, short=0/0
> >> >      lat (usec): 750=0.03%, 1000=0.15%
> >> >      lat (msec): 2=0.06%, 4=2.15%, 10=3.57%, 20=48.23%, 50=22.43%
> >> >      lat (msec): 100=11.48%, 250=9.14%, 500=1.04%, 750=0.16%, 1000=0.05%
> >> >      lat (msec): 2000=0.09%, >=2000=1.42%
> >> >
> >> > Run status group 0 (all jobs):
> >> >    READ: io=3,047MB, aggrb=26,001KB/s, minb=26,625KB/s, maxb=26,625KB/s, mint=120003msec, maxt=120003msec
> >> >
> >> > Run status group 1 (all jobs):
> >> >   WRITE: io=1,752MB, aggrb=14,949KB/s, minb=15,308KB/s, maxb=15,308KB/s, mint=120003msec, maxt=120003msec
> >> >
> >> > Run status group 2 (all jobs):
> >> >    READ: io=85,208KB, aggrb=708KB/s, minb=725KB/s, maxb=725KB/s, mint=120252msec, maxt=120252msec
> >> >
> >> > Run status group 3 (all jobs):
> >> >   WRITE: io=247MB, aggrb=2,018KB/s, minb=2,067KB/s, maxb=2,067KB/s, mint=125287msec, maxt=125287msec
> >> >
> >> > Disk stats (read/write):
> >> >   loop0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
> >> >
> >> >
> >> > 4. compilebench test
> >> >
> >> > 1.) original kernel
> >> >
> >> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
> >> >
> >> > native unpatched native-0 222MB in 87.48 seconds (2.54 MB/s)
> >> > native patched native-0 109MB in 24.89 seconds (4.41 MB/s)
> >> > native patched compiled native-0 691MB in 35.54 seconds (19.46 MB/s)
> >> > create dir kernel-0 222MB in 91.22 seconds (2.44 MB/s)
> >> > create dir kernel-1 222MB in 91.55 seconds (2.43 MB/s)
> >> > create dir kernel-2 222MB in 97.00 seconds (2.29 MB/s)
> >> > create dir kernel-3 222MB in 87.64 seconds (2.54 MB/s)
> >> > create dir kernel-4 222MB in 86.18 seconds (2.58 MB/s)
> >> > create dir kernel-5 222MB in 84.68 seconds (2.63 MB/s)
> >> > create dir kernel-6 222MB in 85.02 seconds (2.62 MB/s)
> >> > create dir kernel-7 222MB in 87.74 seconds (2.53 MB/s)
> >> > create dir kernel-8 222MB in 86.79 seconds (2.56 MB/s)
> >> > create dir kernel-9 222MB in 87.85 seconds (2.53 MB/s)
> >> > create dir kernel-10 222MB in 86.88 seconds (2.56 MB/s)
> >> > create dir kernel-11 222MB in 86.47 seconds (2.57 MB/s)
> >> > create dir kernel-12 222MB in 84.26 seconds (2.64 MB/s)
> >> > create dir kernel-13 222MB in 87.40 seconds (2.54 MB/s)
> >> > create dir kernel-14 222MB in 85.44 seconds (2.60 MB/s)
> >> > create dir kernel-15 222MB in 86.80 seconds (2.56 MB/s)
> >> > create dir kernel-16 222MB in 88.57 seconds (2.51 MB/s)
> >> > create dir kernel-17 222MB in 85.66 seconds (2.60 MB/s)
> >> > create dir kernel-18 222MB in 87.40 seconds (2.54 MB/s)
> >> > create dir kernel-19 222MB in 85.47 seconds (2.60 MB/s)
> >> > create dir kernel-20 222MB in 89.29 seconds (2.49 MB/s)
> >> > create dir kernel-21 222MB in 88.53 seconds (2.51 MB/s)
> >> > create dir kernel-22 222MB in 86.25 seconds (2.58 MB/s)
> >> > create dir kernel-23 222MB in 85.34 seconds (2.61 MB/s)
> >> > create dir kernel-24 222MB in 84.61 seconds (2.63 MB/s)
> >> > create dir kernel-25 222MB in 88.13 seconds (2.52 MB/s)
> >> > create dir kernel-26 222MB in 85.57 seconds (2.60 MB/s)
> >> > create dir kernel-27 222MB in 87.26 seconds (2.55 MB/s)
> >> > create dir kernel-28 222MB in 83.68 seconds (2.66 MB/s)
> >> > create dir kernel-29 222MB in 86.33 seconds (2.58 MB/s)
> >> > === sdb ===
> >> >   CPU  0:              9366376 events,   439049 KiB data
> >> >   Total:               9366376 events (dropped 0),   439049 KiB data
> >> > patch dir kernel-7 109MB in 55.00 seconds (1.99 MB/s)
> >> > compile dir kernel-7 691MB in 37.15 seconds (18.62 MB/s)
> >> > compile dir kernel-14 680MB in 38.48 seconds (17.69 MB/s)
> >> > patch dir kernel-14 691MB in 93.31 seconds (7.41 MB/s)
> >> > read dir kernel-7 in 93.36 9.85 MB/s
> >> > read dir kernel-10 in 58.25 3.82 MB/s
> >> > create dir kernel-3116 222MB in 91.96 seconds (2.42 MB/s)
> >> > clean kernel-7 691MB in 5.16 seconds (134.03 MB/s)
> >> > read dir kernel-6 in 56.98 3.90 MB/s
> >> > stat dir kernel-2 in 19.42 seconds
> >> > compile dir kernel-2 680MB in 43.11 seconds (15.79 MB/s)
> >> > clean kernel-14 691MB in 6.27 seconds (110.30 MB/s)
> >> > clean kernel-2 680MB in 5.79 seconds (117.55 MB/s)
> >> > patch dir kernel-2 109MB in 71.22 seconds (1.54 MB/s)
> >> > stat dir kernel-2 in 16.06 seconds
> >> > create dir kernel-6231 222MB in 96.20 seconds (2.31 MB/s)
> >> > delete kernel-8 in 45.20 seconds
> >> > compile dir kernel-2 691MB in 38.58 seconds (17.93 MB/s)
> >> > create dir kernel-70151 222MB in 93.41 seconds (2.38 MB/s)
> >> > clean kernel-2 691MB in 5.09 seconds (135.87 MB/s)
> >> > create dir kernel-78184 222MB in 86.04 seconds (2.58 MB/s)
> >> > compile dir kernel-7 691MB in 37.60 seconds (18.39 MB/s)
> >> > create dir kernel-64250 222MB in 80.33 seconds (2.77 MB/s)
> >> > delete kernel-12 in 43.00 seconds
> >> > stat dir kernel-2 in 16.43 seconds
> >> > patch dir kernel-70151 109MB in 77.42 seconds (1.42 MB/s)
> >> > stat dir kernel-7 in 18.48 seconds
> >> > stat dir kernel-78184 in 18.62 seconds
> >> > compile dir kernel-2 691MB in 43.31 seconds (15.97 MB/s)
> >> > compile dir kernel-26 680MB in 50.37 seconds (13.51 MB/s)
> >> > stat dir kernel-7 in 21.52 seconds
> >> > create dir kernel-2717 222MB in 89.86 seconds (2.47 MB/s)
> >> > delete kernel-26 in 47.81 seconds
> >> > stat dir kernel-2 in 18.61 seconds
> >> > compile dir kernel-14 691MB in 46.66 seconds (14.82 MB/s)
> >> > compile dir kernel-70151 691MB in 39.19 seconds (17.65 MB/s)
> >> > create dir kernel-55376 222MB in 88.91 seconds (2.50 MB/s)
> >> > stat dir kernel-22 in 18.66 seconds
> >> > delete kernel-55376 in 37.71 seconds
> >> > patch dir kernel-27 109MB in 74.82 seconds (1.47 MB/s)
> >> > patch dir kernel-64250 109MB in 81.08 seconds (1.35 MB/s)
> >> > read dir kernel-6231 in 82.15 2.71 MB/s
> >> > patch dir kernel-9 109MB in 96.02 seconds (1.14 MB/s)
> >> > stat dir kernel-14 in 22.46 seconds
> >> > read dir kernel-29 in 58.10 3.83 MB/s
> >> > create dir kernel-57327 222MB in 93.92 seconds (2.37 MB/s)
> >> > stat dir kernel-14 in 21.92 seconds
> >> > compile dir kernel-27 691MB in 41.43 seconds (16.69 MB/s)
> >> > create dir kernel-64334 222MB in 89.31 seconds (2.49 MB/s)
> >> > patch dir kernel-1 109MB in 84.37 seconds (1.30 MB/s)
> >> > create dir kernel-16056 222MB in 94.93 seconds (2.34 MB/s)
> >> > clean kernel-7 691MB in 7.27 seconds (95.13 MB/s)
> >> > delete kernel-27 in 46.32 seconds
> >> > create dir kernel-51614 222MB in 88.91 seconds (2.50 MB/s)
> >> > clean kernel-14 691MB in 6.71 seconds (103.07 MB/s)
> >> > delete kernel-64250 in 43.60 seconds
> >> > stat dir kernel-2 in 24.25 seconds
> >> > clean kernel-70151 691MB in 6.20 seconds (111.55 MB/s)
> >> > delete kernel-14 in 40.74 seconds
> >> > read dir kernel-2 in 118.45 7.76 MB/s
> >> > create dir kernel-24150 222MB in 88.99 seconds (2.50 MB/s)
> >> > read dir kernel-9 in 83.70 2.73 MB/s
> >> > patch dir kernel-19 109MB in 76.06 seconds (1.44 MB/s)
> >> > clean kernel-2 691MB in 6.64 seconds (104.16 MB/s)
> >> > compile dir kernel-18 680MB in 47.33 seconds (14.38 MB/s)
> >> > compile dir kernel-2 691MB in 44.63 seconds (15.50 MB/s)
> >> > delete kernel-2 in 51.03 seconds
> >> > delete kernel-70151 in 45.96 seconds
> >> > stat dir kernel-1 in 17.56 seconds
> >> > read dir kernel-18 in 121.08 7.46 MB/s
> >> > clean kernel-18 680MB in 6.47 seconds (105.20 MB/s)
> >> > compile dir kernel-17 680MB in 52.10 seconds (13.06 MB/s)
> >> > read dir kernel-17 in 114.66 7.88 MB/s
> >> > stat dir kernel-18 in 30.36 seconds
> >> > stat dir kernel-64334 in 44.78 seconds
> >> > delete kernel-24150 in 44.79 seconds
> >> > delete kernel-17 in 47.64 seconds
> >> > stat dir kernel-1 in 19.87 seconds
> >> > compile dir kernel-7 691MB in 47.65 seconds (14.51 MB/s)
> >> > patch dir kernel-16 109MB in 100.96 seconds (1.09 MB/s)
> >> > stat dir kernel-7 in 21.35 seconds
> >> > create dir kernel-82195 222MB in 111.17 seconds (2.00 MB/s)
> >> > delete kernel-82195 in 40.79 seconds
> >> > stat dir kernel-3 in 19.51 seconds
> >> > patch dir kernel-2717 109MB in 94.55 seconds (1.16 MB/s)
> >> > patch dir kernel-5 109MB in 60.21 seconds (1.82 MB/s)
> >> > read dir kernel-2717 in 94.85 2.41 MB/s
> >> > delete kernel-29 in 40.51 seconds
> >> > clean kernel-7 691MB in 5.84 seconds (118.42 MB/s)
> >> > read dir kernel-4 in 57.91 3.84 MB/s
> >> > stat dir kernel-78184 in 19.65 seconds
> >> > patch dir kernel-0 109MB in 90.61 seconds (1.21 MB/s)
> >> > patch dir kernel-3 109MB in 75.67 seconds (1.45 MB/s)
> >> > create dir kernel-30226 222MB in 106.72 seconds (2.08 MB/s)
> >> > read dir kernel-19 in 83.79 2.72 MB/s
> >> > read dir kernel-9 in 82.64 2.76 MB/s
> >> > delete kernel-5 in 38.89 seconds
> >> > read dir kernel-7 in 59.70 3.82 MB/s
> >> > patch dir kernel-57327 109MB in 101.71 seconds (1.08 MB/s)
> >> > read dir kernel-11 in 59.83 3.72 MB/s
> >> >
> >> > run complete:
> >> > ==========================================================================
> >> > intial create total runs 30 avg 2.55 MB/s (user 13.94s sys 34.07s)
> >> > create total runs 14 avg 2.41 MB/s (user 13.83s sys 34.39s)
> >> > patch total runs 15 avg 1.79 MB/s (user 6.55s sys 34.71s)
> >> > compile total runs 14 avg 16.04 MB/s (user 2.65s sys 16.88s)
> >> > clean total runs 10 avg 113.53 MB/s (user 0.46s sys 3.14s)
> >> > read tree total runs 11 avg 3.30 MB/s (user 11.68s sys 24.50s)
> >> > read compiled tree total runs 4 avg 8.24 MB/s (user 13.67s sys 35.85s)
> >> > delete tree total runs 10 avg 42.12 seconds (user 6.76s sys 24.50s)
> >> > delete compiled tree total runs 4 avg 48.20 seconds (user 7.65s sys 28.60s)
> >> > stat tree total runs 11 avg 21.90 seconds (user 6.87s sys 6.34s)
> >> > stat compiled tree total runs 7 avg 21.23 seconds (user 7.65s sys 7.15s)
> >> >
> >> > 2.) kernel with enabled hot tracking
> >> >
> >> > using working directory /mnt/scratch/, 30 intial dirs 100 runs
> >> > native unpatched native-0 222MB in 112.82 seconds (1.97 MB/s)
> >> > native patched native-0 109MB in 27.38 seconds (4.01 MB/s)
> >> > native patched compiled native-0 691MB in 40.42 seconds (17.11 MB/s)
> >> > create dir kernel-0 222MB in 92.88 seconds (2.39 MB/s)
> >> > create dir kernel-1 222MB in 98.56 seconds (2.26 MB/s)
> >> > create dir kernel-2 222MB in 107.27 seconds (2.07 MB/s)
> >> > create dir kernel-3 222MB in 92.81 seconds (2.40 MB/s)
> >> > create dir kernel-4 222MB in 90.30 seconds (2.46 MB/s)
> >> > create dir kernel-5 222MB in 91.57 seconds (2.43 MB/s)
> >> > create dir kernel-6 222MB in 91.92 seconds (2.42 MB/s)
> >> > create dir kernel-7 222MB in 90.16 seconds (2.47 MB/s)
> >> > create dir kernel-8 222MB in 94.71 seconds (2.35 MB/s)
> >> > create dir kernel-9 222MB in 91.79 seconds (2.42 MB/s)
> >> > create dir kernel-10 222MB in 92.14 seconds (2.41 MB/s)
> >> > create dir kernel-11 222MB in 88.59 seconds (2.51 MB/s)
> >> > create dir kernel-12 222MB in 92.15 seconds (2.41 MB/s)
> >> > create dir kernel-13 222MB in 91.54 seconds (2.43 MB/s)
> >> > create dir kernel-14 222MB in 91.15 seconds (2.44 MB/s)
> >> > create dir kernel-15 222MB in 90.54 seconds (2.46 MB/s)
> >> > create dir kernel-16 222MB in 92.23 seconds (2.41 MB/s)
> >> > create dir kernel-17 222MB in 89.88 seconds (2.47 MB/s)
> >> > create dir kernel-18 222MB in 94.65 seconds (2.35 MB/s)
> >> > create dir kernel-19 222MB in 89.99 seconds (2.47 MB/s)
> >> > create dir kernel-20 222MB in 90.35 seconds (2.46 MB/s)
> >> > create dir kernel-21 222MB in 90.92 seconds (2.45 MB/s)
> >> > create dir kernel-22 222MB in 90.76 seconds (2.45 MB/s)
> >> > create dir kernel-23 222MB in 90.04 seconds (2.47 MB/s)
> >> > create dir kernel-24 222MB in 89.60 seconds (2.48 MB/s)
> >> > create dir kernel-25 222MB in 91.52 seconds (2.43 MB/s)
> >> > create dir kernel-26 222MB in 90.45 seconds (2.46 MB/s)
> >> > create dir kernel-27 222MB in 92.72 seconds (2.40 MB/s)
> >> > create dir kernel-28 222MB in 90.37 seconds (2.46 MB/s)
> >> > create dir kernel-29 222MB in 89.60 seconds (2.48 MB/s)
> >> > === sdb ===
> >> >   CPU  0:              8878754 events,   416192 KiB data
> >> >   Total:               8878754 events (dropped 0),   416192 KiB data
> >> > patch dir kernel-7 109MB in 61.00 seconds (1.80 MB/s)
> >> > compile dir kernel-7 691MB in 40.21 seconds (17.20 MB/s)
> >> > compile dir kernel-14 680MB in 45.97 seconds (14.81 MB/s)
> >> > patch dir kernel-14 691MB in 83.73 seconds (8.26 MB/s)
> >> > read dir kernel-7 in 88.66 10.37 MB/s
> >> > read dir kernel-10 in 56.44 3.94 MB/s
> >> > create dir kernel-3116 222MB in 91.58 seconds (2.43 MB/s)
> >> > clean kernel-7 691MB in 6.69 seconds (103.38 MB/s)
> >> > read dir kernel-6 in 61.07 3.64 MB/s
> >> > stat dir kernel-2 in 21.42 seconds
> >> > compile dir kernel-2 680MB in 44.55 seconds (15.28 MB/s)
> >> > clean kernel-14 691MB in 6.98 seconds (99.08 MB/s)
> >> > clean kernel-2 680MB in 6.12 seconds (111.21 MB/s)
> >> > patch dir kernel-2 109MB in 73.95 seconds (1.48 MB/s)
> >> > stat dir kernel-2 in 18.61 seconds
> >> > create dir kernel-6231 222MB in 100.84 seconds (2.21 MB/s)
> >> > delete kernel-8 in 40.38 seconds
> >> > compile dir kernel-2 691MB in 42.18 seconds (16.40 MB/s)
> >> > create dir kernel-70151 222MB in 96.34 seconds (2.31 MB/s)
> >> > clean kernel-2 691MB in 4.54 seconds (152.33 MB/s)
> >> > create dir kernel-78184 222MB in 94.71 seconds (2.35 MB/s)
> >> > compile dir kernel-7 691MB in 43.64 seconds (15.85 MB/s)
> >> > create dir kernel-64250 222MB in 87.65 seconds (2.54 MB/s)
> >> > delete kernel-12 in 38.58 seconds
> >> > stat dir kernel-2 in 17.48 seconds
> >> > patch dir kernel-70151 109MB in 79.82 seconds (1.37 MB/s)
> >> > stat dir kernel-7 in 25.76 seconds
> >> > stat dir kernel-78184 in 20.30 seconds
> >> > compile dir kernel-2 691MB in 40.93 seconds (16.90 MB/s)
> >> > compile dir kernel-26 680MB in 48.86 seconds (13.93 MB/s)
> >> > stat dir kernel-7 in 23.87 seconds
> >> > create dir kernel-2717 222MB in 98.71 seconds (2.25 MB/s)
> >> > delete kernel-26 in 45.60 seconds
> >> > stat dir kernel-2 in 22.62 seconds
> >> > compile dir kernel-14 691MB in 51.12 seconds (13.53 MB/s)
> >> > compile dir kernel-70151 691MB in 41.40 seconds (16.71 MB/s)
> >> > create dir kernel-55376 222MB in 94.61 seconds (2.35 MB/s)
> >> > stat dir kernel-22 in 22.11 seconds
> >> > delete kernel-55376 in 36.47 seconds
> >> > patch dir kernel-27 109MB in 76.74 seconds (1.43 MB/s)
> >> > patch dir kernel-64250 109MB in 86.43 seconds (1.27 MB/s)
> >> > read dir kernel-6231 in 85.10 2.61 MB/s
> >> > patch dir kernel-9 109MB in 97.67 seconds (1.12 MB/s)
> >> > stat dir kernel-14 in 24.80 seconds
> >> > read dir kernel-29 in 61.00 3.65 MB/s
> >> > create dir kernel-57327 222MB in 101.42 seconds (2.19 MB/s)
> >> > stat dir kernel-14 in 22.45 seconds
> >> > compile dir kernel-27 691MB in 48.19 seconds (14.35 MB/s)
> >> > create dir kernel-64334 222MB in 96.65 seconds (2.30 MB/s)
> >> > patch dir kernel-1 109MB in 88.32 seconds (1.24 MB/s)
> >> > create dir kernel-16056 222MB in 100.60 seconds (2.21 MB/s)
> >> > clean kernel-7 691MB in 8.20 seconds (84.34 MB/s)
> >> > delete kernel-27 in 48.53 seconds
> >> > create dir kernel-51614 222MB in 98.07 seconds (2.27 MB/s)
> >> > clean kernel-14 691MB in 6.82 seconds (101.41 MB/s)
> >> > delete kernel-64250 in 44.01 seconds
> >> > stat dir kernel-2 in 26.37 seconds
> >> > clean kernel-70151 691MB in 6.21 seconds (111.37 MB/s)
> >> > delete kernel-14 in 41.74 seconds
> >> > read dir kernel-2 in 122.71 7.50 MB/s
> >> > create dir kernel-24150 222MB in 99.01 seconds (2.25 MB/s)
> >> > read dir kernel-9 in 78.29 2.91 MB/s
> >> > patch dir kernel-19 109MB in 77.45 seconds (1.42 MB/s)
> >> > clean kernel-2 691MB in 5.94 seconds (116.43 MB/s)
> >> > compile dir kernel-18 680MB in 49.17 seconds (13.84 MB/s)
> >> > compile dir kernel-2 691MB in 47.20 seconds (14.65 MB/s)
> >> > delete kernel-2 in 48.01 seconds
> >> > delete kernel-70151 in 47.60 seconds
> >> > stat dir kernel-1 in 21.80 seconds
> >> > read dir kernel-18 in 109.98 8.21 MB/s
> >> > clean kernel-18 680MB in 7.78 seconds (87.49 MB/s)
> >> > compile dir kernel-17 680MB in 54.39 seconds (12.51 MB/s)
> >> > read dir kernel-17 in 108.52 8.32 MB/s
> >> > stat dir kernel-18 in 19.48 seconds
> >> > stat dir kernel-64334 in 22.04 seconds
> >> > delete kernel-24150 in 44.36 seconds
> >> > delete kernel-17 in 49.09 seconds
> >> > stat dir kernel-1 in 18.16 seconds
> >> > compile dir kernel-7 691MB in 48.90 seconds (14.14 MB/s)
> >> > patch dir kernel-16 109MB in 103.71 seconds (1.06 MB/s)
> >> > stat dir kernel-7 in 21.94 seconds
> >> > create dir kernel-82195 222MB in 110.82 seconds (2.01 MB/s)
> >> > delete kernel-82195 in 38.64 seconds
> >> > stat dir kernel-3 in 22.88 seconds
> >> > patch dir kernel-2717 109MB in 92.23 seconds (1.19 MB/s)
> >> > patch dir kernel-5 109MB in 64.95 seconds (1.69 MB/s)
> >> > read dir kernel-2717 in 97.88 2.33 MB/s
> >> > delete kernel-29 in 40.59 seconds
> >> > clean kernel-7 691MB in 5.09 seconds (135.87 MB/s)
> >> > read dir kernel-4 in 59.42 3.74 MB/s
> >> > stat dir kernel-78184 in 20.24 seconds
> >> > patch dir kernel-0 109MB in 95.95 seconds (1.14 MB/s)
> >> > patch dir kernel-3 109MB in 62.86 seconds (1.74 MB/s)
> >> > create dir kernel-30226 222MB in 106.81 seconds (2.08 MB/s)
> >> > read dir kernel-19 in 81.32 2.81 MB/s
> >> > read dir kernel-9 in 74.65 3.06 MB/s
> >> > delete kernel-5 in 42.04 seconds
> >> > read dir kernel-7 in 61.95 3.68 MB/s
> >> > patch dir kernel-57327 109MB in 97.85 seconds (1.12 MB/s)
> >> > read dir kernel-11 in 58.85 3.78 MB/s
> >> >
> >> > run complete:
> >> > ==========================================================================
> >> > intial create total runs 30 avg 2.42 MB/s (user 13.60s sys 36.18s)
> >> > create total runs 14 avg 2.27 MB/s (user 13.66s sys 36.94s)
> >> > patch total runs 15 avg 1.82 MB/s (user 6.62s sys 36.93s)
> >> > compile total runs 14 avg 15.01 MB/s (user 2.76s sys 18.29s)
> >> > clean total runs 10 avg 110.29 MB/s (user 0.46s sys 3.21s)
> >> > read tree total runs 11 avg 3.29 MB/s (user 11.04s sys 28.65s)
> >> > read compiled tree total runs 4 avg 8.60 MB/s (user 13.16s sys 41.32s)
> >> > delete tree total runs 10 avg 41.44 seconds (user 6.43s sys 25.19s)
> >> > delete compiled tree total runs 4 avg 47.81 seconds (user 7.18s sys
> >> > 29.27s)
> >> > stat tree total runs 11 avg 20.41 seconds (user 6.39s sys 7.45s)
> >> > stat compiled tree total runs 7 avg 23.97 seconds (user 7.24s sys 8.74s)
> >> >
> >> > On Fri, 2012-11-16 at 17:51 +0800, zwu.kernel@gmail.com wrote:
> >> >> From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
> >> >>
> >> >> Hi guys,
> >> >>
> >> >>   Any comments or ideas are appreciated, thanks.
> >> >>
> >> >> NOTE:
> >> >>
> >> >>   The patchset can be obtained via my kernel dev git on github:
> >> >> git://github.com/wuzhy/kernel.git hot_tracking
> >> >>   If you're interested, you can also review them via
> >> >> https://github.com/wuzhy/kernel/commits/hot_tracking
> >> >>
> >> >>   For more info, please check hot_tracking.txt in Documentation
> >> >>
> >> >> TODO List:
> >> >>
> >> >>  1.) Need to do scalability and performance tests. - Required
> >> >>  2.) Need a simpler but efficient temperature calculation function
> >> >>  3.) How to save the file temperature across umount so that it
> >> >>      is preserved after reboot - Optional
> >> >>
> >> >> Changelog:
> >> >>
> >> >>  - Solved 64-bit inode number issue. [David Sterba]
> >> >>  - Embed struct hot_type in struct file_system_type [Darrick J. Wong]
> >> >>  - Cleaned up some issues [David Sterba]
> >> >>  - Use a static hot debugfs root [Greg KH]
> >> >>  - Rewrote debugfs support based on seq_file operations. [Dave Chinner]
> >> >>  - Refactored workqueue support. [Dave Chinner]
> >> >>  - Made some macros tunable: TIME_TO_KICK and HEAT_UPDATE_DELAY
> >> >>                                         [Zhiyong, Zheng Liu]
> >> >>  - Introduced hot function registration framework [Zhiyong]
> >> >>  - Removed global variable for hot tracking [Zhiyong]
> >> >>  - Added xfs hot tracking support [Dave Chinner]
> >> >>  - Added ext4 hot tracking support [Zheng Liu]
> >> >>  - Cleaned up a lot of other issues [Dave Chinner]
> >> >>  - Added memory shrinker [Dave Chinner]
> >> >>  - Converted to one workqueue to update map info periodically [Dave Chinner]
> >> >>  - Cleaned up more issues [Dave Chinner]
> >> >>  - Reduced new files and put all in fs/hot_tracking.[ch] [Dave Chinner]
> >> >>  - Add btrfs hot tracking support [Zhiyong]
> >> >>  - The first three patches can probably just be flattened into one.
> >> >>                                         [Marco Stornelli, Dave Chinner]
> >> >>
> >> >> Zhi Yong Wu (16):
> >> >>   vfs: introduce some data structures
> >> >>   vfs: add init and cleanup functions
> >> >>   vfs: add I/O frequency update function
> >> >>   vfs: add two map arrays
> >> >>   vfs: add hooks to enable hot tracking
> >> >>   vfs: add temp calculation function
> >> >>   vfs: add map info update function
> >> >>   vfs: add aging function
> >> >>   vfs: add one work queue
> >> >>   vfs: add FS hot type support
> >> >>   vfs: register one shrinker
> >> >>   vfs: add one ioctl interface
> >> >>   vfs: add debugfs support
> >> >>   proc: add two hot_track proc files
> >> >>   btrfs: add hot tracking support
> >> >>   vfs: add documentation
> >> >>
> >> >>  Documentation/filesystems/00-INDEX         |    2 +
> >> >>  Documentation/filesystems/hot_tracking.txt |  263 ++++++
> >> >>  fs/Makefile                                |    2 +-
> >> >>  fs/btrfs/ctree.h                           |    1 +
> >> >>  fs/btrfs/super.c                           |   22 +-
> >> >>  fs/compat_ioctl.c                          |    5 +
> >> >>  fs/dcache.c                                |    2 +
> >> >>  fs/direct-io.c                             |    6 +
> >> >>  fs/hot_tracking.c                          | 1306 ++++++++++++++++++++++++++++
> >> >>  fs/hot_tracking.h                          |   52 ++
> >> >>  fs/ioctl.c                                 |   74 ++
> >> >>  include/linux/fs.h                         |    5 +
> >> >>  include/linux/hot_tracking.h               |  152 ++++
> >> >>  kernel/sysctl.c                            |   14 +
> >> >>  mm/filemap.c                               |    6 +
> >> >>  mm/page-writeback.c                        |   12 +
> >> >>  mm/readahead.c                             |    7 +
> >> >>  17 files changed, 1929 insertions(+), 2 deletions(-)
> >> >>  create mode 100644 Documentation/filesystems/hot_tracking.txt
> >> >>  create mode 100644 fs/hot_tracking.c
> >> >>  create mode 100644 fs/hot_tracking.h
> >> >>  create mode 100644 include/linux/hot_tracking.h
> >> >>
> 
> 
> 
> -- 
> Regards,
> 
> Zhi Yong Wu


End of thread (newest message: 2012-12-14  2:46 UTC)

Thread overview: 22+ messages
2012-11-16  9:51 [PATCH v1 resend hot_track 00/16] vfs: hot data tracking zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 01/16] vfs: introduce some data structures zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 02/16] vfs: add init and cleanup functions zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 03/16] vfs: add I/O frequency update function zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 04/16] vfs: add two map arrays zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 05/16] vfs: add hooks to enable hot tracking zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 06/16] vfs: add temp calculation function zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 07/16] vfs: add map info update function zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 08/16] vfs: add aging function zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 09/16] vfs: add one work queue zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 10/16] vfs: add FS hot type support zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 11/16] vfs: register one shrinker zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 12/16] vfs: add one ioctl interface zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 13/16] vfs: add debugfs support zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 14/16] proc: add two hot_track proc files zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 15/16] btrfs: add hot tracking support zwu.kernel
2012-11-16  9:51 ` [PATCH v1 hot_track 16/16] vfs: add documentation zwu.kernel
2012-12-06  3:28 ` [PATCH v1 resend hot_track 00/16] vfs: hot data tracking Zhi Yong Wu
2012-12-10  3:30   ` Zhi Yong Wu
2012-12-12 19:50     ` Darrick J. Wong
2012-12-13 12:17       ` Zhi Yong Wu
2012-12-14  2:46         ` Darrick J. Wong
