* [PATCH 0/2] btrfs: add btree read ahead for send operations
@ 2021-02-26 15:17 fdmanana
2021-02-26 15:17 ` [PATCH 1/2] btrfs: add btree read ahead for full " fdmanana
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: fdmanana @ 2021-02-26 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
This patchset adds btree read ahead for full and incremental send operations,
which results in some nice speedups. Tests and results are mentioned in the
change log of each patch.
Filipe Manana (2):
btrfs: add btree read ahead for full send operations
btrfs: add btree read ahead for incremental send operations
fs/btrfs/send.c | 30 ++++++++++++++++++++++++------
1 file changed, 24 insertions(+), 6 deletions(-)
--
2.28.0
^ permalink raw reply [flat|nested] 7+ messages in thread

* [PATCH 1/2] btrfs: add btree read ahead for full send operations
  2021-02-26 15:17 [PATCH 0/2] btrfs: add btree read ahead for send operations fdmanana
@ 2021-02-26 15:17 ` fdmanana
  2021-02-26 15:17 ` [PATCH 2/2] btrfs: add btree read ahead for incremental " fdmanana
  2021-03-01  9:26 ` [PATCH v2 0/2] btrfs: add btree read ahead for " fdmanana
  2 siblings, 0 replies; 7+ messages in thread
From: fdmanana @ 2021-02-26 15:17 UTC (permalink / raw)
To: linux-btrfs

From: Filipe Manana <fdmanana@suse.com>

When doing a full send we know that we are going to be reading every node
and leaf of the send root, so we benefit from enabling read ahead for the
btree.

This change enables read ahead for full send operations only; incremental
sends will have read ahead enabled in a different way by a separate patch.

The following test script was used to measure the improvement on a box
using an average, consumer grade, spinning disk and with 16GiB of RAM:

  $ cat test.sh
  #!/bin/bash

  DEV=/dev/sdj
  MNT=/mnt/sdj
  MKFS_OPTIONS="--nodesize 16384"     # default, just to be explicit
  MOUNT_OPTIONS="-o max_inline=2048"  # default, just to be explicit

  mkfs.btrfs -f $MKFS_OPTIONS $DEV > /dev/null
  mount $MOUNT_OPTIONS $DEV $MNT

  # Create files with inline data to make it easier and faster to create
  # large btrees.
  add_files()
  {
      local total=$1
      local start_offset=$2
      local number_jobs=$3
      local total_per_job=$(($total / $number_jobs))

      echo "Creating $total new files using $number_jobs jobs"
      for ((n = 0; n < $number_jobs; n++)); do
          (
              local start_num=$(($start_offset + $n * $total_per_job))
              for ((i = 1; i <= $total_per_job; i++)); do
                  local file_num=$((start_num + $i))
                  local file_path="$MNT/file_${file_num}"
                  xfs_io -f -c "pwrite -S 0xab 0 2000" $file_path > /dev/null
                  if [ $? -ne 0 ]; then
                      echo "Failed creating file $file_path"
                      break
                  fi
              done
          ) &
          worker_pids[$n]=$!
      done
      wait ${worker_pids[@]}

      sync
      echo
      echo "btree node/leaf count: $(btrfs inspect-internal dump-tree -t 5 $DEV | egrep '^(node|leaf) ' | wc -l)"
  }

  initial_file_count=500000
  add_files $initial_file_count 0 4
  echo
  echo "Creating first snapshot..."
  btrfs subvolume snapshot -r $MNT $MNT/snap1

  echo
  echo "Adding more files..."
  add_files $((initial_file_count / 4)) $initial_file_count 4

  echo
  echo "Updating 1/50th of the initial files..."
  for ((i = 1; i < $initial_file_count; i += 50)); do
      xfs_io -c "pwrite -S 0xcd 0 20" $MNT/file_$i > /dev/null
  done

  echo
  echo "Creating second snapshot..."
  btrfs subvolume snapshot -r $MNT $MNT/snap2

  umount $MNT
  echo 3 > /proc/sys/vm/drop_caches
  blockdev --flushbufs $DEV &> /dev/null
  hdparm -F $DEV &> /dev/null
  mount $MOUNT_OPTIONS $DEV $MNT

  echo
  echo "Testing full send..."
  start=$(date +%s)
  btrfs send $MNT/snap1 > /dev/null
  end=$(date +%s)
  echo
  echo "Full send took $((end - start)) seconds"

  echo
  echo "Testing incremental send..."
  start=$(date +%s)
  btrfs send -p $MNT/snap1 $MNT/snap2 > /dev/null
  end=$(date +%s)
  echo
  echo "Incremental send took $((end - start)) seconds"

  umount $MNT

Before this change, full send duration:

  with $initial_file_count == 200000:  165 seconds
  with $initial_file_count == 500000:  407 seconds

After this change, full send duration:

  with $initial_file_count == 200000:  149 seconds (-10.2%)
  with $initial_file_count == 500000:  353 seconds (-14.2%)

For $initial_file_count == 200000 there are 62600 nodes and leaves in the
btree of the first snapshot, while for $initial_file_count == 500000 there
are 152476 nodes and leaves.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/send.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index f87878274e9f..7ff81da30af4 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6653,6 +6653,7 @@ static int full_send_tree(struct send_ctx *sctx)
 	path = alloc_path_for_send();
 	if (!path)
 		return -ENOMEM;
+	path->reada = READA_FORWARD;

 	key.objectid = BTRFS_FIRST_FREE_OBJECTID;
 	key.type = BTRFS_INODE_ITEM_KEY;
--
2.28.0
* [PATCH 2/2] btrfs: add btree read ahead for incremental send operations
  2021-02-26 15:17 [PATCH 0/2] btrfs: add btree read ahead for send operations fdmanana
  2021-02-26 15:17 ` [PATCH 1/2] btrfs: add btree read ahead for full " fdmanana
@ 2021-02-26 15:17 ` fdmanana
  2021-03-01  9:26 ` [PATCH v2 0/2] btrfs: add btree read ahead for " fdmanana
  2 siblings, 0 replies; 7+ messages in thread
From: fdmanana @ 2021-02-26 15:17 UTC (permalink / raw)
To: linux-btrfs

From: Filipe Manana <fdmanana@suse.com>

Currently we do not do btree read ahead when doing an incremental send.
However, we know that we will read and process any node or leaf in the
send root that has a generation greater than the generation of the parent
root, so triggering read ahead for such nodes and leaves is beneficial
for an incremental send.

This change does that: it triggers read ahead for any node or leaf in the
send root that has a generation greater than the generation of the parent
root. As for the parent root, no read ahead is triggered, because knowing
in advance which of its nodes/leaves are going to be read is not so
linear, and there is often a large time window between visiting nodes or
leaves of the parent root. So I opted to leave out the parent root; in
fact, triggering read ahead for every node/leaf of the parent root made
things slightly worse (as expected).

The improvement was measured with the same test script as in the change
log of patch 1/2 (an average, consumer grade, spinning disk and 16GiB of
RAM).

Before this change, incremental send duration:

  with $initial_file_count == 200000:  64 seconds
  with $initial_file_count == 500000:  179 seconds

After this change, incremental send duration:

  with $initial_file_count == 200000:  52 seconds (-20.7%)
  with $initial_file_count == 500000:  130 seconds (-31.7%)

For $initial_file_count == 200000 there are 62600 nodes and leaves in the
btree of the first snapshot, and 77759 nodes and leaves in the btree of
the second snapshot. While for $initial_file_count == 500000 there are
152476 nodes and leaves in the btree of the first snapshot, and 190511
nodes and leaves in the btree of the second snapshot.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/send.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 7ff81da30af4..b53d16721b2d 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6692,15 +6692,22 @@ static int full_send_tree(struct send_ctx *sctx)
 	return ret;
 }

-static int tree_move_down(struct btrfs_path *path, int *level)
+static int tree_move_down(struct btrfs_path *path, int *level, u64 min_reada_gen)
 {
 	struct extent_buffer *eb;
+	struct extent_buffer *parent = path->nodes[*level];
+	const int nritems = btrfs_header_nritems(parent);
+	int slot = path->slots[*level];

 	BUG_ON(*level == 0);
-	eb = btrfs_read_node_slot(path->nodes[*level], path->slots[*level]);
+	eb = btrfs_read_node_slot(parent, slot);
 	if (IS_ERR(eb))
 		return PTR_ERR(eb);

+	for (slot++; slot < nritems; slot++)
+		if (btrfs_node_ptr_generation(parent, slot) > min_reada_gen)
+			btrfs_readahead_node_child(parent, slot);
+
 	path->nodes[*level - 1] = eb;
 	path->slots[*level - 1] = 0;
 	(*level)--;
@@ -6740,14 +6747,15 @@ static int tree_move_next_or_upnext(struct btrfs_path *path,
 static int tree_advance(struct btrfs_path *path,
 			int *level, int root_level,
 			int allow_down,
-			struct btrfs_key *key)
+			struct btrfs_key *key,
+			u64 min_reada_gen)
 {
 	int ret;

 	if (*level == 0 || !allow_down) {
 		ret = tree_move_next_or_upnext(path, level, root_level);
 	} else {
-		ret = tree_move_down(path, level);
+		ret = tree_move_down(path, level, min_reada_gen);
 	}
 	if (ret >= 0) {
 		if (*level == 0)
@@ -6821,6 +6829,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 	u64 right_blockptr;
 	u64 left_gen;
 	u64 right_gen;
+	u64 min_reada_gen;

 	left_path = btrfs_alloc_path();
 	if (!left_path) {
@@ -6900,6 +6909,14 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = -ENOMEM;
 		goto out;
 	}
+	/*
+	 * Our right root is the parent root, while the left root is the "send"
+	 * root. We know that all new nodes/leaves in the left root must have
+	 * a generation greater than the right root's generation, so we trigger
+	 * readahead for those nodes and leaves of the left root, as we know we
+	 * will read them for sure.
+	 */
+	min_reada_gen = btrfs_header_generation(right_root->commit_root);
 	up_read(&fs_info->commit_root_sem);

 	if (left_level == 0)
@@ -6924,7 +6941,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = tree_advance(left_path, &left_level,
 				left_root_level,
 				advance_left != ADVANCE_ONLY_NEXT,
-				&left_key);
+				&left_key, min_reada_gen);
 		if (ret == -1)
 			left_end_reached = ADVANCE;
 		else if (ret < 0)
@@ -6935,7 +6952,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = tree_advance(right_path, &right_level,
 				right_root_level,
 				advance_right != ADVANCE_ONLY_NEXT,
-				&right_key);
+				&right_key, min_reada_gen);
 		if (ret == -1)
 			right_end_reached = ADVANCE;
 		else if (ret < 0)
--
2.28.0
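[Editor's note] The generation-based gating that the patch above adds to
tree_move_down() can be sketched in plain userspace C. This is a
hypothetical illustration, not kernel code: the function name
count_readahead_children and the plain generation array stand in for the
extent buffer helpers (btrfs_node_ptr_generation, btrfs_readahead_node_child).

```c
#include <assert.h>

/*
 * Sketch of the readahead gating in tree_move_down(): after reading the
 * child at cur_slot, readahead is triggered for every following sibling
 * whose pointer generation is newer than the parent root's generation
 * (min_reada_gen), because an incremental send is guaranteed to visit
 * those nodes/leaves. Returns how many siblings would get readahead.
 */
int count_readahead_children(const unsigned long long *gens, int nritems,
                             int cur_slot, unsigned long long min_reada_gen)
{
    int count = 0;

    /* Mirrors the kernel loop: for (slot++; slot < nritems; slot++) */
    for (int slot = cur_slot + 1; slot < nritems; slot++) {
        if (gens[slot] > min_reada_gen)
            count++;  /* kernel: btrfs_readahead_node_child(parent, slot) */
    }
    return count;
}
```

Siblings whose generation is at or below the parent root's generation are
shared with the parent snapshot and may be skipped entirely, which is why
they are excluded from readahead.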
* [PATCH v2 0/2] btrfs: add btree read ahead for send operations
  2021-02-26 15:17 [PATCH 0/2] btrfs: add btree read ahead for send operations fdmanana
  2021-02-26 15:17 ` [PATCH 1/2] btrfs: add btree read ahead for full " fdmanana
  2021-02-26 15:17 ` [PATCH 2/2] btrfs: add btree read ahead for incremental " fdmanana
@ 2021-03-01  9:26 ` fdmanana
  2021-03-01  9:26 ` [PATCH v2 1/2] btrfs: add btree read ahead for full " fdmanana
  ` (2 more replies)
  2 siblings, 3 replies; 7+ messages in thread
From: fdmanana @ 2021-03-01  9:26 UTC (permalink / raw)
To: linux-btrfs

From: Filipe Manana <fdmanana@suse.com>

This patchset adds btree read ahead for full and incremental send
operations, which results in some nice speedups. Tests and results are
mentioned in the change log of each patch.

V2: Updated the second patch, for incremental sends, to limit read ahead
    so that it does not trigger too many reads in parallel and disturb
    other workloads running on the system, without causing a drop in the
    gains. Updated change logs.

Filipe Manana (2):
  btrfs: add btree read ahead for full send operations
  btrfs: add btree read ahead for incremental send operations

 fs/btrfs/send.c | 43 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 37 insertions(+), 6 deletions(-)

--
2.28.0
* [PATCH v2 1/2] btrfs: add btree read ahead for full send operations
  2021-03-01  9:26 ` [PATCH v2 0/2] btrfs: add btree read ahead for " fdmanana
@ 2021-03-01  9:26 ` fdmanana
  2021-03-01  9:26 ` [PATCH v2 2/2] btrfs: add btree read ahead for incremental " fdmanana
  2021-03-04 16:51 ` [PATCH v2 0/2] btrfs: add btree read ahead for " David Sterba
  2 siblings, 0 replies; 7+ messages in thread
From: fdmanana @ 2021-03-01  9:26 UTC (permalink / raw)
To: linux-btrfs

From: Filipe Manana <fdmanana@suse.com>

When doing a full send we know that we are going to be reading every node
and leaf of the send root, so we benefit from enabling read ahead for the
btree.

This change enables read ahead for full send operations only; incremental
sends will have read ahead enabled in a different way by a separate patch.

The following test script was used to measure the improvement on a box
using an average, consumer grade, spinning disk and with 16GiB of RAM:

  $ cat test.sh
  #!/bin/bash

  DEV=/dev/sdj
  MNT=/mnt/sdj
  MKFS_OPTIONS="--nodesize 16384"     # default, just to be explicit
  MOUNT_OPTIONS="-o max_inline=2048"  # default, just to be explicit

  mkfs.btrfs -f $MKFS_OPTIONS $DEV > /dev/null
  mount $MOUNT_OPTIONS $DEV $MNT

  # Create files with inline data to make it easier and faster to create
  # large btrees.
  add_files()
  {
      local total=$1
      local start_offset=$2
      local number_jobs=$3
      local total_per_job=$(($total / $number_jobs))

      echo "Creating $total new files using $number_jobs jobs"
      for ((n = 0; n < $number_jobs; n++)); do
          (
              local start_num=$(($start_offset + $n * $total_per_job))
              for ((i = 1; i <= $total_per_job; i++)); do
                  local file_num=$((start_num + $i))
                  local file_path="$MNT/file_${file_num}"
                  xfs_io -f -c "pwrite -S 0xab 0 2000" $file_path > /dev/null
                  if [ $? -ne 0 ]; then
                      echo "Failed creating file $file_path"
                      break
                  fi
              done
          ) &
          worker_pids[$n]=$!
      done
      wait ${worker_pids[@]}

      sync
      echo
      echo "btree node/leaf count: $(btrfs inspect-internal dump-tree -t 5 $DEV | egrep '^(node|leaf) ' | wc -l)"
  }

  initial_file_count=500000
  add_files $initial_file_count 0 4
  echo
  echo "Creating first snapshot..."
  btrfs subvolume snapshot -r $MNT $MNT/snap1

  echo
  echo "Adding more files..."
  add_files $((initial_file_count / 4)) $initial_file_count 4

  echo
  echo "Updating 1/50th of the initial files..."
  for ((i = 1; i < $initial_file_count; i += 50)); do
      xfs_io -c "pwrite -S 0xcd 0 20" $MNT/file_$i > /dev/null
  done

  echo
  echo "Creating second snapshot..."
  btrfs subvolume snapshot -r $MNT $MNT/snap2

  umount $MNT
  echo 3 > /proc/sys/vm/drop_caches
  blockdev --flushbufs $DEV &> /dev/null
  hdparm -F $DEV &> /dev/null
  mount $MOUNT_OPTIONS $DEV $MNT

  echo
  echo "Testing full send..."
  start=$(date +%s)
  btrfs send $MNT/snap1 > /dev/null
  end=$(date +%s)
  echo
  echo "Full send took $((end - start)) seconds"

  umount $MNT
  echo 3 > /proc/sys/vm/drop_caches
  blockdev --flushbufs $DEV &> /dev/null
  hdparm -F $DEV &> /dev/null
  mount $MOUNT_OPTIONS $DEV $MNT

  echo
  echo "Testing incremental send..."
  start=$(date +%s)
  btrfs send -p $MNT/snap1 $MNT/snap2 > /dev/null
  end=$(date +%s)
  echo
  echo "Incremental send took $((end - start)) seconds"

  umount $MNT

Before this change, full send duration:

  with $initial_file_count == 200000:  165 seconds
  with $initial_file_count == 500000:  407 seconds

After this change, full send duration:

  with $initial_file_count == 200000:  149 seconds (-10.2%)
  with $initial_file_count == 500000:  353 seconds (-14.2%)

For $initial_file_count == 200000 there are 62600 nodes and leaves in the
btree of the first snapshot, while for $initial_file_count == 500000 there
are 152476 nodes and leaves. The roots were at level 2.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/send.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index f87878274e9f..7ff81da30af4 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6653,6 +6653,7 @@ static int full_send_tree(struct send_ctx *sctx)
 	path = alloc_path_for_send();
 	if (!path)
 		return -ENOMEM;
+	path->reada = READA_FORWARD;

 	key.objectid = BTRFS_FIRST_FREE_OBJECTID;
 	key.type = BTRFS_INODE_ITEM_KEY;
--
2.28.0
* [PATCH v2 2/2] btrfs: add btree read ahead for incremental send operations
  2021-03-01  9:26 ` [PATCH v2 0/2] btrfs: add btree read ahead for " fdmanana
  2021-03-01  9:26 ` [PATCH v2 1/2] btrfs: add btree read ahead for full " fdmanana
@ 2021-03-01  9:26 ` fdmanana
  2021-03-04 16:51 ` [PATCH v2 0/2] btrfs: add btree read ahead for " David Sterba
  2 siblings, 0 replies; 7+ messages in thread
From: fdmanana @ 2021-03-01  9:26 UTC (permalink / raw)
To: linux-btrfs

From: Filipe Manana <fdmanana@suse.com>

Currently we do not do btree read ahead when doing an incremental send.
However, we know that we will read and process any node or leaf in the
send root that has a generation greater than the generation of the parent
root, so triggering read ahead for such nodes and leaves is beneficial
for an incremental send.

This change does that: it triggers read ahead for any node or leaf in the
send root that has a generation greater than the generation of the parent
root. As for the parent root, no read ahead is triggered, because knowing
in advance which of its nodes/leaves are going to be read is not so
linear, and there is often a large time window between visiting nodes or
leaves of the parent root. So I opted to leave out the parent root;
triggering read ahead for its nodes/leaves did not seem to make a
significant difference.

The improvement was measured with the same test script as in the change
log of patch v2 1/2 (an average, consumer grade, spinning disk and 16GiB
of RAM).

Before this change, incremental send duration:

  with $initial_file_count == 200000:  51 seconds
  with $initial_file_count == 500000:  168 seconds

After this change, incremental send duration:

  with $initial_file_count == 200000:  39 seconds (-26.7%)
  with $initial_file_count == 500000:  125 seconds (-29.4%)

For $initial_file_count == 200000 there are 62600 nodes and leaves in the
btree of the first snapshot, and 77759 nodes and leaves in the btree of
the second snapshot. The root nodes were at level 2. While for
$initial_file_count == 500000 there are 152476 nodes and leaves in the
btree of the first snapshot, and 190511 nodes and leaves in the btree of
the second snapshot. The root nodes were at level 2 as well.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/send.c | 42 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 7ff81da30af4..3bfd5609fbac 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -6692,15 +6692,35 @@ static int full_send_tree(struct send_ctx *sctx)
 	return ret;
 }

-static int tree_move_down(struct btrfs_path *path, int *level)
+static int tree_move_down(struct btrfs_path *path, int *level, u64 reada_min_gen)
 {
 	struct extent_buffer *eb;
+	struct extent_buffer *parent = path->nodes[*level];
+	int slot = path->slots[*level];
+	const int nritems = btrfs_header_nritems(parent);
+	u64 reada_max;
+	u64 reada_done = 0;

 	BUG_ON(*level == 0);
-	eb = btrfs_read_node_slot(path->nodes[*level], path->slots[*level]);
+	eb = btrfs_read_node_slot(parent, slot);
 	if (IS_ERR(eb))
 		return PTR_ERR(eb);

+	/*
+	 * Trigger readahead for the next leaves we will process, so that it is
+	 * very likely that when we need them they are already in memory and we
+	 * will not block on disk IO. For nodes we only do readahead for one,
+	 * since the time window between processing nodes is typically larger.
+	 */
+	reada_max = *level == 1 ? SZ_128K : eb->fs_info->nodesize;
+
+	for (slot++; slot < nritems && reada_done < reada_max; slot++) {
+		if (btrfs_node_ptr_generation(parent, slot) > reada_min_gen) {
+			btrfs_readahead_node_child(parent, slot);
+			reada_done += eb->fs_info->nodesize;
+		}
+	}
+
 	path->nodes[*level - 1] = eb;
 	path->slots[*level - 1] = 0;
 	(*level)--;
@@ -6740,14 +6760,15 @@ static int tree_move_next_or_upnext(struct btrfs_path *path,
 static int tree_advance(struct btrfs_path *path,
 			int *level, int root_level,
 			int allow_down,
-			struct btrfs_key *key)
+			struct btrfs_key *key,
+			u64 reada_min_gen)
 {
 	int ret;

 	if (*level == 0 || !allow_down) {
 		ret = tree_move_next_or_upnext(path, level, root_level);
 	} else {
-		ret = tree_move_down(path, level);
+		ret = tree_move_down(path, level, reada_min_gen);
 	}
 	if (ret >= 0) {
 		if (*level == 0)
@@ -6821,6 +6842,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 	u64 right_blockptr;
 	u64 left_gen;
 	u64 right_gen;
+	u64 reada_min_gen;

 	left_path = btrfs_alloc_path();
 	if (!left_path) {
@@ -6900,6 +6922,14 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = -ENOMEM;
 		goto out;
 	}
+	/*
+	 * Our right root is the parent root, while the left root is the "send"
+	 * root. We know that all new nodes/leaves in the left root must have
+	 * a generation greater than the right root's generation, so we trigger
+	 * readahead for those nodes and leaves of the left root, as we know we
+	 * will need to read them at some point.
+	 */
+	reada_min_gen = btrfs_header_generation(right_root->commit_root);
 	up_read(&fs_info->commit_root_sem);

 	if (left_level == 0)
@@ -6924,7 +6954,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = tree_advance(left_path, &left_level,
 				left_root_level,
 				advance_left != ADVANCE_ONLY_NEXT,
-				&left_key);
+				&left_key, reada_min_gen);
 		if (ret == -1)
 			left_end_reached = ADVANCE;
 		else if (ret < 0)
@@ -6935,7 +6965,7 @@ static int btrfs_compare_trees(struct btrfs_root *left_root,
 		ret = tree_advance(right_path, &right_level,
 				right_root_level,
 				advance_right != ADVANCE_ONLY_NEXT,
-				&right_key);
+				&right_key, reada_min_gen);
 		if (ret == -1)
 			right_end_reached = ADVANCE;
 		else if (ret < 0)
--
2.28.0
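[Editor's note] The v2 capping logic described above (128K of leaves, or a
single node when descending below level 1) can be sketched in plain
userspace C. This is a hypothetical illustration, not kernel code: the
function reada_bytes and its array inputs are invented stand-ins for the
extent buffer helpers used in tree_move_down().

```c
#include <assert.h>

/*
 * Sketch of the v2 readahead cap in tree_move_down(): when descending to
 * leaves (level == 1) readahead is capped at 128K worth of leaves; when
 * descending to lower-level nodes only one node's worth is read ahead,
 * since nodes are processed further apart in time. Only siblings with a
 * generation newer than the parent root's (reada_min_gen) count. Returns
 * the number of bytes of readahead that would be issued.
 */
unsigned long long reada_bytes(const unsigned long long *gens, int nritems,
                               int cur_slot, int level,
                               unsigned long long reada_min_gen,
                               unsigned int nodesize)
{
    const unsigned long long sz_128k = 128 * 1024;
    /* kernel: reada_max = *level == 1 ? SZ_128K : eb->fs_info->nodesize; */
    unsigned long long reada_max = (level == 1) ? sz_128k : nodesize;
    unsigned long long reada_done = 0;

    for (int slot = cur_slot + 1;
         slot < nritems && reada_done < reada_max; slot++) {
        if (gens[slot] > reada_min_gen)
            reada_done += nodesize;  /* kernel also triggers the readahead */
    }
    return reada_done;
}
```

With the default 16K nodesize the 128K cap means at most eight leaves are
read ahead per descent, which bounds the number of parallel reads without
giving up the measured gains.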
* Re: [PATCH v2 0/2] btrfs: add btree read ahead for send operations
  2021-03-01  9:26 ` [PATCH v2 0/2] btrfs: add btree read ahead for " fdmanana
  2021-03-01  9:26 ` [PATCH v2 1/2] btrfs: add btree read ahead for full " fdmanana
  2021-03-01  9:26 ` [PATCH v2 2/2] btrfs: add btree read ahead for incremental " fdmanana
@ 2021-03-04 16:51 ` David Sterba
  2 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2021-03-04 16:51 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs

On Mon, Mar 01, 2021 at 09:26:41AM +0000, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> This patchset adds btree read ahead for full and incremental send operations,
> which results in some nice speedups. Test and results are mentioned in the
> change log of each patch.

Thanks, added to misc-next. The first patch probably has the best ratio
of changed lines vs performance gain.