* [PATCH v4 0/4] btrfs: Fix data checksum error caused by replace with io-load
From: Zhao Lei @ 2015-08-05  8:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Zhao Lei

This patchset fixes a data checksum error caused by running replace under
I/O load.
It makes xfstests btrfs/070 (and 071) fail randomly.

See the description in [PATCH 4/4] for details.

Changelog v3->v4:
 1: Fix a regression in xfstests btrfs/061
    Patch v3 caused btrfs/061 to fail in some cases, because
    btrfs_inc_block_group_ro() includes a btrfs_end_transaction()
    operation, which changes data in reloc_ctl->data_inode and
    causes a deadlock in relocation (see the timeline and the
    sketch of the v4 ordering below):
    scrub                       relocate
    ----                        ----
                                relocate_file_extent_cluster()
                                prealloc_file_extent_cluster()
                                ...
    btrfs_inc_block_group_ro()
    btrfs_wait_for_commit()
    insert_reserved_file_extent()
    btrfs_set_file_extent_disk_num_bytes()
    (modify reloc_ctl->data_inode)
    ...
                                do_relocation()
                                get_new_location() ret -EINVAL
                                (because data_inode's extent changed)
                                __btrfs_cow_block() ret -EINVAL
                                (without unlock eb)
                                btrfs_search_slot() deadlock
                                (try to lock eb again)
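
    In v4, scrub is kept paused while a chunk is being relocated (the
    volumes.c hunk of [PATCH 4/4]), so the two sides above no longer run
    concurrently.  A minimal sketch of that ordering (not the literal
    code):

        /* btrfs_relocate_chunk(), sketch of the v4 ordering */
        btrfs_scrub_pause(root);
        ret = btrfs_relocate_block_group(extent_root, chunk_offset);
        btrfs_scrub_continue(root);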

Changelog v2->v3:
 1: Fix a typo (introduced during rebase) which made xfstests fail in
    btrfs/073 and btrfs/066.
 2: Rebase on top of integration-4.2
 3: Run full xfstests (generic and btrfs groups with 10 mount options).

Changelog v1->v2:
 1: Update the subject to reflect the problem being fixed.
 2: Update the description to explain why setting the block group
    read-only fixes the problem.
 3: Use a helper function to avoid a duplicated code block for setting
    a chunk read-only.
 All of the above were suggested by: David Sterba <dsterba@suse.cz>

Zhao Lei (4):
  btrfs: Use ref_cnt for set_block_group_ro()
  btrfs: Separate scrub_blocked_if_needed() to scrub_pause_on/off()
  btrfs: use scrub_pause_on/off() to reduce code in
    scrub_enumerate_chunks()
  btrfs: Fix data checksum error caused by replace with io-load.

 fs/btrfs/ctree.h       |  6 +++---
 fs/btrfs/extent-tree.c | 42 +++++++++++++++++++-------------------
 fs/btrfs/relocation.c  | 14 ++++++-------
 fs/btrfs/scrub.c       | 55 ++++++++++++++++++++++++++++++++++++--------------
 fs/btrfs/volumes.c     |  2 ++
 5 files changed, 72 insertions(+), 47 deletions(-)

-- 
1.8.5.1



* [PATCH v4 1/4] btrfs: Use ref_cnt for set_block_group_ro()
From: Zhao Lei @ 2015-08-05  8:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Zhao Lei

More than one code path calls set_block_group_ro() and restores the
block group to rw on failure.

The old code used a single bool bit to save the block group's ro state,
which cannot support the parallel case (confirmed to exist in my debug
log).

This patch uses a reference count to store the ro state, and renames
set_block_group_ro/set_block_group_rw
to
inc_block_group_ro/dec_block_group_ro.
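
A minimal sketch (not actual call sites) of why a reference count is
needed once two users, e.g. relocation and scrub/replace, can overlap
on the same block group:

	btrfs_inc_block_group_ro(root, cache);	/* user A: ro 0 -> 1 */
	btrfs_inc_block_group_ro(root, cache);	/* user B: ro 1 -> 2 */

	btrfs_dec_block_group_ro(root, cache);	/* user A done: ro 2 -> 1, still read-only */
	btrfs_dec_block_group_ro(root, cache);	/* user B done: ro 1 -> 0, rw restored */

With the old bool, the first "restore rw" would flip the block group
back to rw while the other user still depends on it being read-only.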

Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
 fs/btrfs/ctree.h       |  6 +++---
 fs/btrfs/extent-tree.c | 42 +++++++++++++++++++++---------------------
 fs/btrfs/relocation.c  | 14 ++++++--------
 3 files changed, 30 insertions(+), 32 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index aac314e..f57e6ca 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1300,7 +1300,7 @@ struct btrfs_block_group_cache {
 	/* for raid56, this is a full stripe, without parity */
 	unsigned long full_stripe_len;
 
-	unsigned int ro:1;
+	unsigned int ro;
 	unsigned int iref:1;
 	unsigned int has_caching_ctl:1;
 	unsigned int removed:1;
@@ -3495,9 +3495,9 @@ int btrfs_cond_migrate_bytes(struct btrfs_fs_info *fs_info,
 void btrfs_block_rsv_release(struct btrfs_root *root,
 			     struct btrfs_block_rsv *block_rsv,
 			     u64 num_bytes);
-int btrfs_set_block_group_ro(struct btrfs_root *root,
+int btrfs_inc_block_group_ro(struct btrfs_root *root,
 			     struct btrfs_block_group_cache *cache);
-void btrfs_set_block_group_rw(struct btrfs_root *root,
+void btrfs_dec_block_group_ro(struct btrfs_root *root,
 			      struct btrfs_block_group_cache *cache);
 void btrfs_put_block_group_cache(struct btrfs_fs_info *info);
 u64 btrfs_account_ro_block_groups_free_space(struct btrfs_space_info *sinfo);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 1c2bd17..a436bd5 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -8692,14 +8692,13 @@ static u64 update_block_group_flags(struct btrfs_root *root, u64 flags)
 	return flags;
 }
 
-static int set_block_group_ro(struct btrfs_block_group_cache *cache, int force)
+static int inc_block_group_ro(struct btrfs_block_group_cache *cache, int force)
 {
 	struct btrfs_space_info *sinfo = cache->space_info;
 	u64 num_bytes;
 	u64 min_allocable_bytes;
 	int ret = -ENOSPC;
 
-
 	/*
 	 * We need some metadata space and system metadata space for
 	 * allocating chunks in some corner cases until we force to set
@@ -8716,6 +8715,7 @@ static int set_block_group_ro(struct btrfs_block_group_cache *cache, int force)
 	spin_lock(&cache->lock);
 
 	if (cache->ro) {
+		cache->ro++;
 		ret = 0;
 		goto out;
 	}
@@ -8727,7 +8727,7 @@ static int set_block_group_ro(struct btrfs_block_group_cache *cache, int force)
 	    sinfo->bytes_may_use + sinfo->bytes_readonly + num_bytes +
 	    min_allocable_bytes <= sinfo->total_bytes) {
 		sinfo->bytes_readonly += num_bytes;
-		cache->ro = 1;
+		cache->ro++;
 		list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
 		ret = 0;
 	}
@@ -8737,7 +8737,7 @@ out:
 	return ret;
 }
 
-int btrfs_set_block_group_ro(struct btrfs_root *root,
+int btrfs_inc_block_group_ro(struct btrfs_root *root,
 			     struct btrfs_block_group_cache *cache)
 
 {
@@ -8745,8 +8745,6 @@ int btrfs_set_block_group_ro(struct btrfs_root *root,
 	u64 alloc_flags;
 	int ret;
 
-	BUG_ON(cache->ro);
-
 again:
 	trans = btrfs_join_transaction(root);
 	if (IS_ERR(trans))
@@ -8789,7 +8787,7 @@ again:
 			goto out;
 	}
 
-	ret = set_block_group_ro(cache, 0);
+	ret = inc_block_group_ro(cache, 0);
 	if (!ret)
 		goto out;
 	alloc_flags = get_alloc_profile(root, cache->space_info->flags);
@@ -8797,7 +8795,7 @@ again:
 			     CHUNK_ALLOC_FORCE);
 	if (ret < 0)
 		goto out;
-	ret = set_block_group_ro(cache, 0);
+	ret = inc_block_group_ro(cache, 0);
 out:
 	if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
 		alloc_flags = update_block_group_flags(root, cache->flags);
@@ -8860,7 +8858,7 @@ u64 btrfs_account_ro_block_groups_free_space(struct btrfs_space_info *sinfo)
 	return free_bytes;
 }
 
-void btrfs_set_block_group_rw(struct btrfs_root *root,
+void btrfs_dec_block_group_ro(struct btrfs_root *root,
 			      struct btrfs_block_group_cache *cache)
 {
 	struct btrfs_space_info *sinfo = cache->space_info;
@@ -8870,11 +8868,13 @@ void btrfs_set_block_group_rw(struct btrfs_root *root,
 
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
-	num_bytes = cache->key.offset - cache->reserved - cache->pinned -
-		    cache->bytes_super - btrfs_block_group_used(&cache->item);
-	sinfo->bytes_readonly -= num_bytes;
-	cache->ro = 0;
-	list_del_init(&cache->ro_list);
+	if (!--cache->ro) {
+		num_bytes = cache->key.offset - cache->reserved -
+			    cache->pinned - cache->bytes_super -
+			    btrfs_block_group_used(&cache->item);
+		sinfo->bytes_readonly -= num_bytes;
+		list_del_init(&cache->ro_list);
+	}
 	spin_unlock(&cache->lock);
 	spin_unlock(&sinfo->lock);
 }
@@ -9390,7 +9390,7 @@ int btrfs_read_block_groups(struct btrfs_root *root)
 
 		set_avail_alloc_bits(root->fs_info, cache->flags);
 		if (btrfs_chunk_readonly(root, cache->key.objectid)) {
-			set_block_group_ro(cache, 1);
+			inc_block_group_ro(cache, 1);
 		} else if (btrfs_block_group_used(&cache->item) == 0) {
 			spin_lock(&info->unused_bgs_lock);
 			/* Should always be true but just in case. */
@@ -9418,11 +9418,11 @@ int btrfs_read_block_groups(struct btrfs_root *root)
 		list_for_each_entry(cache,
 				&space_info->block_groups[BTRFS_RAID_RAID0],
 				list)
-			set_block_group_ro(cache, 1);
+			inc_block_group_ro(cache, 1);
 		list_for_each_entry(cache,
 				&space_info->block_groups[BTRFS_RAID_SINGLE],
 				list)
-			set_block_group_ro(cache, 1);
+			inc_block_group_ro(cache, 1);
 	}
 
 	init_global_block_rsv(info);
@@ -9910,7 +9910,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 		spin_unlock(&block_group->lock);
 
 		/* We don't want to force the issue, only flip if it's ok. */
-		ret = set_block_group_ro(block_group, 0);
+		ret = inc_block_group_ro(block_group, 0);
 		up_write(&space_info->groups_sem);
 		if (ret < 0) {
 			ret = 0;
@@ -9924,7 +9924,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 		/* 1 for btrfs_orphan_reserve_metadata() */
 		trans = btrfs_start_transaction(root, 1);
 		if (IS_ERR(trans)) {
-			btrfs_set_block_group_rw(root, block_group);
+			btrfs_dec_block_group_ro(root, block_group);
 			ret = PTR_ERR(trans);
 			goto next;
 		}
@@ -9951,14 +9951,14 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 				  EXTENT_DIRTY, GFP_NOFS);
 		if (ret) {
 			mutex_unlock(&fs_info->unused_bg_unpin_mutex);
-			btrfs_set_block_group_rw(root, block_group);
+			btrfs_dec_block_group_ro(root, block_group);
 			goto end_trans;
 		}
 		ret = clear_extent_bits(&fs_info->freed_extents[1], start, end,
 				  EXTENT_DIRTY, GFP_NOFS);
 		if (ret) {
 			mutex_unlock(&fs_info->unused_bg_unpin_mutex);
-			btrfs_set_block_group_rw(root, block_group);
+			btrfs_dec_block_group_ro(root, block_group);
 			goto end_trans;
 		}
 		mutex_unlock(&fs_info->unused_bg_unpin_mutex);
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 88cbb59..52fe55a 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4215,14 +4215,12 @@ int btrfs_relocate_block_group(struct btrfs_root *extent_root, u64 group_start)
 	rc->block_group = btrfs_lookup_block_group(fs_info, group_start);
 	BUG_ON(!rc->block_group);
 
-	if (!rc->block_group->ro) {
-		ret = btrfs_set_block_group_ro(extent_root, rc->block_group);
-		if (ret) {
-			err = ret;
-			goto out;
-		}
-		rw = 1;
+	ret = btrfs_inc_block_group_ro(extent_root, rc->block_group);
+	if (ret) {
+		err = ret;
+		goto out;
 	}
+	rw = 1;
 
 	path = btrfs_alloc_path();
 	if (!path) {
@@ -4294,7 +4292,7 @@ int btrfs_relocate_block_group(struct btrfs_root *extent_root, u64 group_start)
 	WARN_ON(btrfs_block_group_used(&rc->block_group->item) > 0);
 out:
 	if (err && rw)
-		btrfs_set_block_group_rw(extent_root, rc->block_group);
+		btrfs_dec_block_group_ro(extent_root, rc->block_group);
 	iput(rc->data_inode);
 	btrfs_put_block_group(rc->block_group);
 	kfree(rc);
-- 
1.8.5.1



* [PATCH v4 2/4] btrfs: Separate scrub_blocked_if_needed() to scrub_pause_on/off()
From: Zhao Lei @ 2015-08-05  8:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Zhao Lei

This reduces currently duplicated code that is similar to
scrub_blocked_if_needed() but cannot call it directly because of small
differences.
It is also used by the next patch, which has the same need.
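
A rough usage sketch (illustrative only; the real call sites are added
in the following patches): the split lets a caller stay accounted as
paused while it runs work of its own, instead of the single combined
helper:

	/* old: pause and unpause in one step, no window for extra work */
	scrub_blocked_if_needed(fs_info);

	/* new: the caller can run work inside the paused window,
	 * e.g. something that may trigger a transaction commit
	 */
	scrub_pause_on(fs_info);
	/* ... caller's own work ... */
	scrub_pause_off(fs_info);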

Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
 fs/btrfs/scrub.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 94db0fa..cbfb8c7 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -332,11 +332,14 @@ static void __scrub_blocked_if_needed(struct btrfs_fs_info *fs_info)
 	}
 }
 
-static void scrub_blocked_if_needed(struct btrfs_fs_info *fs_info)
+static void scrub_pause_on(struct btrfs_fs_info *fs_info)
 {
 	atomic_inc(&fs_info->scrubs_paused);
 	wake_up(&fs_info->scrub_pause_wait);
+}
 
+static void scrub_pause_off(struct btrfs_fs_info *fs_info)
+{
 	mutex_lock(&fs_info->scrub_lock);
 	__scrub_blocked_if_needed(fs_info);
 	atomic_dec(&fs_info->scrubs_paused);
@@ -345,6 +348,12 @@ static void scrub_blocked_if_needed(struct btrfs_fs_info *fs_info)
 	wake_up(&fs_info->scrub_pause_wait);
 }
 
+static void scrub_blocked_if_needed(struct btrfs_fs_info *fs_info)
+{
+	scrub_pause_on(fs_info);
+	scrub_pause_off(fs_info);
+}
+
 /*
  * used for workers that require transaction commits (i.e., for the
  * NOCOW case)
-- 
1.8.5.1



* [PATCH v4 3/4] btrfs: use scrub_pause_on/off() to reduce code in scrub_enumerate_chunks()
From: Zhao Lei @ 2015-08-05  8:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Zhao Lei

Using the newly introduced scrub_pause_on/off() makes this code block
cleaner and more readable.

Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
 fs/btrfs/scrub.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index cbfb8c7..a882a34 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3492,8 +3492,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 
 		wait_event(sctx->list_wait,
 			   atomic_read(&sctx->bios_in_flight) == 0);
-		atomic_inc(&fs_info->scrubs_paused);
-		wake_up(&fs_info->scrub_pause_wait);
+
+		scrub_pause_on(fs_info);
 
 		/*
 		 * must be called before we decrease @scrub_paused.
@@ -3504,11 +3504,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 			   atomic_read(&sctx->workers_pending) == 0);
 		atomic_set(&sctx->wr_ctx.flush_all_writes, 0);
 
-		mutex_lock(&fs_info->scrub_lock);
-		__scrub_blocked_if_needed(fs_info);
-		atomic_dec(&fs_info->scrubs_paused);
-		mutex_unlock(&fs_info->scrub_lock);
-		wake_up(&fs_info->scrub_pause_wait);
+		scrub_pause_off(fs_info);
 
 		btrfs_put_block_group(cache);
 		if (ret)
-- 
1.8.5.1



* [PATCH v4 4/4] btrfs: Fix data checksum error caused by replace with io-load.
From: Zhao Lei @ 2015-08-05  8:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Zhao Lei, Qu Wenruo

xfstests btrfs/070 sometimes fails.
On my test machine, the failure rate is about 30%.
In another VM (VMware), the failure rate is about 50%.

Reason:
  btrfs/070 runs replace and defrag with fsstress simultaneously;
  after these operations, a checksum error is found by scrub.

  Actually, it has no relationship with the defrag operation; replace
  combined with fsstress alone can trigger this bug.

  Debugging shows that new data written to the target device can be
  overwritten by old data copied from the source device by the replace
  code.  To avoid this, we set the target block group to read-only
  during the replace period, so new writes requested by other
  operations will not go to the same place the replace code is
  copying to.

  Before the patch (4.1-rc3):
    30% of 100 xfstests runs failed.
  After the patch:
    0% of 300 xfstests runs failed.

It also happened in btrfs/071, which is another scrub-with-IO-load test.
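
The resulting per-chunk flow in scrub_enumerate_chunks() is roughly the
following (a simplified sketch; details and error paths are in the diff
below):

	scrub_pause_on(fs_info);
	ret = btrfs_inc_block_group_ro(root, cache);	/* may wait for a transaction commit */
	scrub_pause_off(fs_info);
	if (ret) {
		btrfs_put_block_group(cache);
		break;
	}

	/* ... scrub/replace this chunk; concurrent writers cannot
	 * allocate from the read-only block group meanwhile ...
	 */

	btrfs_dec_block_group_ro(root, cache);
	btrfs_put_block_group(cache);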

Reported-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>

---

Changelog v3->v4:
 Patch v3 caused xfstests btrfs/061 to fail in some cases, because
 btrfs_inc_block_group_ro() includes a btrfs_end_transaction()
 operation, which changes data in reloc_ctl->data_inode and causes
 a deadlock in relocation:
 scrub                       relocate
 ----                        ----
                             relocate_file_extent_cluster()
                             prealloc_file_extent_cluster()
                             ...
 btrfs_inc_block_group_ro()
 btrfs_wait_for_commit()
 insert_reserved_file_extent()
 btrfs_set_file_extent_disk_num_bytes()
 (modify reloc_ctl->data_inode)
 ...
                             do_relocation()
                             get_new_location() ret -EINVAL
                             (because data_inode's extent changed)
                             __btrfs_cow_block() ret -EINVAL
                             (without unlock eb)
                             btrfs_search_slot() deadlock
                             (try to lock eb again)

Changelog v2->v3:
 1: Fix a typo (introduced during rebase) which made xfstests fail in
    btrfs/073 and btrfs/066.

Changelog v1->v2:
 Nothing for this patch.

---
 fs/btrfs/scrub.c   | 34 +++++++++++++++++++++++++++-------
 fs/btrfs/volumes.c |  2 ++
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index a882a34..e04436f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3396,7 +3396,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 	u64 chunk_tree;
 	u64 chunk_objectid;
 	u64 chunk_offset;
-	int ret;
+	int ret = 0;
 	int slot;
 	struct extent_buffer *l;
 	struct btrfs_key key;
@@ -3424,8 +3424,14 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 			if (path->slots[0] >=
 			    btrfs_header_nritems(path->nodes[0])) {
 				ret = btrfs_next_leaf(root, path);
-				if (ret)
+				if (ret < 0)
+					break;
+				if (ret > 0) {
+					ret = 0;
 					break;
+				}
+			} else {
+				ret = 0;
 			}
 		}
 
@@ -3467,6 +3473,22 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		if (!cache)
 			goto skip;
 
+		/*
+		 * We need to call btrfs_inc_block_group_ro() with scrubs_paused,
+		 * to avoid a deadlock caused by:
+		 * btrfs_inc_block_group_ro()
+		 * -> btrfs_wait_for_commit()
+		 * -> btrfs_commit_transaction()
+		 * -> btrfs_scrub_pause()
+		 */
+		scrub_pause_on(fs_info);
+		ret = btrfs_inc_block_group_ro(root, cache);
+		scrub_pause_off(fs_info);
+		if (ret) {
+			btrfs_put_block_group(cache);
+			break;
+		}
+
 		dev_replace->cursor_right = found_key.offset + length;
 		dev_replace->cursor_left = found_key.offset;
 		dev_replace->item_needs_writeback = 1;
@@ -3506,6 +3528,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 
 		scrub_pause_off(fs_info);
 
+		btrfs_dec_block_group_ro(root, cache);
+
 		btrfs_put_block_group(cache);
 		if (ret)
 			break;
@@ -3528,11 +3552,7 @@ skip:
 
 	btrfs_free_path(path);
 
-	/*
-	 * ret can still be 1 from search_slot or next_leaf,
-	 * that's not an error
-	 */
-	return ret < 0 ? ret : 0;
+	return ret;
 }
 
 static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 9b95503..66f5a15 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -2785,7 +2785,9 @@ static int btrfs_relocate_chunk(struct btrfs_root *root,
 		return -ENOSPC;
 
 	/* step one, relocate all the extents inside this chunk */
+	btrfs_scrub_pause(root);
 	ret = btrfs_relocate_block_group(extent_root, chunk_offset);
+	btrfs_scrub_continue(root);
 	if (ret)
 		return ret;
 
-- 
1.8.5.1

