* [PATCH v2 1/2] btrfs: Fix lost-data-profile caused by auto removing bg
@ 2015-09-30 11:11 Zhao Lei
From: Zhao Lei @ 2015-09-30 11:11 UTC
To: linux-btrfs; +Cc: Zhao Lei
Reproduce:
(In integration-4.3 branch)
TEST_DEV=(/dev/vdg /dev/vdh)
TEST_DIR=/mnt/tmp
umount "$TEST_DEV" >/dev/null
mkfs.btrfs -f -d raid1 "${TEST_DEV[@]}"
mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
umount "$TEST_DEV"
mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
btrfs filesystem usage $TEST_DIR
We can see the data chunk changed from raid1 to single:
# btrfs filesystem usage $TEST_DIR
Data,single: Size:8.00MiB, Used:0.00B
/dev/vdg 8.00MiB
#
Reason:
When an empty filesystem is mounted with -o nospace_cache, the last
data block group is auto-removed on umount.
Then if we mount it again, there is no data chunk in the
filesystem, so the only available data profile is 0x0, and as a
result all new chunks are created with the single type.
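To make that failure mode concrete, here is a minimal userspace sketch of
the profile fallback described above. It is only a model: BG_DATA, BG_RAID1
and avail_data_profile() are illustrative names, not btrfs symbols. The
point is that the target data profile is derived from the profile bits of
the block groups that currently exist, so once the last raid1 data block
group is gone the result collapses to 0, i.e. single.

/* toy model only -- these are not btrfs symbols */
#include <stdio.h>

#define BG_DATA   (1ULL << 0)
#define BG_RAID1  (1ULL << 4)

struct bg {
	unsigned long long flags;
};

/* OR together the profile bits of every existing data block group */
static unsigned long long avail_data_profile(const struct bg *bgs, int n)
{
	unsigned long long bits = 0;
	int i;

	for (i = 0; i < n; i++)
		if (bgs[i].flags & BG_DATA)
			bits |= bgs[i].flags & ~BG_DATA;
	return bits; /* 0 means "single" */
}

int main(void)
{
	struct bg before_umount[] = { { BG_DATA | BG_RAID1 } };
	struct bg after_remount[1] = { { 0 } }; /* last data bg auto-removed */

	printf("before umount: %#llx (raid1)\n",
	       avail_data_profile(before_umount, 1));
	printf("after remount: %#llx (single)\n",
	       avail_data_profile(after_remount, 0));
	return 0;
}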
Fix:
Don't auto-delete the last block group of a raid type.
Test:
Tested with the above script, and confirmed the logic with debug output.
Changelog v1->v2:
1: Move the check of block_group->list under the
space_info->groups_sem semaphore.
Noticed-by: Filipe Manana <fdmanana@gmail.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
fs/btrfs/extent-tree.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 79a5bd9..ed9426c 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -10010,8 +10010,18 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
block_group = list_first_entry(&fs_info->unused_bgs,
struct btrfs_block_group_cache,
bg_list);
- space_info = block_group->space_info;
list_del_init(&block_group->bg_list);
+
+ space_info = block_group->space_info;
+
+ down_read(&space_info->groups_sem);
+ if (block_group->list.next == block_group->list.prev) {
+ up_read(&space_info->groups_sem);
+ btrfs_put_block_group(block_group);
+ continue;
+ }
+ up_read(&space_info->groups_sem);
+
if (ret || btrfs_mixed_space_info(space_info)) {
btrfs_put_block_group(block_group);
continue;
--
1.8.5.1
* [PATCH v2 2/2] btrfs: Fix lost-data-profile caused by balance bg
From: Zhao Lei @ 2015-09-30 11:11 UTC
To: linux-btrfs; +Cc: Zhao Lei
Reproduce:
(In integration-4.3 branch)
TEST_DEV=(/dev/vdg /dev/vdh)
TEST_DIR=/mnt/tmp
umount "$TEST_DEV" >/dev/null
mkfs.btrfs -f -d raid1 "${TEST_DEV[@]}"
mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
btrfs balance start -dusage=0 $TEST_DIR
btrfs filesystem usage $TEST_DIR
dd if=/dev/zero of="$TEST_DIR"/file count=100
btrfs filesystem usage $TEST_DIR
Result:
We can see "no data chunk" in first "btrfs filesystem usage":
# btrfs filesystem usage $TEST_DIR
Overall:
...
Metadata,single: Size:8.00MiB, Used:0.00B
/dev/vdg 8.00MiB
Metadata,RAID1: Size:122.88MiB, Used:112.00KiB
/dev/vdg 122.88MiB
/dev/vdh 122.88MiB
System,single: Size:4.00MiB, Used:0.00B
/dev/vdg 4.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB
/dev/vdg 8.00MiB
/dev/vdh 8.00MiB
Unallocated:
/dev/vdg 1.06GiB
/dev/vdh 1.07GiB
And "data chunks changed from raid1 to single" in second
"btrfs filesystem usage":
# btrfs filesystem usage $TEST_DIR
Overall:
...
Data,single: Size:256.00MiB, Used:0.00B
/dev/vdh 256.00MiB
Metadata,single: Size:8.00MiB, Used:0.00B
/dev/vdg 8.00MiB
Metadata,RAID1: Size:122.88MiB, Used:112.00KiB
/dev/vdg 122.88MiB
/dev/vdh 122.88MiB
System,single: Size:4.00MiB, Used:0.00B
/dev/vdg 4.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB
/dev/vdg 8.00MiB
/dev/vdh 8.00MiB
Unallocated:
/dev/vdg 1.06GiB
/dev/vdh 841.92MiB
Reason:
btrfs balance deletes the last data chunk when the filesystem
contains no data, so "fi usage" then reports no data chunk.
When we then write to the fs, the only available data profile is
0x0, and as a result all new chunks are allocated with the single type.
Fix:
Allocate a data chunk explicitly to ensure we don't lose the
raid profile for data.
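As a side note, the chunk_type test the patch adds works because block
group flags combine a type bit (data/metadata/system) with a profile bit
(raid1, dup, ...), so masking with BTRFS_BLOCK_GROUP_DATA is enough to
identify data chunks. A small standalone illustration follows; the flag
values mirror fs/btrfs/ctree.h of this era as I recall them, so treat the
exact numbers as an assumption:

#include <stdio.h>

/* assumed values, mirroring fs/btrfs/ctree.h around v4.3 */
#define BTRFS_BLOCK_GROUP_DATA		(1ULL << 0)
#define BTRFS_BLOCK_GROUP_SYSTEM	(1ULL << 1)
#define BTRFS_BLOCK_GROUP_METADATA	(1ULL << 2)
#define BTRFS_BLOCK_GROUP_RAID1		(1ULL << 4)

int main(void)
{
	/* a raid1 data chunk carries both a type bit and a profile bit */
	unsigned long long chunk_type = BTRFS_BLOCK_GROUP_DATA |
					BTRFS_BLOCK_GROUP_RAID1;

	/* same shape as the test added in __btrfs_balance(): only data
	 * chunks trigger the one-off forced data chunk allocation */
	if (chunk_type & BTRFS_BLOCK_GROUP_DATA)
		printf("data chunk (flags %#llx): reserve a data chunk first\n",
		       chunk_type);
	return 0;
}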
Test:
Tested with the above script, and confirmed the logic with debug output.
Changelog v1->v2:
1: Update the "Fix" section of the patch description
2: Use BTRFS_BLOCK_GROUP_DATA for btrfs_force_chunk_alloc instead
of 1
3: Only reserve a chunk when balancing a data chunk.
All suggested-by: Filipe Manana <fdmanana@gmail.com>
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
fs/btrfs/volumes.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 6fc73586..cd9e5bd 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3277,6 +3277,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
u64 limit_data = bctl->data.limit;
u64 limit_meta = bctl->meta.limit;
u64 limit_sys = bctl->sys.limit;
+ int chunk_reserved = 0;
/* step one make some room on all the devices */
devices = &fs_info->fs_devices->devices;
@@ -3326,6 +3327,8 @@ again:
key.type = BTRFS_CHUNK_ITEM_KEY;
while (1) {
+ u64 chunk_type;
+
if ((!counting && atomic_read(&fs_info->balance_pause_req)) ||
atomic_read(&fs_info->balance_cancel_req)) {
ret = -ECANCELED;
@@ -3371,8 +3374,10 @@ again:
spin_unlock(&fs_info->balance_lock);
}
+ chunk_type = btrfs_chunk_type(leaf, chunk);
ret = should_balance_chunk(chunk_root, leaf, chunk,
found_key.offset);
+
btrfs_release_path(path);
if (!ret) {
mutex_unlock(&fs_info->delete_unused_bgs_mutex);
@@ -3387,6 +3392,25 @@ again:
goto loop;
}
+ if ((chunk_type & BTRFS_BLOCK_GROUP_DATA) && !chunk_reserved) {
+ trans = btrfs_start_transaction(chunk_root, 0);
+ if (IS_ERR(trans)) {
+ mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+ ret = PTR_ERR(trans);
+ goto error;
+ }
+
+ ret = btrfs_force_chunk_alloc(trans, chunk_root,
+ BTRFS_BLOCK_GROUP_DATA);
+ if (ret < 0) {
+ mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+ goto error;
+ }
+
+ btrfs_end_transaction(trans, chunk_root);
+ chunk_reserved = 1;
+ }
+
ret = btrfs_relocate_chunk(chunk_root,
found_key.offset);
mutex_unlock(&fs_info->delete_unused_bgs_mutex);
--
1.8.5.1
* Re: [PATCH v2 2/2] btrfs: Fix lost-data-profile caused by balance bg
From: Filipe Manana @ 2015-09-30 16:17 UTC
To: Zhao Lei; +Cc: linux-btrfs@vger.kernel.org
On Wed, Sep 30, 2015 at 12:11 PM, Zhao Lei <zhaolei@cn.fujitsu.com> wrote:
> Reproduce:
> (In integration-4.3 branch)
>
> TEST_DEV=(/dev/vdg /dev/vdh)
> TEST_DIR=/mnt/tmp
>
> umount "$TEST_DEV" >/dev/null
> mkfs.btrfs -f -d raid1 "${TEST_DEV[@]}"
>
> mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
> btrfs balance start -dusage=0 $TEST_DIR
> btrfs filesystem usage $TEST_DIR
>
> dd if=/dev/zero of="$TEST_DIR"/file count=100
> btrfs filesystem usage $TEST_DIR
>
> Result:
> We can see "no data chunk" in the first "btrfs filesystem usage":
> # btrfs filesystem usage $TEST_DIR
> Overall:
> ...
> Metadata,single: Size:8.00MiB, Used:0.00B
> /dev/vdg 8.00MiB
> Metadata,RAID1: Size:122.88MiB, Used:112.00KiB
> /dev/vdg 122.88MiB
> /dev/vdh 122.88MiB
> System,single: Size:4.00MiB, Used:0.00B
> /dev/vdg 4.00MiB
> System,RAID1: Size:8.00MiB, Used:16.00KiB
> /dev/vdg 8.00MiB
> /dev/vdh 8.00MiB
> Unallocated:
> /dev/vdg 1.06GiB
> /dev/vdh 1.07GiB
>
> And "data chunks changed from raid1 to single" in second
> "btrfs filesystem usage":
> # btrfs filesystem usage $TEST_DIR
> Overall:
> ...
> Data,single: Size:256.00MiB, Used:0.00B
> /dev/vdh 256.00MiB
> Metadata,single: Size:8.00MiB, Used:0.00B
> /dev/vdg 8.00MiB
> Metadata,RAID1: Size:122.88MiB, Used:112.00KiB
> /dev/vdg 122.88MiB
> /dev/vdh 122.88MiB
> System,single: Size:4.00MiB, Used:0.00B
> /dev/vdg 4.00MiB
> System,RAID1: Size:8.00MiB, Used:16.00KiB
> /dev/vdg 8.00MiB
> /dev/vdh 8.00MiB
> Unallocated:
> /dev/vdg 1.06GiB
> /dev/vdh 841.92MiB
>
> Reason:
> btrfs balance deletes the last data chunk when the filesystem
> contains no data, so "fi usage" then reports no data chunk.
>
> When we then write to the fs, the only available data profile is
> 0x0, and as a result all new chunks are allocated with the single type.
>
> Fix:
> Allocate a data chunk explicitly to ensure we don't lose the
> raid profile for data.
>
> Test:
> Tested with the above script, and confirmed the logic with debug output.
>
> Changelog v1->v2:
> 1: Update the "Fix" section of the patch description
> 2: Use BTRFS_BLOCK_GROUP_DATA for btrfs_force_chunk_alloc instead
> of 1
> 3: Only reserve a chunk when balancing a data chunk.
> All suggested-by: Filipe Manana <fdmanana@gmail.com>
>
> Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
thanks
> ---
> fs/btrfs/volumes.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 6fc73586..cd9e5bd 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -3277,6 +3277,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
> u64 limit_data = bctl->data.limit;
> u64 limit_meta = bctl->meta.limit;
> u64 limit_sys = bctl->sys.limit;
> + int chunk_reserved = 0;
>
> /* step one make some room on all the devices */
> devices = &fs_info->fs_devices->devices;
> @@ -3326,6 +3327,8 @@ again:
> key.type = BTRFS_CHUNK_ITEM_KEY;
>
> while (1) {
> + u64 chunk_type;
> +
> if ((!counting && atomic_read(&fs_info->balance_pause_req)) ||
> atomic_read(&fs_info->balance_cancel_req)) {
> ret = -ECANCELED;
> @@ -3371,8 +3374,10 @@ again:
> spin_unlock(&fs_info->balance_lock);
> }
>
> + chunk_type = btrfs_chunk_type(leaf, chunk);
> ret = should_balance_chunk(chunk_root, leaf, chunk,
> found_key.offset);
> +
> btrfs_release_path(path);
> if (!ret) {
> mutex_unlock(&fs_info->delete_unused_bgs_mutex);
> @@ -3387,6 +3392,25 @@ again:
> goto loop;
> }
>
> + if ((chunk_type & BTRFS_BLOCK_GROUP_DATA) && !chunk_reserved) {
> + trans = btrfs_start_transaction(chunk_root, 0);
> + if (IS_ERR(trans)) {
> + mutex_unlock(&fs_info->delete_unused_bgs_mutex);
> + ret = PTR_ERR(trans);
> + goto error;
> + }
> +
> + ret = btrfs_force_chunk_alloc(trans, chunk_root,
> + BTRFS_BLOCK_GROUP_DATA);
> + if (ret < 0) {
> + mutex_unlock(&fs_info->delete_unused_bgs_mutex);
> + goto error;
> + }
> +
> + btrfs_end_transaction(trans, chunk_root);
> + chunk_reserved = 1;
> + }
> +
> ret = btrfs_relocate_chunk(chunk_root,
> found_key.offset);
> mutex_unlock(&fs_info->delete_unused_bgs_mutex);
> --
> 1.8.5.1
>
--
Filipe David Manana,
"Reasonable men adapt themselves to the world.
Unreasonable men adapt the world to themselves.
That's why all progress depends on unreasonable men."
* Re: [PATCH v2 1/2] btrfs: Fix lost-data-profile caused by auto removing bg
From: Filipe Manana @ 2015-09-30 16:19 UTC
To: Zhao Lei; +Cc: linux-btrfs@vger.kernel.org
On Wed, Sep 30, 2015 at 12:11 PM, Zhao Lei <zhaolei@cn.fujitsu.com> wrote:
> Reproduce:
> (In integration-4.3 branch)
>
> TEST_DEV=(/dev/vdg /dev/vdh)
> TEST_DIR=/mnt/tmp
>
> umount "$TEST_DEV" >/dev/null
> mkfs.btrfs -f -d raid1 "${TEST_DEV[@]}"
>
> mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
> umount "$TEST_DEV"
>
> mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
> btrfs filesystem usage $TEST_DIR
>
> We can see the data chunk changed from raid1 to single:
> # btrfs filesystem usage $TEST_DIR
> Data,single: Size:8.00MiB, Used:0.00B
> /dev/vdg 8.00MiB
> #
>
> Reason:
> When an empty filesystem is mounted with -o nospace_cache, the last
> data block group is auto-removed on umount.
>
> Then if we mount it again, there is no data chunk in the
> filesystem, so the only available data profile is 0x0, and as a
> result all new chunks are created with the single type.
>
> Fix:
> Don't auto-delete the last block group of a raid type.
>
> Test:
> Tested with the above script, and confirmed the logic with debug output.
>
> Changelog v1->v2:
> 1: Move the check of block_group->list under the
> space_info->groups_sem semaphore.
> Noticed-by: Filipe Manana <fdmanana@gmail.com>
>
> Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
I would have put the check in the "if" statement further below, the one
that already runs while holding a write lock on the semaphore (smaller
code diff), but this is equally correct.
thanks
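(For reference, the placement described above would look roughly like the
fragment below. This is an untested sketch with the surrounding context
approximated from the same function, not an actual diff; it folds the
last-block-group test into the existing condition that already runs under
down_write(&space_info->groups_sem).)

	/* don't race with allocators, take the groups_sem */
	down_write(&space_info->groups_sem);
	if (block_group->reserved ||
	    btrfs_block_group_used(&block_group->item) ||
	    block_group->ro ||
	    /* never delete the last block group of a raid type */
	    list_is_singular(&block_group->list)) {
		up_write(&space_info->groups_sem);
		goto next;
	}
	up_write(&space_info->groups_sem);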
> ---
> fs/btrfs/extent-tree.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 79a5bd9..ed9426c 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -10010,8 +10010,18 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
> block_group = list_first_entry(&fs_info->unused_bgs,
> struct btrfs_block_group_cache,
> bg_list);
> - space_info = block_group->space_info;
> list_del_init(&block_group->bg_list);
> +
> + space_info = block_group->space_info;
> +
> + down_read(&space_info->groups_sem);
> + if (block_group->list.next == block_group->list.prev) {
> + up_read(&space_info->groups_sem);
> + btrfs_put_block_group(block_group);
> + continue;
> + }
> + up_read(&space_info->groups_sem);
> +
> if (ret || btrfs_mixed_space_info(space_info)) {
> btrfs_put_block_group(block_group);
> continue;
> --
> 1.8.5.1
>
--
Filipe David Manana,
"Reasonable men adapt themselves to the world.
Unreasonable men adapt the world to themselves.
That's why all progress depends on unreasonable men."
* Re: [PATCH v2 1/2] btrfs: Fix lost-data-profile caused by auto removing bg
From: Jeff Mahoney @ 2015-10-01 14:44 UTC
To: Zhao Lei, linux-btrfs
On 9/30/15 7:11 AM, Zhao Lei wrote:
> Reproduce:
> (In integration-4.3 branch)
>
> TEST_DEV=(/dev/vdg /dev/vdh)
> TEST_DIR=/mnt/tmp
>
> umount "$TEST_DEV" >/dev/null
> mkfs.btrfs -f -d raid1 "${TEST_DEV[@]}"
>
> mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
> umount "$TEST_DEV"
>
> mount -o nospace_cache "$TEST_DEV" "$TEST_DIR"
> btrfs filesystem usage $TEST_DIR
>
> We can see the data chunk changed from raid1 to single:
> # btrfs filesystem usage $TEST_DIR
> Data,single: Size:8.00MiB, Used:0.00B
> /dev/vdg 8.00MiB
> #
>
> Reason:
> When an empty filesystem is mounted with -o nospace_cache, the last
> data block group is auto-removed on umount.
>
> Then if we mount it again, there is no data chunk in the
> filesystem, so the only available data profile is 0x0, and as a
> result all new chunks are created with the single type.
>
> Fix:
> Don't auto-delete the last block group of a raid type.

I still think this is kind of a hacky solution, but it's the best one
that doesn't involve a disk format change.

> Test:
> Tested with the above script, and confirmed the logic with debug output.
>
> Changelog v1->v2:
> 1: Move the check of block_group->list under the
> space_info->groups_sem semaphore.
> Noticed-by: Filipe Manana <fdmanana@gmail.com>
>
> Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
> ---
> fs/btrfs/extent-tree.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 79a5bd9..ed9426c 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -10010,8 +10010,18 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
> block_group = list_first_entry(&fs_info->unused_bgs,
> struct btrfs_block_group_cache,
> bg_list);
> - space_info = block_group->space_info;
> list_del_init(&block_group->bg_list);
> +
> + space_info = block_group->space_info;
> +
> + down_read(&space_info->groups_sem);
> + if (block_group->list.next == block_group->list.prev) {

	if (list_is_singular(&block_group->list)) {

> + up_read(&space_info->groups_sem);
> + btrfs_put_block_group(block_group);
> + continue;
> + }
> + up_read(&space_info->groups_sem);
> +
> if (ret || btrfs_mixed_space_info(space_info)) {
> btrfs_put_block_group(block_group);
> continue;
-Jeff

--
Jeff Mahoney
SUSE Labs
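Jeff's one-line suggestion above refers to the list helper from
include/linux/list.h, which names the same "only one entry" condition that
the patch spells out as list.next == list.prev. A minimal userspace
rendition of it (the kernel version is written the same way, modulo
READ_ONCE-style annotations):

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

/* true when exactly one entry is on the list */
static int list_is_singular(const struct list_head *head)
{
	return !list_empty(head) && (head->next == head->prev);
}

int main(void)
{
	struct list_head head, only;

	/* model a space_info list holding a single block group */
	head.next = head.prev = &only;
	only.next = only.prev = &head;

	printf("singular: %d\n", list_is_singular(&head)); /* prints 1 */
	return 0;
}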