* BTRFS balance fails with -dusage=100
@ 2015-06-23 3:53 Moby
From: Moby @ 2015-06-23 3:53 UTC (permalink / raw)
To: linux-btrfs
OpenSuSE 13.2 system with single BTRFS / mounted on top of /dev/md1.
/dev/md1 is md raid5 across 4 SATA disks.
System details are:
Linux suse132 4.0.5-4.g56152db-default #1 SMP Thu Jun 18 15:11:06 UTC
2015 (56152db) x86_64 x86_64 x86_64 GNU/Linux
btrfs-progs v4.1+20150622
Label: none uuid: 33b98d97-606b-4968-a266-24a48a9fe50d
Total devices 1 FS bytes used 884.21GiB
devid 1 size 1.36TiB used 889.06GiB path /dev/md1
Data, single: total=885.00GiB, used=883.12GiB
System, DUP: total=32.00MiB, used=144.00KiB
Metadata, DUP: total=2.00GiB, used=1.09GiB
GlobalReserve, single: total=384.00MiB, used=0.00B
Relevant entries from log are:
2015-06-22T22:46:32.238011-05:00 suse132 kernel: [90193.446128] BTRFS:
bdev /dev/md1 errs: wr 9977, rd 0, flush 0, corrupt 0, gen 0
2015-06-22T22:46:32.238050-05:00 suse132 kernel: [90193.446158] BTRFS:
bdev /dev/md1 errs: wr 9978, rd 0, flush 0, corrupt 0, gen 0
2015-06-22T22:46:32.238054-05:00 suse132 kernel: [90193.446179] BTRFS:
bdev /dev/md1 errs: wr 9979, rd 0, flush 0, corrupt 0, gen 0
The system was (and still is, apart from btrfs balance) running fine. I then
did massive data I/O, copying and deleting large amounts of data, to bring
the system into its present state. Once the I/O was done, I kicked off
btrfs balance start /.
The above command failed. I then started running btrfs balance -dusage=XX /
This succeeds with XX up to and including 99, but fails when I set XX to
100. btrfs balance also fails if I omit the -dusage option.
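For anyone wanting to reproduce the stepwise approach (raising the -dusage cutoff a little at a time instead of balancing everything at once), a minimal sketch follows. The loop, the DRY_RUN guard, and the chosen percentages are my own illustration, not from the original report; with DRY_RUN unset it needs root and a mounted btrfs filesystem:

```shell
#!/bin/sh
# Stepwise balance: raise the -dusage cutoff gradually so each pass only
# relocates chunks below the given usage percentage.
MNT=/
DRY_RUN=1   # set to empty to actually run the commands (requires root)
for pct in 25 50 75 90 95 99 100; do
    cmd="btrfs balance start -dusage=$pct $MNT"
    if [ -n "$DRY_RUN" ]; then
        echo "$cmd"                     # dry run: just show each step
    else
        $cmd || { echo "balance failed at -dusage=$pct" >&2; break; }
    fi
done
```

With DRY_RUN set, the script only prints the seven balance commands it would run, ending with the -dusage=100 pass that failed here.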
The errors in the log make no sense to me, since the md raid device is not
reporting any errors at all, and btrfs scrub also finds no errors.
Any ideas on how to get btrfs balance to succeed without errors would be
welcome.
Regards,
--Moby
* Re: BTRFS balance fails with -dusage=100
From: Moby @ 2015-06-24 17:33 UTC (permalink / raw)
To: linux-btrfs
On 06/22/2015 10:53 PM, Moby wrote:
> [original report quoted in full; snipped]
On another run with -dusage=95, I am now seeing the following (a negative
"percentage left" value!):
Every 15.0s: sh -c date;btrfs balance status -v /    Wed Jun 24 12:29:12 2015
Wed Jun 24 12:29:12 CDT 2015
Balance on '/' is running
306 out of about 145 chunks balanced (312 considered), -111% left
Dumping filters: flags 0x1, state 0x1, force is off
DATA (flags 0x2): balancing, usage=95
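The negative figure is consistent with the kernel estimating the number of chunks to balance ("about 145") and then relocating more in practice (306 balanced out of 312 considered), which drives the "left" percentage below zero. A sketch of the arithmetic, assuming the straightforward formula inferred from the numbers shown (not taken from the btrfs source):

```shell
# Percent-left as it appears to be computed: 100 - 100*balanced/expected.
# When the estimate (145) is smaller than the chunks actually balanced
# (306), the result goes negative.
balanced=306
expected=145
left=$(( 100 - balanced * 100 / expected ))
echo "${left}% left"    # -111% left, matching the status output above
```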
--Moby
* Re: BTRFS balance fails with -dusage=100
From: Moby @ 2015-06-25 14:41 UTC (permalink / raw)
To: linux-btrfs
On 06/24/2015 12:33 PM, Moby wrote:
> [earlier messages quoted in full; snipped]
Upgrading to kernel 4.1.0-1.gfcf8349-default and btrfs-progs
v4.1+20150622 seems to have fixed the problem. btrfs balance now
completes without any errors.
--
--Moby
They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety. -- Benjamin Franklin