* `btrfs dev del` fails with `No space left on device`
@ 2016-08-29 16:04 ojab //
2016-08-29 21:05 ` Chris Murphy
0 siblings, 1 reply; 7+ messages in thread
From: ojab // @ 2016-08-29 16:04 UTC (permalink / raw)
To: linux-btrfs
Hi,
[and I hope that this message will not be sent during compose]
I have a BTRFS filesystem with two 1TB drives (sdb1 & sdc1), data
raid0 & metadata raid1. I need to replace one drive with a 2TB drive, so
I've done `btrfs dev add /dev/sdd /mnt/xxx` and am now trying to do
`btrfs dev del /dev/sdc1 /mnt/xxx`, which fails with `ERROR: error
removing device '/dev/sdc1': No space left on device`. Right now the FS
looks like:
$ sudo btrfs fi show /mnt/xxx
Label: none uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
Total devices 3 FS bytes used 1.80TiB
devid 1 size 931.51GiB used 931.51GiB path /dev/sdc1
devid 2 size 931.51GiB used 931.51GiB path /dev/sdb1
devid 3 size 1.82TiB used 0.00B path /dev/sdd1
$ sudo btrfs device usage /mnt/xxx/
/dev/sdb1, ID: 2
Device size: 931.51GiB
Device slack: 0.00B
Data,RAID0: 928.48GiB
Metadata,RAID1: 3.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.01MiB
/dev/sdc1, ID: 1
Device size: 931.51GiB
Device slack: 3.50KiB
Data,RAID0: 928.48GiB
Metadata,RAID1: 3.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.00MiB
/dev/sdd1, ID: 3
Device size: 1.82TiB
Device slack: 0.00B
Unallocated: 1.82TiB
As far as I understand, there is (almost) no unallocated space left on
`/dev/sdc1`, so there is no room for metadata, but I don't really
understand how I can balance only the metadata; both:
$ btrfs fi balance start -musage=1 -dusage=0 /mnt/xxx/
$ sudo btrfs fi balance start -musage=1 -dusage=0 -mconvert=raid1 /mnt/xxx/
fail with something like `BTRFS info (device sdb1): 1 enospc errors
during balance` in dmesg.
Is there any way to delete /dev/sdc1 without full rebalance?
//wbr ojab
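A balance can be restricted to metadata by passing only `-m` filters and no `-d` filter at all; a minimal sketch of what the poster seems to be after (mount point and the `-musage` threshold are illustrative, taken from the thread):

```shell
# Rewrite only metadata block groups; data chunks are left alone
# because no -d filter is given. -musage=5 limits the pass to
# metadata chunks that are at most 5% used.
sudo btrfs balance start -musage=5 /mnt/xxx
```

Whether this can succeed still depends on there being somewhere to write the relocated chunks, which is exactly the problem discussed below.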
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: `btrfs dev del` fails with `No space left on device`
2016-08-29 16:04 `btrfs dev del` fails with `No space left on device` ojab //
@ 2016-08-29 21:05 ` Chris Murphy
2016-08-30 10:22 ` ojab //
0 siblings, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2016-08-29 21:05 UTC (permalink / raw)
To: ojab //; +Cc: linux-btrfs
On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@ojab.ru> wrote:
> Hi,
> [and I hope that this message will not be sent during compose]
> I've had BTRFS filesystem with two 1Tb drives (sdb1 & sdc1), data
> raid0 & metadata raid1. I need to replace one drive with 2Tb drive, so
> I've done `btrfs dev add /dev/sdd /mnt/xxx` and now trying to do
> `btrfs dev del /dev/sdc1 /mnt/xxx` which fails due to `ERROR: error
> removing device '/dev/sdc1': No space left on device`. Right now FS
> looks like:
>
> $ sudo btrfs fi show /mnt/xxx
> Label: none uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
> Total devices 3 FS bytes used 1.80TiB
> devid 1 size 931.51GiB used 931.51GiB path /dev/sdc1
> devid 2 size 931.51GiB used 931.51GiB path /dev/sdb1
> devid 3 size 1.82TiB used 0.00B path /dev/sdd1
sdc1 and sdb1 are completely allocated, so no new chunks can be made.
It's not clear what's going on with sdd1, but the first two might be so
full that sdd1 is just stuck being part of the volume.
What do you get for 'btrfs fi us <mp>'?
>
> Is there any way to delete /dev/sdc1 without full rebalance?
You can see the state of the block groups with btrfs-debugfs,
which is in kdave's btrfs-progs git. Chances are you need larger
values, -dusage=15 -musage=15, to free up space on devid 1 and 2. Then
maybe devid 3 can be removed.
--
Chris Murphy
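The usage-filter approach above is usually applied as a sweep, raising the threshold step by step so each pass frees room for the next; a hedged sketch (the thresholds and mount point are illustrative, not from the thread):

```shell
# Relocate progressively fuller chunks; stop on the first failure,
# which usually means the filesystem needs more unallocated space
# before a higher threshold can succeed.
for u in 5 15 25 40 55 70; do
    sudo btrfs balance start -dusage=$u -musage=$u /mnt/xxx || break
done
```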
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: `btrfs dev del` fails with `No space left on device`
2016-08-29 21:05 ` Chris Murphy
@ 2016-08-30 10:22 ` ojab //
2016-08-30 10:35 ` Hugo Mills
2016-08-30 17:13 ` Chris Murphy
0 siblings, 2 replies; 7+ messages in thread
From: ojab // @ 2016-08-30 10:22 UTC (permalink / raw)
To: Chris Murphy; +Cc: linux-btrfs
On Mon, Aug 29, 2016 at 9:05 PM, Chris Murphy <lists@colorremedies.com> wrote:
> On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@ojab.ru> wrote:
> What do you get for 'btrfs fi us <mp>'
$ sudo btrfs fi us /mnt/xxx/
Overall:
Device size: 3.64TiB
Device allocated: 1.82TiB
Device unallocated: 1.82TiB
Device missing: 0.00B
Used: 1.81TiB
Free (estimated): 1.83TiB (min: 943.55GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,RAID0: Size:1.81TiB, Used:1.80TiB
/dev/sdb1 928.48GiB
/dev/sdc1 928.48GiB
Metadata,RAID1: Size:3.00GiB, Used:2.15GiB
/dev/sdb1 3.00GiB
/dev/sdc1 3.00GiB
System,RAID1: Size:32.00MiB, Used:176.00KiB
/dev/sdb1 32.00MiB
/dev/sdc1 32.00MiB
Unallocated:
/dev/sdb1 1.01MiB
/dev/sdc1 1.00MiB
/dev/sdd1 1.82TiB
>
> You can see what the state of block groups are with btrfs-debugfs
> which is in kdave btrfs-progs git. Chances are you need a larger
> value, -dusage=15 -musage=15 to free up space on devid 1 and 2. Then
> maybe devid 3 can be removed.
btrfs-debugfs output:
https://gist.github.com/ojab/a3c59983e8fb6679b8fdc0e88c0c9e60
Before the `delete` there was about 60GB of free space; it looks like it
was filled during the `delete` (I've seen similar behavior during `btrfs
fi defrag`), so I should use `-dusage=69` and up.
I don't quite understand what exactly btrfs is trying to do: I assume
that block groups should be relocated to the new/empty drive, but
during the delete `btrfs fi us` shows
Unallocated:
/dev/sdc1 16.00EiB
so the partition being deleted is counted as a maximally empty drive, and
blocks are relocated to it instead of to the new/empty drive? (kernel
4.7.2 & btrfs-progs 4.7.1 here)
Is there any way to see where and why block groups are relocated
during `delete`?
//wbr ojab
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: `btrfs dev del` fails with `No space left on device`
2016-08-30 10:22 ` ojab //
@ 2016-08-30 10:35 ` Hugo Mills
2016-08-30 17:13 ` Chris Murphy
1 sibling, 0 replies; 7+ messages in thread
From: Hugo Mills @ 2016-08-30 10:35 UTC (permalink / raw)
To: ojab //; +Cc: Chris Murphy, linux-btrfs
On Tue, Aug 30, 2016 at 10:22:24AM +0000, ojab // wrote:
> On Mon, Aug 29, 2016 at 9:05 PM, Chris Murphy <lists@colorremedies.com> wrote:
> > On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@ojab.ru> wrote:
> > What do you get for 'btrfs fi us <mp>'
>
> $ sudo btrfs fi us /mnt/xxx/
> Overall:
> Device size: 3.64TiB
> Device allocated: 1.82TiB
> Device unallocated: 1.82TiB
> Device missing: 0.00B
> Used: 1.81TiB
> Free (estimated): 1.83TiB (min: 943.55GiB)
> Data ratio: 1.00
> Metadata ratio: 2.00
> Global reserve: 512.00MiB (used: 0.00B)
>
> Data,RAID0: Size:1.81TiB, Used:1.80TiB
> /dev/sdb1 928.48GiB
> /dev/sdc1 928.48GiB
>
> Metadata,RAID1: Size:3.00GiB, Used:2.15GiB
> /dev/sdb1 3.00GiB
> /dev/sdc1 3.00GiB
>
> System,RAID1: Size:32.00MiB, Used:176.00KiB
> /dev/sdb1 32.00MiB
> /dev/sdc1 32.00MiB
>
> Unallocated:
> /dev/sdb1 1.01MiB
> /dev/sdc1 1.00MiB
> /dev/sdd1 1.82TiB
Basically, your FS is full. Note that replacing just one of the
devices in a RAID-0 array with a larger one is not going to help. What
you really need here is "single", not RAID-0. I would probably try
converting to single first (which should move quite a lot of it to the
new device anyway), and then the device delete:
btrfs balance start -dconvert=single,soft /mountpoint
btrfs dev delete /dev/sdc1 /mountpoint
Hugo.
> > You can see what the state of block groups are with btrfs-debugfs
> > which is in kdave btrfs-progs git. Chances are you need a larger
> > value, -dusage=15 -musage=15 to free up space on devid 1 and 2. Then
> > maybe devid 3 can be removed.
>
> btrfs-debugfs output:
> https://gist.github.com/ojab/a3c59983e8fb6679b8fdc0e88c0c9e60
> Before `delete` the was about 60Gb of free space, looks like it was
> filled during `delete` (I've seen similar behavior during `btrfs fi
> defrag`) and I should use `-dusage=69` and up.
>
> I don't quite understand what exactly btrfs is trying to do: I assume
> that block groups should be relocated to the new/empty drive, but
> during the delete `btrfs fi us` shows
> Unallocated:
> /dev/sdc1 16.00EiB
>
> so deleted partition is counted as maximum possible empty drive and
> blocks are relocated to it instead of new/empty drive? (kernel-4.7.2 &
> btrfs-progs-4.7.1 here)
> Is there any way to see where and why block groups are relocated
> during `delete`?
>
> //wbr ojab
--
Hugo Mills | Everything simple is false. Everything which is
hugo@... carfax.org.uk | complex is unusable
http://carfax.org.uk/ |
PGP: E2AB1DE4 |
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: `btrfs dev del` fails with `No space left on device`
2016-08-30 10:22 ` ojab //
2016-08-30 10:35 ` Hugo Mills
@ 2016-08-30 17:13 ` Chris Murphy
2016-08-30 20:10 ` ojab //
1 sibling, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2016-08-30 17:13 UTC (permalink / raw)
To: ojab //; +Cc: Chris Murphy, linux-btrfs
On Tue, Aug 30, 2016 at 4:22 AM, ojab // <ojab@ojab.ru> wrote:
> On Mon, Aug 29, 2016 at 9:05 PM, Chris Murphy <lists@colorremedies.com> wrote:
>> On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@ojab.ru> wrote:
>> What do you get for 'btrfs fi us <mp>'
>
> $ sudo btrfs fi us /mnt/xxx/
> Overall:
> Device size: 3.64TiB
> Device allocated: 1.82TiB
> Device unallocated: 1.82TiB
> Device missing: 0.00B
> Used: 1.81TiB
> Free (estimated): 1.83TiB (min: 943.55GiB)
> Data ratio: 1.00
> Metadata ratio: 2.00
> Global reserve: 512.00MiB (used: 0.00B)
>
> Data,RAID0: Size:1.81TiB, Used:1.80TiB
> /dev/sdb1 928.48GiB
> /dev/sdc1 928.48GiB
>
> Metadata,RAID1: Size:3.00GiB, Used:2.15GiB
> /dev/sdb1 3.00GiB
> /dev/sdc1 3.00GiB
>
> System,RAID1: Size:32.00MiB, Used:176.00KiB
> /dev/sdb1 32.00MiB
> /dev/sdc1 32.00MiB
>
> Unallocated:
> /dev/sdb1 1.01MiB
> /dev/sdc1 1.00MiB
> /dev/sdd1 1.82TiB
The confusion is understandable because sdd1 is bigger than sdc1, so
why can't everything on sdc1 be moved to sdd1? Well, dev add > dev del
doesn't really do that: it's going to end up rewriting metadata to
sdb1 also, and there isn't enough space. Yes, there's 800MiB of unused
space in metadata chunks on sdb1 and sdc1, which should be enough (?),
but clearly it wants more than that for whatever reason. You could argue
it's a bug or some suboptimal behavior, but because this is a 99% full
file system, I'm willing to bet it's a low-priority bug. Because this
is raid0 you really need to add two devices, not just one.
> I don't quite understand what exactly btrfs is trying to do: I assume
> that block groups should be relocated to the new/empty drive,
There is a scant chance 'btrfs replace' will work better here. But
still the real problem remains: even if you replace sdc1 with sdd1,
sdb1 is still 99% full, which in effect makes the file system 99% full,
because it can't do any more raid0 on sdb1, and it's not possible to do
raid0 chunks on a single sdd1 device.
If you can't add a 4th drive, you're going to have to convert to
the single profile. Keep all three drives attached, run 'btrfs balance
start -dconvert=single', and once that's complete you should be able to
remove /dev/sdc1. This will take a while, though, because first the
conversion will use space on all three drives, and then the removal
of sdc1 will have to copy chunks off it before it can be removed.
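Spelled out as a command sequence, the suggestion above might look like this (a sketch; the device and mount-point names are the ones used in the thread):

```shell
# 1) Convert data chunks to the single profile so they no longer
#    have to be striped across two devices with free space.
sudo btrfs balance start -dconvert=single /mnt/xxx

# 2) With single chunks, device delete can migrate everything
#    remaining on sdc1 onto the empty sdd1.
sudo btrfs device delete /dev/sdc1 /mnt/xxx

# 3) Check the resulting allocation.
sudo btrfs filesystem usage /mnt/xxx
```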
> but
> during the delete `btrfs fi us` shows
> Unallocated:
> /dev/sdc1 16.00EiB
Known bug; it also happens during resizes and conversions.
> so deleted partition is counted as maximum possible empty drive and
> blocks are relocated to it instead of new/empty drive? (kernel-4.7.2 &
> btrfs-progs-4.7.1 here)
> Is there any way to see where and why block groups are relocated
> during `delete`?
The two reasons this isn't working are (a) it's 99% full already and
(b) it's raid0, so merely adding one device isn't sufficient. It's
probably too full even to do a 3-device balance to restripe raid0
across 3 devices, which would still be inefficient because it would
leave 50% of the space on sdd unusable. To do this with uneven devices
and use all the space, you're going to have to use the single profile.
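The arithmetic behind that 50% figure can be sketched with the device sizes from the `btrfs fi show` output above, rounded to GiB:

```shell
# Device sizes in GiB, rounded from the thread's `btrfs fi show`.
sdb=931; sdc=931; sdd=1863

# raid0 stripes across every device with free space, so allocation
# stops once the two small devices are full: each of the three
# devices contributes at most the size of the smallest one.
raid0_usable=$((3 * sdb))            # ~932 GiB of sdd is stranded
# single places chunks on one device at a time, so all space is usable.
single_usable=$((sdb + sdc + sdd))

echo "raid0: ${raid0_usable} GiB  single: ${single_usable} GiB"
```

With these numbers raid0 tops out around 2793 GiB while single can use the full 3725 GiB, i.e. roughly half of sdd is wasted by raid0, matching the estimate above.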
--
Chris Murphy
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: `btrfs dev del` fails with `No space left on device`
2016-08-30 17:13 ` Chris Murphy
@ 2016-08-30 20:10 ` ojab //
0 siblings, 0 replies; 7+ messages in thread
From: ojab // @ 2016-08-30 20:10 UTC (permalink / raw)
To: Chris Murphy; +Cc: linux-btrfs
On Tue, Aug 30, 2016 at 5:13 PM, Chris Murphy <lists@colorremedies.com> wrote:
> On Tue, Aug 30, 2016 at 4:22 AM, ojab // <ojab@ojab.ru> wrote:
>> On Mon, Aug 29, 2016 at 9:05 PM, Chris Murphy <lists@colorremedies.com> wrote:
>>> On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@ojab.ru> wrote:
>>> What do you get for 'btrfs fi us <mp>'
>>
>> $ sudo btrfs fi us /mnt/xxx/
>> Overall:
>> Device size: 3.64TiB
>> Device allocated: 1.82TiB
>> Device unallocated: 1.82TiB
>> Device missing: 0.00B
>> Used: 1.81TiB
>> Free (estimated): 1.83TiB (min: 943.55GiB)
>> Data ratio: 1.00
>> Metadata ratio: 2.00
>> Global reserve: 512.00MiB (used: 0.00B)
>>
>> Data,RAID0: Size:1.81TiB, Used:1.80TiB
>> /dev/sdb1 928.48GiB
>> /dev/sdc1 928.48GiB
>>
>> Metadata,RAID1: Size:3.00GiB, Used:2.15GiB
>> /dev/sdb1 3.00GiB
>> /dev/sdc1 3.00GiB
>>
>> System,RAID1: Size:32.00MiB, Used:176.00KiB
>> /dev/sdb1 32.00MiB
>> /dev/sdc1 32.00MiB
>>
>> Unallocated:
>> /dev/sdb1 1.01MiB
>> /dev/sdc1 1.00MiB
>> /dev/sdd1 1.82TiB
>
>
> The confusion is understandable because sdd1 is bigger than sdc1, so
> why can't everything on sdc1 be moved to sdd1? Well, dev add > dev del
> doesn't really do that, it's going to end up rewriting metadata to
> sdb1 also, and there isn't enough space. Yes, there's 800MiB of unused
> space in metadata chunks on sdb1 and sdc1, it should be enough (?) but
> clearly it wants more than this for whatever reason. You could argue
> it's a bug or some suboptimal behavior, but because this is a 99% full
> file system, I'm willing to be it's a low priority bug. Because this
> is raid0 you really need to add two devices, not just one.
>
>> I don't quite understand what exactly btrfs is trying to do: I assume
>> that block groups should be relocated to the new/empty drive,
>
> There is a scant chance 'btrfs replace' will work better here. But
> still the real problem remains, even if you replace sdc1 with sdd1,
> sdb1 is still 99% full which in effect makes the file system 99% full
> because it can't do anymore raid0 on sdb1, and it's not possible to do
> raid0 chunks on a single sdd1 device.
>
> If you can't add a 4th drive, you're going to have to convert to
> single profile. Keep all three drives attached, 'btrfs balance start
> -dconvert=single' and then once that's complete you should be able to
> remove /dev/sdc1, although this will take a while because first
> conversion will use space on all three drives, and then the removable
> of sdc1 will have to copy chunks off before it can be removed.
>
>> but
>> during the delete `btrfs fi us` shows
>> Unallocated:
>> /dev/sdc1 16.00EiB
>
> Known bug, also happens when resizing and conversions.
>
>
>
>> so deleted partition is counted as maximum possible empty drive and
>> blocks are relocated to it instead of new/empty drive? (kernel-4.7.2 &
>> btrfs-progs-4.7.1 here)
>> Is there any way to see where and why block groups are relocated
>> during `delete`?
>
> The two reasons this isn't working is a.) it's 99% full already and
> b.) it's raid0, so merely adding one device isn't sufficient. It's
> probably too full even to do a 3 device balance to restripe raid0
> across 3 devices, which is still inefficient because it would leave
> 50% of the space on sdd as unusable. To do this with uneven devices
> and use all the space, you're going to have to use single profile.
>
>
>
> --
> Chris Murphy
Ah, thanks for the elaboration, it makes things much more meaningful now!
//wbr ojab
^ permalink raw reply [flat|nested] 7+ messages in thread
* `btrfs dev del` fails with `No space left on device`
@ 2016-08-29 16:02 ojab //
0 siblings, 0 replies; 7+ messages in thread
From: ojab // @ 2016-08-29 16:02 UTC (permalink / raw)
To: linux-btrfs
Hi,
I have a BTRFS filesystem with two 1TB drives (sdb1 & sdc1), data
raid0 & metadata raid1. I need to replace one drive with a 2TB drive, so
I've done `btrfs dev add /dev/sdd /mnt/xxx` and am now trying to do
`btrfs dev del /dev/sdc1 /mnt/xxx`, which fails with `ERROR: error
removing device '/dev/sdc1': No space left on device`. Right now the FS
looks like:
$ sudo btrfs fi show /mnt/xxx
Label: none uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
Total devices 3 FS bytes used 1.80TiB
devid 1 size 931.51GiB used 931.51GiB path /dev/sdc1
devid 2 size 931.51GiB used 931.51GiB path /dev/sdb1
devid 3 size 1.82TiB used 0.00B path /dev/sdd1
$ sudo btrfs device usage /mnt/xxx/
/dev/sdb1, ID: 2
Device size: 931.51GiB
Device slack: 0.00B
Data,RAID0: 928.48GiB
Metadata,RAID1: 3.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.01MiB
/dev/sdc1, ID: 1
Device size: 931.51GiB
Device slack: 3.50KiB
Data,RAID0: 928.48GiB
Metadata,RAID1: 3.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.00MiB
/dev/sdd1, ID: 3
Device size: 1.82TiB
Device slack: 0.00B
Unallocated: 1.82TiB
As far as I understand, there is (almost) no unallocated space left on
`/dev/sdc1`, so there is no room for metadata, but I don't really
understand how I can balance only the metadata:
$ btrfs fi balance start -musage=1 -dusage=0 /mnt/torrents/
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2016-08-30 20:10 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-29 16:04 `btrfs dev del` fails with `No space left on device` ojab //
2016-08-29 21:05 ` Chris Murphy
2016-08-30 10:22 ` ojab //
2016-08-30 10:35 ` Hugo Mills
2016-08-30 17:13 ` Chris Murphy
2016-08-30 20:10 ` ojab //
-- strict thread matches above, loose matches on Subject: below --
2016-08-29 16:02 ojab //