linux-btrfs.vger.kernel.org archive mirror
* Replace missing disc => strange result!?
From: Cloud Admin @ 2017-08-10 20:00 UTC
  To: Btrfs BTRFS

Hi,
I had a disc failure and had to replace the disc. I followed the
description at
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
and started the replacement. The setup is a two-disc RAID1.
After it was done, I ran 'btrfs fi us /mn/btrfsroot' and got the output
below. What is wrong?
Is it a rebalancing issue? I thought the replace command started that
automatically...
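
For reference, the replace itself was roughly as follows (the devid and
target device here are illustrative, not copied from my shell history):

btrfs replace start 2 /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512 /mn/btrfsroot
btrfs replace status /mn/btrfsroot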

Overall:
    Device size:		   3.64TiB
    Device allocated:		   1.04TiB
    Device unallocated:		   2.60TiB
    Device missing:		     0.00B
    Used:			 519.76GiB
    Free (estimated):		   1.56TiB	(min: 1.56TiB)
    Data ratio:			      2.00
    Metadata ratio:		      1.60
    Global reserve:		 279.11MiB	(used: 0.00B)

Data,single: Size:1.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

Data,RAID1: Size:265.00GiB, Used:259.60GiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8   265.00GiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512   265.00GiB

Data,DUP: Size:264.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8   528.00GiB

Metadata,single: Size:1.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

Metadata,RAID1: Size:1.00GiB, Used:286.03MiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     1.00GiB

Metadata,DUP: Size:512.00MiB, Used:112.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

System,single: Size:32.00MiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8    32.00MiB

System,RAID1: Size:8.00MiB, Used:48.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     8.00MiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     8.00MiB

System,DUP: Size:32.00MiB, Used:48.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8    64.00MiB

Unallocated:
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.04TiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     1.56TiB

Bye
	Frank


* Re: Replace missing disc => strange result!?
From: Duncan @ 2017-08-11  0:39 UTC
  To: linux-btrfs

Cloud Admin posted on Thu, 10 Aug 2017 22:00:08 +0200 as excerpted:

> Hi,
> I had a disc failure and had to replace the disc. I followed the
> description at
> https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> and started the replacement. The setup is a two-disc RAID1.
> After it was done, I ran 'btrfs fi us /mn/btrfsroot' and got the output
> below. What is wrong?
> Is it a rebalancing issue? I thought the replace command started that
> automatically...

...
 
> Data,single: Size:1.00GiB, Used:0.00B
>    /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

...

It's not entirely clear what you're referring to with the "what's wrong" 
question, but I'll assume it's all those single and dup chunks that 
aren't raid1.

Unlike device delete, which does an implicit rebalance, replace simply
substitutes the new device for the old one, copying its content as
close to verbatim as possible; it doesn't rebalance anything on the
remaining devices.  That makes it much faster, and it shrinks the
window for something else to go wrong in the process (such as another
device failing), but it also means that, unlike device delete with its
implicit balance or an explicit balance, existing chunks remain exactly
as they were.
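
The difference in a nutshell (device names here are hypothetical):

# device delete migrates every chunk off the device, rebalancing as it
# goes, before removing it:
btrfs device remove /dev/sdb /mnt
# replace copies the old device's chunks onto the new one as-is:
btrfs replace start 2 /dev/sdc /mnt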

Which means the existing single and dup chunks weren't removed, as I'm
guessing you expected them to be.

But most or all of them show 0 used, so an explicit balance to remove
them should be quite quick, since it only has to delete the references
to them.  Try this (path taken from your post above):

btrfs balance start -dusage=0 -musage=0 /mn/btrfsroot

That should eliminate the zero-usage chunks, making the usage output
easier to follow, even if you do need to post an update because my
guess about what you meant by "what's wrong" was incorrect.  And as I
said, it should be much faster (nearly instantaneous on an ssd,
probably not /quite/ that on spinning rust) than rebalancing chunks
that weren't empty.  =:^)
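
If any single or dup chunks survive the usage=0 pass (i.e. they weren't
actually empty after all), a convert balance would migrate them to
raid1; the "soft" modifier skips chunks already in the target profile.
Treat this as a suggestion rather than something your output proves you
need:

btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mn/btrfsroot
btrfs fi usage /mn/btrfsroot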

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


