linux-btrfs.vger.kernel.org archive mirror
* delete disk procedure
@ 2012-06-05 14:38 Jim
  2012-06-05 15:47 ` Hugo Mills
  2012-06-05 16:19 ` Helmut Hullen
  0 siblings, 2 replies; 9+ messages in thread
From: Jim @ 2012-06-05 14:38 UTC (permalink / raw)
  To: linux-btrfs

Good morning btrfs list,
I wrote about two weeks ago about using the extra btrfs space in an NFS
file system setup.  NFS seems to export the files, but the mounts don't
work on older machines without btrfs-aware kernels.  So I am down to
deleting several drives from btrfs to set up a standard RAID 1 array for
storage and export via NFS.  My system stats are:
[root@advanced ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdm2             196G   50G  137G  27% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm1             2.0G  141M  1.8G   8% /boot
/dev/sdm5             1.2T   20G  1.1T   2% /var
10.2.0.40:/data/sites
                       2.6T  2.4T  155G  94% /nfs1/data/sites
10.2.0.42:/data/sites
                       2.6T  2.2T  328G  87% /nfs2/data/sites
/dev/sda               11T  4.9T  6.0T  46% /btrfs
[root@advanced ~]# btrfs fi show
failed to read /dev/sr0
Label: none  uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
         Total devices 12 FS bytes used 4.76TB
         devid    6 size 930.99GB used 429.32GB path /dev/sdf
         devid    5 size 930.99GB used 429.32GB path /dev/sde
         devid    8 size 930.99GB used 429.32GB path /dev/sdh
         devid    9 size 930.99GB used 429.32GB path /dev/sdi
         devid    4 size 930.99GB used 429.32GB path /dev/sdd
         devid    3 size 930.99GB used 429.32GB path /dev/sdc
         devid   11 size 930.99GB used 429.08GB path /dev/sdk
         devid    2 size 930.99GB used 429.32GB path /dev/sdb
         devid   10 size 930.99GB used 429.32GB path /dev/sdj
         devid   12 size 930.99GB used 429.33GB path /dev/sdl
         devid    7 size 930.99GB used 429.32GB path /dev/sdg
         devid    1 size 930.99GB used 429.09GB path /dev/sda

Btrfs v0.19-35-g1b444cd

df -h and btrfs fi show seem to be in good size agreement.  Btrfs was 
created with raid1 metadata and raid0 data.  I would like to delete the 
last four drives, leaving about 7T of space to hold 4.9T of data.  My 
plan would be to remove /dev/sdi, j, k, l one at a time, and after all 
are deleted run "btrfs fi balance /btrfs".  The data is not critical and 
can be lost, but I am really trying to avoid the hassle of having to 
completely redo the filesystem.  Does my deletion plan seem reasonable? 
I'm really swimming alone here and would value some advice.
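As a quick sanity check on the space arithmetic above (figures taken from
the df/fi show output; a rough sketch only, since raid1 metadata overhead
is ignored):

```shell
# Back-of-the-envelope check of the plan: after removing 4 of the
# 12 drives, 8 x ~931 GB drives remain to hold 4.9 TB of raid0 data.
awk 'BEGIN {
    remaining = 8 * 930.99 / 1024      # TB left after the four deletes
    used      = 4.9                    # TB of data currently stored
    printf "remaining %.2f TB, used %.2f TB, headroom %.2f TB\n",
           remaining, used, remaining - used
}'
```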
Jim Maloney
-- 


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: delete disk procedure
  2012-06-05 14:38 delete disk procedure Jim
@ 2012-06-05 15:47 ` Hugo Mills
       [not found]   ` <4FCE2B70.9020107@webstarts.com>
  2012-06-05 16:19 ` Helmut Hullen
  1 sibling, 1 reply; 9+ messages in thread
From: Hugo Mills @ 2012-06-05 15:47 UTC (permalink / raw)
  To: Jim; +Cc: linux-btrfs


On Tue, Jun 05, 2012 at 10:38:11AM -0400, Jim wrote:
> Good morning btrfs list,
> I had written about 2 weeks ago about using extra btrfs space in an
> nfs file system setup.  Nfs seems to export the files but the mounts
> don't work on older machines without btrfs kernels.

   "The mounts don't work" -- can you be more specific here?

   It would seem that if we can get to the bottom of that problem, you
won't have to muck around with your current set-up at all.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- I am an opera lover from planet Zog.  Take me to your lieder. ---  



* Re: delete disk procedure
       [not found]   ` <4FCE2B70.9020107@webstarts.com>
@ 2012-06-05 16:00     ` Hugo Mills
  2012-06-05 16:05       ` Jim
  0 siblings, 1 reply; 9+ messages in thread
From: Hugo Mills @ 2012-06-05 16:00 UTC (permalink / raw)
  To: Jim; +Cc: Btrfs mailing list


[Please keep replies on the list]

On Tue, Jun 05, 2012 at 11:53:20AM -0400, Jim wrote:
> Thanks for the response.  When I say mounts don't work, I mean that
> when I try to mount an exported directory on an older OS (most of my
> machines are CentOS 5.7) with a non-btrfs kernel, I get an "unknown
> filesystem" error.  If we can get around this I would be very happy.

   On my NFS exports, I found that I have to set the fsid to something
unique for each subvolume I export. In my case, this stops all of the
exported subvolumes from being presented as the same filesystem, but
it might also help with your problem.

/media/video			  *(ro,async,fsid=0x13570,no_subtree_check)
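A fuller sketch along those lines (paths and fsid values are hypothetical;
`exportfs -ra` re-reads /etc/exports and re-exports everything):

```shell
# Hypothetical export entries: give each exported btrfs subvolume its
# own unique fsid so clients see distinct, stable filesystem IDs.
cat >> /etc/exports <<'EOF'
/btrfs/sites1  *(rw,async,fsid=0x13571,no_subtree_check)
/btrfs/sites2  *(rw,async,fsid=0x13572,no_subtree_check)
EOF
exportfs -ra       # re-read /etc/exports and re-export everything
showmount -e       # list the active exports to verify
```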

   Hugo.

> 
> On 06/05/2012 11:47 AM, Hugo Mills wrote:
> >On Tue, Jun 05, 2012 at 10:38:11AM -0400, Jim wrote:
> >>Good morning btrfs list,
> >>I had written about 2 weeks ago about using extra btrfs space in an
> >>nfs file system setup.  Nfs seems to export the files but the mounts
> >>don't work on older machines without btrfs kernels.
> >    "The mounts don't work" -- can you be more specific here?
> >
> >    It would seem that if we can get to the bottom of that problem, you
> >won't have to muck around with your current set-up at all.
> >
> >    Hugo.
> >
> 

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- I am an opera lover from planet Zog.  Take me to your lieder. ---  



* Re: delete disk procedure
  2012-06-05 16:00     ` Hugo Mills
@ 2012-06-05 16:05       ` Jim
  0 siblings, 0 replies; 9+ messages in thread
From: Jim @ 2012-06-05 16:05 UTC (permalink / raw)
  To: Hugo Mills, Btrfs mailing list

[resent after getting rid of the damn signature]
Sorry I missed the list last time; I hit reply too fast.  Yes, this 
sounds like a good idea.  I will try it and let you know.
Jim Maloney

On 06/05/2012 12:00 PM, Hugo Mills wrote:
> [Please keep replies on the list]
>
> On Tue, Jun 05, 2012 at 11:53:20AM -0400, Jim wrote:
>> Thanks for the response.  When I say mounts don't work I mean that
>> when I try to mount an exported directory on an older os (most of my
>> machines are centos 5.7) with a non btrfs kernel I get an "unknown
>> filesystem" error.  If we can get around this I would be very happy.
>     On my NFS exports, I found that I have to set the fsid to something
> unique for each subvolume I export. In my case, this prevents all of
> the exported filesystems being the same, but it might also help with
> your problem.
>
> /media/video			  *(ro,async,fsid=0x13570,no_subtree_check)
>
>     Hugo.
>
>> On 06/05/2012 11:47 AM, Hugo Mills wrote:
>>> On Tue, Jun 05, 2012 at 10:38:11AM -0400, Jim wrote:
>>>> Good morning btrfs list,
>>>> I had written about 2 weeks ago about using extra btrfs space in an
>>>> nfs file system setup.  Nfs seems to export the files but the mounts
>>>> don't work on older machines without btrfs kernels.
>>>     "The mounts don't work" -- can you be more specific here?
>>>
>>>     It would seem that if we can get to the bottom of that problem, you
>>> won't have to muck around with your current set-up at all.
>>>
>>>     Hugo.
>>>



* Re: delete disk procedure
  2012-06-05 14:38 delete disk procedure Jim
  2012-06-05 15:47 ` Hugo Mills
@ 2012-06-05 16:19 ` Helmut Hullen
  2012-06-05 17:04   ` Hugo Mills
  1 sibling, 1 reply; 9+ messages in thread
From: Helmut Hullen @ 2012-06-05 16:19 UTC (permalink / raw)
  To: linux-btrfs

Hello, Jim,

You wrote on 05.06.12:

> /dev/sda               11T  4.9T  6.0T  46% /btrfs
> [root@advanced ~]# btrfs fi show
> failed to read /dev/sr0
> Label: none  uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
>          Total devices 12 FS bytes used 4.76TB
>          devid    6 size 930.99GB used 429.32GB path /dev/sdf
>          devid    5 size 930.99GB used 429.32GB path /dev/sde
>          devid    8 size 930.99GB used 429.32GB path /dev/sdh
>          devid    9 size 930.99GB used 429.32GB path /dev/sdi
>          devid    4 size 930.99GB used 429.32GB path /dev/sdd
>          devid    3 size 930.99GB used 429.32GB path /dev/sdc
>          devid   11 size 930.99GB used 429.08GB path /dev/sdk
>          devid    2 size 930.99GB used 429.32GB path /dev/sdb
>          devid   10 size 930.99GB used 429.32GB path /dev/sdj
>          devid   12 size 930.99GB used 429.33GB path /dev/sdl
>          devid    7 size 930.99GB used 429.32GB path /dev/sdg
>          devid    1 size 930.99GB used 429.09GB path /dev/sda

> Btrfs v0.19-35-g1b444cd

> df -h and btrfs fi show seem to be in good size agreement.  Btrfs was
> created as raid1 metadata and raid0 data.  I would like to delete the
> last 4 drives leaving 7T of space to hold 4.9T of data.  My plan
> would be to remove /dev/sdi, j, k, l one at a time.  After all are
> deleted run "btrfs fi balance /btrfs".

I'd prefer

        btrfs device delete /dev/sdi /btrfs
        btrfs filesystem balance /btrfs
        btrfs device delete /dev/sdj /btrfs
        btrfs filesystem balance /btrfs

etc. - after every "delete", its own "balance" run.

That may take many hours - I use the last lines of "dmesg" to  
extrapolate the time needed (btrfs prints a progress message about  
once a minute).
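A sketch of that extrapolation, run here on hypothetical sample lines (in
practice, feed the same awk filter real `dmesg` output, e.g.
`dmesg | grep relocating`):

```shell
# Estimate progress rate from the periodic btrfs relocation messages.
# The sample timestamps below are made up for illustration.
sample='[ 1000.0] btrfs: relocating block group 1048576 flags 1
[ 1060.0] btrfs: relocating block group 2097152 flags 1
[ 1120.0] btrfs: relocating block group 3145728 flags 1'
printf '%s\n' "$sample" | awk '{
    gsub(/[\[\]]/, "", $0)             # strip the [timestamp] brackets
    t = $1 + 0                         # seconds since boot
    if (n++) printf "interval: %.0f s\n", t - prev
    prev = t
}'
```

With roughly one message per block group, the interval times the number of
block groups left gives a crude time-to-completion estimate.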

And the "balance" command ties up the console it was started from.  
Therefore I wrap the command:

      echo 'btrfs filesystem balance /btrfs' | at now


Best regards!
Helmut


* Re: delete disk procedure
  2012-06-05 16:19 ` Helmut Hullen
@ 2012-06-05 17:04   ` Hugo Mills
  2012-06-05 17:12     ` Jim
  2012-06-05 17:23     ` Helmut Hullen
  0 siblings, 2 replies; 9+ messages in thread
From: Hugo Mills @ 2012-06-05 17:04 UTC (permalink / raw)
  To: helmut; +Cc: linux-btrfs


On Tue, Jun 05, 2012 at 06:19:00PM +0200, Helmut Hullen wrote:
> Hallo, Jim,
> 
> Du meintest am 05.06.12:
> 
> > /dev/sda               11T  4.9T  6.0T  46% /btrfs
> > [root@advanced ~]# btrfs fi show
> > failed to read /dev/sr0
> > Label: none  uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
> >          Total devices 12 FS bytes used 4.76TB
> >          devid    6 size 930.99GB used 429.32GB path /dev/sdf
> >          devid    5 size 930.99GB used 429.32GB path /dev/sde
> >          devid    8 size 930.99GB used 429.32GB path /dev/sdh
> >          devid    9 size 930.99GB used 429.32GB path /dev/sdi
> >          devid    4 size 930.99GB used 429.32GB path /dev/sdd
> >          devid    3 size 930.99GB used 429.32GB path /dev/sdc
> >          devid   11 size 930.99GB used 429.08GB path /dev/sdk
> >          devid    2 size 930.99GB used 429.32GB path /dev/sdb
> >          devid   10 size 930.99GB used 429.32GB path /dev/sdj
> >          devid   12 size 930.99GB used 429.33GB path /dev/sdl
> >          devid    7 size 930.99GB used 429.32GB path /dev/sdg
> >          devid    1 size 930.99GB used 429.09GB path /dev/sda
> 
> > Btrfs v0.19-35-g1b444cd
> 
> > df -h and btrfs fi show seem to be in good size agreement.  Btrfs was
> > created as raid1 metadata and raid0 data.  I would like to delete the
> > last 4 drives leaving 7T of space to hold 4.9T of data.  My plan
> > would be to remove /dev/sdi, j, k, l one at a time.  After all are
> > deleted run "btrfs fi balance /btrfs".
> 
> I'd prefer
> 
>         btrfs device delete /dev/sdi
>         btrfs filesystem balance /btrfs
>         btrfs device delete /dev/sdj
>         btrfs filesystem balance /btrfs
> 
> etc. - after every "delete" its "balance" run.

   That's not necessary. Delete will move the blocks from the device
being removed into spare space on the other devices, so a separate
balance is unnecessary. (In fact, delete and balance share quite a lot
of code.)
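The simplified sequence can be sketched as below, using the device names
and mount point from the original post; DRYRUN=1 only prints the
commands, so nothing here touches a real filesystem:

```shell
# One "device delete" per drive, no interleaved balance: delete already
# relocates the removed drive's data onto the remaining devices.
remove_drives() {
    for dev in /dev/sdi /dev/sdj /dev/sdk /dev/sdl; do
        cmd="btrfs device delete $dev /btrfs"
        if [ "${DRYRUN:-1}" = 1 ]; then
            echo "$cmd"                 # dry run: show what would run
        else
            $cmd || return 1            # stop at the first failure
        fi
    done
}
remove_drives
```

Run with DRYRUN=0 (as root, with /btrfs mounted) to actually perform the
removals.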

> That may take a lot of hours - I use the last lines of "dmesg" to  
> extrapolate the needed time (btrfs produces a message about every  
> minute).
> 
> And you can't use the console from where you have started the "balance"  
> command. Therefore I wrap this command:
> 
>       echo 'btrfs filesystem balance /btrfs' | at now

   ... or just put it into the background with "btrfs bal start
/mountpoint &". You know, like everyone else does. :)

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
           --- There are three mistaiks in this sentance. ---            



* Re: delete disk procedure
  2012-06-05 17:04   ` Hugo Mills
@ 2012-06-05 17:12     ` Jim
  2012-06-05 17:18       ` Hugo Mills
  2012-06-05 17:23     ` Helmut Hullen
  1 sibling, 1 reply; 9+ messages in thread
From: Jim @ 2012-06-05 17:12 UTC (permalink / raw)
  To: Hugo Mills, helmut, linux-btrfs

[sorry for the resend, signature again]
I am waiting for a window (later tonight) when I can try mounting the 
btrfs export.  Am I reading you guys correctly - do you think I should 
be deleting drives from the array, or is that just a contingency?  Thanks.
Jim Maloney

On 06/05/2012 01:04 PM, Hugo Mills wrote:
> On Tue, Jun 05, 2012 at 06:19:00PM +0200, Helmut Hullen wrote:
>> Hallo, Jim,
>>
>> Du meintest am 05.06.12:
>>
>>> /dev/sda               11T  4.9T  6.0T  46% /btrfs
>>> [root@advanced ~]# btrfs fi show
>>> failed to read /dev/sr0
>>> Label: none  uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
>>>           Total devices 12 FS bytes used 4.76TB
>>>           devid    6 size 930.99GB used 429.32GB path /dev/sdf
>>>           devid    5 size 930.99GB used 429.32GB path /dev/sde
>>>           devid    8 size 930.99GB used 429.32GB path /dev/sdh
>>>           devid    9 size 930.99GB used 429.32GB path /dev/sdi
>>>           devid    4 size 930.99GB used 429.32GB path /dev/sdd
>>>           devid    3 size 930.99GB used 429.32GB path /dev/sdc
>>>           devid   11 size 930.99GB used 429.08GB path /dev/sdk
>>>           devid    2 size 930.99GB used 429.32GB path /dev/sdb
>>>           devid   10 size 930.99GB used 429.32GB path /dev/sdj
>>>           devid   12 size 930.99GB used 429.33GB path /dev/sdl
>>>           devid    7 size 930.99GB used 429.32GB path /dev/sdg
>>>           devid    1 size 930.99GB used 429.09GB path /dev/sda
>>> Btrfs v0.19-35-g1b444cd
>>> df -h and btrfs fi show seem to be in good size agreement.  Btrfs was
>>> created as raid1 metadata and raid0 data.  I would like to delete the
>>> last 4 drives leaving 7T of space to hold 4.9T of data.  My plan
>>> would be to remove /dev/sdi, j, k, l one at a time.  After all are
>>> deleted run "btrfs fi balance /btrfs".
>> I'd prefer
>>
>>          btrfs device delete /dev/sdi
>>          btrfs filesystem balance /btrfs
>>          btrfs device delete /dev/sdj
>>          btrfs filesystem balance /btrfs
>>
>> etc. - after every "delete" its "balance" run.
>     That's not necessary. Delete will move the blocks from the device
> being removed into spare space on the other devices. The balance is
> unnecessary. (In fact, delete and balance share quite a lot of code)
>
>> That may take a lot of hours - I use the last lines of "dmesg" to
>> extrapolate the needed time (btrfs produces a message about every
>> minute).
>>
>> And you can't use the console from where you have started the "balance"
>> command. Therefore I wrap this command:
>>
>>        echo 'btrfs filesystem balance /btrfs' | at now
>     ... or just put it into the background with "btrfs bal start
> /mountpoint&". You know, like everyone else does. :)
>
>     Hugo.
>

-- 



* Re: delete disk procedure
  2012-06-05 17:12     ` Jim
@ 2012-06-05 17:18       ` Hugo Mills
  0 siblings, 0 replies; 9+ messages in thread
From: Hugo Mills @ 2012-06-05 17:18 UTC (permalink / raw)
  To: Jim; +Cc: helmut, linux-btrfs


On Tue, Jun 05, 2012 at 01:12:17PM -0400, Jim wrote:
> [sorry for the resend, signature again]
> I am waiting for a window (later tonight) when I can try mounting
> the btrfs export.  Am I reading you guys correctly, that you think I
> should be deleting drives from the array?  Or is this a just in
> case?  Thanks.

   Try the modified exports I suggested in the other part of the
thread first. If that still turns out to be problematic, then we can
discuss migration strategies.

   Hugo.

> Jim Maloney
> 
> On 06/05/2012 01:04 PM, Hugo Mills wrote:
> >On Tue, Jun 05, 2012 at 06:19:00PM +0200, Helmut Hullen wrote:
> >>Hallo, Jim,
> >>
> >>Du meintest am 05.06.12:
> >>
> >>>/dev/sda               11T  4.9T  6.0T  46% /btrfs
> >>>[root@advanced ~]# btrfs fi show
> >>>failed to read /dev/sr0
> >>>Label: none  uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
> >>>          Total devices 12 FS bytes used 4.76TB
> >>>          devid    6 size 930.99GB used 429.32GB path /dev/sdf
> >>>          devid    5 size 930.99GB used 429.32GB path /dev/sde
> >>>          devid    8 size 930.99GB used 429.32GB path /dev/sdh
> >>>          devid    9 size 930.99GB used 429.32GB path /dev/sdi
> >>>          devid    4 size 930.99GB used 429.32GB path /dev/sdd
> >>>          devid    3 size 930.99GB used 429.32GB path /dev/sdc
> >>>          devid   11 size 930.99GB used 429.08GB path /dev/sdk
> >>>          devid    2 size 930.99GB used 429.32GB path /dev/sdb
> >>>          devid   10 size 930.99GB used 429.32GB path /dev/sdj
> >>>          devid   12 size 930.99GB used 429.33GB path /dev/sdl
> >>>          devid    7 size 930.99GB used 429.32GB path /dev/sdg
> >>>          devid    1 size 930.99GB used 429.09GB path /dev/sda
> >>>Btrfs v0.19-35-g1b444cd
> >>>df -h and btrfs fi show seem to be in good size agreement.  Btrfs was
> >>>created as raid1 metadata and raid0 data.  I would like to delete the
> >>>last 4 drives leaving 7T of space to hold 4.9T of data.  My plan
> >>>would be to remove /dev/sdi, j, k, l one at a time.  After all are
> >>>deleted run "btrfs fi balance /btrfs".
> >>I'd prefer
> >>
> >>         btrfs device delete /dev/sdi
> >>         btrfs filesystem balance /btrfs
> >>         btrfs device delete /dev/sdj
> >>         btrfs filesystem balance /btrfs
> >>
> >>etc. - after every "delete" its "balance" run.
> >    That's not necessary. Delete will move the blocks from the device
> >being removed into spare space on the other devices. The balance is
> >unnecessary. (In fact, delete and balance share quite a lot of code)
> >
> >>That may take a lot of hours - I use the last lines of "dmesg" to
> >>extrapolate the needed time (btrfs produces a message about every
> >>minute).
> >>
> >>And you can't use the console from where you have started the "balance"
> >>command. Therefore I wrap this command:
> >>
> >>       echo 'btrfs filesystem balance /btrfs' | at now
> >    ... or just put it into the background with "btrfs bal start
> >/mountpoint&". You know, like everyone else does. :)
> >
> >    Hugo.
> >
> 

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
               --- Quantum est ille canis in fenestra? ---               



* Re: delete disk procedure
  2012-06-05 17:04   ` Hugo Mills
  2012-06-05 17:12     ` Jim
@ 2012-06-05 17:23     ` Helmut Hullen
  1 sibling, 0 replies; 9+ messages in thread
From: Helmut Hullen @ 2012-06-05 17:23 UTC (permalink / raw)
  To: linux-btrfs

Hello, Hugo,

You wrote on 05.06.12:

[...]

>> And you can't use the console from where you have started the
>> "balance" command. Therefore I wrap this command:
>>
>>       echo 'btrfs filesystem balance /btrfs' | at now

>    ... or just put it into the background with "btrfs bal start
> /mountpoint &". You know, like everyone else does. :)

I know that possibility too. The advantage of my version is that "at"  
mails all output ("normal" messages and error messages) to root, when  
root has started the command.

Best regards!
Helmut


end of thread, other threads:[~2012-06-05 17:35 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2012-06-05 14:38 delete disk procedure Jim
2012-06-05 15:47 ` Hugo Mills
     [not found]   ` <4FCE2B70.9020107@webstarts.com>
2012-06-05 16:00     ` Hugo Mills
2012-06-05 16:05       ` Jim
2012-06-05 16:19 ` Helmut Hullen
2012-06-05 17:04   ` Hugo Mills
2012-06-05 17:12     ` Jim
2012-06-05 17:18       ` Hugo Mills
2012-06-05 17:23     ` Helmut Hullen
