* [linux-lvm] lvreduce nightmare
@ 2012-05-16 15:48 tariq wali
2012-05-17 23:13 ` Ray Morris
2012-05-18 9:57 ` Bryn M. Reeves
0 siblings, 2 replies; 13+ messages in thread
From: tariq wali @ 2012-05-16 15:48 UTC (permalink / raw)
To: linux-lvm; +Cc: tariq wali
Hi,
I tried to reduce the VG, and this is what it looked like before the
attempt:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg0-data  2.2T  1.7T  433G  80% /data
Out of the available *433G* I wanted to reduce *100G* in vg0 so that I
could use it for a new partition, and this is what I did:
e2fsck -f /dev/vg0/data
resize2fs /dev/vg0/data 100G
lvreduce -L -100G -n /dev/vg0/data
lvs
LV   VG  Attr   LSize Origin Snap% Move Log Copy% Convert
data vg0 -wi-ao 2.08T
After the lvreduce it did what I intended, as you can see in:
vgs
VG  #PV #LV #SN Attr   VSize VFree
vg0   1   1   0 wz--n- 2.18T *100.00G*  (it did allocate the 100.00G
as free space)
After I mounted /data back I could even touch a file; however, when we
start the app (mysql) and it tries to write into the existing data in
/data, the drive goes into *read-only mode repeatedly*.
In order to fix it I unmounted /data and ran
e2fsck -f /dev/vg0/data -n
After the fsck completed it would prompt to fix superblocks/inodes,
which I replied to with 'yes'; however, the problem still persists:
if I mount /data it goes into read-only mode.
dmsetup table
vg0-data: 0 4477255680 linear 104:17 384
Red Hat Enterprise Linux Server release 5.7
2.6.18-274.17.1.el5 #1 SMP x86_64
lvm2-2.02.84-6.el5_7.1
I have reduced LVs with this sequence of commands in the past, but I
just don't understand why it seems to have failed this time. I did get
100G of free space in vg0, but the partition /data seems useless.
I would greatly appreciate any help/insights. I notice LVM has created
a backup file in /etc/lvm/archive as vg0_00008-1147866134.vg; does
restoring from that file actually work? Will it get the drive back to
its original state? And how can I actually free some space from the
volume group if I want to?
########
Some errors spewed in dmesg:
attempt to access beyond end of device
dm-0: rw=0, want=4654102632, limit=4477255680
EXT3-fs error (device dm-0): read_block_bitmap: Cannot read block bitmap -
block_group = 17145, block_bitmap = 561807360
Aborting journal on device dm-0.
attempt to access beyond end of device
dm-0: rw=0, want=4565762056, limit=4477255680
EXT3-fs error (device dm-0): read_block_bitmap: <2>ext3_abort called.
EXT3-fs error (device dm-0): ext3_journal_start_sb: Detected aborted journal
Cannot read block bitmap - block_group = 17417, block_bitmap = 570720256
EXT3-fs error (device dm-0): read_block_bitmap: Cannot read block bitmap -
block_group = 17417, block_bitmap = 570720256
dm-0: rw=0, want=4683202568, limit=4477255680
EXT3-fs error (device dm-0): read_block_bitmap: Cannot read block bitmap -
block_group = 17865, block_bitmap = 585400320
EXT3-fs error (device dm-0) in ext3_free_blocks_sb: Journal has aborted
EXT3-fs error (device dm-0) in ext3_orphan_del: Journal has aborted
__journal_remove_journal_head: freeing b_committed_data
ext3_abort called.
EXT3-fs error (device dm-0): ext3_put_super: Couldn't clean up the journal
kjournald starting. Commit interval 5 seconds
EXT3-fs warning (device dm-0): ext3_clear_journal_err: Filesystem error
recorded from previous mount: IO failure
EXT3-fs warning (device dm-0): ext3_clear_journal_err: Marking fs in need
of filesystem check.
EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
attempt to access beyond end of device
dm-0: rw=0, want=4654102632, limit=4477255680
EXT3-fs error (device dm-0): read_block_bitmap: Cannot read block bitmap -
block_group = 17145, block_bitmap = 561807360
Aborting journal on device dm-0.
--
*Tariq Wali.*
* Re: [linux-lvm] lvreduce nightmare
2012-05-16 15:48 [linux-lvm] lvreduce nightmare tariq wali
@ 2012-05-17 23:13 ` Ray Morris
2012-05-18 0:21 ` Stuart D Gathman
` (2 more replies)
2012-05-18 9:57 ` Bryn M. Reeves
1 sibling, 3 replies; 13+ messages in thread
From: Ray Morris @ 2012-05-17 23:13 UTC (permalink / raw)
To: linux-lvm, ganaiwali
Stop. Don't do anything else until you are sure of what to do next.
You will not lose data by studying. You can lose data by trying to fix
it.
> resize2fs /dev/vg0/data 100G
> lvreduce -L -100G -n /dev/vg0/data
A 100 GB filesystem needs a block device of around 110 GB. So this
cut off the end of your filesystem. (The device needs to hold the
journal as well as the FS, for example.)
> I would greatly appreciate any help/insights .. i notice lvm has
> created a backup file in /etc/lvm/archive as
> vg0_00008-1147866134.vg , does restoring from that file actually
> work ? will it get the drive into original state ?
It will work to put the LVM configuration back. It won't fix this:
> e2fsck -f /dev/vg0/data -n
> after the fsck completed it would prompt to fix superblock/inodes
> which I replied with 'yes'
Not good. That "fixed" perfectly good superblocks, changing them to be
no longer correct. See the first three sentences. Do not proceed
to try any other fixes you don't fully understand. Unless of course
you've tested your backup from last night and know it's fine. Don't
forget to stop tonight's backup job, BTW.
Hopefully you can restore the superblocks after you expand the LV
back. I'm not an expert on that step, so I'll simply suggest you refer
to proper resources and make sure you understand it before acting (or
make an image of the device with dd first so you can recover if you do
something that causes further damage).
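Something along these lines would capture the device first (the output
path is only illustrative; it needs at least as much free space as the
LV itself):

umount /data
dd if=/dev/vg0/data of=/backup/vg0-data.img bs=4M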
> and how can i
> actually free some space from the volume group if i want to ?
resize2fs to a smaller size than you wish to end up with
(see resize2fs -M).
Next, resize the LV, then use resize2fs again to expand the FS to fill
the LV. Alternatively, you can calculate / estimate the space a
filesystem will need, which will depend on which filesystem, which
features are enabled, etc.
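As a sketch, assuming the sizes from the original mail (the
intermediate 2000G is just a generous margin, not a computed minimum;
resize2fs -M would shrink to the minimum instead):

umount /data
e2fsck -f /dev/vg0/data
resize2fs /dev/vg0/data 2000G    # shrink the FS to below the final LV size
lvreduce -L -100G /dev/vg0/data  # then reduce the LV by 100G (to ~2.08T)
resize2fs /dev/vg0/data          # no size given: grow the FS to fill the LV
e2fsck -f /dev/vg0/data
mount /dev/vg0/data /data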
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
* Re: [linux-lvm] lvreduce nightmare
2012-05-17 23:13 ` Ray Morris
@ 2012-05-18 0:21 ` Stuart D Gathman
2012-05-18 1:16 ` Andy Smith
2012-05-18 10:25 ` Bryn M. Reeves
2 siblings, 0 replies; 13+ messages in thread
From: Stuart D Gathman @ 2012-05-18 0:21 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: ganaiwali
Long ago, Nostradamus foresaw that on May 17, Ray Morris would write:
> Stop. Don't do anything else until you are sure of what to do next.
> You will not lose data by studying. You can lose data by trying to fix
> it.
>
>> resize2fs /dev/vg0/data 100G
>> lvreduce -L -100G -n /dev/vg0/data
This reduced the LV by 100G. Probably *not* what you wanted.
* Re: [linux-lvm] lvreduce nightmare
2012-05-17 23:13 ` Ray Morris
2012-05-18 0:21 ` Stuart D Gathman
@ 2012-05-18 1:16 ` Andy Smith
2012-05-18 10:25 ` Bryn M. Reeves
2 siblings, 0 replies; 13+ messages in thread
From: Andy Smith @ 2012-05-18 1:16 UTC (permalink / raw)
To: linux-lvm
Hello,
On Thu, May 17, 2012 at 06:13:50PM -0500, Ray Morris wrote:
> > resize2fs /dev/vg0/data 100G
> > lvreduce -L -100G -n /dev/vg0/data
>
> A 100 GB filesystem needs a block device of around 110 GB. So this
> cut off the end of your filesystem. (The device needs to hold the
> journal as well as the FS, for example.)
I normally do as you suggest and resize2fs smaller, lvreduce, and
then resize2fs again. This is due to paranoia, though - I'm sure I
normally see it match up with the lvreduce size exactly.
Surely OP's actual problem is that he has an FS with 2+ TB of data
on it that he resize2fs'd down *to* 100G when he actually wanted to
resize2fs it down *by* 100G? He said:
> > I tried to reduce the VG and this is what it looked like before I
> > tried to reduce it
> >
> > Filesystem Size Used Avail Use% Mounted on
> > /dev/mapper/vg0-data 2.2T 1.7T 433G 80% /data
Anyway, I suspect your advice is still accurate, since you're advising
what to do when someone reduces an LV to very slightly smaller than it
needs to be to hold a ~100G FS, whereas what he's actually done is
resize2fs and lvreduce a 2+TB FS into only 100G.
Hopefully all the data is still there and it's just the pointers
that are broken... nasty.
> resize2fs to smaller size than you wish to end up with.
> see resize2fs -M
Ooh, I hadn't spotted that option. That certainly would reduce my
paranoia in future about making mistakes similar to this.
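i.e. something like (a sketch):

resize2fs -M /dev/vg0/data   # shrink the FS as small as it will go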
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
* Re: [linux-lvm] lvreduce nightmare
2012-05-16 15:48 [linux-lvm] lvreduce nightmare tariq wali
2012-05-17 23:13 ` Ray Morris
@ 2012-05-18 9:57 ` Bryn M. Reeves
1 sibling, 0 replies; 13+ messages in thread
From: Bryn M. Reeves @ 2012-05-18 9:57 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: tariq wali
On 05/16/2012 04:48 PM, tariq wali wrote:
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/vg0-data  2.2T  1.7T  433G  80% /data
>
> Out of the available *433G* I wanted to reduce *100G* in vg0 so
> that I could use it for a new partition and this is what I did ..
>
> e2fsck -f /dev/vg0/data
> resize2fs /dev/vg0/data 100G
This command will attempt to reduce the file system to 100G *total*.
It could not have succeeded given the file system state shown by df above.
> lvreduce -L -100G -n /dev/vg0/data
This truncated the last 100G of the file system. Immediately stopping
at this point, restoring LVM metadata and fscking would have fixed this.
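A sketch of that recovery, using the archive file named in the
original mail (the LV must be unmounted):

umount /data
vgcfgrestore -f /etc/lvm/archive/vg0_00008-1147866134.vg vg0
lvchange --refresh vg0/data   # reload the dm table at the restored size
e2fsck -f /dev/vg0/data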
> e2fsck -f /dev/vg0/data -n
> after the fsck completed it would prompt to fix superblock/inodes
> which I replied with 'yes' however the problem still persists that
> if i mount /data it goes into read-only mode .
With -n, the fsck should not write to the disk, but if it was prompting
you for y/n on fixes then it probably was not run with that option.
> I have reduced lvm's with those sequence of commands in the past
> but i just dont understand why it seems to have failed this time
> although i did get 100G free space in vg0 but the partition /data
> seems useless ..
The resize2fs invocation used was incorrect and probably failed,
causing the subsequent device resize to truncate the file system. The
fsck then trashed what was left while trying to fix it.
Regards,
Bryn.
* Re: [linux-lvm] lvreduce nightmare
2012-05-17 23:13 ` Ray Morris
2012-05-18 0:21 ` Stuart D Gathman
2012-05-18 1:16 ` Andy Smith
@ 2012-05-18 10:25 ` Bryn M. Reeves
2012-05-18 14:13 ` tariq wali
2 siblings, 1 reply; 13+ messages in thread
From: Bryn M. Reeves @ 2012-05-18 10:25 UTC (permalink / raw)
To: LVM general discussion and development
On 05/18/2012 12:13 AM, Ray Morris wrote:
> Stop. Don't do anything else until you are sure of what to do next.
> You will not lose data by studying. You can lose data by trying to
> fix it.
>
>> resize2fs /dev/vg0/data 100G
>> lvreduce -L -100G -n /dev/vg0/data
>
> A 100 GB filesystem needs a block device of around 110 GB. So
> this cut off the end of your filesystem. (The device needs to hold
> the journal as well as the FS, for example.)
Not with most file systems. When you ask resize2fs to make a file
system 100G, it will make it (within rounding limits) 100G's worth of
blocks in length. This is the actual space required on the disk (i.e.
the argument is the size of the file system image, not the maximum
free space within it).
E.g.:
# lvcreate -L 100M -n l0 tvg0
Logical volume "l0" created
# mke2fs -j /dev/tvg0/l0
mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25688 inodes, 102400 blocks
[...]
102400 1024-byte blocks is exactly 100M (MiB to be pedantic).
Resize to 50M gives:
# resize2fs /dev/tvg0/l0 50M
resize2fs 1.41.9 (22-Aug-2009)
Resizing the filesystem on /dev/tvg0/l0 to 51200 (1k) blocks.
The filesystem on /dev/tvg0/l0 is now 51200 blocks long.
51200 1024 byte blocks is exactly 50MiB.
Regards,
Bryn.
* Re: [linux-lvm] lvreduce nightmare
2012-05-18 10:25 ` Bryn M. Reeves
@ 2012-05-18 14:13 ` tariq wali
2012-05-19 15:58 ` Stuart D Gathman
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: tariq wali @ 2012-05-18 14:13 UTC (permalink / raw)
To: LVM general discussion and development
I appreciate the responses, although I realized that a messed-up large
LVM volume takes time to repair, which I didn't have in this case, so
I resorted to backup and recreated the volumes from scratch.
I realize I went wrong with resize2fs, thinking that it would reduce
my LV by 100G, when it actually reduces the total volume size to 100G,
i.e. on an LV holding 1.7T:
resize2fs /dev/vg0/data 100G (I thought it would reduce by 100G, but
it drops the file system size to a total of 100G)
So I guess to do this right I should have run:
resize2fs /dev/vg0/data 1.6T (or 1600G)
and then:
lvreduce -n data -L 100G /dev/vg0/data (to reduce the LV by 100G)
I even tried vgcfgrestore on the archived LVM metadata file, but that
just restored the metadata (back to the original volume size; however,
I still had a bad file system).
Tariq
--
*Tariq Wali.*
* Re: [linux-lvm] lvreduce nightmare
2012-05-18 14:13 ` tariq wali
@ 2012-05-19 15:58 ` Stuart D Gathman
2012-05-19 23:14 ` Andy Smith
2012-05-20 21:39 ` Raptorfan
2 siblings, 0 replies; 13+ messages in thread
From: Stuart D Gathman @ 2012-05-19 15:58 UTC (permalink / raw)
To: LVM general discussion and development
Long ago, Nostradamus foresaw that on May 18, tariq wali would write:
> Appreciate the response although I realized a messed up large volume lvm
> requires time and which I didn't have in this case and resorted to backup
> and recreated volumes all over new .
You get lots of bonus points for actually having that backup! Too many
people posting here in panic have no such option...
> I realize I went wrong with resize2fs thinking that it would reduce my lvm
> by 100G but it actually reduced the total volume size to 100G , i-e an lvm
> of 1.7T
>
> resize2fs /dev/vg0/data 100G ( i thought it would reduce the by 100G but
> dropped the block device size to total of 100G )
>
> so to i guess to do this right i should have
>
> resize2fs /dev/vg0/data 1.6T or (1600G)
>
> and then lvreduce -n data -L 100G /dev/vg0/data ( to reduce the lvm by 100
> )
I'm pretty sure resize2fs would have complained about the impossible
task you set it, and exited with an error code before doing anything.
The *real* problem was then reducing the LV with the (unresized) fs.
The fsadm script used to check for and prevent this kind of error.
> I even tried the vgcfgrestore on the archived lvm metadata file but that
> just restored the metadata (back to original volume size however I still had
> a bad file system ) .
Even after reducing the LV, perfect recovery was still possible (before
allocating/extending any other LVs) by vgcfgrestore. The point of
no return was nearly reached when you then ran e2fsck on the truncated fs.
You could still have escaped unscathed if you hadn't answered 'yes'
to its offer to "fix" the superblocks....
* Re: [linux-lvm] lvreduce nightmare
2012-05-18 14:13 ` tariq wali
2012-05-19 15:58 ` Stuart D Gathman
@ 2012-05-19 23:14 ` Andy Smith
2012-05-20 8:59 ` tariq wali
2012-05-20 21:39 ` Raptorfan
2 siblings, 1 reply; 13+ messages in thread
From: Andy Smith @ 2012-05-19 23:14 UTC (permalink / raw)
To: linux-lvm
Hi tariq,
On Fri, May 18, 2012 at 07:43:40PM +0530, tariq wali wrote:
> so to i guess to do this right i should have
>
> resize2fs /dev/vg0/data 1.6T or (1600G)
>
> and then lvreduce -n data -L 100G /dev/vg0/data ( to reduce the lvm by 100 )
I believe that will also reduce the LV *to* 100G. You want:
# lvreduce -n data -L-100G
if you want to reduce it *by* 100G.
Nothing like FS/LV shrinking to keep you on your toes..
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
* Re: [linux-lvm] lvreduce nightmare
2012-05-19 23:14 ` Andy Smith
@ 2012-05-20 8:59 ` tariq wali
2012-05-20 19:45 ` Andy Smith
2012-05-21 9:02 ` Bryn M. Reeves
0 siblings, 2 replies; 13+ messages in thread
From: tariq wali @ 2012-05-20 8:59 UTC (permalink / raw)
To: LVM general discussion and development
Hi Andy,
Thanks for the mail.
I am not sure reducing with lvreduce alone is safe and will actually
reduce the underlying file system as well; look at this sequence:
lvcreate -n test -L10G vg0
df -h /test/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-test 9.9G 151M 9.2G 2% /test
created some pseudo 2GB files with dd and /test now is
df -h
Filesystem Size Used *Avail* Use% Mounted on
/dev/mapper/vg0-test 9.9G 2.2G *7.2G* 24% /test
lvreduce -L -1G -n /dev/vg0/test
lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
test vg0 -wi-a- 9.00G
mount /dev/vg0/test /test/
df -h
Filesystem Size Used *Avail* Use% Mounted on
/dev/mapper/vg0-test 9.9G 2.2G *7.2G* 24% /test
You see, after the 1G lvreduce and mounting /test again, it still
shows */test* with *7.2G* available, whereas it should be *6.2G*.
Tariq
--
*Tariq Wali.*
* Re: [linux-lvm] lvreduce nightmare
2012-05-20 8:59 ` tariq wali
@ 2012-05-20 19:45 ` Andy Smith
2012-05-21 9:02 ` Bryn M. Reeves
1 sibling, 0 replies; 13+ messages in thread
From: Andy Smith @ 2012-05-20 19:45 UTC (permalink / raw)
To: linux-lvm
Hi Tariq,
On Sun, May 20, 2012 at 02:29:56PM +0530, tariq wali wrote:
> I am not sure if reducing by lvreduce alone is safe and will actually
> reduce the underneath file system also,
It won't; I never said it would.
My point was that you said:
> and then lvreduce -n data -L 100G /dev/vg0/data ( to reduce the lvm by 100 )
but that WON'T do what you want. That will reduce the LV *to* 100G.
You need BOTH your resize2fs AND lvreduce commands to be correct.
Cheers,
Andy
--
http://bitfolk.com/ -- No-nonsense VPS hosting
* Re: [linux-lvm] lvreduce nightmare
2012-05-18 14:13 ` tariq wali
2012-05-19 15:58 ` Stuart D Gathman
2012-05-19 23:14 ` Andy Smith
@ 2012-05-20 21:39 ` Raptorfan
2 siblings, 0 replies; 13+ messages in thread
From: Raptorfan @ 2012-05-20 21:39 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: tariq wali
On 5/18/2012 10:13 AM, tariq wali wrote:
> so to i guess to do this right i should have
> resize2fs /dev/vg0/data 1.6T or (1600G)
> and then lvreduce -n data -L 100G /dev/vg0/data ( to reduce the lvm by 100 )
This is probably nit-picking, but seriously consider picking a single
numeric convention for both operations. Either run both commands with
an absolute value (preferable, in my opinion; there is NO QUESTION at
that point what your final state should be) or with the delta value.
Mixing the two is another way to confuse what you actually want to
accomplish. If you'd used absolute values to start with, there would
probably have been a reduced chance of failure (barring a typo...
though you still need to verify the resize2fs command completed
successfully before doing the lvreduce).
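For example, with absolute sizes throughout and the lvreduce chained
on success, so it only runs if the resize2fs actually worked (sizes
illustrative; both commands name the same final size, so there is no
FS/LV mismatch):

resize2fs /dev/vg0/data 2000G && lvreduce -L 2000G /dev/vg0/data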
-r
* Re: [linux-lvm] lvreduce nightmare
2012-05-20 8:59 ` tariq wali
2012-05-20 19:45 ` Andy Smith
@ 2012-05-21 9:02 ` Bryn M. Reeves
1 sibling, 0 replies; 13+ messages in thread
From: Bryn M. Reeves @ 2012-05-21 9:02 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: tariq wali
On 05/20/2012 09:59 AM, tariq wali wrote:
> Hi Andy,
>
> Thanks for the mail.
>
> I am not sure if reducing by lvreduce alone is safe and will
> actually reduce the underneath file system also, look at this
> sequence ..
It will if you specify -r/--resizefs. This will call the fsadm script
appropriately to perform the same resize on the file system itself
(either before or after, depending on whether you are shrinking or
growing).
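Had the test sequence in the previous mail used it, that would have
been, e.g. (a sketch; for a shrink, fsadm resizes the ext3 file system
first and the LV is reduced afterwards):

lvreduce -r -L -1G /dev/vg0/test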
Regards,
Bryn.
Thread overview: 13+ messages
2012-05-16 15:48 [linux-lvm] lvreduce nightmare tariq wali
2012-05-17 23:13 ` Ray Morris
2012-05-18 0:21 ` Stuart D Gathman
2012-05-18 1:16 ` Andy Smith
2012-05-18 10:25 ` Bryn M. Reeves
2012-05-18 14:13 ` tariq wali
2012-05-19 15:58 ` Stuart D Gathman
2012-05-19 23:14 ` Andy Smith
2012-05-20 8:59 ` tariq wali
2012-05-20 19:45 ` Andy Smith
2012-05-21 9:02 ` Bryn M. Reeves
2012-05-20 21:39 ` Raptorfan
2012-05-18 9:57 ` Bryn M. Reeves