* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 15:20 ` Jonathan Tripathy
@ 2011-02-24 16:41 ` Jonathan Tripathy
2011-02-24 19:15 ` Nataraj
2011-02-24 19:19 ` Stuart D. Gathman
2011-02-24 19:45 ` Stuart D. Gathman
2011-04-05 20:09 ` Jonathan Tripathy
2 siblings, 2 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-02-24 16:41 UTC (permalink / raw)
To: linux-lvm
On 24/02/11 15:20, Jonathan Tripathy wrote:
>
> On 24/02/11 15:13, Stuart D. Gathman wrote:
>> On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
>>
>>>> Yes. To be more pedantic, the COW has copies of the original contents
>>>> of blocks written to the origin since the snapshot. That is why you
>>>> need to clear it to achieve your stated purpose. The origin blocks
>>>> are written normally to the *-real volume (you can see these in
>>>> the /dev/mapper directory).
>>> But didn't you say that there is only one copy of the files stored
>>> physically
>>> on disk?
>> Yes. When you make the snapshot, there is only one copy, and the COW
>> table
>> is empty. AS YOU WRITE to the origin, each chunk written is saved to
>> *-cow first before being written to *-real.
> Got ya. So data that is being written to the origin, while the
> snapshot exists, is the data that may leak, as it's saved to the COW
> first, then copied over to real.
>
> Hopefully an expert will let me know whether it's safe to zero the COW
> after I've finished with the snapshot.
>
>
Oh, I forgot to mention. My LVM sits on top of linux software RAID1.
Does that complicate things at all?
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 16:41 ` Jonathan Tripathy
@ 2011-02-24 19:15 ` Nataraj
2011-02-24 19:25 ` Les Mikesell
2011-02-24 19:55 ` Stuart D. Gathman
2011-02-24 19:19 ` Stuart D. Gathman
1 sibling, 2 replies; 68+ messages in thread
From: Nataraj @ 2011-02-24 19:15 UTC (permalink / raw)
To: linux-lvm
On 02/24/2011 08:41 AM, Jonathan Tripathy wrote:
>
> On 24/02/11 15:20, Jonathan Tripathy wrote:
>>
>> On 24/02/11 15:13, Stuart D. Gathman wrote:
>>> On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
>>>
>>>>> Yes. To be more pedantic, the COW has copies of the original contents
>>>>> of blocks written to the origin since the snapshot. That is why you
>>>>> need to clear it to achieve your stated purpose. The origin blocks
>>>>> are written normally to the *-real volume (you can see these in
>>>>> the /dev/mapper directory).
>>>> But didn't you say that there is only one copy of the files stored
>>>> physically
>>>> on disk?
>>> Yes. When you make the snapshot, there is only one copy, and the
>>> COW table
>>> is empty. AS YOU WRITE to the origin, each chunk written is saved to
>>> *-cow first before being written to *-real.
>> Got ya. So data that is being written to the origin, while the
>> snapshot exists, is the data that may leak, as it's saved to the COW
>> first, then copied over to real.
>>
>> Hopefully an expert will let me know whether it's safe to zero the COW
>> after I've finished with the snapshot.
>>
>>
> Oh, I forgot to mention. My LVM sits on top of linux software RAID1.
> Does that complicate things at all?
>
One other thing that wasn't mentioned here.... As far as I understand,
if these are lvms on the host and the snapshots are being taken on the
host, there is no guaranteed integrity of the filesystems unless you
shutdown the guest while the snapshot is taken.
It's too bad they don't implement a system call that could do something
like sync filesystem and sleep until I tell you to continue. This would
be perfect for snapshot backups of virtual hosts.
Nataraj
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 19:15 ` Nataraj
@ 2011-02-24 19:25 ` Les Mikesell
2011-02-24 19:55 ` Stuart D. Gathman
1 sibling, 0 replies; 68+ messages in thread
From: Les Mikesell @ 2011-02-24 19:25 UTC (permalink / raw)
To: linux-lvm
On 2/24/2011 1:15 PM, Nataraj wrote:
>
> One other thing that wasn't mentioned here.... As far as I understand,
> if these are lvms on the host and the snapshots are being taken on the
> host, there is no guaranteed integrity of the filesystems unless you
> shutdown the guest while the snapshot is taken.
>
> It's too bad they don't implement a system call that could do something
> like sync filesystem and sleep until I tell you to continue. This would
> be perfect for snapshot backups of virtual hosts.
Syncing a filesystem's OS buffers isn't really enough to ensure that
running apps have saved their data in a consistent state. The best you
are going to get without shutting things down is more or less what you'd
have if the server crashed at some point in time.
--
Les Mikesell
lesmikesell@gmail.com
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 19:15 ` Nataraj
2011-02-24 19:25 ` Les Mikesell
@ 2011-02-24 19:55 ` Stuart D. Gathman
1 sibling, 0 replies; 68+ messages in thread
From: Stuart D. Gathman @ 2011-02-24 19:55 UTC (permalink / raw)
To: LVM general discussion and development
On Thu, 24 Feb 2011, Nataraj wrote:
> One other thing that wasn't mentioned here.... As far as I understand,
> if these are lvms on the host and the snapshots are being taken on the
> host, there is no guaranteed integrity of the filesystems unless you
> shutdown the guest while the snapshot is taken.
>
> It's too bad they don't implement a system call that could do something
> like sync filesystem and sleep until I tell you to continue. This would
> be perfect for snapshot backups of virtual hosts.
The hosting service backup script can attempt to send a message to
the VM on a network port before taking the snapshot. If the VM
accepts the connection and implements the handshake API, it will respond with
a "WAIT A SEC", tell databases to sync and suspend writes, then
tell the host to "GO AHEAD". The host then takes the snapshot and
tells the VM, "ALL DONE, CARRY ON".
This network API ought to be standardized. In fact, I am just about
to write this API for a client to reduce their backup window from
6 minutes to a few seconds. If I am reinventing the wheel here,
please let me know! If not, I'll share here. Code will likely be python.
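A rough host-side sketch of that handshake in shell, purely illustrative (the
port number, the message strings, the hostname, and the guest-side listener
are all assumptions, not an existing API):

# ask the guest to quiesce; fall back to a plain snapshot if nothing answers
if echo "FREEZE" | nc -w 5 guest.example.com 7070 | grep -q "GO AHEAD"; then
    quiesced=yes
fi
lvcreate -s -L 5G -n backup_snap /dev/vg0/guest_lv    # take the snapshot
[ "$quiesced" = yes ] && echo "ALL DONE, CARRY ON" | nc -w 5 guest.example.com 7070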
For many applications, filesystem journalling is sufficient for snapshots
to be useful without further involvement. But when databases are involved,
you often need the handshake.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 16:41 ` Jonathan Tripathy
2011-02-24 19:15 ` Nataraj
@ 2011-02-24 19:19 ` Stuart D. Gathman
1 sibling, 0 replies; 68+ messages in thread
From: Stuart D. Gathman @ 2011-02-24 19:19 UTC (permalink / raw)
To: LVM general discussion and development
On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
> Oh, I forgot to mention. My LVM sits on top of linux software RAID1. Does that
> complicate things at all?
Nope. If it were RAID5, or another scheme with a RMW cycle, you would need to
ensure alignment between RAID chunks and LVM chunks for good performance, but
RAID1 does not have that problem.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 15:20 ` Jonathan Tripathy
2011-02-24 16:41 ` Jonathan Tripathy
@ 2011-02-24 19:45 ` Stuart D. Gathman
2011-02-24 21:22 ` Jonathan Tripathy
2011-04-05 20:09 ` Jonathan Tripathy
2 siblings, 1 reply; 68+ messages in thread
From: Stuart D. Gathman @ 2011-02-24 19:45 UTC (permalink / raw)
To: LVM general discussion and development
On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
> > Yes. When you make the snapshot, there is only one copy, and the COW table
> > is empty. AS YOU WRITE to the origin, each chunk written is saved to
> > *-cow first before being written to *-real.
> Got ya. So data that is being written to the origin, while the snapshot
> exists, is the data that may leak, as it's saved to the COW first, then copied
> over to real.
>
> Hopefully an expert will let me know whether it's safe to zero the COW after
> I've finished with the snapshot.
What *is* safe is to zero the snapshot. This will overwrite any blocks
in the COW copied from the origin. The problem is that if the snapshot runs
out of room, it is invalidated, and you may or may not have overwritten
all blocks copied from the origin.
So if you don't hear from an expert, a safe procedure is to allocate
snapshots for backup that are as big as the origin + 1 PP (which should
be enough for the COW table as well unless we are talking terabytes). Then you
can zero the snapshot (not the COW) after making a backup. That will overwrite
any data copied from the origin. The only leftover data will just be the COW
table which is a bunch of block #s, so shouldn't be a security problem.
This procedure is less efficient than zeroing LVs on allocation, and takes
extra space for worst case snapshot allocation. But if you want allocation
to be "instant", and can pay for it when recycling, that should solve your
problem. You should still zero all free space (by allocating a huge LV
with all remaining space and zeroing it) periodically in case anything
got missed.
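A minimal sketch of that procedure, with illustrative names (vg0, customerID)
and a 20G origin; the snapshot is made slightly larger than the origin, as
described above:

lvcreate -s -L 21G -n backup_snap /dev/vg0/customerID   # origin size plus slack for the COW table
<make the backup from /dev/vg0/backup_snap>
dd if=/dev/zero of=/dev/vg0/backup_snap bs=1M           # zero the snapshot itself, not the COW device
lvremove -f /dev/vg0/backup_snap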
IDEA, since you are on raid1, reads are faster than writes (twice as fast),
and your snapshots will be mostly unused (the COW will only have a few blocks
copied from the origin). So you can write a "clear" utility that scans
a block device for non-zero chunks, and only writes over those with zeros.
This might be a good application for mmap().
Furthermore, if you have 3 copies of the data in your raid1 system, then reads
are 3 times as fast, and the clear utility might be fast enough to use at
allocation time - which is simple and safe.
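A rough shell sketch of such a clear utility (the device name is illustrative;
it assumes the LV size is a multiple of the 4 MiB chunk, which holds for the
default extent size, and a real tool might well use mmap instead):

DEV=/dev/vg0/backup_snap              # device to clear (illustrative)
CHUNK=$((4*1024*1024))                # 4 MiB scan granularity
SIZE=$(blockdev --getsize64 "$DEV")
i=0
while [ "$i" -lt $((SIZE / CHUNK)) ]; do
    # reads are cheap (and parallel) on RAID1; only rewrite chunks that are not already zero
    if ! cmp -s -n "$CHUNK" -i $((i * CHUNK)):0 "$DEV" /dev/zero; then
        dd if=/dev/zero of="$DEV" bs="$CHUNK" seek="$i" count=1 conv=notrunc 2>/dev/null
    fi
    i=$((i + 1))
done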
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 19:45 ` Stuart D. Gathman
@ 2011-02-24 21:22 ` Jonathan Tripathy
0 siblings, 0 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-02-24 21:22 UTC (permalink / raw)
To: LVM general discussion and development
On 24/02/11 19:45, Stuart D. Gathman wrote:
> On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
>
>>> Yes. When you make the snapshot, there is only one copy, and the COW table
>>> is empty. AS YOU WRITE to the origin, each chunk written is saved to
>>> *-cow first before being written to *-real.
>> Got ya. So data that is being written to the origin, while the snapshot
>> exists, is the data that may leak, as it's saved to the COW first, then copied
>> over to real.
>>
>> Hopefully an expert will let me know whether it's safe to zero the COW after
>> I've finished with the snapshot.
> What *is* safe is to zero the snapshot. This will overwrite any blocks
> in the COW copied from the origin. The problem is that if the snapshot runs
> out of room, it is invalidated, and you may or may not have overwritten
> all blocks copied from the origin.
>
> So if you don't hear from an expert, a safe procedure is to allocate
> snapshots for backup that are as big as the origin + 1 PP (which should
> be enough for the COW table as well unless we are talking terabytes). Then you
> can zero the snapshot (not the COW) after making a backup. That will overwrite
> any data copied from the origin. The only leftover data will just be the COW
> table which is a bunch of block #s, so shouldn't be a security problem.
>
> This procedure is less efficient than zeroing LVs on allocation, and takes
> extra space for worst case snapshot allocation. But if you want allocation
> to be "instant", and can pay for it when recycling, that should solve your
> problem. You should still zero all free space (by allocating a huge LV
> with all remaining space and zeroing it) periodically in case anything
> got missed.
Hmm this sounds like it would work. However I would rather zero the LVs
on allocation than do this, as we would run many backups, and it would
be highly inefficient to zero out all the snapshots (unless I made the
snapshot really small, but that would cause other problems, wouldn't it?)
> IDEA, since you are on raid1, reads are faster than writes (twice as fast),
> and your snapshots will be mostly unused (the COW will only have a few blocks
> copied from the origin). So you can write a "clear" utility that scans
> a block device for non-zero chunks, and only writes over those with zeros.
> This might be a good application for mmap().
This would be a fantastic idea. Since LVM is very commonly used in
shared-tenant environments, it would be a great feature if LVM
could make sure that snapshots didn't cause data leakage.
Hopefully an expert will help me out with my zeroing issues.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-02-24 15:20 ` Jonathan Tripathy
2011-02-24 16:41 ` Jonathan Tripathy
2011-02-24 19:45 ` Stuart D. Gathman
@ 2011-04-05 20:09 ` Jonathan Tripathy
2011-04-05 20:41 ` Stuart D. Gathman
2 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-05 20:09 UTC (permalink / raw)
To: LVM general discussion and development
On 24/02/2011 15:20, Jonathan Tripathy wrote:
>
> On 24/02/11 15:13, Stuart D. Gathman wrote:
>> On Thu, 24 Feb 2011, Jonathan Tripathy wrote:
>>
>>>> Yes. To be more pedantic, the COW has copies of the original contents
>>>> of blocks written to the origin since the snapshot. That is why you
>>>> need to clear it to achieve your stated purpose. The origin blocks
>>>> are written normally to the *-real volume (you can see these in
>>>> the /dev/mapper directory).
>>> But didn't you say that there is only one copy of the files stored
>>> physically
>>> on disk?
>> Yes. When you make the snapshot, there is only one copy, and the COW
>> table
>> is empty. AS YOU WRITE to the origin, each chunk written is saved to
>> *-cow first before being written to *-real.
> Got ya. So data that is being written to the origin, while the
> snapshot exists, is the data that may leak, as it's saved to the COW
> first, then copied over to real.
>
> Hopefully an expert will let me know whether it's safe to zero the COW
> after I've finished with the snapshot.
Are any "experts" available to help me answer the above question? I feel
that this is a really important issue for those of us in the hosting
industry.
Just to sum up my question: when a customer leaves our service, we zero
their drive before removing the LV. This hopefully ensures that there is
no data "leakage" when we create a new LV for a new customer. However,
we need to take into consideration what happens when we create
snapshots of LVs to perform backups (using rsync).
Any help would be appreciated.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 20:09 ` Jonathan Tripathy
@ 2011-04-05 20:41 ` Stuart D. Gathman
2011-04-05 20:48 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: Stuart D. Gathman @ 2011-04-05 20:41 UTC (permalink / raw)
To: LVM general discussion and development
On Tue, 5 Apr 2011, Jonathan Tripathy wrote:
>> Hopefully an expert will let me know whether it's safe to zero the COW after
>> I've finished with the snapshot.
> Are any "experts" available to help me answer the above question? I feel that
> this is a really important issue for those of us in the hosting industry.
>
> Just to sum up my question: when a customer leaves our service, we zero their
> drive before removing the LV. This hopefully ensures that there is no data
> "leakage" when we create a new LV for a new customer. However, we need to
> take into consideration what happens when we create snapshots of LVs to
> perform backups (using rsync).
>
> Any help would be appreciated.
At this point, we'll just have to try it on a non-production server.
Hopefully, worst case the kernel crashes. I run Fedora (14 currently)
and CentOS-5.5. My guess as an amateur is that zeroing the COW while
the origin is open is a problem. I would suggest this for backup:
1) make snapshot
2) make backup
3) pause VM (check that this closes the origin LV, if not save the VM)
4) with both origin and snapshot not active, zero COW
5) remove snapshot
6) unpause VM (or restore if saved)
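Transcribed into commands, that might look roughly like the following; all
names (vg0, customerID, customerVM, the mount point) are illustrative, it
assumes the guest filesystem sits directly on the LV, and step 4 is exactly
the part that still needs testing:

lvcreate -s -L 20G -n backup_snap /dev/vg0/customerID      # 1) make snapshot
mount -o ro /dev/vg0/backup_snap /mnt/backup
rsync -a /mnt/backup/ /backups/customerID/                 # 2) make backup
umount /mnt/backup
xm pause customerVM    # 3) pause the VM ("xm save" instead if pausing does not close the origin LV)
dd if=/dev/zero of=/dev/mapper/vg0-backup_snap-cow bs=1M   # 4) zero the COW
lvremove -f /dev/vg0/backup_snap                           # 5) remove snapshot
xm unpause customerVM                                      # 6) unpause (or restore if saved)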
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 20:41 ` Stuart D. Gathman
@ 2011-04-05 20:48 ` Jonathan Tripathy
2011-04-05 20:59 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-05 20:48 UTC (permalink / raw)
To: LVM general discussion and development
On 05/04/2011 21:41, Stuart D. Gathman wrote:
> On Tue, 5 Apr 2011, Jonathan Tripathy wrote:
>
>>> Hopefully an expert will let me know whether it's safe to zero the
>>> COW after I've finished with the snapshot.
>> Are any "experts" available to help me answer the above question? I
>> feel that this is a really important issue for those of us in the
>> hosting industry.
>>
>> Just to sum up my question: when a customer leaves our service, we
>> zero their drive before removing the LV. This hopefully ensures that
>> there is no data "leakage" when we create a new LV for a new
>> customer. However, we need to take into consideration what
>> happens when we create snapshots of LVs to perform backups (using
>> rsync).
>>
>> Any help would be appreciated.
>
> At this point, we'll just have to try it on a non-production server.
> Hopefully, worst case the kernel crashes. I run Fedora (14 currently)
> and CentOS-5.5. My guess as an amateur is that zeroing the COW while
> the origin is open is a problem. I would suggest this for backup:
>
> 1) make snapshot
> 2) make backup
> 3) pause VM (check that this closes the origin LV, if not save the VM)
> 4) with both origin and snapshot not active, zero COW
> 5) remove snapshot
> 6) unpause VM (or restore if saved)
>
> --
> Stuart D. Gathman <stuart@bmsi.com>
> Business Management Systems Inc. Phone: 703 591-0911 Fax: 703
> 591-6154
> "Confutatis maledictis, flammis acribus addictis" - background song for
> a Microsoft sponsored "Where do you want to go from here?" commercial.
>
Yeah, I'll have to try this on a non-production server. Which kernel do
you expect to crash? The Dom0 (Xen + LVM host) or the VM? Anyway, just
thinking about it, it seems that pausing/saving/shutting down the VM is
a must, as the VM may be writing to disk at the time of zeroing the cow
(!!).
In the hosting industry, what does everyone else do? Do they just ignore
the issue???
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 20:48 ` Jonathan Tripathy
@ 2011-04-05 20:59 ` James Hawtin
2011-04-05 21:36 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-05 20:59 UTC (permalink / raw)
To: linux-lvm
On 05/04/2011 20:48, Jonathan Tripathy wrote:
>
> Yeah, I'll have to try this on a non-production server. Which kernel
> do you expect to crash? The Dom0 (Xen + LVM host) or the VM? Anyway,
> just thinking about it, it seems that pausing/saving/shutting down the
> VM is a must, as the VM may be writing to disk at the time of zeroing
> the cow (!!).
>
> In the hosting industry, what does everyone else do? Do they just
> ignore the issue???
>
As an alternative, could you not:
1) Create snapshot lv using specific physical extents.
lvcreate -s -l 100 ..... /dev/sdx1:0-99
2) Backup VM
3) Delete snapshot lv
4) create a normal lv using same physical extents
lvcreate -l 100 ..... /dev/sdx1:0-99
5) zero normal lv
6) delete normal lv
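Spelled out as a hedged command sequence (the LV names, sizes, and PE range
are just illustrative, matching the lvcreate lines above):

lvcreate -s -l 100 -n backup_snap /dev/vg0/customerID /dev/sdx1:0-99   # 1) snapshot on fixed PEs
<back up the VM from /dev/vg0/backup_snap>                             # 2)
lvremove -f /dev/vg0/backup_snap                                       # 3)
lvcreate -l 100 -n wipe_lv vg0 /dev/sdx1:0-99                          # 4) normal LV on the same PEs
dd if=/dev/zero of=/dev/vg0/wipe_lv bs=1M                              # 5) zero it
lvremove -f /dev/vg0/wipe_lv                                           # 6) delete it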
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 20:59 ` James Hawtin
@ 2011-04-05 21:36 ` Jonathan Tripathy
2011-04-05 22:42 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-05 21:36 UTC (permalink / raw)
To: linux-lvm
On 05/04/2011 21:59, James Hawtin wrote:
> On 05/04/2011 20:48, Jonathan Tripathy wrote:
>>
>> Yeah, I'll have to try this on a non-production server. Which kernel
>> do you expect to crash? The Dom0 (Xen + LVM host) or the VM? Anyway,
>> just thinking about it, it seems that pausing/saving/shutting down
>> the VM is a must, as the VM may be writing to disk at the time of
>> zeroing the cow (!!).
>>
>> In the hosting industry, what does everyone else do? Do they just
>> ignore the issue???
>>
>
> As an alternative, could you not:
>
> 1) Create snapshot lv using specific physical extents.
> lvcreate -s -l 100 ..... /dev/sdx1:0-99
> 2) Backup VM
> 3) Delete snapshot lv
> 4) create a normal lv using same physical extents
> lvcreate -l 100 ..... /dev/sdx1:0-99
> 5) zero normal lv
> 6) delete normal lv
>
> James
Hi James,
Interesting, didn't know you could do that! However, how do I know that
the PEs aren't being used by LVs? Also, could you please explain the
syntax? Normally to create a snapshot, I would do:
lvcreate -L20G -s -n backup /dev/vg0/customerID
Thanks
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 21:36 ` Jonathan Tripathy
@ 2011-04-05 22:42 ` James Hawtin
2011-04-05 22:52 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-05 22:42 UTC (permalink / raw)
To: linux-lvm
On 05/04/2011 21:36, Jonathan Tripathy wrote:
> Hi James,
>
> Interesting, didn't know you could do that! However, how do I know
> that the PEs aren't being used by LVs? Also, could you please explain
> the syntax? Normally to create a snapshot, I would do:
>
> lvcreate -L20G -s -n backup /dev/vg0/customerID
>
Hmmm, well, you have two options: you could use pvdisplay --map or
lvdisplay --map to work out exactly which PEs have been used to build
your snapshot COW, and then use that information to create a
blanking LV in the same place, or you could do it the easy way:
1 hog the space to specific PEs
2 delete the hog
3 create the snapshot on same PEs
4 backup
5 delete the snapshot
6 create the hog on the same PEs
7 zero the hog
This has the advantage that the creation commands will fail if the PEs
you want are not available; the problem with it is that you probably need
more space for snapshots, as it's less flexible in its use of space. Below
I have illustrated all the commands you need to do this. You don't need
all the display commands, but they are there to prove to you that this has
worked and that the LVs are in the same place.
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4332
Allocated PE 1136
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/test_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5467:
FREE
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/test_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/hog_lv
Logical extents 0 to 19
#lvremove /dev/test_vg/hog_lv
Do you really want to remove active logical volume hog_lv? [y/n]: y
Logical volume "hog_lv" successfully removed
#lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv
/dev/cciss/c0d1p1:5448-5467
Logical volume "data_snap" created
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/restricted_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/data_snap
Logical extents 0 to 19
#lvdisplay /dev/test_vg/data_snap
--- Logical volume ---
LV Name /dev/test_vg/data_snap
VG Name test_vg
LV UUID bdqB77-f0vb-ZucS-Ka1l-pCr3-Ebeq-kOchmk
LV Write Access read/write
LV snapshot status active destination for /dev/test_vg/data_lv
LV Status available
# open 0
LV Size 30.00 GB
Current LE 240
COW-table size 2.50 GB
COW-table LE 20
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
#lvdisplay --map /dev/test_vg/data_snap
--- Logical volume ---
LV Name /dev/test_vg/data_snap
VG Name test_vg
LV UUID IBBvOq-Bg0U-c69v-p7fQ-tR63-T8UV-gM1Ncu
LV Write Access read/write
LV snapshot status active destination for /dev/test_vg/data_lv
LV Status available
# open 0
LV Size 30.00 GB
Current LE 240
COW-table size 2.50 GB
COW-table LE 20
Allocated to snapshot 0.00%
Snapshot chunk size 4.00 KB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Segments ---
Logical extent 0 to 19:
Type linear
Physical volume /dev/cciss/c0d1p1
Physical extents 5448 to 5467
<NOW BACKUP>
#lvremove /dev/test_vg/data_snap
Do you really want to remove active logical volume data_snap? [y/n]: y
Logical volume "data_snap" successfully removed
#lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
Logical volume "hog_lv" created
#pvdisplay --map /dev/cciss/c0d1p1
--- Physical volume ---
PV Name /dev/cciss/c0d1p1
VG Name test_vg
PV Size 683.51 GB / not usable 5.97 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 5468
Free PE 4312
Allocated PE 1156
PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
--- Physical Segments ---
Physical extent 0 to 15:
Logical volume /dev/test_vg/restricted_lv
Logical extents 0 to 15
Physical extent 16 to 815:
Logical volume /dev/test_vg/mail_lv
Logical extents 0 to 799
Physical extent 816 to 975:
Logical volume /dev/test_vg/data_lv
Logical extents 0 to 159
Physical extent 976 to 2255:
FREE
Physical extent 2256 to 2335:
Logical volume /dev/test_vg/srv_lv
Logical extents 0 to 79
Physical extent 2336 to 2415:
Logical volume /dev/test_vg/data_lv
Logical extents 160 to 239
Physical extent 2416 to 5447:
FREE
Physical extent 5448 to 5467:
Logical volume /dev/test_vg/hog_lv
Logical extents 0 to 19
#dd if=/dev/zero of=/dev/test_vg/hog_lv
#lvremove /dev/test_vg/hog_lv
Do you really want to remove active logical volume hog_lv? [y/n]: y
Logical volume "hog_lv" successfully removed
Enjoy
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 22:42 ` James Hawtin
@ 2011-04-05 22:52 ` Jonathan Tripathy
2011-04-05 23:11 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-05 22:52 UTC (permalink / raw)
To: linux-lvm
On 05/04/2011 23:42, James Hawtin wrote:
> On 05/04/2011 21:36, Jonathan Tripathy wrote:
>> Hi James,
>>
>> Interesting, didn't know you could do that! However, how do I know
>> that the PEs aren't being used by LVs? Also, could you please explain
>> the syntax? Normally to create a snapshot, I would do:
>>
>> lvcreate -L20G -s -n backup /dev/vg0/customerID
>>
>
> Hmmm, well, you have two options: you could use pvdisplay --map or
> lvdisplay --map to work out exactly which PEs have been used to build
> your snapshot COW, and then use that information to create
> a blanking LV in the same place, or you could do it the easy way:
>
> 1 hog the space to specific PEs
> 2 delete the hog
> 3 create the snapshot on same PEs
> 4 backup
> 5 delete the snapshot
> 6 create the hog on the same PEs
> 7 zero the hog
>
> This has the advantage that the creation commands will fail if the PEs
> you want are not available; the problem with it is that you probably need
> more space for snapshots, as it's less flexible in its use of space. Below
> I have illustrated all the commands you need to do this. You don't need
> all the display commands, but they are there to prove to you that this has
> worked and that the LVs are in the same place.
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4332
> Allocated PE 1136
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5467:
> FREE
>
> #lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/hog_lv
> Logical extents 0 to 19
>
> #lvremove /dev/test_vg/hog_lv
> Do you really want to remove active logical volume hog_lv? [y/n]: y
> Logical volume "hog_lv" successfully removed
> #lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv
> /dev/cciss/c0d1p1:5448-5467
> Logical volume "data_snap" created
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/restricted_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/data_snap
> Logical extents 0 to 19
>
>
> #lvdisplay /dev/test_vg/data_snap
> --- Logical volume ---
> LV Name /dev/test_vg/data_snap
> VG Name test_vg
> LV UUID bdqB77-f0vb-ZucS-Ka1l-pCr3-Ebeq-kOchmk
> LV Write Access read/write
> LV snapshot status active destination for /dev/test_vg/data_lv
> LV Status available
> # open 0
> LV Size 30.00 GB
> Current LE 240
> COW-table size 2.50 GB
> COW-table LE 20
> Allocated to snapshot 0.00%
> Snapshot chunk size 4.00 KB
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:5
>
> #lvdisplay --map /dev/test_vg/data_snap
> --- Logical volume ---
> LV Name /dev/test_vg/data_snap
> VG Name test_vg
> LV UUID IBBvOq-Bg0U-c69v-p7fQ-tR63-T8UV-gM1Ncu
> LV Write Access read/write
> LV snapshot status active destination for /dev/test_vg/data_lv
> LV Status available
> # open 0
> LV Size 30.00 GB
> Current LE 240
> COW-table size 2.50 GB
> COW-table LE 20
> Allocated to snapshot 0.00%
> Snapshot chunk size 4.00 KB
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:5
>
> --- Segments ---
> Logical extent 0 to 19:
> Type linear
> Physical volume /dev/cciss/c0d1p1
> Physical extents 5448 to 5467
>
> <NOW BACKUP>
>
> #lvremove /dev/test_vg/data_snap
> Do you really want to remove active logical volume data_snap? [y/n]: y
> Logical volume "data_snap" successfully removed
>
> #lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
> Logical volume "hog_lv" created
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/restricted_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/hog_lv
> Logical extents 0 to 19
>
> #dd if=/dev/zero of=/dev/test_vg/hog_lv
>
> #lvremove /dev/test_vg/hog_lv
> Do you really want to remove active logical volume hog_lv? [y/n]: y
> Logical volume "hog_lv" successfully removed
>
> Enjoy
>
> James
>
James,
That's fantastic! Thanks very much! I have a couple of questions:
1) If I wanted to create a script that backed up lots of customer-data
LVs, could I just do one zero at the end (and still have no data leakage)?
2) On average, my data LVs are 20GB each, and if I were to
create a snapshot of 20GB, this would take about 20 mins to erase. If I
made the snapshot only 1GB, that means it would be quick to erase at the
end (however, only 1GB of data could be created on the respective origin,
correct?)
Thanks
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 22:52 ` Jonathan Tripathy
@ 2011-04-05 23:11 ` James Hawtin
2011-04-05 23:19 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-05 23:11 UTC (permalink / raw)
To: linux-lvm
> James,
>
> That's fantastic! Thanks very much! I have a couple of questions:
>
> 1) If I wanted to create a script that backed up lots of customer-data
> LVs, could I just do one zero at the end (and still have no data
> leakage)?
Yes you could, because COW means COPY ON WRITE, so the original block is
copied onto the COW, with the data from the original disk overwriting
any data currently on it. Before that point, any data on it was not
addressable from the snapshot LV (* see my final point).
> 2) On average, my data LVs are 20GB each, and if I were to
> create a snapshot of 20GB, this would take about 20 mins to erase. If
> I made the snapshot only 1GB, that means it would be quick to erase at
> the end (however, only 1GB of data could be created on the respective
> origin, correct?)
You are right, you only have to erase the snapshot COW space, which is
normally only 10-15% of the whole original disk. 2GB is pretty fast to
overwrite on any system I have used these days. To be sure, though, you
do need to overwrite the whole COW even if only a few percent was used,
as you cannot tell which few percent that was.
* I do wonder why you are so worried; leakage is only a problem if the
COW is assigned to a future customer LV. If you always used the same
space for backups, perhaps had a PV just for backups, it would never be
used in a customer LV, therefore you could argue that you never have to
erase it. If it's on a PV you only use for snapshotting, you also don't
need to hog the space, as any bit of that disk is OK.
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 23:11 ` James Hawtin
@ 2011-04-05 23:19 ` Jonathan Tripathy
2011-04-05 23:39 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-05 23:19 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 00:11, James Hawtin wrote:
>
>> James,
>>
>> That's fantastic! Thanks very much! I have a couple of questions:
>>
>> 1) If I wanted to create a script that backed up lots of
>> customer-data LVs, could I just do one zero at the end (and still
>> have no data leakage)?
>
> Yes you could, because COW means COPY ON WRITE, so the original block
> is copied onto the COW, with the data from the original disk
> overwriting any data currently on it. Before that point, any data on
> it was not addressable from the snapshot LV (* see my final point).
>> 2) On average, my data LVs are 20GB each, and if I were to
>> create a snapshot of 20GB, this would take about 20 mins to erase. If
>> I made the snapshot only 1GB, that means it would be quick to erase
>> at the end (however, only 1GB of data could be created on the respective
>> origin, correct?)
>
> You are right, you only have to erase the snapshot COW space, which is
> normally only 10-15% of the whole original disk. 2GB is pretty fast to
> overwrite on any system I have used these days. To be sure, though, you
> do need to overwrite the whole COW even if only a few percent was used,
> as you cannot tell which few percent that was.
Actually, I meant just making a snapshot of 1GB, not just erasing the
first 1GB of a 20GB snapshot. But this may be moot (see below).
>
> * I do wonder why you are so worried; leakage is only a problem if the
> COW is assigned to a future customer LV. If you always used the same
> space for backups, perhaps had a PV just for backups, it would never be
> used in a customer LV, therefore you could argue that you never have to
> erase it. If it's on a PV you only use for snapshotting, you also don't
> need to hog the space, as any bit of that disk is OK.
Excellent point! As long as I use the same PEs for making the snapshot
every time, I don't need to ever erase it (And it can be a nice big size
like 50GB, so even my largest customers won't outgrow the snapshot).
However though, wouldn't I need to keep the "hog" around just to make
sure that the snapshot PEs don't get assigned to a new customer LV in
the future (Currently, we don't specify PEs to use when creating normal
LVs)?
An even better question: Does the snapshot have to be on the same
physical disk as the LV it's mirroring?
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 23:19 ` Jonathan Tripathy
@ 2011-04-05 23:39 ` James Hawtin
2011-04-06 0:00 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-05 23:39 UTC (permalink / raw)
To: linux-lvm
On 05/04/2011 23:19, Jonathan Tripathy wrote:
> Excellent point! As long as I use the same PEs for making the snapshot
> every time, I don't need to ever erase it (And it can be a nice big
> size like 50GB, so even my largest customers won't outgrow the
> snapshot). However though, wouldn't I need to keep the "hog" around
> just to make sure that the snapshot PEs don't get assigned to a new
> customer LV in the future (Currently, we don't specify PEs to use when
> creating normal LVs)?
>
I think you missed the point of why I suggested using a separate PV; the
space could be divided using fdisk, it does not have to be a separate physical
disk (this is OK as you will never use this space for mirroring). If
snapshots are created on a separate PV, you can use pvchange -x n and
pvchange -x y to change whether it is allocatable, and only make it
allocatable when you are creating snapshots; that will prevent accidental
reuse in customer LVs without lots of hassle. If you don't use pvchange,
you will need to specify the PVs whenever you create a customer LV.
# an lvcreate specifying the PV to use:
lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv /dev/cciss/c0d1p2
Also, this allows you to use a 50 GB area or 4 x 10 GB ones, depending on
what you are backing up, without having to worry about creating a hog
over the top of anything you haven't used. This gives you more
flexibility with simultaneous backups.
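As a sketch, the toggle would look something like this (the PV, VG and LV
names are the illustrative ones from the example above):

pvchange -x n /dev/cciss/c0d1p2    # normal state: backup PV not allocatable, so customer LVs never land on it
pvchange -x y /dev/cciss/c0d1p2    # open it up just before taking the snapshot
lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv /dev/cciss/c0d1p2
pvchange -x n /dev/cciss/c0d1p2    # lock it again
<backup from /dev/test_vg/data_snap>
lvremove -f /dev/test_vg/data_snap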
> An even better question: Does the snapshot have to be on the same
> physical disk as the LV it's mirroring?
>
No it does not.
Do I get cake now?
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-05 23:39 ` James Hawtin
@ 2011-04-06 0:00 ` Jonathan Tripathy
2011-04-06 0:08 ` Stuart D. Gathman
2011-04-06 0:16 ` James Hawtin
0 siblings, 2 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 0:00 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 00:39, James Hawtin wrote:
> On 05/04/2011 23:19, Jonathan Tripathy wrote:
>> Excellent point! As long as I use the same PEs for making the
>> snapshot every time, I don't need to ever erase it (And it can be a
>> nice big size like 50GB, so even my largest customers won't outgrow
>> the snapshot). However though, wouldn't I need to keep the "hog"
>> around just to make sure that the snapshot PEs don't get assigned to
>> a new customer LV in the future (Currently, we don't specify PEs to
>> use when creating normal LVs)?
>>
>
> I think you missed the point of why I suggested using a separate PV;
> the space could be divided using fdisk, it does not have to be a separate
> physical disk (this is OK as you will never use this space for
> mirroring). If snapshots are created on a separate PV, you can use
> pvchange -x n and pvchange -x y to change whether it is allocatable, and
> only make it allocatable when you are creating snapshots; that will
> prevent accidental reuse in customer LVs without lots of hassle. If
> you don't use pvchange, you will need to specify the PVs whenever you
> create a customer LV.
Ok now things are really getting interesting!
Actually, when I create new customer LVs, I always specify which volume
group I want to add it to. E.g:
lvcreate -nNewCustomerLV -L20G vg0
where vg0 is /dev/vg0
and vg0 is a volume group which uses an entire physical partition (Which
I guess is called a PV).
Now, if I were to create my snapshots on a separate VG, e.g.:
lvcreate -L 20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/vg1
does that mean I never need to use pvchange to "switch off" vg1? And I
never need to zero or create a "hog"? And no leakage will ever occur?
>
>
> Do I get cake now?
Only if it's not a lie.... :)
(Just in case you didn't get the reference:
http://en.wikipedia.org/wiki/The_cake_is_a_lie)
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:00 ` Jonathan Tripathy
@ 2011-04-06 0:08 ` Stuart D. Gathman
2011-04-06 0:14 ` Jonathan Tripathy
2011-04-06 0:16 ` James Hawtin
1 sibling, 1 reply; 68+ messages in thread
From: Stuart D. Gathman @ 2011-04-06 0:08 UTC (permalink / raw)
To: LVM general discussion and development
On Wed, 6 Apr 2011, Jonathan Tripathy wrote:
> lvcreate -nNewCustomerLV -L20G vg0
>
> where vg0 is /dev/vg0
> and vg0 is a volume group which uses an entire physical partition (Which I
> guess is called a PV).
>
> Now, if I were to create my snapshots on a separate VG, e.g.:
You can't do that. Snapshots must be on the same VG. James is suggesting you
create them on the same VG, but a different PV reserved for backup snapshots.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:08 ` Stuart D. Gathman
@ 2011-04-06 0:14 ` Jonathan Tripathy
0 siblings, 0 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 0:14 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 01:08, Stuart D. Gathman wrote:
> On Wed, 6 Apr 2011, Jonathan Tripathy wrote:
>
>> lvcreate -nNewCustomerLV -L20G vg0
>>
>> where vg0 is /dev/vg0
>> and vg0 is a volume group which uses an entire physical partition
>> (Which I guess is called a PV).
>>
>> Now, if I were to create my snapshots on a separate VG, e.g.:
>
> You can't do that. Snapshots must be on the same VG. James is
> suggesting you
> create them on the same VG, but a different PV reserved for backup
> snapshots.
Guess I need to go back to LVM basics. I had a "hierarchical" structure
in my head (which must be wrong):
An LV sits on top of a VG, which sits on top of a PV.
But going by your comment above, you can add multiple PVs (Physical
partitions) to a single VG and create some form of a JBOD configuration?
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:00 ` Jonathan Tripathy
2011-04-06 0:08 ` Stuart D. Gathman
@ 2011-04-06 0:16 ` James Hawtin
2011-04-06 0:28 ` Jonathan Tripathy
1 sibling, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-06 0:16 UTC (permalink / raw)
To: LVM general discussion and development
On 06/04/2011 00:00, Jonathan Tripathy wrote:
> Ok now things are really getting interesting!
>
> Actually, when I create new customer LVs, I always specify which
> volume group I want to add it to. E.g:
>
> lvcreate -nNewCustomerLV -L20G vg0
Sadly that is just normal... However, a volume group can be made up of
more than one disk, and you can, with an option, specify which of the
disks (PVs) to use, rather than having the system select for you.
> where vg0 is /dev/vg0
> and vg0 is a volume group which uses an entire physical partition
> (Which I guess is called a PV).
That is a vg (volume group) not a pv (physical volume).
vgdisplay -v
pvs
pvdisplay --map
and you should get the idea.
>
> Now, if I were to create my snapshots on a separate VG, e.g.:
>
> lvcreate -L 20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/vg1
>
> does that mean I never need to use pvchange to "switch off" vg1? And I
> never need to zero or create a "hog"? And no leakage will ever occur?
No, you are confused: PVs are created on disk partitions, and one or more
PVs then make up your VG.
LVs are created on top of VGs, VGs are created on top of PVs, and PVs are
created on partitions of block devices or on whole block devices.
(OK, we shall stop here; however, block devices could be loopback, meta
devices like RAID 0/1/5 etc., hardware/software, or real disks, but that
is not LVM any more.)
>
>> Do I get cake now?
> Only if it's not a lie.... :)
I saw the cake but I got none (Portal)
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:16 ` James Hawtin
@ 2011-04-06 0:28 ` Jonathan Tripathy
2011-04-06 0:38 ` Stuart D. Gathman
2011-04-06 0:42 ` James Hawtin
0 siblings, 2 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 0:28 UTC (permalink / raw)
To: linux-lvm
> No, you are confused: PVs are created on disk partitions, and one or more
> PVs then make up your VG.
>
> LVs are created on top of VGs, VGs are created on top of PVs, and PVs are
> created on partitions of block devices or on whole block devices.
>
> (OK, we shall stop here; however, block devices could be loopback, meta
> devices like RAID 0/1/5 etc., hardware/software, or real disks, but that
> is not LVM any more.)
Ok, I think I get it now. At the minute, my VG (vg0) only has one PV in
it (/dev/md3, which you can tell is an mdadm RAID device). I wasn't aware
you could add more PVs (that's pretty cool!). So, let's say I had a
spare partition (/dev/hdb7 as an example). To my vg0 volume group, I
would first do:
pvcreate /dev/hdb7
vgextend /dev/hdb7
Then, every time I create a new customer LV, I would do:
lvcreate -nNewCustomerLV -L20G vg0 /dev/md3
Then, every time I wanted to create a snapshot:
lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
Is that correct? No Leakage? And no zeroing needed?
Side note: Since I didn't partition my servers with this in mind, my new
PV will probably have to be an iSCSI device located on a remote target
:( Either that or use a loopback device with an image, but I'd be scared
that the system would not boot properly. Can you give me any tips on how
to use an image file as a PV just for snapshots?
Thanks
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:28 ` Jonathan Tripathy
@ 2011-04-06 0:38 ` Stuart D. Gathman
2011-04-06 0:43 ` Stuart D. Gathman
2011-04-06 0:47 ` Jonathan Tripathy
2011-04-06 0:42 ` James Hawtin
1 sibling, 2 replies; 68+ messages in thread
From: Stuart D. Gathman @ 2011-04-06 0:38 UTC (permalink / raw)
To: LVM general discussion and development
On Wed, 6 Apr 2011, Jonathan Tripathy wrote:
> pvcreate /dev/hdb7
> vgextend /dev/hdb7
>
> Then, every time I create a new customer LV, I would do:
>
> lvcreate -nNewCustomerLV -L20G vg0 /dev/md3
>
> Then, every time I wanted to create a snapshot:
>
> lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
>
> Is that correct? No Leakage? And no zeroing needed?
Correct. Except vgextend needs to specify a VG. And you still need to zero a
customer LV before removing it. The PV reserved for snapshots approach only
avoids the need to zero snapshots.
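For the record, the corrected form of those two commands, keeping the same
illustrative names, would be roughly:

pvcreate /dev/hdb7
vgextend vg0 /dev/hdb7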
I wonder if there is an elegant way to automate things like this via
"allocation policies" or similar in the metadata.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:38 ` Stuart D. Gathman
@ 2011-04-06 0:43 ` Stuart D. Gathman
2011-04-06 1:36 ` James Hawtin
2011-04-06 0:47 ` Jonathan Tripathy
1 sibling, 1 reply; 68+ messages in thread
From: Stuart D. Gathman @ 2011-04-06 0:43 UTC (permalink / raw)
To: LVM general discussion and development
On Tue, 5 Apr 2011, Stuart D. Gathman wrote:
>> lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
>>
>> Is that correct? No Leakage? And no zeroing needed?
>
> Correct. Except vgextend needs to specify a VG. And you still need to zero
> a customer LV before removing it. The PV reserved for snapshots approach
> only avoids the need to zero snapshots.
In fact, since you still have to zero an LV before removing, you might as well
zero an LV when you allocate. Same overhead. Then you don't need to mess
with any of these schemes to deal with snapshots. Did we already cover
that obvious solution? We did discuss a utility to optimize zeroing an
already mostly zero LV.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:43 ` Stuart D. Gathman
@ 2011-04-06 1:36 ` James Hawtin
2011-04-06 1:47 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-06 1:36 UTC (permalink / raw)
To: LVM general discussion and development
On 06/04/2011 00:43, Stuart D. Gathman wrote:
> In fact, since you still have to zero an LV before removing, you might
> as well
> zero an LV when you allocate. Same overhead. Then you don't need to
> mess
> with any of these schemes to deal with snapshots. Did we already cover
> that obvious solution? We did discuss a utility to optimize zeroing an
> already mostly zero LV.
>
I think you mean an LV of a virtual machine rather than a snapshot; I'm not
sure how you could zero the COW. Stuart does have a good point here:
there is another source of leak outside of snapshots (and probably
worse). If you delete an LV then create a new one over the same PEs, the
data from the old LV will be there, save a few kB that are zeroed at the
front (to help with fs/label creation). This is worse, as it is in-order
data, which the COW from a snapshot is not. Getting a whole fs back is very
possible.
Try this test
lvcreate -L 256m -n test_lv osiris_l1_vg
yes | dd of=/dev/osiris_l1_vg/test_lv
pvdisplay --map
lvremove /dev/osiris_l1_vg/test_lv
lvcreate -L 256m -n test_lv osiris_l1_vg
pvdisplay --map
od -ha /dev/osiris_l1_vg/test_lv
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 1:36 ` James Hawtin
@ 2011-04-06 1:47 ` Jonathan Tripathy
2011-04-06 1:53 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 1:47 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 02:36, James Hawtin wrote:
> On 06/04/2011 00:43, Stuart D. Gathman wrote:
>> In fact, since you still have to zero an LV before removing, you
>> might as well
>> zero an LV when you allocate. Same overhead. Then you don't need to
>> mess
>> with any of these schemes to deal with snapshots. Did we already cover
>> that obvious solution? We did discuss a utility to optimize zeroing an
>> already mostly zero LV.
>>
>
> I think you mean an LV of a virtual machine rather than a snapshot;
> I'm not sure how you could zero the COW. Stuart does have a good point
> here: there is another source of leakage outside of snapshots (and
> probably a worse one). If you delete an LV and then create a new one
> over the same PEs, the data from the old LV will still be there, save
> for the few KB that are zeroed at the front (to help with fs/label
> creation). This is worse because it is in-order data, which the COW
> from a snapshot is not. Getting a whole filesystem back is very
> possible.
Currently, as part of our procedure, we zero LVs once a customer has
left our service. The reason we don't zero on allocation is that
customers usually expect quick setup times (under 10 minutes), and
zeroing gigabytes worth of space can take too long. Getting new zeroed
LVs ready before sign-ups also isn't an option, but for other reasons.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 1:47 ` Jonathan Tripathy
@ 2011-04-06 1:53 ` James Hawtin
0 siblings, 0 replies; 68+ messages in thread
From: James Hawtin @ 2011-04-06 1:53 UTC (permalink / raw)
To: LVM general discussion and development
On 06/04/2011 01:47, Jonathan Tripathy wrote:
>
> Currently, as part of our procedure, we zero LVs once a customer has
> left our service. The reason we don't zero on allocation is that
> customers usually expect quick setup times (under 10 minutes), and
> zeroing gigabytes worth of space can take too long. Getting new zeroed
> LVs ready before sign-ups also isn't an option, but for other reasons.
Very reasonable.
It is a shame, however, that LVM cannot zero PEs (physical extents) on
first access or do a proper background initialisation, as would happen
on hardware RAID.
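In the meantime, about the closest you can get is to run the
post-customer zeroing at idle I/O priority so it behaves a little like
a background initialisation. A sketch (the LV name is only an example,
and ionice only has an effect with the CFQ scheduler):
# zero the freed LV using otherwise-idle disk bandwidth, then remove it
ionice -c3 dd if=/dev/zero of=/dev/vg0/OldCustomerLV bs=1M oflag=direct
lvremove -f /dev/vg0/OldCustomerLV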
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:38 ` Stuart D. Gathman
2011-04-06 0:43 ` Stuart D. Gathman
@ 2011-04-06 0:47 ` Jonathan Tripathy
1 sibling, 0 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 0:47 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 01:38, Stuart D. Gathman wrote:
> On Wed, 6 Apr 2011, Jonathan Tripathy wrote:
>
>> pvcreate /dev/hdb7
>> vgextend /dev/hdb7
>>
>> Then, every time I create a new customer LV, I would do:
>>
>> lvcreate -nNewCustomerLV -L20G vg0 /dev/md3
>>
>> Then, every time I wanted to create a snapshot:
>>
>> lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
>>
>> Is that correct? No Leakage? And no zeroing needed?
>
> Correct. Except vgextend needs to specify a VG. And you still need
> to zero a
> customer LV before removing it. The PV reserved for snapshots
> approach only
> avoids the need to zero snapshots.
Excellent! Cake for all! :)
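So, putting the corrections together, the whole sequence would
presumably be (same example names as before):
pvcreate /dev/hdb7
vgextend vg0 /dev/hdb7    # vgextend takes the VG name as well as the PV
lvcreate -nNewCustomerLV -L20G vg0 /dev/md3    # customer LV stays on the main PV
lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7    # snapshot COW on the reserved PV
# ...and the customer LV itself still gets zeroed before lvremove.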
>
> I wonder if there is an elegant way to automate things like this via
> "allocation policies" or similar in the metadata.
Indeed, there does seem to be some scope here for a feature request, as
this does seem like an awful lot of work for something that could very
easily be overlooked. Every book or article I've read about hosting
never seems to mention anything about zeroing....
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:28 ` Jonathan Tripathy
2011-04-06 0:38 ` Stuart D. Gathman
@ 2011-04-06 0:42 ` James Hawtin
2011-04-06 0:50 ` Jonathan Tripathy
1 sibling, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-06 0:42 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 00:28, Jonathan Tripathy wrote:
> Ok, I think I get it now. At the minute, my vg (vg0) only has one PV in
> it (/dev/md3, which you can tell is an mdadm RAID device). I wasn't
> aware you could add more PVs (that's pretty cool!). So, let's say I
> had a spare partition (/dev/hdb7 as an example). To my vg0 volume
> group, I would firstly:
>
> pvcreate /dev/hdb7
> vgextend /dev/hdb7
Right, however danger warnings are going off in my mind now!
> Then, every time I create a new customer LV, I would do:
>
> lvcreate -nNewCustomerLV -L20G vg0 /dev/md3
Yes, that would work.
>
> Then, every time I wanted to create a snapshot:
>
> lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
>
Yes
> Is that correct? No Leakage? And no zeroing needed?
Indeed
>
> Side note: Since I didn't partition my servers with this in mind, my
> new PV will probably have to be an iSCSI device located on a remote
> target :( Either that or use a loopback device with an image, but I'd
> be scared that the system would not boot properly. Can you give me any
> tips on how to use an image file as a PV just for snapshots?
Ok, there has been a lot of dangerous talk here. I assume you are using
an md device so you can mirror things. If you added a single disk to
that VG, and that disk failed, you would have a major problem. Likewise,
if you rebooted with open snapshots on iSCSI, you would need that iSCSI
device available during boot. I REALLY hope you do not have your local
FS on the same VG as your data, as this would result in a non-booting
machine.
BTW, never use pvmove on /var, as it stores data there and will freeze
the whole system.
All hope is not lost. If you can add a disk temporarily, you can do the
following (a command sketch follows the list):
1) add the new disk as a PV to the existing VG (vgextend)
2) move the data LVs to the new PV (pvmove)
3) remove the old disk (vgreduce)
a) check with pvscan that the old disk really is not in use
4) resize the partition (fdisk)
5) create a new PV (pvcreate --force) - you need that to overwrite...
take care now.
6) add the old disk back in (vgextend)
7) move the data LVs back to the old disk (pvmove)
8) remove the temp disk (vgreduce)
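Roughly, with placeholder device names (the temporary disk as /dev/sdc1
and the original PV as /dev/sda5):
pvcreate /dev/sdc1            # temporary disk
vgextend vg0 /dev/sdc1        # 1) add it to the existing VG
pvmove /dev/sda5 /dev/sdc1    # 2) migrate all extents off the old PV
vgreduce vg0 /dev/sda5        # 3) drop the old PV from the VG
pvscan                        # 3a) confirm it really is unused
fdisk /dev/sda                # 4) shrink sda5 / carve out a snapshot partition
pvcreate --force /dev/sda5    # 5) relabel the resized partition (destructive)
vgextend vg0 /dev/sda5        # 6) add it back into the VG
pvmove /dev/sdc1 /dev/sda5    # 7) move the data LVs back
vgreduce vg0 /dev/sdc1        # 8) remove the temporary disk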
Now that is worth cake!
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:42 ` James Hawtin
@ 2011-04-06 0:50 ` Jonathan Tripathy
2011-04-06 1:20 ` James Hawtin
0 siblings, 1 reply; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 0:50 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 01:42, James Hawtin wrote:
> On 06/04/2011 00:28, Jonathan Tripathy wrote:
>> Ok, I think I get it now. At the minute, my vg (vg0) only has one PV
>> in it (/dev/md3, which you can tell is an mdadm RAID device). I wasn't
>> aware you could add more PVs (that's pretty cool!). So, let's say I
>> had a spare partition (/dev/hdb7 as an example). To my vg0 volume
>> group, I would firstly:
>>
>> pvcreate /dev/hdb7
>> vgextend /dev/hdb7
>
> Right however danger warnings are going off in my mind now!
>
>> Then, every time I create a new customer LV, I would do:
>>
>> lvcreate -nNewCustomerLV -L20G vg0 /dev/md3
>
> yes that would work
>
>>
>> Then, every time I wanted to create a snapshot:
>>
>> lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/hdb7
>>
>
> Yes
>
>> Is that correct? No Leakage? And no zeroing needed?
> Indeed
>
>>
>> Side note: Since I didn't partition my servers with this in mind, my
>> new PV will probably have to be an iSCSI device located on a remote
>> target :( Either that or use a loopback device with an image, but I'd
>> be scared that the system would not boot properly. Can you give me
>> any tips on how to use an image file as a PV just for snapshots?
>
> Ok, there has been a lot of dangerous talk here. I assume you are using
> an md device so you can mirror things.
Correct
> If you added a single disk to that VG, and that disk failed, you would
> have a major problem. Likewise, if you rebooted with open snapshots on
> iSCSI, you would need that iSCSI device available during boot.
> I REALLY hope you do not have your local FS on the same VG as your
> data, as this would result in a non-booting machine.
Nope. My root partition is not on LVM at all, but just on a regular md
partition. The LVM LVs are used for virtual machine hosting for
customers. Also, I used just /dev/hdb7 as an example, but it does bring
up some interesting questions:
If the PV used for snapshots were to fail while the snapshot was open,
or the server rebooted and the PV wasn't available at boot, what would
happen? I ask these questions because a loopback device or iSCSI is
really my only feasible option right now for the snapshot PV...
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 0:50 ` Jonathan Tripathy
@ 2011-04-06 1:20 ` James Hawtin
2011-04-06 1:45 ` Jonathan Tripathy
0 siblings, 1 reply; 68+ messages in thread
From: James Hawtin @ 2011-04-06 1:20 UTC (permalink / raw)
To: LVM general discussion and development
On 06/04/2011 00:50, Jonathan Tripathy wrote:
>
> If the PV used for snapshots were to fail while the snapshot was open,
> or the server rebooted and the PV wasn't available at boot, what would
> happen? I ask these questions because a loopback device or iSCSI is
> really my only feasible option right now for the snapshot PV...
What would happen is... if the filesystems are mounted at boot time (in
the fstab), the boot will fail the fsck because the device is not there
and drop to single-user mode. You could then edit the fstab to remove
those filesystems, which would bring the system online; at that point
you could fix whatever stopped the iSCSI from working and mount the
filesystems.
At one place I worked they never mounted the data filesystems at "boot"
but from rc.local, so the system always came up interactive before any
problem appeared and it was easy to go in and fix.
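One way to get the same effect is simply to keep those filesystems out
of the boot-time mount pass and mount them later. An illustrative fstab
line (device and mountpoint are made up):
/dev/vg_data/backup_lv  /srv/backup  ext3  defaults,noauto  0 0
# then in /etc/rc.local, once the iSCSI/loopback device is available:
mount /srv/backup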
DO NOT... create the loopback file on a filesystem that lives inside the
same VG the loopback will then form a PV of. If you do, your system is
DOOOMMMED! To get it to boot again you would have to mount a partial
volume group and block-copy the devices to a new one; even worse, if you
extended the filesystem holding the loopback file onto the PV backed by
that loopback, it will NEVER work again. So the only place you can
create a loopback file is outside the VG it is to be a part of, and
frankly it is better that it is in NO VG at all, as you may get
recursion problems.
The problem with a loopback is that you need to run the loopback setup
(losetup) to enable the device before vgscan and vgchange can bring it
online in the volume group, which is very hard to get right at boot
time. If you have partitioned it you will also need to run kpartx.
If you use loopbacks I would extend the volume group onto that device
only during backups, then vgreduce it out afterwards to reduce the risk
(see the sketch below).
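A sketch of that backup-window workflow, assuming the image file sits on
the non-LVM OS partition (file name and sizes are only examples):
dd if=/dev/zero of=/srv/snapshot-pv.img bs=1M count=20480   # create the image once
losetup /dev/loop0 /srv/snapshot-pv.img    # attach it for the backup window
pvcreate /dev/loop0
vgextend vg0 /dev/loop0
lvcreate -L20G -s -n data_snap /dev/vg0/NewCustomerLV /dev/loop0
# ... back up from /dev/vg0/data_snap ...
lvremove -f /dev/vg0/data_snap
vgreduce vg0 /dev/loop0
pvremove /dev/loop0
losetup -d /dev/loop0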
Steal space from somewhere. You say you have the OS on physical
partitions, so move everything but / and /boot onto LVM and make a PV on
the space freed up. Rescuing a system is easy if you can mount /;
everything else does not matter.
If you have everything in / ... you are insane, as you should set /var,
/tmp and perhaps even /home to noexec. If you get an automated break-in,
that is normally where they write stage two to get it executed, and
noexec normally stops them in their tracks. No publicly writable
filesystem should be executable.... !
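For illustration only (placeholder devices; whether /var tolerates
noexec depends on the distribution):
/dev/vg_os/var   /var    ext3   defaults,noexec,nosuid,nodev   0 2
/dev/vg_os/tmp   /tmp    ext3   defaults,noexec,nosuid,nodev   0 2
/dev/vg_os/home  /home   ext3   defaults,noexec,nosuid,nodev   0 2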
James
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [linux-lvm] Snapshots and disk re-use
2011-04-06 1:20 ` James Hawtin
@ 2011-04-06 1:45 ` Jonathan Tripathy
0 siblings, 0 replies; 68+ messages in thread
From: Jonathan Tripathy @ 2011-04-06 1:45 UTC (permalink / raw)
To: linux-lvm
On 06/04/2011 02:20, James Hawtin wrote:
> On 06/04/2011 00:50, Jonathan Tripathy wrote:
>>
>> If the PV used for snapshots were to fail while the snapshot was
>> open, or the server rebooted and the PV wasn't available at boot,
>> what would happen? I ask these questions because a loopback device or
>> iSCSI is really my only feasible option right now for the snapshot PV...
>
> What would happen is... if the filesystems are mounted at boot time
> (in the fstab), the boot will fail the fsck because the device is not
> there and drop to single-user mode. You could then edit the fstab to
> remove those filesystems, which would bring the system online; at that
> point you could fix whatever stopped the iSCSI from working and mount
> the filesystems.
>
> At one place I worked they never mounted the data filesystems at
> "boot" but from rc.local, so the system always came up interactive
> before any problem appeared and it was easy to go in and fix.
Since my vg is used purely for virtualisation, I don't mount any LVs in
fstab in my host OS. LVs only get mounted once VMs start up.
>
> DO NOT... create the loopback file on a filesystem that lives inside
> the same VG the loopback will then form a PV of. If you do, your
> system is DOOOMMMED! To get it to boot again you would have to mount a
> partial volume group and block-copy the devices to a new one; even
> worse, if you extended the filesystem holding the loopback file onto
> the PV backed by that loopback, it will NEVER work again. So the only
> place you can create a loopback file is outside the VG it is to be a
> part of, and frankly it is better that it is in NO VG at all, as you
> may get recursion problems.
Oh no, wouldn't dare to do that :) I was thinking of creating the image
file on a non-LVM area of the drive (somewhere in the host OS partition)
>
> The problem with a loopback is that you need to run the loopback setup
> (losetup) to enable the device before vgscan and vgchange can bring it
> online in the volume group, which is very hard to get right at boot
> time. If you have partitioned it you will also need to run kpartx.
Well, I wouldn't partition the image file; that seems more trouble than
it's worth. I did find an article from Novell that guides me through
what to do if my machine were to reboot with a PV missing:
http://www.novell.com/coolsolutions/appnote/19386.html
>
> If you use loopbacks I would extend the volume group onto that device
> only during backups, then vgreduce it out afterwards to reduce the risk.
Good Idea.
^ permalink raw reply [flat|nested] 68+ messages in thread