* Domain backup file explodes on s3fs
@ 2020-04-07 19:13 Tim Haley
From: Tim Haley @ 2020-04-07 19:13 UTC (permalink / raw)
To: qemu-devel
Hi all,
Have been playing with `virsh backup-begin` of late and think it's an
excellent feature. I've noticed one behavior I'm not sure I understand.
Am doing a pretty straightforward backup of the boot disk:
# cat bx
<domainbackup>
<disks>
<disk name='vda' type='file'>
<target file='/backups/vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a'/>
<driver type='qcow2'/>
</disk>
</disks>
</domainbackup>
# cat cx
<domaincheckpoint>
<name>2aa450cc-6d2e-11ea-8de0-52542e0d008a</name>
<disks>
<disk name='vda' checkpoint='bitmap'/>
</disks>
</domaincheckpoint>
# virsh backup-begin 721 bx cx
If my /backups directory is just XFS, I get a backup file that looks
like it is just the size of data blocks in use
-rw------- 1 root root 2769551360 Mar 19 16:56
vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a
but if I write to an s3fs (object storage backend) the file blows up to
the whole size of the disk
-rw------- 1 root root 8591507456 Mar 18 19:03
vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a
Is this expected?
If it's relevant, this is on Ubuntu and
# virsh version
Compiled against library: libvirt 6.1.0
Using library: libvirt 6.1.0
Using API: QEMU 6.1.0
Running hypervisor: QEMU 4.2.0
thanks for any ideas,
-tim
* Re: Domain backup file explodes on s3fs
@ 2020-04-07 19:37 ` Eric Blake
From: Eric Blake @ 2020-04-07 19:37 UTC (permalink / raw)
To: Tim Haley, qemu-devel, libvirt-list@redhat.com
[adding libvirt list]
On 4/7/20 2:13 PM, Tim Haley wrote:
> Hi all,
>
> Have been playing with `virsh backup-begin` of late and think it's an
> excellent feature. I've noticed one behavior I'm not sure I understand.
It looks like https://bugzilla.redhat.com/show_bug.cgi?id=1814664 is a
similar description of the same problem: namely, if qemu is not able to
determine that the destination already reads as zero, then it forcefully
zeroes the destination of a backup job. We may want to copy the fact
that qemu 5.0 is adding 'qemu-img convert --target-is-zero' to add a
similar knob to the QMP commands that trigger disk copying
(blockdev-backup, blockdev-mirror, possibly others) as well as logic to
avoid writing zeroes when the destination is already treated as zero
(whether by a probe, or by the knob being set).
...
> If my /backups directory is just XFS, I get a backup file that looks
> like it is just the size of data blocks in use
>
> -rw------- 1 root root 2769551360 Mar 19 16:56
> vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a
For a local file, qemu is easily able to probe whether the destination
starts as all zeroes (thanks to lseek(SEEK_DATA));
>
> but if I write to an s3fs (object storage backend) the file blows up to
> the whole size of the disk
>
> -rw------- 1 root root 8591507456 Mar 18 19:03
> vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a
whereas for s3fs, it looks like qemu does not have access to a quick
test to learn if the image starts all zero (POSIX does not provide a
quick way for doing this on a generic block device, but if you are aware
of an ioctl or otherwise that qemu could use, that might be helpful).
Or maybe the s3fs really is random contents rather than all zero, in
which case forcefully writing zeroes is the only correct behavior.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org
* Re: Domain backup file explodes on s3fs
@ 2020-04-13 17:57 Leo Luan
From: Leo Luan @ 2020-04-13 17:57 UTC (permalink / raw)
To: qemu-devel
Hi Eric and all,
When invoking "virsh backup-begin" to do a full backup using the qcow2
driver to a new backup target file that does not have a backing chain, is
it safe to not zero the unallocated parts of the virtual disk? Do we still
depend on SEEK_DATA support in this case to avoid forcing zeros?

It looks like backup_run() in block/backup.c unsets the unallocated parts
of a copy bitmap before starting the backup loop if s->sync_mode ==
MIRROR_SYNC_MODE_TOP. In a virsh backup-begin full backup scenario, we
observe that the mode is MIRROR_SYNC_MODE_FULL, and the backup_loop()
function subsequently copies zeros for the entire virtual size, including
the unallocated parts in the source qcow2 file. Would it be safe to also
unset the unallocated parts in the copy bitmap when the sync_mode is
MIRROR_SYNC_MODE_FULL, if we know there is no need to force zeros because
the target file is a new empty qcow2 file without a backing file? If so,
maybe a knob could be added to effect this behavior?

I guess the related code is changing in 5.0 and this issue may already
be addressed.
Any updates/insights would be appreciated!
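(The behavior described above, and the proposed extension, can be sketched roughly as follows. This is a simplified illustration of copy-bitmap initialization, not qemu's actual backup_run(); the names and the target_known_zero knob are hypothetical:)

```c
/* Simplified sketch of copy-bitmap initialization for a backup job.
 * Hypothetical names; illustrates the logic under discussion, not qemu code. */
#include <stdbool.h>
#include <stddef.h>

enum sync_mode { SYNC_MODE_TOP, SYNC_MODE_FULL };

/* bitmap[i] == true means "cluster i must be copied to the target". */
static void init_copy_bitmap(bool *bitmap, const bool *source_allocated,
                             size_t clusters, enum sync_mode mode,
                             bool target_known_zero)
{
    for (size_t i = 0; i < clusters; i++) {
        /* sync=top already skips unallocated clusters; the question is
         * whether sync=full may also skip them when the target is known
         * to read as zeroes (e.g. a fresh qcow2 with no backing file). */
        bool skip = !source_allocated[i] &&
                    (mode == SYNC_MODE_TOP || target_known_zero);
        bitmap[i] = !skip;
    }
}
```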
Thanks,
Leo