From: Anushree Mathur <anushree.mathur@linux.ibm.com>
To: Peter Krempa <pkrempa@redhat.com>, Kevin Wolf <kwolf@redhat.com>
Cc: Peter Xu <peterx@redhat.com>,
qemu-devel@nongnu.org, farosas@suse.de, devel@lists.libvirt.org
Subject: Re: virsh migrate fails when --copy-storage-all option is given!
Date: Mon, 23 Jun 2025 23:25:28 +0530 [thread overview]
Message-ID: <657a0179-c51f-4e26-9ade-a0efbed732bb@linux.ibm.com> (raw)
In-Reply-To: <aEBJxUIYRaOKBiCL@angien.pipo.sk>
CC: libvirt devel list
Hi Kevin/Peter,
Thank you so much for addressing this issue. I tried a few more things, and
here is my analysis:
Even after I removed the readonly option from the guest XML, I still saw the
migration failure: on the QEMU command line, auto-read-only was still being
set to true by default.
Then I set auto-read-only=false explicitly via a qemu:commandline passthrough
in the guest XML; it was actually set to false and the migration worked!
Steps I tried:
1) Started the guest after adding the following snippet to the guest XML:
<qemu:commandline>
<qemu:arg value='-blockdev'/>
<qemu:arg value='driver=file,filename=/disk_nfs/nfs/migrate_root.qcow2,node-name=drivefile,auto-read-only=false'/>
<qemu:arg value='-blockdev'/>
<qemu:arg value='driver=qcow2,file=drivefile,node-name=drive0'/>
<qemu:arg value='-device'/>
<qemu:arg value='virtio-blk-pci,drive=drive0,id=virtio-disk0,bus=pci.0,addr=0x5'/>
</qemu:commandline>
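For readers more familiar with QMP than with the command line, the two -blockdev
arguments in the snippet above correspond to the following blockdev option
dictionaries. This is an illustrative sketch built from the snippet, not output
captured from libvirt:

```python
# The file (protocol) node: note the explicit auto-read-only=false override,
# which is the change that made the migration work.
file_node = {
    "driver": "file",
    "filename": "/disk_nfs/nfs/migrate_root.qcow2",
    "node-name": "drivefile",
    "auto-read-only": False,
}

# The qcow2 (format) node layered on top, referencing the file node by name.
format_node = {
    "driver": "qcow2",
    "file": "drivefile",
    "node-name": "drive0",
}
```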
2) Started the migration and it worked.
Could anyone please clarify what change is required on the libvirt side?
Thanks,
Anushree-Mathur
On 04/06/25 6:57 PM, Peter Krempa wrote:
> On Wed, Jun 04, 2025 at 14:41:54 +0200, Kevin Wolf wrote:
>> On 28.05.2025 at 17:34, Peter Xu wrote:
>>> Copy Kevin.
>>>
>>> On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
>>>> Hi all,
>>>>
>>>>
>>>> When I am trying to migrate the guest from host1 to host2 with the command
>>>> line as follows:
>>>>
>>>> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
>>>> --undefinesource --persistent --auto-converge --postcopy
>>>> --copy-storage-all;date
>>>>
>>>> and it fails with the following error message-
>>>>
>>>> error: internal error: unable to execute QEMU command 'block-export-add':
>>>> Block node is read-only
>>>>
>>>> HOST ENV:
>>>>
>>>> qemu : QEMU emulator version 9.2.2
>>>> libvirt : libvirtd (libvirt) 11.1.0
>>>> Seen with upstream qemu also
>>>>
>>>> Steps to reproduce:
>>>> 1) Start the guest1
>>>> 2) Migrate it with the command as
>>>>
>>>> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
>>>> --undefinesource --persistent --auto-converge --postcopy
>>>> --copy-storage-all;date
>>>>
>>>> 3) It fails as follows:
>>>> error: internal error: unable to execute QEMU command 'block-export-add':
>>>> Block node is read-only
>> I assume this is about an inactive block node. Probably on the
>> destination, but that's not clear to me from the error message.
> Yes this would be on the destination. Libvirt exports the nodes on
> destination, source connects and does the blockjob.
>
> The destination side is configured the same way as the source side, so if
> the source disk is configured as read-write, the destination should be as
> well.
>
>>>> Things I analyzed:
>>>> 1) This issue does not happen if I pass the --unsafe option to the virsh
>>>> migrate command
> This is weird; this shouldn't have any impact.
>
>> What does this translate to on the QEMU command line?
>>
>>>> 2) Output of the qemu-monitor command also shows "ro" as false:
>>>>
>>>> virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> it'd be impossible to execute this on the guest due to timing; you'll
> need to collect libvirt debug logs to do that:
>
> https://www.libvirt.org/kbase/debuglogs.html#tl-dr-enable-debug-logs-for-most-common-scenario
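For reference, the tl;dr from that kbase page amounts to settings along these
lines in /etc/libvirt/libvirtd.conf (or the per-driver daemon config on
modular-daemon setups), followed by a daemon restart; the exact filter list
may need adjusting for your scenario:

```
log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```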
>
> I also think this should eventually be filed in an issue tracker.
>
>>>> }'
>>>> {
>>>> "return": [
>>>> {
>>>> "io-status": "ok",
>>>> "device": "",
>>>> "locked": false,
>>>> "removable": false,
>>>> "inserted": {
>>>> "iops_rd": 0,
>>>> "detect_zeroes": "off",
>>>> "image": {
>>>> "virtual-size": 21474836480,
>>>> "filename": "/home/Anu/guest_anu.qcow2",
>>>> "cluster-size": 65536,
>>>> "format": "qcow2",
>>>> "actual-size": 5226561536,
>>>> "format-specific": {
>>>> "type": "qcow2",
>>>> "data": {
>>>> "compat": "1.1",
>>>> "compression-type": "zlib",
>>>> "lazy-refcounts": false,
>>>> "refcount-bits": 16,
>>>> "corrupt": false,
>>>> "extended-l2": false
>>>> }
>>>> },
>>>> "dirty-flag": false
>>>> },
>>>> "iops_wr": 0,
>>>> "ro": false,
>>>> "node-name": "libvirt-1-format",
>>>> "backing_file_depth": 0,
>>>> "drv": "qcow2",
>>>> "iops": 0,
>>>> "bps_wr": 0,
>>>> "write_threshold": 0,
>>>> "encrypted": false,
>>>> "bps": 0,
>>>> "bps_rd": 0,
>>>> "cache": {
>>>> "no-flush": false,
>>>> "direct": false,
>>>> "writeback": true
>>>> },
>>>> "file": "/home/Anu/guest_anu.qcow2"
>>>> },
>>>> "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
>>>> "type": "unknown"
>>>> }
>>>> ],
>>>> "id": "libvirt-26"
>>>> }
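For long query-block replies like the one above, it can help to pull out just
the relevant fields programmatically. A minimal sketch, with a trimmed copy of
the reply inlined (in practice you would save the virsh qemu-monitor-command
output to a file and load that instead):

```python
import json

# Trimmed version of the 'query-block' reply shown above.
reply = json.loads("""
{
  "return": [
    {
      "device": "",
      "inserted": {
        "ro": false,
        "node-name": "libvirt-1-format"
      }
    }
  ],
  "id": "libvirt-26"
}
""")

# Print the read-only flag for every inserted medium.
for blk in reply["return"]:
    ins = blk.get("inserted")
    if ins:
        print(ins["node-name"], "read-only:", ins["ro"])
        # -> libvirt-1-format read-only: False
```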
>> I assume this is still from the source where the image is still active.
> Yes; on the destination the process wouldn't be around long enough to
> call 'virsh qemu-monitor-command'
>
>> Also it doesn't contain the "active" field yet that was recently
>> introduced, which could show something about this. I believe you would
>> still get "read-only": false for an inactive image if it's supposed to
>> be read-write after the migration completes.
>>
>>>> 3) The guest XML doesn't have any readonly element:
>>>>
>>>> virsh dumpxml guest1 | grep readonly
>>>>
>>>> 4) Also tried setting fully permissive permissions on the image:
>>>>
>>>> -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow
> Is this on the destination? Did you pre-create it yourself? Otherwise
> libvirt pre-creates that image for non-shared-storage migration
> (--copy-storage-all), and it should have proper permissions when it's
> created.
>
>>>> 5) Checked the permissions of the pool as well; those are also proper.
>>>>
>>>> 6) Found one older bug similar to this; pasting the link for reference:
>>>>
>>>>
>>>> https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
>> What's happening in detail is more of a virsh/libvirt question. CCing
>> Peter Krempa, he might have an idea.
> Please collect the debug log; at least from the destination side of
> migration. That should show how the VM is prepared and qemu invoked.
>
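For context, the command that fails on the destination is libvirt's NBD export
of the copy target. It is roughly of the following shape (the node name is
taken from the query-block output earlier in the thread; the export id is a
hypothetical placeholder, and the exact arguments libvirt sends would appear
in the debug log):

```
{ "execute": "block-export-add",
  "arguments": {
    "type": "nbd",
    "id": "migration-disk-export",
    "node-name": "libvirt-1-format",
    "writable": true
  } }
```

A writable NBD export over a node that QEMU considers read-only (or inactive)
is what produces the "Block node is read-only" error quoted above.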
Thread overview: 6+ messages
2025-05-28 13:51 virsh migrate fails when --copy-storage-all option is given! Anushree Mathur
2025-05-28 15:34 ` Peter Xu
2025-06-04 12:41 ` Kevin Wolf
2025-06-04 13:27 ` Peter Krempa
2025-06-23 17:55 ` Anushree Mathur [this message]
2025-06-23 20:28 ` Peter Krempa