* virsh migrate fails when --copy-storage-all option is given!
@ 2025-05-28 13:51 Anushree Mathur
2025-05-28 15:34 ` Peter Xu
0 siblings, 1 reply; 6+ messages in thread
From: Anushree Mathur @ 2025-05-28 13:51 UTC (permalink / raw)
To: qemu-devel; +Cc: peterx, farosas
Hi all,
When I try to migrate the guest from host1 to host2 with the following
command line:
date;virsh migrate --live --domain guest1 qemu+ssh://dest/system
--verbose --undefinesource --persistent --auto-converge --postcopy
--copy-storage-all;date
it fails with the following error message:
error: internal error: unable to execute QEMU command
'block-export-add': Block node is read-only
HOST ENV:
qemu : QEMU emulator version 9.2.2
libvirt : libvirtd (libvirt) 11.1.0
Seen with upstream qemu also
Steps to reproduce:
1) Start the guest1
2) Migrate it with the following command:
date;virsh migrate --live --domain guest1 qemu+ssh://dest/system
--verbose --undefinesource --persistent --auto-converge --postcopy
--copy-storage-all;date
3) It fails as follows:
error: internal error: unable to execute QEMU command
'block-export-add': Block node is read-only
Things I analyzed:
1) This issue does not happen if I pass the --unsafe option to the
virsh migrate command
2) Output of the qemu-monitor command also shows "ro" as false:
virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute":
"query-block" }'
{
"return": [
{
"io-status": "ok",
"device": "",
"locked": false,
"removable": false,
"inserted": {
"iops_rd": 0,
"detect_zeroes": "off",
"image": {
"virtual-size": 21474836480,
"filename": "/home/Anu/guest_anu.qcow2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 5226561536,
"format-specific": {
"type": "qcow2",
"data": {
"compat": "1.1",
"compression-type": "zlib",
"lazy-refcounts": false,
"refcount-bits": 16,
"corrupt": false,
"extended-l2": false
}
},
"dirty-flag": false
},
"iops_wr": 0,
"ro": false,
"node-name": "libvirt-1-format",
"backing_file_depth": 0,
"drv": "qcow2",
"iops": 0,
"bps_wr": 0,
"write_threshold": 0,
"encrypted": false,
"bps": 0,
"bps_rd": 0,
"cache": {
"no-flush": false,
"direct": false,
"writeback": true
},
"file": "/home/Anu/guest_anu.qcow2"
},
"qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
"type": "unknown"
}
],
"id": "libvirt-26"
}
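As a side note, the relevant fields can be pulled out of a saved
query-block reply with a few lines of Python (a sketch; the reply below
is trimmed to just the fields in question):

```python
import json

# Trimmed copy of the 'virsh qemu-monitor-command ... query-block'
# reply shown above; only the fields relevant here are kept.
reply = json.loads("""
{"return": [{"device": "",
             "inserted": {"ro": false,
                          "node-name": "libvirt-1-format",
                          "file": "/home/Anu/guest_anu.qcow2"}}],
 "id": "libvirt-26"}
""")

# Print the read-only flag of every inserted medium.
for blk in reply["return"]:
    inserted = blk.get("inserted")
    if inserted:
        print(inserted["node-name"], "ro =", inserted["ro"])
        # prints: libvirt-1-format ro = False
```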
3) The guest does not have any readonly element in its XML:
virsh dumpxml guest1 | grep readonly
4) Tried giving proper permissions as well:
-rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
5) Checked the permissions of the pool as well; they are also correct.
6) Found an older bug similar to this; link for reference:
https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
Thanks,
Anushree-Mathur
* Re: virsh migrate fails when --copy-storage-all option is given!
2025-05-28 13:51 virsh migrate fails when --copy-storage-all option is given! Anushree Mathur
@ 2025-05-28 15:34 ` Peter Xu
2025-06-04 12:41 ` Kevin Wolf
0 siblings, 1 reply; 6+ messages in thread
From: Peter Xu @ 2025-05-28 15:34 UTC (permalink / raw)
To: Anushree Mathur, Kevin Wolf; +Cc: qemu-devel, farosas
Copy Kevin.
On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
> Hi all,
>
>
> When I am trying to migrate the guest from host1 to host2 with the command
> line as follows:
>
> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> --undefinesource --persistent --auto-converge --postcopy
> --copy-storage-all;date
>
> and it fails with the following error message-
>
> error: internal error: unable to execute QEMU command 'block-export-add':
> Block node is read-only
>
> HOST ENV:
>
> qemu : QEMU emulator version 9.2.2
> libvirt : libvirtd (libvirt) 11.1.0
> Seen with upstream qemu also
>
> Steps to reproduce:
> 1) Start the guest1
> 2) Migrate it with the command as
>
> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> --undefinesource --persistent --auto-converge --postcopy
> --copy-storage-all;date
>
> 3) It fails as follows:
> error: internal error: unable to execute QEMU command 'block-export-add':
> Block node is read-only
>
> Things I analyzed-
> 1) This issue is not happening if I give --unsafe option in the virsh
> migrate command
>
> 2) O/P of qemu-monitor command also shows ro as false
>
> virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> }'
> {
> "return": [
> {
> "io-status": "ok",
> "device": "",
> "locked": false,
> "removable": false,
> "inserted": {
> "iops_rd": 0,
> "detect_zeroes": "off",
> "image": {
> "virtual-size": 21474836480,
> "filename": "/home/Anu/guest_anu.qcow2",
> "cluster-size": 65536,
> "format": "qcow2",
> "actual-size": 5226561536,
> "format-specific": {
> "type": "qcow2",
> "data": {
> "compat": "1.1",
> "compression-type": "zlib",
> "lazy-refcounts": false,
> "refcount-bits": 16,
> "corrupt": false,
> "extended-l2": false
> }
> },
> "dirty-flag": false
> },
> "iops_wr": 0,
> "ro": false,
> "node-name": "libvirt-1-format",
> "backing_file_depth": 0,
> "drv": "qcow2",
> "iops": 0,
> "bps_wr": 0,
> "write_threshold": 0,
> "encrypted": false,
> "bps": 0,
> "bps_rd": 0,
> "cache": {
> "no-flush": false,
> "direct": false,
> "writeback": true
> },
> "file": "/home/Anu/guest_anu.qcow2"
> },
> "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
> "type": "unknown"
> }
> ],
> "id": "libvirt-26"
> }
>
>
> 3) Guest doesn't have any readonly
>
> virsh dumpxml guest1 | grep readonly
>
> 4) Tried giving the proper permissions also
>
> -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
>
> 5) Checked for the permission of the pool also that is also proper!
>
> 6) Found 1 older bug similar to this, pasting the link for reference:
>
>
> https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
>
>
>
> Thanks,
> Anushree-Mathur
>
>
--
Peter Xu
* Re: virsh migrate fails when --copy-storage-all option is given!
2025-05-28 15:34 ` Peter Xu
@ 2025-06-04 12:41 ` Kevin Wolf
2025-06-04 13:27 ` Peter Krempa
0 siblings, 1 reply; 6+ messages in thread
From: Kevin Wolf @ 2025-06-04 12:41 UTC (permalink / raw)
To: Peter Xu; +Cc: Anushree Mathur, qemu-devel, farosas, pkrempa
Am 28.05.2025 um 17:34 hat Peter Xu geschrieben:
> Copy Kevin.
>
> On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
> > Hi all,
> >
> >
> > When I am trying to migrate the guest from host1 to host2 with the command
> > line as follows:
> >
> > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > --undefinesource --persistent --auto-converge --postcopy
> > --copy-storage-all;date
> >
> > and it fails with the following error message-
> >
> > error: internal error: unable to execute QEMU command 'block-export-add':
> > Block node is read-only
> >
> > HOST ENV:
> >
> > qemu : QEMU emulator version 9.2.2
> > libvirt : libvirtd (libvirt) 11.1.0
> > Seen with upstream qemu also
> >
> > Steps to reproduce:
> > 1) Start the guest1
> > 2) Migrate it with the command as
> >
> > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > --undefinesource --persistent --auto-converge --postcopy
> > --copy-storage-all;date
> >
> > 3) It fails as follows:
> > error: internal error: unable to execute QEMU command 'block-export-add':
> > Block node is read-only
I assume this is about an inactive block node. Probably on the
destination, but that's not clear to me from the error message.
> > Things I analyzed-
> > 1) This issue is not happening if I give --unsafe option in the virsh
> > migrate command
What does this translate to on the QEMU command line?
> > 2) O/P of qemu-monitor command also shows ro as false
> >
> > virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> > }'
> > {
> > "return": [
> > {
> > "io-status": "ok",
> > "device": "",
> > "locked": false,
> > "removable": false,
> > "inserted": {
> > "iops_rd": 0,
> > "detect_zeroes": "off",
> > "image": {
> > "virtual-size": 21474836480,
> > "filename": "/home/Anu/guest_anu.qcow2",
> > "cluster-size": 65536,
> > "format": "qcow2",
> > "actual-size": 5226561536,
> > "format-specific": {
> > "type": "qcow2",
> > "data": {
> > "compat": "1.1",
> > "compression-type": "zlib",
> > "lazy-refcounts": false,
> > "refcount-bits": 16,
> > "corrupt": false,
> > "extended-l2": false
> > }
> > },
> > "dirty-flag": false
> > },
> > "iops_wr": 0,
> > "ro": false,
> > "node-name": "libvirt-1-format",
> > "backing_file_depth": 0,
> > "drv": "qcow2",
> > "iops": 0,
> > "bps_wr": 0,
> > "write_threshold": 0,
> > "encrypted": false,
> > "bps": 0,
> > "bps_rd": 0,
> > "cache": {
> > "no-flush": false,
> > "direct": false,
> > "writeback": true
> > },
> > "file": "/home/Anu/guest_anu.qcow2"
> > },
> > "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
> > "type": "unknown"
> > }
> > ],
> > "id": "libvirt-26"
> > }
I assume this is still from the source where the image is still active.
Also it doesn't contain the "active" field yet that was recently
introduced, which could show something about this. I believe you would
still get "read-only": false for an inactive image if it's supposed to
be read-write after the migration completes.
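For reference, on a QEMU recent enough to report it, that per-node state
shows up along these lines in query-named-block-nodes output (a sketch;
only the relevant fields are shown, and the exact field set varies by
version):

```json
{ "execute": "query-named-block-nodes" }

{ "return": [ { "node-name": "libvirt-1-format",
                "drv": "qcow2",
                "ro": false,
                "active": false } ] }
```

An inactive node would report "active": false while still showing
"ro": false if it is meant to become read-write once migration completes.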
> >
> > 3) Guest doesn't have any readonly
> >
> > virsh dumpxml guest1 | grep readonly
> >
> > 4) Tried giving the proper permissions also
> >
> > -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
> >
> > 5) Checked for the permission of the pool also that is also proper!
> >
> > 6) Found 1 older bug similar to this, pasting the link for reference:
> >
> >
> > https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
What's happening in detail is more of a virsh/libvirt question. CCing
Peter Krempa, he might have an idea.
Kevin
* Re: virsh migrate fails when --copy-storage-all option is given!
2025-06-04 12:41 ` Kevin Wolf
@ 2025-06-04 13:27 ` Peter Krempa
2025-06-23 17:55 ` Anushree Mathur
0 siblings, 1 reply; 6+ messages in thread
From: Peter Krempa @ 2025-06-04 13:27 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Peter Xu, Anushree Mathur, qemu-devel, farosas
On Wed, Jun 04, 2025 at 14:41:54 +0200, Kevin Wolf wrote:
> Am 28.05.2025 um 17:34 hat Peter Xu geschrieben:
> > Copy Kevin.
> >
> > On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
> > > Hi all,
> > >
> > >
> > > When I am trying to migrate the guest from host1 to host2 with the command
> > > line as follows:
> > >
> > > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > > --undefinesource --persistent --auto-converge --postcopy
> > > --copy-storage-all;date
> > >
> > > and it fails with the following error message-
> > >
> > > error: internal error: unable to execute QEMU command 'block-export-add':
> > > Block node is read-only
> > >
> > > HOST ENV:
> > >
> > > qemu : QEMU emulator version 9.2.2
> > > libvirt : libvirtd (libvirt) 11.1.0
> > > Seen with upstream qemu also
> > >
> > > Steps to reproduce:
> > > 1) Start the guest1
> > > 2) Migrate it with the command as
> > >
> > > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > > --undefinesource --persistent --auto-converge --postcopy
> > > --copy-storage-all;date
> > >
> > > 3) It fails as follows:
> > > error: internal error: unable to execute QEMU command 'block-export-add':
> > > Block node is read-only
>
> I assume this is about an inactive block node. Probably on the
> destination, but that's not clear to me from the error message.
Yes, this would be on the destination. Libvirt exports the nodes on the
destination; the source connects and does the blockjob.
The destination side is configured the same way as the source side, so
if the source disk is configured read-write, the destination should be
as well.
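For context, the failing step is roughly this destination-side QMP call,
where libvirt asks for a writable NBD export of the disk node so the
source can copy into it (a sketch; the export id is illustrative):

```json
{ "execute": "block-export-add",
  "arguments": { "type": "nbd",
                 "id": "migration-disk0",
                 "node-name": "libvirt-1-format",
                 "writable": true } }
```

With "writable": true, a node that QEMU considers read-only rejects the
export with exactly the error quoted above.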
> > > Things I analyzed-
> > > 1) This issue is not happening if I give --unsafe option in the virsh
> > > migrate command
This is weird; this shouldn't have any impact.
>
> What does this translate to on the QEMU command line?
>
> > > 2) O/P of qemu-monitor command also shows ro as false
> > >
> > > virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
It'd be impossible to execute this on the guest due to timing; you'll
need to collect libvirt debug logs to do that:
https://www.libvirt.org/kbase/debuglogs.html#tl-dr-enable-debug-logs-for-most-common-scenario
I also think this should eventually be filed in a bug tracker.
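For convenience, the tl;dr from that page amounts to something like the
following in /etc/libvirt/libvirtd.conf, followed by a daemon restart
(illustrative subset; check the linked page for the currently
recommended filter list):

```
log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```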
> > > }'
> > > {
> > > "return": [
> > > {
> > > "io-status": "ok",
> > > "device": "",
> > > "locked": false,
> > > "removable": false,
> > > "inserted": {
> > > "iops_rd": 0,
> > > "detect_zeroes": "off",
> > > "image": {
> > > "virtual-size": 21474836480,
> > > "filename": "/home/Anu/guest_anu.qcow2",
> > > "cluster-size": 65536,
> > > "format": "qcow2",
> > > "actual-size": 5226561536,
> > > "format-specific": {
> > > "type": "qcow2",
> > > "data": {
> > > "compat": "1.1",
> > > "compression-type": "zlib",
> > > "lazy-refcounts": false,
> > > "refcount-bits": 16,
> > > "corrupt": false,
> > > "extended-l2": false
> > > }
> > > },
> > > "dirty-flag": false
> > > },
> > > "iops_wr": 0,
> > > "ro": false,
> > > "node-name": "libvirt-1-format",
> > > "backing_file_depth": 0,
> > > "drv": "qcow2",
> > > "iops": 0,
> > > "bps_wr": 0,
> > > "write_threshold": 0,
> > > "encrypted": false,
> > > "bps": 0,
> > > "bps_rd": 0,
> > > "cache": {
> > > "no-flush": false,
> > > "direct": false,
> > > "writeback": true
> > > },
> > > "file": "/home/Anu/guest_anu.qcow2"
> > > },
> > > "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
> > > "type": "unknown"
> > > }
> > > ],
> > > "id": "libvirt-26"
> > > }
>
> I assume this is still from the source where the image is still active.
Yes; on the destination the process wouldn't be around long enough to
call 'virsh qemu-monitor-command'
>
> Also it doesn't contain the "active" field yet that was recently
> introduced, which could show something about this. I believe you would
> still get "read-only": false for an inactive image if it's supposed to
> be read-write after the migration completes.
>
> > >
> > > 3) Guest doesn't have any readonly
> > >
> > > virsh dumpxml guest1 | grep readonly
> > >
> > > 4) Tried giving the proper permissions also
> > >
> > > -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
Is this on the destination? Did you pre-create it yourself? Otherwise,
libvirt pre-creates that image for non-shared-storage migration
(--copy-storage-all), and it should have proper permissions when it's
created.
> > >
> > > 5) Checked for the permission of the pool also that is also proper!
> > >
> > > 6) Found 1 older bug similar to this, pasting the link for reference:
> > >
> > >
> > > https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
>
> What's happening in detail is more of a virsh/libvirt question. CCing
> Peter Krempa, he might have an idea.
Please collect the debug log, at least from the destination side of the
migration. That should show how the VM is prepared and how qemu is
invoked.
* Re: virsh migrate fails when --copy-storage-all option is given!
2025-06-04 13:27 ` Peter Krempa
@ 2025-06-23 17:55 ` Anushree Mathur
2025-06-23 20:28 ` Peter Krempa
0 siblings, 1 reply; 6+ messages in thread
From: Anushree Mathur @ 2025-06-23 17:55 UTC (permalink / raw)
To: Peter Krempa, Kevin Wolf; +Cc: Peter Xu, qemu-devel, farosas, devel
CC: libvirt devel list
Hi Kevin/Peter,
Thank you so much for addressing this issue. I tried out a few more
things and here is my analysis:
Even when I removed the readonly option from the guest XML I was still
seeing the migration issue; on the qemu command line I could still see
the auto-read-only option being set to true by default.
Then I set auto-read-only to false in the guest XML via a
qemu:commandline passthrough; it was actually set to false and the
migration worked!
Steps I tried:
1) Started the guest after adding this snippet to the guest XML:
<qemu:commandline>
<qemu:arg value='-blockdev'/>
<qemu:arg value='driver=file,filename=/disk_nfs/nfs/migrate_root.qcow2,node-name=drivefile,auto-read-only=false'/>
<qemu:arg value='-blockdev'/>
<qemu:arg value='driver=qcow2,file=drivefile,node-name=drive0'/>
<qemu:arg value='-device'/>
<qemu:arg value='virtio-blk-pci,drive=drive0,id=virtio-disk0,bus=pci.0,addr=0x5'/>
</qemu:commandline>
2) Started the migration and it worked.
Could anyone please clarify, from the libvirt side, what change is
required?
Thanks,
Anushree-Mathur
On 04/06/25 6:57 PM, Peter Krempa wrote:
> On Wed, Jun 04, 2025 at 14:41:54 +0200, Kevin Wolf wrote:
>> Am 28.05.2025 um 17:34 hat Peter Xu geschrieben:
>>> Copy Kevin.
>>>
>>> On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
>>>> Hi all,
>>>>
>>>>
>>>> When I am trying to migrate the guest from host1 to host2 with the command
>>>> line as follows:
>>>>
>>>> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
>>>> --undefinesource --persistent --auto-converge --postcopy
>>>> --copy-storage-all;date
>>>>
>>>> and it fails with the following error message-
>>>>
>>>> error: internal error: unable to execute QEMU command 'block-export-add':
>>>> Block node is read-only
>>>>
>>>> HOST ENV:
>>>>
>>>> qemu : QEMU emulator version 9.2.2
>>>> libvirt : libvirtd (libvirt) 11.1.0
>>>> Seen with upstream qemu also
>>>>
>>>> Steps to reproduce:
>>>> 1) Start the guest1
>>>> 2) Migrate it with the command as
>>>>
>>>> date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
>>>> --undefinesource --persistent --auto-converge --postcopy
>>>> --copy-storage-all;date
>>>>
>>>> 3) It fails as follows:
>>>> error: internal error: unable to execute QEMU command 'block-export-add':
>>>> Block node is read-only
>> I assume this is about an inactive block node. Probably on the
>> destination, but that's not clear to me from the error message.
> Yes this would be on the destination. Libvirt exports the nodes on
> destination, source connects and does the blockjob.
>
> The destination side is configured the same way as the source side so
> if the source disk is configured as read-write the destination should be
> as well
>
>>>> Things I analyzed-
>>>> 1) This issue is not happening if I give --unsafe option in the virsh
>>>> migrate command
> This is weird; this shouldn't have any impact.
>
>> What does this translate to on the QEMU command line?
>>
>>>> 2) O/P of qemu-monitor command also shows ro as false
>>>>
>>>> virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> it'd be impossible to execute this on the guest due to timing; you'll
> need to collect libvirt debug logs to do that:
>
> https://www.libvirt.org/kbase/debuglogs.html#tl-dr-enable-debug-logs-for-most-common-scenario
>
> I also thing this should be eventually filed in a
>
>>>> }'
>>>> {
>>>> "return": [
>>>> {
>>>> "io-status": "ok",
>>>> "device": "",
>>>> "locked": false,
>>>> "removable": false,
>>>> "inserted": {
>>>> "iops_rd": 0,
>>>> "detect_zeroes": "off",
>>>> "image": {
>>>> "virtual-size": 21474836480,
>>>> "filename": "/home/Anu/guest_anu.qcow2",
>>>> "cluster-size": 65536,
>>>> "format": "qcow2",
>>>> "actual-size": 5226561536,
>>>> "format-specific": {
>>>> "type": "qcow2",
>>>> "data": {
>>>> "compat": "1.1",
>>>> "compression-type": "zlib",
>>>> "lazy-refcounts": false,
>>>> "refcount-bits": 16,
>>>> "corrupt": false,
>>>> "extended-l2": false
>>>> }
>>>> },
>>>> "dirty-flag": false
>>>> },
>>>> "iops_wr": 0,
>>>> "ro": false,
>>>> "node-name": "libvirt-1-format",
>>>> "backing_file_depth": 0,
>>>> "drv": "qcow2",
>>>> "iops": 0,
>>>> "bps_wr": 0,
>>>> "write_threshold": 0,
>>>> "encrypted": false,
>>>> "bps": 0,
>>>> "bps_rd": 0,
>>>> "cache": {
>>>> "no-flush": false,
>>>> "direct": false,
>>>> "writeback": true
>>>> },
>>>> "file": "/home/Anu/guest_anu.qcow2"
>>>> },
>>>> "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
>>>> "type": "unknown"
>>>> }
>>>> ],
>>>> "id": "libvirt-26"
>>>> }
>> I assume this is still from the source where the image is still active.
> Yes; on the destination the process wouldn't be around long enough to
> call 'virsh qemu-monitor-command'
>
>> Also it doesn't contain the "active" field yet that was recently
>> introduced, which could show something about this. I believe you would
>> still get "read-only": false for an inactive image if it's supposed to
>> be read-write after the migration completes.
>>
>>>> 3) Guest doesn't have any readonly
>>>>
>>>> virsh dumpxml guest1 | grep readonly
>>>>
>>>> 4) Tried giving the proper permissions also
>>>>
>>>> -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
> Is this on the destination? did you pre-create it yourself? otherwise
> libvirt is pre-creating that image for-non-shared-storage migration
> (--copy-storage-all) which should have proper permissions when it's
> created
>
>>>> 5) Checked for the permission of the pool also that is also proper!
>>>>
>>>> 6) Found 1 older bug similar to this, pasting the link for reference:
>>>>
>>>>
>>>> https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
>> What's happening in detail is more of a virsh/libvirt question. CCing
>> Peter Krempa, he might have an idea.
> Please collect the debug log; at least from the destination side of
> migration. That should show how the VM is prepared and qemu invoked.
>
* Re: virsh migrate fails when --copy-storage-all option is given!
2025-06-23 17:55 ` Anushree Mathur
@ 2025-06-23 20:28 ` Peter Krempa
0 siblings, 0 replies; 6+ messages in thread
From: Peter Krempa @ 2025-06-23 20:28 UTC (permalink / raw)
To: Anushree Mathur; +Cc: Kevin Wolf, Peter Xu, qemu-devel, farosas, devel
On Mon, Jun 23, 2025 at 23:25:28 +0530, Anushree Mathur wrote:
> CC: libvirt devel list
>
> Hi Kevin/Peter,
>
> Thank you so much for addressing this issue. I tried out few more things and
> here is my analysis:
>
> Even when I removed the readonly option from guest xml I was still seeing the
> issue in migration,In the qemu-commandline I could still see auto-read-only
> option being set as true by default.
'auto-read-only' is very likely not the problem here. Due to its name
it is often suspected, but it is mostly a red herring.
What it does is instruct qemu to automatically switch between read-only
and read-write, as it did in the pre-blockdev era.
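Sketched as -blockdev options (illustrative file and node names), the
distinction is:

```
# auto-read-only=true (the default): QEMU may open the file read-only
# and transparently reopen it read-write once a writer attaches.
-blockdev driver=file,filename=disk.qcow2,node-name=f0,auto-read-only=true

# read-only=true: the node is permanently read-only; writable users
# are rejected.
-blockdev driver=file,filename=disk.qcow2,node-name=f0,read-only=true
```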
> Then I tried giving the auto-read-only as false in the guest xml with
> qemu-commandline param, it was actually getting set to false and the migration
> worked!
Okay, based on what you've written I'm now even more confused about
what you wanted to do.
Based on the previous report [1] you wanted to migrate with non-shared
storage (the presence of --copy-storage-all, which copies all
read-write disks to the destination).
But now ...
> Steps I tried:
>
> 1) Started the guest with adding the snippet in the guest xml with parameters
> as:
>
> <qemu:commandline>
> <qemu:arg value='-blockdev'/>
> <qemu:arg value='driver=file,filename=/disk_nfs/nfs/migrate_root.qcow2,node-name=drivefile,auto-read-only=false'/>
... this looks like shared storage ...
> <qemu:arg value='-blockdev'/>
> <qemu:arg value='driver=qcow2,file=drivefile,node-name=drive0'/>
> <qemu:arg value='-device'/>
> <qemu:arg value='virtio-blk-pci,drive=drive0,id=virtio-disk0,bus=pci.0,addr=0x5'/>
> </qemu:commandline>
>
> 2) Started the migration and it worked.
... and especially since you added it via the qemu:arg backdoor, which
libvirt does not interpret in any way, --copy-storage-all fully ignores
what was declared here.
Thus it's weird that this would actually help anything, especially when
you originally used an option meant for non-shared storage.
So, how did you migrate this? What did you want to achieve?
> Could anyone please clarify from libvirt side what is the change required?
You need to first clarify what you are actually doing. This seems to be
different from what you tried last time.
For any further investigation I'll need:
- the full XML of the VM
- debug logs from the source [2]
- debug logs from the destination
- description what you are trying to achieve
>
> Thanks,
> Anushree-Mathur
[2] https://www.libvirt.org/kbase/debuglogs.html
>
>
> On 04/06/25 6:57 PM, Peter Krempa wrote:
> > On Wed, Jun 04, 2025 at 14:41:54 +0200, Kevin Wolf wrote:
> > > Am 28.05.2025 um 17:34 hat Peter Xu geschrieben:
> > > > Copy Kevin.
> > > >
> > > > On Wed, May 28, 2025 at 07:21:12PM +0530, Anushree Mathur wrote:
> > > > > Hi all,
> > > > >
> > > > >
> > > > > When I am trying to migrate the guest from host1 to host2 with the command
> > > > > line as follows:
> > > > >
> > > > > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > > > > --undefinesource --persistent --auto-converge --postcopy
> > > > > --copy-storage-all;date
> > > > >
> > > > > and it fails with the following error message-
> > > > >
> > > > > error: internal error: unable to execute QEMU command 'block-export-add':
> > > > > Block node is read-only
> > > > >
> > > > > HOST ENV:
> > > > >
> > > > > qemu : QEMU emulator version 9.2.2
> > > > > libvirt : libvirtd (libvirt) 11.1.0
> > > > > Seen with upstream qemu also
> > > > >
> > > > > Steps to reproduce:
> > > > > 1) Start the guest1
> > > > > 2) Migrate it with the command as
> > > > >
> > > > > date;virsh migrate --live --domain guest1 qemu+ssh://dest/system --verbose
> > > > > --undefinesource --persistent --auto-converge --postcopy
> > > > > --copy-storage-all;date
[1]
> > > > >
> > > > > 3) It fails as follows:
> > > > > error: internal error: unable to execute QEMU command 'block-export-add':
> > > > > Block node is read-only
> > > I assume this is about an inactive block node. Probably on the
> > > destination, but that's not clear to me from the error message.
> > Yes this would be on the destination. Libvirt exports the nodes on
> > destination, source connects and does the blockjob.
> >
> > The destination side is configured the same way as the source side so
> > if the source disk is configured as read-write the destination should be
> > as well
> >
> > > > > Things I analyzed-
> > > > > 1) This issue is not happening if I give --unsafe option in the virsh
> > > > > migrate command
> > This is weird; this shouldn't have any impact.
> >
> > > What does this translate to on the QEMU command line?
> > >
> > > > > 2) O/P of qemu-monitor command also shows ro as false
> > > > >
> > > > > virsh qemu-monitor-command guest1 --pretty --cmd '{ "execute": "query-block"
> > it'd be impossible to execute this on the guest due to timing; you'll
> > need to collect libvirt debug logs to do that:
> >
> > https://www.libvirt.org/kbase/debuglogs.html#tl-dr-enable-debug-logs-for-most-common-scenario
> >
> > I also thing this should be eventually filed in a
> >
> > > > > }'
> > > > > {
> > > > > "return": [
> > > > > {
> > > > > "io-status": "ok",
> > > > > "device": "",
> > > > > "locked": false,
> > > > > "removable": false,
> > > > > "inserted": {
> > > > > "iops_rd": 0,
> > > > > "detect_zeroes": "off",
> > > > > "image": {
> > > > > "virtual-size": 21474836480,
> > > > > "filename": "/home/Anu/guest_anu.qcow2",
> > > > > "cluster-size": 65536,
> > > > > "format": "qcow2",
> > > > > "actual-size": 5226561536,
> > > > > "format-specific": {
> > > > > "type": "qcow2",
> > > > > "data": {
> > > > > "compat": "1.1",
> > > > > "compression-type": "zlib",
> > > > > "lazy-refcounts": false,
> > > > > "refcount-bits": 16,
> > > > > "corrupt": false,
> > > > > "extended-l2": false
> > > > > }
> > > > > },
> > > > > "dirty-flag": false
> > > > > },
> > > > > "iops_wr": 0,
> > > > > "ro": false,
> > > > > "node-name": "libvirt-1-format",
> > > > > "backing_file_depth": 0,
> > > > > "drv": "qcow2",
> > > > > "iops": 0,
> > > > > "bps_wr": 0,
> > > > > "write_threshold": 0,
> > > > > "encrypted": false,
> > > > > "bps": 0,
> > > > > "bps_rd": 0,
> > > > > "cache": {
> > > > > "no-flush": false,
> > > > > "direct": false,
> > > > > "writeback": true
> > > > > },
> > > > > "file": "/home/Anu/guest_anu.qcow2"
> > > > > },
> > > > > "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
> > > > > "type": "unknown"
> > > > > }
> > > > > ],
> > > > > "id": "libvirt-26"
> > > > > }
> > > I assume this is still from the source where the image is still active.
> > Yes; on the destination the process wouldn't be around long enough to
> > call 'virsh qemu-monitor-command'
> >
> > > Also it doesn't contain the "active" field yet that was recently
> > > introduced, which could show something about this. I believe you would
> > > still get "read-only": false for an inactive image if it's supposed to
> > > be read-write after the migration completes.
> > >
> > > > > 3) Guest doesn't have any readonly
> > > > >
> > > > > virsh dumpxml guest1 | grep readonly
> > > > >
> > > > > 4) Tried giving the proper permissions also
> > > > >
> > > > > -rwxrwxrwx. 1 qemu qemu 4.9G Apr 28 15:06 guest_anu.qcow2
> > Is this on the destination? did you pre-create it yourself? otherwise
> > libvirt is pre-creating that image for-non-shared-storage migration
> > (--copy-storage-all) which should have proper permissions when it's
> > created
> >
> > > > > 5) Checked for the permission of the pool also that is also proper!
> > > > >
> > > > > 6) Found 1 older bug similar to this, pasting the link for reference:
> > > > >
> > > > >
> > > > > https://patchwork.kernel.org/project/qemu-devel/patch/20170811164854.GG4162@localhost.localdomain/
> > > What's happening in detail is more of a virsh/libvirt question. CCing
> > > Peter Krempa, he might have an idea.
> > Please collect the debug log; at least from the destination side of
> > migration. That should show how the VM is prepared and qemu invoked.
> >
>
Thread overview: 6+ messages
2025-05-28 13:51 virsh migrate fails when --copy-storage-all option is given! Anushree Mathur
2025-05-28 15:34 ` Peter Xu
2025-06-04 12:41 ` Kevin Wolf
2025-06-04 13:27 ` Peter Krempa
2025-06-23 17:55 ` Anushree Mathur
2025-06-23 20:28 ` Peter Krempa