qemu-devel.nongnu.org archive mirror
From: Chun Feng Wu <wucf@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Subject: qemu process consumes 100% host CPU after reverting snapshot
Date: Thu, 28 Mar 2024 07:32:24 +0800	[thread overview]
Message-ID: <c2bad656-0231-4113-af5b-75e4247d2ee2@linux.vnet.ibm.com> (raw)


Hi,

I am testing a throttle filter chain (multiple throttle groups on one disk)
with the following steps:
1. start the guest VM (chained throttle filters applied to the disks per
https://github.com/qemu/qemu/blob/master/docs/throttle.txt)
2. take a snapshot
3. revert the snapshot
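For context, the chained setup in step 1 follows the pattern from docs/throttle.txt: each disk's qcow2 format node is wrapped by a per-disk throttle filter, which is in turn wrapped by a filter belonging to a shared group. A minimal sketch of that shape (node and group names here are illustrative, not taken from my real command below):

```shell
# One disk throttled by its own group (group0) and a shared group (group1);
# the guest device attaches to the outermost filter node.
qemu-system-x86_64 \
  -object throttle-group,id=group0,x-iops-total=200 \
  -object throttle-group,id=group1,x-iops-total=400 \
  -blockdev driver=file,node-name=file0,filename=image.qcow2 \
  -blockdev driver=qcow2,node-name=disk0,file=file0 \
  -blockdev driver=throttle,node-name=throttle0,throttle-group=group0,file=disk0 \
  -blockdev driver=throttle,node-name=throttle1,throttle-group=group1,file=throttle0 \
  -device virtio-blk,drive=throttle1
```

My full command further below uses the JSON form of -object/-blockdev, but the chain structure is the same.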

After step 3, the qemu process on the host consumes 100% CPU, and after I
log in to the guest VM, it responds to my commands slowly or not at all
(it worked well before reverting).

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  65455 root      20   0 9659924 891328 20132 R 100.3  5.4  29:39.93 qemu-system-x86

Does anybody know why this issue happens? Is it a bug, or am I
misunderstanding something?

my command:

qemu-system-x86_64 \
   -name ubuntu-20.04-vm,debug-threads=on \
   -machine pc-i440fx-jammy,usb=off,dump-guest-core=off \
   -accel kvm \
   -cpu 
Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,md-clear=on,stibp=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,tsx-ctrl=off,hle=off,rtm=off 
\
   -m 8192 \
   -overcommit mem-lock=off \
   -smp 2,sockets=1,dies=1,cores=1,threads=2 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram \
   -object memory-backend-ram,id=ram,size=8192M \
   -uuid d2d68f5d-bff0-4167-bbc3-643e3566b8fb \
   -display none \
   -nodefaults \
   -monitor stdio \
   -rtc base=utc,driftfix=slew \
   -no-shutdown \
   -boot strict=on \
   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
   -object 
'{"qom-type":"throttle-group","id":"limit0","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":200,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":200,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
     -object 
'{"qom-type":"throttle-group","id":"limit1","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":250,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":250,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
     -object 
'{"qom-type":"throttle-group","id":"limit2","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":300,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":300,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
     -object 
'{"qom-type":"throttle-group","id":"limit012","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":400,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":400,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
   -blockdev 
'{"driver":"file","filename":"/virt/images/focal-server-cloudimg-amd64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}' 
\
   -blockdev 
'{"node-name":"libvirt-4-format","read-only":false,"driver":"qcow2","file":"libvirt-4-storage","backing":null}' 
\
   -device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-4-format,id=virtio-disk0,bootindex=1 
\
   -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_1.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' 
\
   -blockdev 
'{"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":null}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-5-filter","throttle-group":"limit0","file":"libvirt-3-format"}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-6-filter","throttle-group":"limit012","file":"libvirt-5-filter"}' 
\
   -device 
virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-6-filter,id=virtio-disk1 \
   -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_2.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' 
\
   -blockdev 
'{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-3-filter","throttle-group":"limit1","file":"libvirt-2-format"}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-4-filter","throttle-group":"limit012","file":"libvirt-3-filter"}' 
\
   -device 
virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-4-filter,id=virtio-disk2 \
   -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_3.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' 
\
   -blockdev 
'{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-1-filter","throttle-group":"limit2","file":"libvirt-1-format"}' 
\
   -blockdev 
'{"driver":"throttle","node-name":"libvirt-2-filter","throttle-group":"limit012","file":"libvirt-1-filter"}' 
\
   -device 
virtio-blk-pci,bus=pci.0,addr=0x7,drive=libvirt-2-filter,id=virtio-disk3 \
   -netdev user,id=user0,hostfwd=tcp::2222-:22 \
   -device 
virtio-net-pci,netdev=user0,id=net0,mac=52:54:00:12:34:56,bus=pci.0,addr=0x3 
\
   -chardev pty,id=charserial0 \
   -device isa-serial,chardev=charserial0,id=serial0,index=0 \
   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 \
   -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
   -msg timestamp=on


snapshot and reverting:

(qemu) info status
VM status: running
(qemu) savevm snapshot1
(qemu) loadvm snapshot1
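(Side note: on QEMU 6.0 and later the same operations are also exposed via QMP as snapshot-save/snapshot-load, which run asynchronously as jobs. A sketch of the save side, assuming the vmstate and device node names from my command line above; the job-id is arbitrary:

```json
{ "execute": "snapshot-save",
  "arguments": {
    "job-id": "snapsave0",
    "tag": "snapshot1",
    "vmstate": "libvirt-4-format",
    "devices": ["libvirt-4-format", "libvirt-3-format"] } }
```

I used the HMP savevm/loadvm commands for this report, but I would expect the QMP path to exercise the same snapshot code.)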


my env:
:~# qemu-system-x86_64 --version
QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.17)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:    22.04
Codename:    jammy

-- 
Thanks and Regards,

Wu


Thread overview: 3+ messages
2024-03-27 23:32 Chun Feng Wu [this message]
2024-03-28 10:50 ` qemu process consumes 100% host CPU after reverting snapshot Chun Feng Wu
2024-04-01  5:05   ` Dongli Zhang
