public inbox for kvm@vger.kernel.org
From: bugzilla-daemon@bugzilla.kernel.org
To: kvm@vger.kernel.org
Subject: [Bug 118191] New: performance regression since dynamic halt-polling
Date: Fri, 13 May 2016 07:03:35 +0000	[thread overview]
Message-ID: <bug-118191-28872@https.bugzilla.kernel.org/> (raw)

https://bugzilla.kernel.org/show_bug.cgi?id=118191

            Bug ID: 118191
           Summary: performance regression since dynamic halt-polling
           Product: Virtualization
           Version: unspecified
    Kernel Version: >=4.2
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: kvm
          Assignee: virtualization_kvm@kernel-bugs.osdl.org
          Reporter: sp2.blub@speed.at
        Regression: No

Since commit aca6ff29c40 ("KVM: dynamic halt-polling"), a VM under network load
with a virtio network device produces extremely high CPU usage on the host.
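For context, the grow/shrink logic that the commit introduced can be sketched
roughly as follows (a simplified reconstruction, not the kernel's exact code;
the 10us base and the parameter defaults are approximations):

```python
# Rough sketch of the dynamic halt-polling grow/shrink logic added by
# commit aca6ff29c40 (simplified; names and constants approximate the
# kernel code, this is not a verbatim copy).

halt_poll_ns_grow = 2       # module parameter: multiplier when polling pays off
halt_poll_ns_shrink = 0     # module parameter: divisor when polling is wasted

def grow_halt_poll_ns(poll_ns):
    # Start from a 10us base, then multiply; with halt_poll_ns_grow=0
    # the window can never leave 0, i.e. polling stays disabled.
    if poll_ns == 0 and halt_poll_ns_grow:
        return 10000
    return poll_ns * halt_poll_ns_grow

def shrink_halt_poll_ns(poll_ns):
    # A shrink divisor of 0 resets the window entirely.
    if halt_poll_ns_shrink == 0:
        return 0
    return poll_ns // halt_poll_ns_shrink
```

With both parameters set to 0 neither function can ever move the window off 0,
so the vCPU never busy-polls before halting, which matches the pre-commit
behavior described below.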

Bisected on git://github.com/torvalds/linux master

Testcase:

Host: running a kernel that contains the above-mentioned commit (a Debian-based
distribution, PVE 4.2). Start an iperf UDP server:
 $ iperf -us

Guest VM: qemu/kvm (running linux) with this network device:
 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
 -device virtio-net-pci,mac=32:32:37:39:33:62,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
Connecting to the iperf server via:
 $ iperf -uc 10.0.0.1 -b 100m

Behavior before the commit: ~60% CPU usage.
After the commit: 100% CPU usage and much lower network throughput.

Related external threads:
http://pve.proxmox.com/pipermail/pve-user/2016-May/010302.html
https://forum.proxmox.com/threads/wrong-cpu-usage.27080/

The perf output linked in the mailing-list thread matches what I see in my
testcase after the commit:
(<https://gist.github.com/gilou/15b620a7a067fd1d58a7616942e025b4#file-perf_virtionet_4-4-txt>)
Before the commit, the perf output for the KVM process starts with:
5.93% [kernel] [k] kvm_arch_vcpu_ioctl_run
4.92% [kernel] [k] vmx_vcpu_run
3.62% [kernel] [k] native_write_msr_safe
3.14% [kernel] [k] _raw_spin_lock_irqsave

And for the vhost-net thread with:
9.66% [kernel] [k] __br_fdb_get
4.55% [kernel] [k] copy_user_enhanced_fast_string
3.01% [kernel] [k] update_cfs_shares
2.43% [kernel] [k] __netif_receive_skb_core
2.25% [kernel] [k] vhost_worker
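Profiles like the ones above can be gathered with, e.g. (a sketch; <pid> is a
placeholder for the actual QEMU process or vhost kernel thread PID):

```shell
# Sample the given process for 10 seconds, then print the top symbols.
# Run once for the QEMU process and once for the vhost-<pid> thread.
perf record -p <pid> -- sleep 10
perf report --stdio --sort symbol | head -n 20
```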

Loading the kvm module with the two new parameters introduced by the above
commit set to 0 restores the original CPU usage (which makes sense, since as
far as I can tell this disables the dynamic polling and reverts to the old
behavior):
  halt_poll_ns_grow=0 halt_poll_ns_shrink=0
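For reference, the same effect can be achieved without reloading the module (a
sketch; this assumes the parameters are runtime-writable via sysfs, as they
appear to be here):

```shell
# Disable dynamic halt-polling at runtime via the kvm module parameters
# (requires root; paths as exposed under /sys/module/kvm/parameters/):
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_grow
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_shrink

# Or persist the setting across reboots with a modprobe config file:
echo "options kvm halt_poll_ns_grow=0 halt_poll_ns_shrink=0" \
    > /etc/modprobe.d/kvm-halt-poll.conf
```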

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


Thread overview: 3+ messages
2016-05-13  7:03 bugzilla-daemon [this message]
2016-05-13  7:54 ` [Bug 118191] performance regression since dynamic halt-polling bugzilla-daemon
2016-05-13  8:08 ` bugzilla-daemon
