From: bugzilla-daemon@bugzilla.kernel.org
Subject: [Bug 118191] New: performance regression since dynamic halt-polling
Date: Fri, 13 May 2016 07:03:35 +0000
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
To: kvm@vger.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=118191
Bug ID: 118191
Summary: performance regression since dynamic halt-polling
Product: Virtualization
Version: unspecified
Kernel Version: >=4.2
Hardware: All
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: kvm
Assignee: virtualization_kvm@kernel-bugs.osdl.org
Reporter: sp2.blub@speed.at
Regression: No
Since commit aca6ff29c40 ("KVM: dynamic halt-polling"), a VM under network load
with a virtio network device produces extremely high CPU usage on the host.
Bisected on git://github.com/torvalds/linux, master branch.
Testcase:
Host: a kernel starting with the above-mentioned commit (a Debian-based Linux,
PVE 4.2)
Using iperf to test:
$ iperf -us
Guest VM: qemu/kvm (running linux) with this network device:
-netdev
type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
-device
virtio-net-pci,mac=32:32:37:39:33:62,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
Connecting to the iperf server via:
$ iperf -uc 10.0.0.1 -b 100m
Behavior before the commit: ~60% CPU usage.
After the commit: 100% CPU usage and much lower network throughput.
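For context, the grow/shrink heuristic the commit introduces can be sketched
roughly like this (a Python paraphrase from my reading of the commit, not the
actual kernel code; the function names and the 10us starting value are my
assumptions):

```python
HALT_POLL_NS = 500_000        # cap on the per-vCPU poll window (ns)
HALT_POLL_NS_GROW = 2         # halt_poll_ns_grow module parameter
HALT_POLL_NS_SHRINK = 0       # halt_poll_ns_shrink module parameter

def grow(poll_ns):
    """Widen the per-vCPU poll window after a near-miss wakeup."""
    if poll_ns == 0 and HALT_POLL_NS_GROW:
        return 10_000          # start polling at 10us
    return poll_ns * HALT_POLL_NS_GROW

def shrink(poll_ns):
    """Narrow (or, with a 0 divisor, disable) the poll window."""
    if HALT_POLL_NS_SHRINK == 0:
        return 0
    return poll_ns // HALT_POLL_NS_SHRINK

def update_poll_window(poll_ns, block_ns):
    """Adjust the window based on how long the vCPU actually blocked."""
    if HALT_POLL_NS == 0:
        return poll_ns                  # dynamic polling disabled
    if block_ns <= poll_ns:
        return poll_ns                  # poll would have caught it; keep
    if poll_ns and block_ns > HALT_POLL_NS:
        return shrink(poll_ns)          # blocked far past the cap: shrink
    if poll_ns < HALT_POLL_NS and block_ns < HALT_POLL_NS:
        return grow(poll_ns)            # wakeup was close: grow the window
    return poll_ns
```

With the defaults (grow factor 2, shrink divisor 0), a steady stream of
wakeups landing just inside the cap keeps the window wide, so vCPUs spend
their halt time busy-polling on the host instead of sleeping; that would be
consistent with the 100% host CPU usage reported above.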
Related external threads:
http://pve.proxmox.com/pipermail/pve-user/2016-May/010302.html
https://forum.proxmox.com/threads/wrong-cpu-usage.27080/
The perf result linked in the mailing list thread is the same as the one I'm
seeing in my testcase after the commit.
Whereas before the commit, the perf output for the KVM process starts with:
5.93% [kernel] [k] kvm_arch_vcpu_ioctl_run
4.92% [kernel] [k] vmx_vcpu_run
3.62% [kernel] [k] native_write_msr_safe
3.14% [kernel] [k] _raw_spin_lock_irqsave
And vhost-net with:
9.66% [kernel] [k] __br_fdb_get
4.55% [kernel] [k] copy_user_enhanced_fast_string
3.01% [kernel] [k] update_cfs_shares
2.43% [kernel] [k] __netif_receive_skb_core
2.25% [kernel] [k] vhost_worker
Loading the kvm module with the two new parameters introduced by the above
commit set to 0 restores the original CPU usage (which makes sense, since as
far as I can tell this reverts to the old behavior):
halt_poll_ns_grow=0 halt_poll_ns_shrink=0
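Concretely, that corresponds to reloading the module with both parameters at 0,
or changing them at runtime if the kernel exposes them as writable under
/sys/module (kvm_intel shown; use kvm_amd on AMD hosts):

```shell
# At module load time: reload kvm with both parameters set to 0
modprobe -r kvm_intel kvm
modprobe kvm halt_poll_ns_grow=0 halt_poll_ns_shrink=0
modprobe kvm_intel

# Or at runtime, via the module parameter files
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_grow
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_shrink
```

Note that reloading kvm requires all VMs on the host to be shut down first.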
--
You are receiving this mail because:
You are watching the assignee of the bug.