From: Bronek Kozicki
Subject: Re: vhost-[pid] 100% CPU
Date: Sun, 06 Apr 2014 18:35:04 +0100
Message-ID: <53419048.6090308@spamcop.net>
To: Simon Chen, kvm@vger.kernel.org

On 06/04/2014 15:06, Simon Chen wrote:
> Hello,
>
> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs use vhost-net in a
> typical OpenStack setup: VM1 -> tap -> Linux bridge -> OVS -> host1 ->
> physical network -> host2 -> OVS -> Linux bridge -> tap -> VM2.
>
> Under heavy network load, the vhost-[pid] process on the receiving
> side uses 100% CPU, and the one on the sending side is over 85%
> utilized.
>
> I am seeing unsatisfactory VM-to-VM network performance (using iperf
> with 16 concurrent TCP connections I only get 1.5 Gbps, while I have
> heard of people reaching at least 6 Gbps), and I wonder whether it has
> something to do with vhost-net maxing out a CPU. If so, is there
> anything I can tune in the system?

You could dedicate a network card to your virtual machine, using PCI
passthrough.

B.
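
PS. A rough sketch of what that looks like with vfio on an Intel host;
the PCI address 01:00.0 and the 8086 10fb vendor/device ID below are
placeholders, substitute the values for your own card:

  # host kernel command line: enable the IOMMU (VT-d)
  intel_iommu=on

  # detach the NIC from its host driver and hand it to vfio-pci
  modprobe vfio-pci
  echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id

  # pass the device to the guest
  qemu-system-x86_64 ... -device vfio-pci,host=01:00.0

The guest then drives the card directly, so the vhost-net threads and
the bridge/OVS path are taken out of the picture for that interface.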