From mboxrd@z Thu Jan 1 00:00:00 1970
From: Konrad Rzeszutek Wilk
Subject: Re: [PATCH v13 10/11] pvqspinlock, x86: Enable PV qspinlock for KVM
Date: Tue, 2 Dec 2014 14:10:58 -0500
Message-ID: <20141202191058.GA357@laptop.dumpdata.com>
References: <1414613951-32532-1-git-send-email-Waiman.Long@hp.com> <1414613951-32532-11-git-send-email-Waiman.Long@hp.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <1414613951-32532-11-git-send-email-Waiman.Long@hp.com>
Sender: virtualization-bounces@lists.linux-foundation.org
Errors-To: virtualization-bounces@lists.linux-foundation.org
To: Waiman Long
Cc: linux-arch@vger.kernel.org, Rik van Riel, Raghavendra K T, kvm@vger.kernel.org, Oleg Nesterov, Peter Zijlstra, Scott J Norton, x86@kernel.org, Paolo Bonzini, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Ingo Molnar, David Vrabel, "H. Peter Anvin", xen-devel@lists.xenproject.org, Thomas Gleixner, "Paul E. McKenney", Linus Torvalds, Boris Ostrovsky, Douglas Hatch
List-Id: linux-arch.vger.kernel.org

On Wed, Oct 29, 2014 at 04:19:10PM -0400, Waiman Long wrote:
> This patch adds the necessary KVM specific code to allow KVM to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.
>
> Two KVM guests of 20 CPU cores (2 nodes) were created for performance
> testing in one of the following two configurations:
> 1) Only 1 VM is active
> 2) Both VMs are active and they share the same 20 physical CPUs
>    (200% overcommit)
>
> The tests run included the disk workload of the AIM7 benchmark on
> both ext4 and xfs RAM disks at 3000 users on a 3.17 based kernel. The
> "ebizzy -m" test and futextest were also run and their performance
> data were recorded. With two VMs running, the "idle=poll" kernel
> option was added to simulate a busy guest.
> If PV qspinlock is not
> enabled, an unfair lock will be used automatically in a guest.

What is the unfair lock? Isn't it just using a byte lock at this point?