From mboxrd@z Thu Jan  1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH 0/8] use jump labels to streamline common APIC configuration
Date: Mon, 06 Aug 2012 11:35:56 +0300
Message-ID: <501F81EC.8040705@redhat.com>
References: <1344171513-4659-1-git-send-email-gleb@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: Gleb Natapov, kvm@vger.kernel.org, mtosatti@redhat.com
To: Eric Northup
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:21200 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753720Ab2HFIf7 (ORCPT );
	Mon, 6 Aug 2012 04:35:59 -0400
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On 08/05/2012 10:30 PM, Eric Northup wrote:
> On Sun, Aug 5, 2012 at 5:58 AM, Gleb Natapov wrote:
>> APIC code has a lot of checks for apic presence and apic HW/SW enable
>> state. The most common configuration is when each vcpu has an in-kernel
>> apic and it is fully enabled. This patch series uses jump labels to
>> turn the checks into nops in the common case.
>
> What is the target workload and how does the performance compare? As
> a naive question, how different is it from just using gcc branch
> hints?

We saw about 1.3% of cpu time spent in kvm_apic_present() in an OLTP
benchmark. I imagine most of it would be gone by just inlining. An
exception is the call from kvm_irq_delivery_to_apic(), which happens
from a different cpu and can therefore cause a cache miss if something
in the same cache line as vcpu->arch.apic is modified frequently.

Come to think of it, if we implemented the irq routing cache (which
eliminates the loop in kvm_irq_delivery_to_apic()) we might improve
things quite a lot as well.

-- 
error compiling committee.c: too many arguments to function
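
A minimal sketch of the jump-label pattern under discussion, using the
static_key API that kernels of this era provide. This is not Gleb's
actual patch: the key name, the helper names, and the choice to key on
hardware disablement are illustrative assumptions, and the code assumes
KVM's arch/x86/kvm/lapic.h context for struct kvm_lapic.

#include <linux/jump_label.h>

/*
 * Hypothetical key: false (the common case) means every in-kernel
 * APIC is hardware-enabled.
 */
static struct static_key apic_hw_disabled = STATIC_KEY_INIT_FALSE;

static inline bool kvm_apic_hw_enabled(struct kvm_lapic *apic)
{
	/*
	 * static_key_false() compiles to a single nop while the key is
	 * false; the test below only runs after the slow path patches
	 * the jump site, i.e. once some guest disables its APIC.
	 */
	if (static_key_false(&apic_hw_disabled))
		return apic->vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE;
	return true;
}

/* Rare slow path, e.g. from a write to the APIC base MSR. */
void kvm_apic_update_hw_disabled(bool disabled)
{
	if (disabled)
		static_key_slow_inc(&apic_hw_disabled);
	else
		static_key_slow_dec(&apic_hw_disabled);
}

This also answers the branch-hint question: a gcc unlikely() hint still
emits the load and compare on the hot path, whereas the patched nop
removes them entirely until the key actually flips.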
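And one plausible shape for the irq routing cache mentioned above:
remember the vcpu that last matched a physical destination so repeated
deliveries skip the scan in kvm_irq_delivery_to_apic(). The cache
structure and lookup helper below are hypothetical; kvm_for_each_vcpu()
and kvm_apic_match_physical_addr() are existing KVM primitives, and
invalidation on APIC id or enable-state changes is omitted for brevity.

struct kvm_irq_dest_cache {
	u32 dest_id;		/* physical destination APIC id */
	struct kvm_vcpu *vcpu;	/* NULL means the entry is invalid */
};

static struct kvm_vcpu *irq_dest_lookup(struct kvm *kvm,
					struct kvm_irq_dest_cache *cache,
					u32 dest_id)
{
	struct kvm_vcpu *vcpu;
	int i;

	/* Fast path: the last delivery went to the same destination. */
	if (cache->vcpu && cache->dest_id == dest_id)
		return cache->vcpu;

	/* Slow path: the all-vcpus scan this cache is meant to eliminate. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (kvm_apic_present(vcpu) &&
		    kvm_apic_match_physical_addr(vcpu->arch.apic, dest_id)) {
			cache->dest_id = dest_id;
			cache->vcpu = vcpu;
			return vcpu;
		}
	}
	return NULL;
}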