From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: KVM call agenda for Feb 9
Date: Tue, 09 Feb 2010 08:56:20 +0200
Message-ID: <4B710714.1020109@redhat.com>
References: <20100209012851.GJ25751@x200.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org, qemu-devel@nongnu.org
To: Chris Wright
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:44766 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752691Ab0BIG40
	(ORCPT ); Tue, 9 Feb 2010 01:56:26 -0500
In-Reply-To: <20100209012851.GJ25751@x200.localdomain>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 02/09/2010 03:28 AM, Chris Wright wrote:
> Please send in any agenda items you are interested in covering.
>

hpet overhead on large smp guests

I measured hpet consuming about half a core's worth of cpu on an idle
Windows 2008 R2 64-way guest.  This is mostly due to futex contention,
likely from the qemu mutex.

Options:
- ignore, this is about 1% of the entire system (but overhead might
  increase greatly if a workload triggers more hpet accesses)
- push hpet into the kernel; with virtio-net, virtio-blk, and
  kernel-hpet, there's little reason to exit into qemu
- rcuify/fine-grain qemu locks

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
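
[For illustration, a minimal stand-alone C sketch of the contention pattern
described in the mail above: many "vCPU" threads serializing on a single
global mutex for every emulated HPET counter read, which on Linux shows up
as futex traffic.  This is not QEMU code; the names (global_qemu_style_mutex,
hpet_counter) and the counts are hypothetical stand-ins.]

/*
 * Stand-alone sketch (not QEMU code) of the contention pattern noted
 * above: many "vCPU" threads serializing on one global mutex for every
 * emulated HPET counter read.  All names here are hypothetical.
 *
 * Build: gcc -O2 -pthread hpet-contention-sketch.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NVCPUS 64        /* mirrors the 64-way guest mentioned above */
#define READS  100000    /* emulated HPET counter reads per thread   */

static pthread_mutex_t global_qemu_style_mutex = PTHREAD_MUTEX_INITIALIZER;
static uint64_t hpet_counter;  /* stand-in for the HPET main counter */

static void *vcpu_thread(void *arg)
{
    int i;
    uint64_t sum = 0;

    (void)arg;
    for (i = 0; i < READS; i++) {
        /* Every "guest" timer access takes the one big lock, so all
         * threads pile up here; on Linux the sleeping waiters appear
         * as futex calls, matching the profile described above.      */
        pthread_mutex_lock(&global_qemu_style_mutex);
        sum += ++hpet_counter;
        pthread_mutex_unlock(&global_qemu_style_mutex);
    }
    return (void *)(uintptr_t)sum;
}

int main(void)
{
    pthread_t tid[NVCPUS];
    int i;

    for (i = 0; i < NVCPUS; i++)
        pthread_create(&tid[i], NULL, vcpu_thread, NULL);
    for (i = 0; i < NVCPUS; i++)
        pthread_join(tid[i], NULL);

    printf("final counter value: %llu\n", (unsigned long long)hpet_counter);
    return 0;
}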