From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages
Date: Sun, 21 Apr 2013 22:09:29 +0800
Message-ID: <5173F319.2040106@linux.vnet.ibm.com>
References: <1366093973-2617-1-git-send-email-xiaoguangrong@linux.vnet.ibm.com>
 <20130421130346.GE8997@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: mtosatti@redhat.com, avi.kivity@gmail.com, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org
To: Gleb Natapov
In-Reply-To: <20130421130346.GE8997@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 04/21/2013 09:03 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>> This patchset is based on my previous two patchsets:
>> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
>> (https://lkml.org/lkml/2013/4/1/2)
>>
>> [PATCH v2 0/6] KVM: MMU: fast invalid all mmio sptes
>> (https://lkml.org/lkml/2013/4/1/134)
>>
>> Changelog:
>> V3:
>>   completely redesign the algorithm, please see below.
>>
> This looks pretty complicated. Is it still needed in order to avoid soft
> lockups after the "avoid potential soft lockup and unneeded mmu reload" patch?

Yes. I discussed this point with Marcelo:

======
BTW, to be honest, I do not think spin_needbreak is a good way - it does not
fix the hot-lock contention; it just spends more CPU time to avoid possible
soft lockups.
In particular, zap-all-shadow-pages makes other vcpus fault and contend for
mmu-lock; whenever zap-all-shadow-pages releases mmu-lock and waits, those
vcpus rebuild their page tables. So zap-all-shadow-pages takes a long time to
finish, and in the worst case, under intensive vcpu and memory usage, it never
completes at all. I still think the right way to fix this kind of problem is
to optimize mmu-lock.
======

Which parts scare you? Let's find a way to optimize them. ;)

For example, if you do not like unmap_memslot_rmap_nolock(), we can simplify
it - we can use walk_shadow_page_lockless_begin() and
walk_shadow_page_lockless_end() to protect the spte instead of
kvm->being_unmaped_rmap.

Thanks!