From: Anthony Liguori
Subject: Re: [RFT] kvm with Windows optimization
Date: Fri, 26 Oct 2007 08:52:02 -0500
Message-ID: <4721F102.9010203@codemonkey.ws>
References: <4720D122.4070606@qumranet.com> <4720D3F0.8010103@qumranet.com> <20071025200729.3f09d21d@holly> <4720DC12.8050303@qumranet.com> <4720DE65.2030209@qumranet.com> <4720E2AF.3070404@codemonkey.ws> <4720E53A.7040803@qumranet.com> <4720E65F.8070204@codemonkey.ws> <4720E80E.5070506@qumranet.com> <472103E8.8070605@codemonkey.ws> <4721A255.5050603@qumranet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: Jindrich Makovicka, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
To: Avi Kivity
In-Reply-To: <4721A255.5050603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
List-Id: kvm.vger.kernel.org

Avi Kivity wrote:
> Anthony Liguori wrote:
>> Avi Kivity wrote:
>>
>>> Anthony Liguori wrote:
>>>
>>>>>> static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
>>>>>> {
>>>>>> 	int r;
>>>>>>
>>>>>> 	kvm_mmu_free_some_pages(vcpu);
>>>>>> 	r = mmu_topup_memory_cache(&vcpu->mmu_pte_chain_cache,
>>>>>> 				   pte_chain_cache, 4);
>>>>>> 	if (r)
>>>>>> 		goto out;
>>>>>> 	r = mmu_topup_memory_cache(&vcpu->mmu_rmap_desc_cache,
>>>>>> 				   rmap_desc_cache, 1);
>>>>>> 	if (r)
>>>>>> 		goto out;
>>>>>> 	r = mmu_topup_memory_cache_page(&vcpu->mmu_page_cache, 8);
>>>>>> 	if (r)
>>>>>> 		goto out;
>>>>>> 	r = mmu_topup_memory_cache(&vcpu->mmu_page_header_cache,
>>>>>> 				   mmu_page_header_cache, 4);
>>>>>> out:
>>>>>> 	return r;
>>>>>> }
>>>>>>
>>>>> These are the (4, 1, 8, 4) values in the calls to
>>>>> mmu_topup_memory_cache.  Perhaps one of them is too low.
>>>>>
>>>> Sure.  Would this be affected at all by your tpr patch?
>>> I believe not, but the code doesn't care what I believe.
>>>
>>>> IIUC, if this is the problem, it should be reproducible with the
>>>> latest git too?
>>>>
>>> One difference is that the tpr patch disables nx.  That causes
>>> Windows to go into 32-bit paging mode (nice that it has both pae and
>>> nonpae in the same kernel), which may change things.
>>>
>>> You can try booting your host with nx disabled to get the same
>>> effect (or disable nx cpuid in kvm).
>>>
>>
>> I've disabled NX in KVM and that didn't reproduce the issue in the
>> current git.
>>
>> If I double all of the memory caches, I can't seem to reproduce.
>> However, as soon as I reduce rmap_desc_cache back down to 1, I can
>> reproduce.
>>
>> I'll try to see if just setting the rmap_desc_cache line to 2 is
>> enough to make the problem go away.
>>
>> How can the guest provoke this BUG() based on the cache size?  Shouldn't
>> the cache size only affect performance?
>>
>
> The memory caches are a little misnamed; they're really preallocation
> buffers.
>
> They serve two purposes: to avoid allocation in atomic contexts
> (that's no longer needed since preempt notifiers) and to avoid complex
> error recovery paths.  We make sure there are enough objects to
> satisfy worst-case behavior and assume all allocations will succeed.

FWIW, two people on IRC just reported the same BUG in kvm-48 and the
nightly snapshot.  I asked them both to post more thorough descriptions
here.  That leads me to suspect that the problem wasn't introduced by
your TPR optimization.

Regards,

Anthony Liguori

> Regarding the rmap memory cache failure, I can't think of a reason why
> we'll need to add more than one rmap entry per fault.