From: Avi Kivity
Subject: Re: [RFC PATCH 0/2] Expose available KVM free memory slot count to help avoid aborts
Date: Tue, 25 Jan 2011 16:56:19 +0200
Message-ID: <4D3EE493.7090000@redhat.com>
References: <20110121233040.22262.68117.stgit@s20.home>
 <20110124093241.GA28654@amt.cnet>
 <4D3EA3D3.2050807@redhat.com>
 <1295966811.3230.59.camel@x201>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, kvm@vger.kernel.org, ddutile@redhat.com,
 mst@redhat.com, chrisw@redhat.com, jan.kiszka@siemens.com
To: Alex Williamson
In-Reply-To: <1295966811.3230.59.camel@x201>

On 01/25/2011 04:46 PM, Alex Williamson wrote:
> On Tue, 2011-01-25 at 12:20 +0200, Avi Kivity wrote:
> > On 01/24/2011 11:32 AM, Marcelo Tosatti wrote:
> > > On Fri, Jan 21, 2011 at 04:48:02PM -0700, Alex Williamson wrote:
> > > > When doing device assignment, we use cpu_register_physical_memory() to
> > > > directly map the qemu mmap of the device resource into the address
> > > > space of the guest.  The unadvertised feature of the register physical
> > > > memory code path on kvm, at least for this type of mapping, is that it
> > > > needs to allocate an index from a small, fixed array of memory slots.
> > > > Even better, if it can't get an index, the code aborts deep in the
> > > > kvm specific bits, preventing the caller from having a chance to
> > > > recover.
> > > >
> > > > It's really easy to hit this by hot adding too many assigned devices
> > > > to a guest (pretty easy to hit with too many devices at instantiation
> > > > time too, but the abort is slightly more bearable there).
> > > >
> > > > I'm assuming it's pretty difficult to make the memory slot array
> > > > dynamically sized.  If that's not the case, please let me know as
> > > > that would be a much better solution.
> > >
> > > Its not difficult to either increase the maximum number (defined as
> > > 32 now in both qemu and kernel) of static slots, or support dynamic
> > > increases, if it turns out to be a performance issue.
> >
> > We can't make it unbounded in the kernel, since a malicious user could
> > start creating an infinite amount of memory slots, pinning unbounded
> > kernel memory.
> >
> > If we make the limit much larger, we should start to think about
> > efficiency.  Every mmio vmexit is currently a linear scan of the memory
> > slot table, which is efficient at a small number of slots, but not at a
> > large number.  We could conceivably encode the "no slot" information
> > into a bit in the not-present spte.
>
> On the plus side, very, very few users need more than the current 32
> slot limit and the implementation presented likely results in fewer
> slots for the majority of the users.  We can maybe save efficiency
> issues until we start seeing problems there.  Thanks,

Well, we need a static cap, but certainly limiting the search to the
number of populated slots is an improvement.  We might keep the array
size static (but only use the populated part).

-- 
error compiling committee.c: too many arguments to function
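
For illustration, a minimal C sketch of the scheme discussed above: a
statically sized slot array whose lookup only scans the populated prefix,
so the cost of the linear search follows the number of slots actually in
use rather than the compile-time maximum.  The names here (slot_table,
find_slot, nused, SLOTS_MAX) are made up for the example and are not the
actual KVM structures or functions.

#include <stddef.h>

#define SLOTS_MAX 32                        /* static cap, as discussed above */

struct mem_slot {
	unsigned long base_gfn;             /* first guest frame number */
	unsigned long npages;               /* number of pages in the slot */
};

struct slot_table {
	int nused;                          /* populated slots, <= SLOTS_MAX */
	struct mem_slot slots[SLOTS_MAX];
};

/* Linear scan limited to the populated part of the static array. */
static struct mem_slot *find_slot(struct slot_table *t, unsigned long gfn)
{
	int i;

	for (i = 0; i < t->nused; i++) {
		struct mem_slot *s = &t->slots[i];

		if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
			return s;
	}
	return NULL;                        /* no slot: treated as mmio here */
}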