From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sean Christopherson
Date: Thu, 24 Oct 2019 20:48:26 +0000
Subject: Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
Message-Id: <20191024204826.GE28043@linux.intel.com>
List-Id:
References: <20191022003537.13013-1-sean.j.christopherson@intel.com>
 <20191022003537.13013-15-sean.j.christopherson@intel.com>
 <642f73ee-9425-0149-f4f4-f56be9ae5713@redhat.com>
 <20191022152827.GC2343@linux.intel.com>
 <625e511f-bd35-3b92-0c6d-550c10fc5827@redhat.com>
 <20191022155220.GD2343@linux.intel.com>
 <5c61c094-ee32-4dcf-b3ae-092eba0159c5@redhat.com>
 <20191024193856.GA28043@linux.intel.com>
 <5320341c-1abb-610b-8f5e-090a6726a9b1@redhat.com>
In-Reply-To: <5320341c-1abb-610b-8f5e-090a6726a9b1@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Paolo Bonzini
Cc: James Hogan, Paul Mackerras, Christian Borntraeger, Janosch Frank,
 Radim Krčmář, Marc Zyngier, David Hildenbrand, Cornelia Huck,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, James Morse,
 Julien Thierry, Suzuki K Poulose, linux-mips@vger.kernel.org,
 kvm-ppc@vger.kernel.org, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

On Thu, Oct 24, 2019 at 10:24:09PM +0200, Paolo Bonzini wrote:
> On 24/10/19 21:38, Sean Christopherson wrote:
> > only
> > * its new index into the array is update.
>
> s/update/tracked/?

Ya, tracked is better.  Waffled between updated and tracked, chose poorly :-)

> > Returns the changed memslot's
> > * current index into the memslots array.
> > */
> > static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
> > 					    struct kvm_memory_slot *memslot)
> > {
> > 	struct kvm_memory_slot *mslots = slots->memslots;
> > 	int i;
> >
> > 	if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
> > 	    WARN_ON_ONCE(!slots->used_slots))
> > 		return -1;
> >
> > 	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
> > 		if (memslot->base_gfn > mslots[i + 1].base_gfn)
> > 			break;
> >
> > 		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
> >
> > 		/* Shift the next memslot forward one and update its index. */
> > 		mslots[i] = mslots[i + 1];
> > 		slots->id_to_index[mslots[i].id] = i;
> > 	}
> > 	return i;
> > }
> >
> > /*
> >  * Move a changed memslot forwards in the array by shifting existing slots with
> >  * a lower GFN toward the back of the array.  Note, the changed memslot itself
> >  * is not preserved in the array, i.e. not swapped at this time, only its new
> >  * index into the array is updated
>
> Same here?
>
> > * Note, slots are sorted from highest->lowest instead of lowest->highest for
> > * historical reasons.
>
> Not just that, the largest slot (with all RAM above 4GB) is also often
> at the highest address at least on x86.

Ah, increasing the odds of a quick hit on lookup... but only when using a
linear search.  The binary search starts in the middle, so that
optimization is also historical :-)

> But we could sort them by size now, so I agree to call these historical
> reasons.

That wouldn't work with the binary search though.

> The code itself is fine, thanks for the work on documenting it.
>
> Paolo