From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Duyck
Subject: Re: [RFC][Patch v8 6/7] KVM: Enables the kernel to isolate and report free pages
Date: Thu, 7 Feb 2019 09:43:44 -0800
Message-ID:
References: <20190204201854.2328-1-nitesh@redhat.com>
 <20190204201854.2328-7-nitesh@redhat.com>
 <20190205153607-mutt-send-email-mst@kernel.org>
 <20190205165514-mutt-send-email-mst@kernel.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Cc: Nitesh Narayan Lal, kvm list, LKML, Paolo Bonzini, lcapitulino@redhat.com,
 pagupta@redhat.com, wei.w.wang@intel.com, Yang Zhang, riel@surriel.com,
 david@redhat.com, dodgen@google.com, Konrad Rzeszutek Wilk,
 dhildenb@redhat.com, Andrea Arcangeli
To: "Michael S. Tsirkin"
Return-path:
In-Reply-To: <20190205165514-mutt-send-email-mst@kernel.org>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Tue, Feb 5, 2019 at 3:21 PM Michael S. Tsirkin wrote:
>
> On Tue, Feb 05, 2019 at 04:54:03PM -0500, Nitesh Narayan Lal wrote:
> >
> > On 2/5/19 3:45 PM, Michael S. Tsirkin wrote:
> > > On Mon, Feb 04, 2019 at 03:18:53PM -0500, Nitesh Narayan Lal wrote:
> > >> This patch enables the kernel to scan the per cpu array and
> > >> compress it by removing the repetitive/re-allocated pages.
> > >> Once the per cpu array is completely filled with pages in the
> > >> buddy it wakes up the kernel per cpu thread which re-scans the
> > >> entire per cpu array by acquiring a zone lock corresponding to
> > >> the page which is being scanned. If the page is still free and
> > >> present in the buddy it tries to isolate the page and adds it
> > >> to another per cpu array.
> > >>
> > >> Once this scanning process is complete and if there are any
> > >> isolated pages added to the new per cpu array kernel thread
> > >> invokes hyperlist_ready().
> > >>
> > >> In hyperlist_ready() a hypercall is made to report these pages to
> > >> the host using the virtio-balloon framework. In order to do so
> > >> another virtqueue 'hinting_vq' is added to the balloon framework.
> > >> As the host frees all the reported pages, the kernel thread returns
> > >> them back to the buddy.
> > >>
> > >> Signed-off-by: Nitesh Narayan Lal
> > >
> > > This looks kind of like what early iterations of Wei's patches did.
> > >
> > > But this has lots of issues, for example you might end up with
> > > a hypercall per a 4K page.
> > > So in the end, he switched over to just reporting only
> > > MAX_ORDER - 1 pages.
> > You mean that I should only capture/attempt to isolate pages with order
> > MAX_ORDER - 1?
> > >
> > > Would that be a good idea for you too?
> > Will it help if we have a threshold value based on the amount of memory
> > captured instead of the number of entries/pages in the array?
>
> This is what Wei's patches do at least.

So in the solution I had posted I was looking more at HUGETLB_PAGE_ORDER
and above as the size of pages to provide the hints on [1]. The advantage
to doing that is that you can also avoid fragmenting huge pages, which in
turn can cause what looks like a memory leak as the memory subsystem
attempts to reassemble huge pages [2]. In my mind a 2MB page makes good
sense in terms of the size of things to be performing hints on, as
anything smaller than that just ends up being a bunch of extra work and
causes a lot of fragmentation. The only issue with limiting things at an
arbitrary boundary like that is that you have to hook into the buddy
allocator to catch the cases where a page has been merged up into that
range.
[1] https://lkml.org/lkml/2019/2/4/903
[2] https://blog.digitalocean.com/transparent-huge-pages-and-alternative-memory-allocators/
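
To make the HUGETLB_PAGE_ORDER cutoff above concrete, here is a minimal
illustrative sketch. It is not code from the patch series;
hint_queue_add() is a hypothetical stand-in for whatever mechanism
actually queues a free block for reporting to the host, and only the
order check reflects the suggestion being discussed.

/*
 * Illustrative sketch only, not code from the patch series. The point
 * is just the order threshold: skip hinting for anything smaller than
 * a huge page so that hints stay cheap and host-side huge pages are
 * not fragmented.
 */
#include <linux/mm.h>
#include <linux/hugetlb.h>

/* Hypothetical helper: queue a free block for reporting to the host. */
void hint_queue_add(unsigned long pfn, unsigned int order);

static void maybe_queue_hint(struct page *page, unsigned int order)
{
	/*
	 * Hinting individual 4K pages would mean a queue entry (and
	 * eventually a hypercall) per page; restricting hints to
	 * HUGETLB_PAGE_ORDER and above keeps the work bounded and
	 * avoids breaking up 2MB blocks.
	 */
	if (order < HUGETLB_PAGE_ORDER)
		return;

	hint_queue_add(page_to_pfn(page), order);
}

The catch, as noted above, is that a check like this would have to be
driven from the buddy merge path as well, so that pages which only reach
HUGETLB_PAGE_ORDER after coalescing still get picked up.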