From: David Hildenbrand
Subject: Re: [RFC][Patch v8 0/7] KVM: Guest Free Page Hinting
Date: Mon, 18 Feb 2019 20:35:36 +0100
Message-ID: <4039c2e8-5db4-cddd-b997-2fdbcc6f529f@redhat.com>
References: <20190204201854.2328-1-nitesh@redhat.com>
 <20190218114601-mutt-send-email-mst@kernel.org>
 <44740a29-bb14-e6e6-2992-98d0ae58e994@redhat.com>
 <20190218122636-mutt-send-email-mst@kernel.org>
 <20190218140947-mutt-send-email-mst@kernel.org>
In-Reply-To: <20190218140947-mutt-send-email-mst@kernel.org>
To: "Michael S. Tsirkin"
Cc: Nitesh Narayan Lal, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 pbonzini@redhat.com, lcapitulino@redhat.com, pagupta@redhat.com,
 wei.w.wang@intel.com, yang.zhang.wz@gmail.com, riel@surriel.com,
 dodgen@google.com, konrad.wilk@oracle.com, dhildenb@redhat.com,
 aarcange@redhat.com, Alexander Duyck
List-Id: kvm.vger.kernel.org

On 18.02.19 20:16, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 07:29:44PM +0100, David Hildenbrand wrote:
>>>>> But really what business has something that is supposedly
>>>>> an optimization blocking a VCPU? We are just freeing up
>>>>> lots of memory, why is it a good idea to slow that
>>>>> process down?
>>>>
>>>> I first want to know that it is a problem before we declare it a
>>>> problem. I provided an example (s390x) where it does not seem to be a
>>>> problem. One hypercall ~every 512 frees. As simple as it can get.
>>>>
>>>> Not trying to deny that it could be a problem on x86, but then I
>>>> assume it is only a problem in specific setups.
>>>
>>> But which setups? How are we going to identify them?
>>
>> I guess it is simple (I should be careful with this word ;) ): as long
>> as you don't isolate + pin your CPUs in the hypervisor, you can expect
>> any kind of sudden hiccups. We're in a virtualized world. Real time is
>> one example.
>>
>> Using kernel threads like Nitesh does right now? They can be scheduled
>> out anytime by the hypervisor on the exact same CPU, unless you
>> isolate + pin in the hypervisor. So the same problem applies.
>
> Right, but we know how to handle this. Many deployments already use
> tools to detect host threads kicking VCPUs out.
> Getting a VCPU blocked by a kfree call would be something new.

Yes, and on s390x we already have some kfree() calls taking longer than
others. We have to identify when it is not okay.

>>> So I'm fine with a simple implementation, but the interface needs to
>>> allow the hypervisor to process hints in parallel while the guest is
>>> running. We can then fix any issues on the hypervisor without
>>> breaking guests.
>>
>> Yes, I am fine with defining an interface that theoretically lets us
>> change the implementation in the guest later. I consider this even a
>> prerequisite. IMHO the interface shouldn't be different, it will be
>> exactly the same.
>>
>> It is just a question of "who" calls the batch freeing and waits for
>> it. And as I outlined here, doing it without additional threads at
>> least avoids, for now, having to think about dynamic data structures
>> and about sometimes not being able to report "because the thread is
>> still busy reporting or wasn't scheduled yet".
>
> Sorry, I wasn't clear. I think we need the ability to change the
> implementation in the *host* later. IOW, don't rely on the host being
> synchronous.

I actually misread it. :)
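To make the batching above a bit more concrete, here is a minimal
sketch of what "one hypercall ~every 512 frees" could look like in the
freeing path. All names here (hint_on_free(), hint_hypercall(),
HINT_BATCH_SIZE) are made up for illustration, this is not an existing
API, and preemption/IRQ handling is left out:

#include <linux/mm.h>
#include <linux/percpu.h>

#define HINT_BATCH_SIZE 512

struct hint_batch {
	unsigned long pfns[HINT_BATCH_SIZE];
	unsigned int count;
};

static DEFINE_PER_CPU(struct hint_batch, hint_batch);

/*
 * Stub standing in for the actual hypercall that hands a batch of
 * free PFNs to the hypervisor.
 */
static void hint_hypercall(unsigned long *pfns, unsigned int count)
{
	/* would trap to the hypervisor here */
}

/* Would be called from the freeing path, e.g. via arch_free_page(). */
static void hint_on_free(struct page *page)
{
	struct hint_batch *b = this_cpu_ptr(&hint_batch);

	b->pfns[b->count++] = page_to_pfn(page);
	if (b->count == HINT_BATCH_SIZE) {
		/* one synchronous exit per HINT_BATCH_SIZE frees */
		hint_hypercall(b->pfns, b->count);
		b->count = 0;
	}
}

The interface towards the hypervisor would be exactly the same no
matter whether this is triggered directly from the freeing path or
from a thread; only the caller changes.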
Either way, there has to be a mechanism to synchronize. If we go via a
bare hypercall (as on s390x, and as in what Alexander proposes), it is
going to be a synchronous interface: with just a bare hypercall, there
will not really be any blocking on the guest side. Via virtio, I guess
it is waiting for a response to a request, right?

--

Thanks,

David / dhildenb