Date: Fri, 12 Jun 2015 18:31:14 +0300
From: "dmitry.torokhov@gmail.com"
To: Philip Moltmann
Cc: "linux-kernel@vger.kernel.org", "pv-drivers@vmware.com",
 Xavier Deguillard, "gregkh@linuxfoundation.org",
 "akpm@linux-foundation.org"
Subject: Re: [PATCH 6/9] VMware balloon: Do not limit the amount of frees
 and allocations in non-sleep mode.
Message-ID: <20150612153114.GA4668@dtor-pixel>
References: <1429032576-2656-1-git-send-email-moltmann@vmware.com>
 <1429032576-2656-8-git-send-email-moltmann@vmware.com>
 <20150416205555.GA21618@dtor-ws>
 <1434053408.25956.31.camel@vmware.com>
 <20150612112026.GB24274@localhost>
 <1434121617.25956.60.camel@vmware.com>
In-Reply-To: <1434121617.25956.60.camel@vmware.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jun 12, 2015 at 03:06:56PM +0000, Philip Moltmann wrote:
> Hi,
> 
> thanks for taking so much interest in this driver. It is quite good
> that our design choices get scrutinized by non-current VMware
> employees.
> 
> > I understand that you negotiate the capabilities between the
> > hypervisor and the balloon driver; however, that was not my concern
> > (and I am sorry that I did not express it properly).
> > 
> > The patch description stated:
> > 
> > "Before this patch the slow memory transfer would cause the
> > destination VM to have internal swapping until all memory is
> > transferred. Now the memory is transferred fast enough so that the
> > destination VM does not swap."
> > 
> > As far as I understand, the improvements in memory transfer speed
> > hinge on the availability of batched operations; you, however,
> > remove the limits on non-sleep allocations unconditionally. Thus my
> > question: on older ESXi releases that do not support batched
> > operations, won't this cause the VM to start swapping?
> 
> Three improvements contribute to the overall faster speed:
> - batched operations reduce the hypervisor overhead per page
> - 2M instead of 4K buffers reduce the hypervisor overhead per page
> - removing the rate-limiting for non-sleep allocations allows the
>   guest operating system to reclaim memory as fast as it can instead
>   of artificially limiting it.
> 
> Each of these improvements is great by itself and helps a lot. The
> combination of all three makes a rather dramatic difference.
> 
> We cause hypervisor-level swapping if the balloon driver does not
> reclaim fast enough. As each of these improvements increases
> reclamation speed, we reduce the swapping risk in any case.
> 
> Unfortunately the first two improvements rely on hypervisor support;
> the last does not.

As far as I can understand, the justification for removing the limit
(improvement #3) is that we have #1 and #2; at least that is how I read
the patch description. I am asking: what if you are running on a
hypervisor that supports neither #1 nor #2?

What was the first release of ESXi that supports batching and 2M pages?
What about Workstation (I don't recall if it started using ballooning
at some point)?

Thanks.

-- 
Dmitry