From: Tyler Sanderson
Date: Mon, 3 Feb 2020 15:16:04 -0800
Subject: Re: Balloon pressuring page cache
To: Alexander Duyck
Cc: "Michael S. Tsirkin", David Hildenbrand, "Wang, Wei W",
 "virtualization@lists.linux-foundation.org", David Rientjes,
 "linux-mm@kvack.org", Michal Hocko
In-Reply-To: <2584af9b8d358faf27ee838fdab2be594e255433.camel@linux.intel.com>

On Mon, Feb 3, 2020 at 1:22 PM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:

> On Mon, 2020-02-03 at 12:32 -0800, Tyler Sanderson wrote:
> > There were apparently good reasons for moving away from the OOM notifier
> > callback:
> > https://lkml.org/lkml/2018/7/12/314
> > https://lkml.org/lkml/2018/8/2/322
> >
> > In particular, the OOM notifier is worse than the shrinker because:
> > - It is last-resort, which means the system has already gone through
> >   heroics to prevent OOM. Those heroic reclaim efforts are expensive
> >   and impact application performance.
> > - It lacks understanding of NUMA or other OOM constraints.
> > - It has a higher potential for bugs due to the subtlety of the
> >   callback context.
> >
> > Given the above, I think the shrinker API certainly makes the most
> > sense _if_ the balloon size is static. In that case memory should be
> > reclaimed from the balloon early and proportionally to balloon size,
> > which the shrinker API achieves.
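(To make the proportionality point concrete, here is a toy userspace model of the shrinker's count/scan contract. This is purely illustrative — every name and number below is invented, and it is not kernel code.)

```python
# Toy model of shrinker-style reclaim: each reclaim pass asks every
# registered "shrinker" how many objects it holds (the count side of the
# contract) and then scans each one in proportion to that count (the
# scan side). Illustration only -- names and scaling are invented.

def proportional_reclaim(caches, pages_needed):
    """Reclaim roughly pages_needed pages, spread across caches by size."""
    total = sum(caches.values())
    reclaimed = {}
    for name, size in caches.items():
        # Each cache gives up a share proportional to its current size.
        share = min(size, pages_needed * size // total)
        caches[name] = size - share
        reclaimed[name] = share
    return reclaimed

caches = {"page_cache": 6000, "slab": 2000, "balloon": 8000}
got = proportional_reclaim(caches, 1000)
# The balloon, being the largest cache here, contributes the largest share.
```

A big static balloon is reclaimed from early and heavily; a small one is mostly left alone — which is the behavior the quoted argument wants.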
> The problem is the shrinker doesn't have any concept of tiering or
> priority. I suspect the reason for using the OOM notification is because
> in practice it should be the last thing we are pulling memory out of,
> with things like page cache and slab caches being first. Once we have
> pages that are leaked out of the balloon by the shrinker it will trigger
> the balloon wanting to reinflate.

Deciding whether to trade IO performance (page cache) for memory-usage
efficiency (balloon) seems use-case dependent. Deciding when to re-inflate
is a similar policy choice.

If the balloon's shrinker priority is hard-coded to "last-resort" then
there would be no way to implement a policy where page cache growth could
shrink the balloon. The current balloon implementation allows the host to
implement this policy and tune the tradeoff between balloon and page cache.

> Ideally if the shrinker is running we shouldn't be able to reinflate the
> balloon, and if we are reinflating the balloon we shouldn't need to run
> the shrinker. The fact that we can do both at the same time is
> problematic.

I agree that this is inefficient.

> > However, if the balloon is inflating and intentionally causing memory
> > pressure then this results in the inefficiency pointed out earlier.
> >
> > If the balloon is inflating but not causing memory pressure then there
> > is no problem with either API.

> The entire point of the balloon is to cause memory pressure. Otherwise
> essentially all we are really doing is hinting since the guest doesn't
> need the memory and isn't going to use it any time soon.

Causing memory pressure is just a mechanism to achieve increased reclaim.
If there were a better mechanism (like the fine-grained cache shrinking
one discussed below) then I think the balloon device would be perfectly
justified in using that instead (and maybe "balloon" becomes a misnomer.
Oh well).
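(The shrink-while-reinflating churn described above can be made concrete with a toy simulation. All names and numbers are invented; this models the feedback loop, not the actual driver logic.)

```python
# Toy simulation of the shrink/reinflate feedback loop: the shrinker
# leaks pages out of the balloon under memory pressure, while the device
# keeps reinflating toward its fixed target. Illustration only.

def simulate_churn(target_pages, steps, leak_per_step=100):
    balloon = target_pages
    churn = 0  # total pages moved in either direction
    for _ in range(steps):
        # Shrinker runs: pages leak out of the balloon.
        balloon -= leak_per_step
        churn += leak_per_step
        # Device notices it is below target and reinflates.
        refill = target_pages - balloon
        balloon += refill
        churn += refill
    return balloon, churn

balloon, churn = simulate_churn(target_pages=10_000, steps=5)
# Net balloon size is unchanged, yet 1,000 pages of work were done.
```

The net balloon size never moves, but every cycle burns inflate/deflate work in both the guest and the host — which is exactly the inefficiency being conceded here.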
> > This suggests another route: rather than cause memory pressure to
> > shrink the page cache, the balloon could issue the equivalent of
> > "echo 3 > /proc/sys/vm/drop_caches".
> > Of course ideally, we want to be more fine-grained than "drop
> > everything". We really want an API that says "drop everything that
> > hasn't been accessed in the last 5 minutes".
> >
> > This would eliminate the need for the balloon to cause memory pressure
> > at all, which avoids the inefficiency in question. Furthermore, this
> > pairs nicely with the FREE_PAGE_HINT feature.

> Something similar was brought up in the discussion we had about this in
> my patch set. The problem is, by trying to use a value like "5 minutes"
> it implies that we are going to need to track some extra state somewhere
> to determine that value.
>
> An alternative is to essentially just slowly shrink memory for the
> guest. We had some discussion about this in another thread, and the
> following code example was brought up as a way to go about doing that:
>
> https://github.com/Conan-Kudo/omv-kernel-rc/blob/master/0154-sysctl-vm-Fine-grained-cache-shrinking.patch
>
> The idea is you essentially just slowly bleed the memory from the guest
> by specifying some amount of MB of cache to be freed on some regular
> interval.

Makes sense. Whatever API is settled on, I'd just propose that we allow
the host to invoke it via the balloon device, since the host has a
host-global view of memory and can make decisions that an individual
guest cannot.

Alex, what is the status of your fine-grained cache shrinking patch? It
seems like a really good idea.

> Thanks.
>
> - Alex
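P.S. For what it's worth, the slow-bleed policy the linked patch implements can be sketched in userspace like this. This is a toy model with invented names and units — the real patch exposes a sysctl knob, not a loop like this.

```python
# Sketch of the "slow bleed" idea: free a fixed amount of cache per
# interval until a free-memory target is met, instead of dropping
# everything at once. Userspace illustration only -- all names invented.

def slow_bleed(cache_mb, bleed_mb_per_tick, target_free_mb, free_mb):
    """Bleed cache into free memory, bleed_mb_per_tick MB per interval."""
    ticks = 0
    while free_mb < target_free_mb and cache_mb > 0:
        drop = min(bleed_mb_per_tick, cache_mb, target_free_mb - free_mb)
        cache_mb -= drop
        free_mb += drop
        ticks += 1
    return cache_mb, free_mb, ticks

cache, free, ticks = slow_bleed(cache_mb=4096, bleed_mb_per_tick=64,
                                target_free_mb=1024, free_mb=512)
# Reaches the target gradually: 8 intervals of 64 MB each.
```

The gradual pacing is the point: the guest gives memory back without the reclaim spike that a one-shot drop_caches (or balloon-induced pressure) would cause.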