From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
Andrea Arcangeli <aarcange@redhat.com>,
linux-mm@kvack.org, Andi Kleen <ak@linux.intel.com>,
"H. Peter Anvin" <hpa@linux.intel.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 10/10] thp: implement refcounting for huge zero page
Date: Fri, 26 Oct 2012 01:10:31 +0300
Message-ID: <20121025221031.GA29910@otc-wbsnb-06>
In-Reply-To: <20121025143707.b212d958.akpm@linux-foundation.org>
On Thu, Oct 25, 2012 at 02:37:07PM -0700, Andrew Morton wrote:
> On Fri, 26 Oct 2012 00:22:51 +0300
> "Kirill A. Shutemov" <kirill@shutemov.name> wrote:
>
> > On Thu, Oct 25, 2012 at 02:05:24PM -0700, Andrew Morton wrote:
> > > hm. It's odd that the kernel didn't try to shrink slabs in this case.
> > > Why didn't it??
> >
> > nr_to_scan == 0 asks for the fast path. The shrinker callback can shrink
> > if it thinks it's a good idea.
>
> What nr_objects does your shrinker return in that case?
HPAGE_PMD_NR if the hzp is freeable, otherwise 0.
> > > > I also tried another scenario: usemem -n16 100M -r 1000. It creates real
> > > > memory pressure - no easily reclaimable memory. This time the callback is
> > > > called with nr_to_scan > 0 and we freed the hzp. Under pressure we fail to
> > > > allocate the hzp and the code takes the fallback path as it's supposed to.
> > > >
> > > > Do I need to check any other scenario?
> > >
> > > I'm thinking that if we do hit problems in this area, we could avoid
> > > freeing the hugepage unless the scan_control.priority is high enough.
> > > That would involve adding a magic number or a tunable to set the
> > > threshold.
> >
> > What about a ratelimit on the alloc path to force the fallback if we
> > allocate too often? Is it a good idea?
>
> mmm... ratelimit via walltime is always a bad idea. We could
> ratelimit by "number of times the shrinker was called", and maybe that
> would work OK, unsure.
>
> It *is* appropriate to use sc->priority to be more reluctant to release
> expensive-to-reestablish objects. But there is already actually a
> mechanism in the shrinker code to handle this: the shrink_control.seeks
> field. That was originally added to provide an estimate of "how
> expensive will it be to recreate this object if we were to reclaim it".
> So perhaps we could generalise that a bit, and state that the zero
> hugepage is an expensive thing.
I've proposed DEFAULT_SEEKS * 4 already.
> I don't think the shrink_control.seeks facility had ever been used much,
> so it's possible that it is presently mistuned or not working very
> well.
Yeah, a non-default .seeks is only used in the kvm mmu_shrinker and in a few
places in staging/android/.
--
Kirill A. Shutemov