linux-mm.kvack.org archive mirror
From: Mel Gorman <mgorman@techsingularity.net>
To: Joonsoo Kim <js1304@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Michal Hocko <mhocko@kernel.org>, Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 12/12] mm, page_alloc: Only enforce watermarks for order-0 allocations
Date: Wed, 9 Sep 2015 13:39:01 +0100	[thread overview]
Message-ID: <20150909123901.GA12432@techsingularity.net> (raw)
In-Reply-To: <CAAmzW4NbjqOpDhNKp7POVLZyaoUJa6YU5-B9Xz2b+crkzD25+g@mail.gmail.com>

On Tue, Sep 08, 2015 at 05:26:13PM +0900, Joonsoo Kim wrote:
> 2015-08-24 21:30 GMT+09:00 Mel Gorman <mgorman@techsingularity.net>:
> > The primary purpose of watermarks is to ensure that reclaim can always
> > make forward progress in PF_MEMALLOC context (kswapd and direct reclaim).
> > These assume that order-0 allocations are all that is necessary for
> > forward progress.
> >
> > High-order watermarks serve a different purpose. Kswapd had no high-order
> > awareness before they were introduced (https://lkml.org/lkml/2004/9/5/9).
> > This was particularly important when there were high-order atomic requests.
> > The watermarks both gave kswapd awareness and made a reserve for those
> > atomic requests.
> >
> > There are two important side-effects of this. The most important is that
> > a non-atomic high-order request can fail even though free pages are available
> > and the order-0 watermarks are ok. The second is that high-order watermark
> > checks are expensive as the free list counts for every order below the
> > requested order must be examined.
> >
> > With the introduction of MIGRATE_HIGHATOMIC it is no longer necessary to
> > have high-order watermarks. Kswapd and compaction still need high-order
> > awareness which is handled by checking that at least one suitable high-order
> > page is free.
> 
> I still don't think that one suitable high-order page is enough.
> If fragmentation happens, there may be no order-2 freepage. If kswapd
> prepares only 1 order-2 freepage, one of two successive process forks
> (AFAIK, fork on x86 and ARM requires an order-2 page) must go to direct
> reclaim to make an order-2 freepage. Kswapd cannot make an order-2 freepage
> in that short time. This causes latency for many high-order freepage
> requestors in fragmented situations.
> 

So what do you suggest instead? A fixed number, some other heuristic?
You have pushed several times now for the series to focus on the latency
of standard high-order allocations but again I will say that it is outside
the scope of this series. If you want to take steps to reduce the latency
of ordinary high-order allocation requests that can sleep then it should
be a separate series.

> > With the patch applied, there was little difference in the allocation
> > failure rates as the atomic reserves are small relative to the number of
> > allocation attempts. The expected impact is that there will never be an
> > allocation failure report that shows suitable pages on the free lists.
> 
> Due to the mismatch between the highatomic pageblock and the freepage
> count per allocation flag, allocation failure with suitable pages on the
> free lists is still possible.
> 

An allocation failure of this type would be a !atomic allocation that
cannot access the reserve. If such allocation requests could access the
reserve then it would defeat the whole point of the pageblock type.
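The reserve discount being discussed can be modelled in isolation. The
following is a user-space sketch, not the kernel code: passes_watermark and
its standalone parameters are illustrative stand-ins for the corresponding
logic in __zone_watermark_ok(). It shows why a !atomic request can fail the
watermark check even while reserved pages sit on the free lists:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the reserve handling in __zone_watermark_ok(): callers that
 * cannot use the highatomic reserve (!ALLOC_HARDER) have the reserved
 * pages subtracted from their view of free memory, while atomic callers
 * are instead allowed to dip a quarter below the watermark.
 */
bool passes_watermark(long free_pages, long min,
		      long nr_reserved_highatomic, bool atomic)
{
	if (!atomic)
		free_pages -= nr_reserved_highatomic;	/* reserve is off-limits */
	else
		min -= min / 4;				/* atomic may dip below */
	return free_pages > min;
}
```

With 100 pages free, a watermark of 50 and 60 pages reserved, a !atomic
caller sees only 40 effective pages and fails, while an atomic caller
passes: the failure is by design, not a suitable-pages-on-free-list bug.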

> > + * Return true if free base pages are above 'mark'. For high-order checks it
> > + * will return true if the order-0 watermark is reached and there is at least
> > + * one free page of a suitable size. Checking now avoids taking the zone lock
> > + * to check in the allocation paths if no pages are free.
> >   */
> >  static bool __zone_watermark_ok(struct zone *z, unsigned int order,
> >                         unsigned long mark, int classzone_idx, int alloc_flags,
> > @@ -2289,7 +2291,7 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
> >  {
> >         long min = mark;
> >         int o;
> > -       long free_cma = 0;
> > +       const bool atomic = (alloc_flags & ALLOC_HARDER);
> >
> >         /* free_pages may go negative - that's OK */
> >         free_pages -= (1 << order) - 1;
> > @@ -2301,7 +2303,7 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
> >          * If the caller is not atomic then discount the reserves. This will
> >          * over-estimate the size of the atomic reserve but it avoids a search
> >          */
> > -       if (likely(!(alloc_flags & ALLOC_HARDER)))
> > +       if (likely(!atomic))
> >                 free_pages -= z->nr_reserved_highatomic;
> >         else
> >                 min -= min / 4;
> > @@ -2309,22 +2311,30 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
> >  #ifdef CONFIG_CMA
> >         /* If allocation can't use CMA areas don't use free CMA pages */
> >         if (!(alloc_flags & ALLOC_CMA))
> > -               free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
> > +               free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
> >  #endif
> >
> > -       if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
> > +       if (free_pages <= min + z->lowmem_reserve[classzone_idx])
> >                 return false;
> > -       for (o = 0; o < order; o++) {
> > -               /* At the next order, this order's pages become unavailable */
> > -               free_pages -= z->free_area[o].nr_free << o;
> >
> > -               /* Require fewer higher order pages to be free */
> > -               min >>= 1;
> > +       /* order-0 watermarks are ok */
> > +       if (!order)
> > +               return true;
> > +
> > +       /* Check at least one high-order page is free */
> > +       for (o = order; o < MAX_ORDER; o++) {
> > +               struct free_area *area = &z->free_area[o];
> > +               int mt;
> > +
> > +               if (atomic && area->nr_free)
> > +                       return true;
> 
> How about checking area->nr_free first?
> In both the atomic and !atomic cases, nr_free == 0 means
> there are no appropriate pages.
> 
> So,
> if (!area->nr_free)
>     continue;
> if (atomic)
>     return true;
> ...
> 
> 
> > -               if (free_pages <= min)
> > -                       return false;
> > +               for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
> > +                       if (!list_empty(&area->free_list[mt]))
> > +                               return true;
> > +               }
> 
> I'm not sure this is really faster than before.
> We need to check three lists at each order.
> 
> Think about the order-2 case. I guess order-2 pages are usually on movable
> pageblocks rather than unmovable pageblocks. In that case,
> we need to check three lists, so the cost is higher.
> 

Ok, the extra check makes sense. Thanks.
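The agreed reordering can be sketched in user space as follows. This is an
illustrative model, not the kernel code: struct free_area_model and
high_order_page_free stand in for struct free_area and the high-order tail
of __zone_watermark_ok(), with list_nonempty[mt] modelling
!list_empty(&area->free_list[mt]):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ORDER 11
#define MIGRATE_PCPTYPES 3	/* unmovable, reclaimable, movable */

/* Simplified stand-in for the kernel's struct free_area. */
struct free_area_model {
	unsigned long nr_free;
	bool list_nonempty[MIGRATE_PCPTYPES];
};

/*
 * High-order check with the suggested ordering: skip empty orders before
 * doing any per-migratetype work. Atomic callers may also take pages from
 * the highatomic reserve, so any nr_free at a suitable order satisfies
 * them; !atomic callers must find a page on one of the ordinary lists.
 */
bool high_order_page_free(const struct free_area_model *area,
			  unsigned int order, bool atomic)
{
	for (unsigned int o = order; o < MAX_ORDER; o++) {
		if (!area[o].nr_free)
			continue;	/* nothing at this order for anyone */
		if (atomic)
			return true;	/* reserve pages count for atomic */
		for (int mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (area[o].list_nonempty[mt])
				return true;
		}
	}
	return false;
}
```

The early `continue` means a fragmented zone with many empty orders costs
one counter read per order, and the three-list walk only runs for orders
that actually hold free pages.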

-- 
Mel Gorman
SUSE Labs
