public inbox for linux-kernel@vger.kernel.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: Usama Arif <usamaarif642@gmail.com>
Cc: akpm@linux-foundation.org, riel@surriel.com,
	zhaoyang.huang@unisoc.com, yuzhao@google.com, david@redhat.com,
	leitao@debian.org, huangzhaoyang@gmail.com, bharata@amd.com,
	willy@infradead.org, vbabka@suse.cz,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH] Revert "mm: skip CMA pages when they are not available"
Date: Thu, 22 Aug 2024 12:43:23 +0200	[thread overview]
Message-ID: <20240822104323.GA315708@cmpxchg.org> (raw)
In-Reply-To: <9060a32d-b2d7-48c0-8626-1db535653c54@gmail.com>

On Wed, Aug 21, 2024 at 03:53:21PM -0400, Usama Arif wrote:
> From 1aae7f04a5cb203ea2c3ede7973dd9eddbbd7a8b Mon Sep 17 00:00:00 2001
> From: Usama Arif <usamaarif642@gmail.com>
> Date: Wed, 21 Aug 2024 20:26:07 +0100
> Subject: [PATCH] Revert "mm: skip CMA pages when they are not available"
> 
> This reverts commit 5da226dbfce3a2f44978c2c7cf88166e69a6788b.
> 
> lruvec->lru_lock is highly contended and is held when calling
> isolate_lru_folios. If the lru has a large number of CMA folios
> consecutively, while the allocation type requested is not
> MIGRATE_MOVABLE, isolate_lru_folios can hold the lock for a very long
> time while it skips them. For an FIO workload, ~150 million order=0
> folios were skipped to isolate a few ZONE_DMA folios [1].
> This can cause lockups [1] and high memory pressure for extended periods
> of time [2].
> 
> [1] https://lore.kernel.org/all/CAOUHufbkhMZYz20aM_3rHZ3OcK4m2puji2FGpUpn_-DevGk3Kg@mail.gmail.com/
> [2] https://lore.kernel.org/all/ZrssOrcJIDy8hacI@gmail.com/
> 
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

I think this is the right move for now, until there is a robust
solution for the original issue.

But should b7108d66318abf3e060c7839eabcba52e9461568 be reverted along
with it? From its changelog:

    No observable issue without this patch on MGLRU, but logically it make
    sense to skip the CMA page reclaim when those pages can't be satisfied for
    the current allocation context.

Presumably it has the same risk/reward profile as it does in
conventional reclaim, with long skip runs while holding
lruvec->lru_lock.

