public inbox for linux-kernel@vger.kernel.org
From: Mel Gorman <mgorman@suse.de>
To: Bharata B Rao <bharata@amd.com>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
	peterz@infradead.org, riel@redhat.com
Subject: Re: [RFC PATCH 4/4] sched/numa: Don't update mm->numa_next_scan from fault path
Date: Tue, 5 Oct 2021 09:23:35 +0100	[thread overview]
Message-ID: <20211005082335.GN3891@suse.de> (raw)
In-Reply-To: <20211004105706.3669-5-bharata@amd.com>

On Mon, Oct 04, 2021 at 04:27:06PM +0530, Bharata B Rao wrote:
> p->numa_scan_period is typically scaled up or down from
> the fault path and mm->numa_next_scan is updated during
> scanning from the task_work context using cmpxchg.
> 
> However there is one case where the scan period is increased
> in the fault path, but mm->numa_next_scan
> 
>  - is immediately updated and
>  - updated without using cmpxchg
> 
> Both of the above seem unintended, so remove the update of
> mm->numa_next_scan from the fault path. The update will then
> happen from the task_work context as usual.
> 
> Signed-off-by: Bharata B Rao <bharata@amd.com>

I believe the update was intended because it aims to reduce scanning
when the task is either completely idle or its activity is in memory
ranges that are not influenced by numab. What is the user-visible
impact you observe?

My expectation is that in some cases this will increase the number of
PTE updates and migrations. It may even be a performance gain for some
workloads if it increases locality but in cases where locality is poor
(e.g. heavily shared regions or cross-node migrations), there will be a
loss due to increased numab activity.

Updating via cmpxchg would be ok to avoid potential collisions between
threads updating a shared mm.

-- 
Mel Gorman
SUSE Labs


Thread overview: 17+ messages
2021-10-04 10:57 [PATCH 0/4] A few autonuma cleanups Bharata B Rao
2021-10-04 10:57 ` [PATCH 1/4] sched/numa: Replace hard-coded number by a define in numa_task_group() Bharata B Rao
2021-10-05  8:18   ` Mel Gorman
2021-10-09 10:07   ` [tip: sched/core] " tip-bot2 for Bharata B Rao
2021-10-14 11:16   ` tip-bot2 for Bharata B Rao
2021-10-04 10:57 ` [PATCH 2/4] sched/numa: Remove the redundant member numa_group::fault_cpus Bharata B Rao
2021-10-05  8:21   ` Mel Gorman
2021-10-09 10:07   ` [tip: sched/core] " tip-bot2 for Bharata B Rao
2021-10-14 11:16   ` tip-bot2 for Bharata B Rao
2021-10-04 10:57 ` [PATCH 3/4] sched/numa: Fix a few comments Bharata B Rao
2021-10-05  8:22   ` Mel Gorman
2021-10-09 10:07   ` [tip: sched/core] " tip-bot2 for Bharata B Rao
2021-10-14 11:16   ` tip-bot2 for Bharata B Rao
2021-10-04 10:57 ` [RFC PATCH 4/4] sched/numa: Don't update mm->numa_next_scan from fault path Bharata B Rao
2021-10-05  8:23   ` Mel Gorman [this message]
2021-10-05  9:10     ` Bharata B Rao
2021-10-07 10:25       ` Mel Gorman
