public inbox for linux-kernel@vger.kernel.org
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Peter Zijlstra <peterz@infradead.org>, Rik van Riel <riel@surriel.com>
Cc: kernel test robot <oliver.sang@intel.com>,
	oe-lkp@lists.linux.dev, lkp@intel.com,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	Ingo Molnar <mingo@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [tip:x86/mm] [x86/mm/tlb]  209954cbc7: will-it-scale.per_thread_ops 13.2% regression
Date: Thu, 28 Nov 2024 14:46:57 -0500	[thread overview]
Message-ID: <Z0jIsYsuo_9w16tK@localhost.localdomain> (raw)
In-Reply-To: <202411282207.6bd28eae-lkp@intel.com>

On 28-Nov-2024 10:57:35 PM, kernel test robot wrote:
> 
> 
> Hello,
> 
> kernel test robot noticed a 13.2% regression of will-it-scale.per_thread_ops on:
> 
> 
> commit: 209954cbc7d0ce1a190fc725d20ce303d74d2680 ("x86/mm/tlb: Update mm_cpumask lazily")
> https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git x86/mm

AFAIU, this commit changes the way TLB flushes are inhibited when
context-switching away from an mm. This means that one additional TLB
flush IPI is sent to a given CPU even after it has context-switched
away from the mm, and only then is the mm_cpumask bit cleared for that CPU.

This could result in additional TLB flush IPI overhead in specific
scenarios where the IPIs are typically triggered after a thread has
context-switched out.

May I recommend looking into a scheme similar to rseq mm_cid for this?
We're already adding a per-mm per-CPU data structure there:

mm_struct:
                /**
                 * @pcpu_cid: Per-cpu current cid.
                 *
                 * Keep track of the currently allocated mm_cid for each cpu.
                 * The per-cpu mm_cid values are serialized by their respective
                 * runqueue locks.
                 */
                struct mm_cid __percpu *pcpu_cid;

struct mm_cid {
        u64 time;
        int cid;
        int recent_cid;
};

I suspect you could use a similar per-CPU data structure per-mm
to keep track of the pending TLB flush mask, and update it with plain
loads/stores to per-CPU data rather than having to cache-line-bounce all
over the place due to frequent mm_cpumask atomic updates.
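A minimal userspace sketch of this scheme (not the kernel implementation;
the names mm_tlb_slot, tlb_slot_enter/leave and tlb_collect_targets are
hypothetical): each CPU owns one slot it updates with plain stores, and the
flush path scans the slots to pick IPI targets, so no shared cache line is
written on the context-switch path. C11 atomics stand in for the kernel's
per-CPU accessors:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 8

struct mm_tlb_slot {
	/* Written only by the owning CPU; read by the flusher.
	 * In the kernel this would live in __percpu memory so each
	 * CPU touches its own cache line. */
	atomic_int active;	/* 1 while this CPU may hold stale TLB entries */
};

struct mm_model {
	struct mm_tlb_slot slot[NR_CPUS];
};

/* Called by CPU `cpu` when it starts using the mm: plain store, no
 * read-modify-write on a shared cpumask. */
static void tlb_slot_enter(struct mm_model *mm, int cpu)
{
	atomic_store_explicit(&mm->slot[cpu].active, 1, memory_order_release);
}

/* Called by CPU `cpu` when it switches away (after its local flush). */
static void tlb_slot_leave(struct mm_model *mm, int cpu)
{
	atomic_store_explicit(&mm->slot[cpu].active, 0, memory_order_release);
}

/* Flush path: collect the CPUs that would need a TLB flush IPI by
 * scanning the per-CPU slots; returns the number of targets. */
static int tlb_collect_targets(struct mm_model *mm, bool targets[NR_CPUS])
{
	int cpu, n = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		targets[cpu] = atomic_load_explicit(&mm->slot[cpu].active,
						    memory_order_acquire) != 0;
		if (targets[cpu])
			n++;
	}
	return n;
}
```

The flush side pays a scan over NR_CPUS slots instead of one cpumask read,
which is the footprint/bounce trade-off discussed below.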

Then you get all the benefits without introducing a window where useless
TLB flush IPIs get triggered.

Of course it's slightly less compact in terms of memory footprint than a
cpumask, but you gain a lot by removing cache-line bouncing on this
frequent context-switch code path.

Thoughts?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


Thread overview: 24+ messages
2024-11-28 14:57 [tip:x86/mm] [x86/mm/tlb] 209954cbc7: will-it-scale.per_thread_ops 13.2% regression kernel test robot
2024-11-28 16:21 ` Peter Zijlstra
2024-11-29  1:44   ` Oliver Sang
2024-11-28 19:46 ` Mathieu Desnoyers [this message]
2024-11-29  2:52   ` Rik van Riel
2024-12-02 16:30     ` Mathieu Desnoyers
2024-12-02 18:10       ` Rik van Riel
2024-12-02 16:50   ` Dave Hansen
2024-12-03  0:43 ` [PATCH] x86,mm: only trim the mm_cpumask once a second Rik van Riel
2024-12-04 13:15   ` Oliver Sang
2024-12-04 16:07     ` Rik van Riel
2024-12-04 16:56     ` [PATCH v3] " Rik van Riel
2024-12-04 20:19       ` Mathieu Desnoyers
2024-12-05  2:03         ` [PATCH v4] " Rik van Riel
2024-12-06  1:30           ` Oliver Sang
2024-12-06  9:40           ` [tip: x86/mm] x86/mm/tlb: Only " tip-bot2 for Rik van Riel
2024-12-03  1:22 ` [PATCH -tip] x86,mm: only " Rik van Riel
2024-12-03 14:57   ` Mathieu Desnoyers
2024-12-03 19:48     ` [PATCH v2] " Rik van Riel
2024-12-03 20:05       ` Dave Hansen
2024-12-03 20:07         ` Rik van Riel
2024-12-04  0:46           ` Dave Hansen
2024-12-04  1:43             ` Rik van Riel
2024-12-03 23:27       ` Mathieu Desnoyers
