From: ypodemsk@redhat.com
To: Peter Zijlstra <peterz@infradead.org>
Cc: mtosatti@redhat.com, ppandit@redhat.com, david@redhat.com,
linux@armlinux.org.uk, mpe@ellerman.id.au, npiggin@gmail.com,
christophe.leroy@csgroup.eu, hca@linux.ibm.com,
gor@linux.ibm.com, agordeev@linux.ibm.com,
borntraeger@linux.ibm.com, svens@linux.ibm.com,
davem@davemloft.net, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
keescook@chromium.org, paulmck@kernel.org, frederic@kernel.org,
will@kernel.org, ardb@kernel.org, samitolvanen@google.com,
juerg.haefliger@canonical.com, arnd@arndb.de,
rmk+kernel@armlinux.org.uk, geert+renesas@glider.be,
linus.walleij@linaro.org, akpm@linux-foundation.org,
sebastian.reichel@collabora.com, rppt@kernel.org,
aneesh.kumar@linux.ibm.com, x86@kernel.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/2] send tlb_remove_table_smp_sync IPI only to necessary CPUs
Date: Thu, 22 Jun 2023 16:11:32 +0300
Message-ID: <7a9f193e6fa9db1d5fa0eb4a91927a866909f13c.camel@redhat.com>
In-Reply-To: <20230621074337.GF2046280@hirez.programming.kicks-ass.net>
On Wed, 2023-06-21 at 09:43 +0200, Peter Zijlstra wrote:
> On Tue, Jun 20, 2023 at 05:46:16PM +0300, Yair Podemsky wrote:
> > Currently the tlb_remove_table_smp_sync IPI is sent to all CPUs
> > indiscriminately; this causes unnecessary work and delays, notably
> > in real-time use cases and on isolated CPUs. This series limits
> > the IPI to the CPUs referencing the affected mm, and adds a config
> > option to differentiate architectures that support mm_cpumask from
> > those that don't, allowing safe use of this feature.
> >
> > Changes from v1:
> > - The previous version included a patch to send the IPI only to
> >   CPUs with context tracking in kernel space; this was removed due
> >   to race-condition concerns.
> > - For arches that do not maintain mm_cpumask, the mask used should
> >   be cpu_online_mask (Peter Zijlstra).
> >
>
> Would it not be much better to fix the root cause? As per the last
> time, there are patches that cure the THP abuse of this.
>
Hi Peter,
Thanks for your reply.
There are two code paths leading to this IPI: one is the THP path,
and the other is the page-allocation failure in tlb_remove_table().
It is the second path that we are most interested in, as it was found
to cause interference in a real-time process for a client (that
system did not have THP). So while curing the THP abuses is a good
thing, it unfortunately will not solve our root cause.
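For reference, here is a paraphrased sketch of that fallback path in
mm/mmu_gather.c (simplified, not verbatim): when allocating a batch
page fails, the table has to be freed immediately, and
tlb_remove_table_one() first synchronizes against concurrent software
page-table walkers with a broadcast IPI:

static void tlb_remove_table_one(void *table)
{
        /* Broadcast IPI so concurrent walkers leave the tables. */
        tlb_remove_table_sync_one();
        __tlb_remove_table(table);
}

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
        struct mmu_table_batch **batch = &tlb->batch;

        if (*batch == NULL) {
                *batch = (struct mmu_table_batch *)
                        __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
                if (*batch == NULL) {
                        /* No memory to batch: free right away,
                         * which is what triggers the IPI. */
                        tlb_table_invalidate(tlb);
                        tlb_remove_table_one(table);
                        return;
                }
                (*batch)->nr = 0;
        }
        (*batch)->tables[(*batch)->nr++] = table;
        if ((*batch)->nr == MAX_TABLE_BATCH)
                tlb_table_flush(tlb);
}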
If you have any idea of how to remove the tlb_remove_table_sync_one()
usage in the tlb_remove_table()->tlb_remove_table_one() call path --
the usage that's relevant for us -- that would be great. As long as
we can't remove it, I'm afraid all we can do is optimize it so that
it does not broadcast an IPI to all CPUs in the system, as this patch
does.
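Roughly, the series amounts to the following (a simplified sketch,
not the exact diff; the CONFIG_ARCH_HAS_CPUMASK_BITS guard follows
patch 1/2, but the exact plumbing of the mm argument down to
tlb_remove_table_sync_one() here is illustrative):

static void tlb_remove_table_smp_sync(void *arg)
{
        /* Simply deliver the interrupt. */
}

void tlb_remove_table_sync_one(struct mm_struct *mm)
{
#ifdef CONFIG_ARCH_HAS_CPUMASK_BITS
        /* Only CPUs that may be using this mm need the sync IPI. */
        on_each_cpu_mask(mm_cpumask(mm), tlb_remove_table_smp_sync,
                         NULL, true);
#else
        /* No mm_cpumask maintained: fall back to all online CPUs. */
        smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
#endif
}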
Thanks,
Yair
Thread overview: 16+ messages
2023-06-20 14:46 [PATCH v2 0/2] send tlb_remove_table_smp_sync IPI only to necessary CPUs Yair Podemsky
2023-06-20 14:46 ` [PATCH v2 1/2] arch: Introduce ARCH_HAS_CPUMASK_BITS Yair Podemsky
2023-06-20 14:46 ` [PATCH v2 2/2] mm/mmu_gather: send tlb_remove_table_smp_sync IPI only to MM CPUs Yair Podemsky
2023-06-21 17:42 ` Dave Hansen
2023-06-22 13:14 ` ypodemsk
2023-06-22 13:37 ` Dave Hansen
2023-06-26 14:36 ` ypodemsk
2023-06-26 15:23 ` Dave Hansen
2023-06-21 18:02 ` Nadav Amit
2023-06-22 13:57 ` ypodemsk
2023-06-23 3:38 ` Yang Shi
2023-07-03 13:57 ` Peter Zijlstra
2023-06-21 7:43 ` [PATCH v2 0/2] send tlb_remove_table_smp_sync IPI only to necessary CPUs Peter Zijlstra
2023-06-22 12:47 ` Marcelo Tosatti
2023-07-03 14:09 ` Peter Zijlstra
2023-06-22 13:11 ` ypodemsk [this message]