From: Thomas Gleixner <tglx@linutronix.de>
To: "Russell King (Oracle)" <linux@armlinux.org.uk>,
	Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org,
	John Ogness <jogness@linutronix.de>,
	Arnd Bergmann <arnd@kernel.org>
Subject: Re: [PATCH] ARM: tlb: Prevent flushing insane large ranges one by one
Date: Wed, 24 May 2023 17:51:55 +0200
Message-ID: <871qj5n2w4.ffs@tglx>
In-Reply-To: <ZG3lh+l+yXTNg19q@shell.armlinux.org.uk>

On Wed, May 24 2023 at 11:23, Russell King wrote:
> On Wed, May 24, 2023 at 11:18:12AM +0100, Robin Murphy wrote:
>> > +static inline unsigned int __attribute_const__ read_cpuid_tlbsize(void)
>> > +{
>> > +	return 64 << ((read_cpuid(CPUID_TLBTYPE) >> 1) & 0x03);
>> > +}
>> 
>> This appears to be specific to Cortex-A9 - these bits are
>> implementation-defined, and it looks like on most other Arm Ltd. CPUs
>> they have no meaning at all, e.g. [1][2][3], but they could still hold some
>> wildly unrelated value on other implementations.

Bah.

> That sucks. I guess we'll need to decode the main CPU ID register and
> have a table, except for Cortex-A9 where we can read the TLB size.
>
> If that's not going to work either, then the MM layer needs to get
> fixed not to be so utterly stupid as to request a TLB flush over an
> insanely large range - or people will just have to put up with
> latency sucking on 32-bit ARM platforms.
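
A table like that could key off the MIDR part number. A minimal sketch,
for illustration only - the Cortex-A9 branch is the only one with an
architectural basis, real entries would have to come from the
respective TRMs:

        /*
         * Illustrative sketch, not a proposal: 0 means "unknown", so
         * the caller can fall back to an unconditional flush_tlb_all().
         */
        static unsigned int __attribute_const__ read_tlb_entries(void)
        {
                switch (read_cpuid_part()) {
                case ARM_CPU_PART_CORTEX_A9:
                        /* A9: TLBTYPE[2:1] encodes the main TLB size */
                        return 64 << ((read_cpuid(CPUID_TLBTYPE) >> 1) & 0x03);
                default:
                        return 0;
                }
        }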

The problem is that there are legitimate cases for large ranges. Even if
we could provide a list or an array of ranges, the batching in the vmap
layer can still end up with a large number of pages to flush.

There is an obviously CPU-specific crossover point where

      N * t(single) > t(all) + t(refill)

That needs some perf analysis, but I'd be truly surprised if $N were
large. On x86 the crossover point is around 32 pages.
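
In pseudo-kernel-C, with a made-up ARM_TLB_CROSSOVER_PAGES constant
standing in for whatever that analysis yields:

        /* Sketch only: ARM_TLB_CROSSOVER_PAGES is a placeholder, not
         * an existing kernel constant.
         */
        static void flush_tlb_kernel_range_capped(unsigned long start,
                                                  unsigned long end)
        {
                if (((end - start) >> PAGE_SHIFT) > ARM_TLB_CROSSOVER_PAGES)
                        flush_tlb_all();   /* N * t(single) > t(all) + t(refill) */
                else
                        flush_tlb_kernel_range(start, end);
        }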

Let's just assume 32 pages for that A9 too. That's 128k, which is not an
unreasonable size for large buffers. Though it's way under the batching
threshold of the vmalloc code, which scales logarithmically with the
number of online CPUs:

        fls(num_online_cpus()) * (32UL * 1024 * 1024);

That's 64M, i.e. 16k pages, for 2 CPUs... The reasoning there is that a
single flush-all is in sum way cheaper than 16k individual flushes right
at the point of each *free() operation.
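
Spelled out, assuming 4k pages and two online CPUs:

        fls(2)      = 2             /* scales with log2 of CPU count */
        2 * 32M     = 64M           /* lazy flush threshold in bytes */
        64M / 4k    = 16384 pages   /* the "16k pages" above         */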

Where the vmalloc layer can be less silly is the immediate flush
case, which only flushes 3 pages, but that won't show up yesterday.

For now (and eventual backporting) the occasional flush all is
definitely a better choice than the current situation.

Thanks,

        tglx
