public inbox for linux-kernel@vger.kernel.org
From: Joerg Roedel <jroedel@suse.de>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@alien8.de>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [GIT PULL] x86/mm changes for v5.10
Date: Tue, 13 Oct 2020 10:05:57 +0200	[thread overview]
Message-ID: <20201013080557.GF3302@suse.de> (raw)
In-Reply-To: <CAHk-=wgf8ko=b-F74+Qc+EX6M36kHx5wEBCS8nJK1WSod9UO0w@mail.gmail.com>

On Mon, Oct 12, 2020 at 03:07:45PM -0700, Linus Torvalds wrote:
> On Mon, Oct 12, 2020 at 10:24 AM Ingo Molnar <mingo@kernel.org> wrote:
> >
> > Do not sync vmalloc/ioremap mappings on x86-64 kernels.
> >
> > Hopefully now without the bugs!
> 
> Let's hope so.
> 
> If this turns out to work this time, can we do a similar preallocation
> of the page directories on 32-bit? Because I think now x86-32 is the
> only remaining case of doing that arch_sync_kernel_mappings() thing.
> 
> Or is there some reason that won't work that I've lost sight of?

There were two reasons which made me decide to not pre-allocate on
x86-32:

	1) The sync level is the same as the huge-page level (PMD) in
	   both paging modes, so with huge ioremap mappings the
	   synchronization is always needed. Huge ioremap mappings
	   could probably be disabled without much performance impact on
	   x86-32.

	2) The vmalloc area has a variable size and grows as the
	   machine has less RAM. A larger vmalloc area needs more
	   pre-allocated pages. Another factor is the configurable
	   VM split. With a 1G/3G split on a machine with 128MB of RAM
	   there would be:

	   	VMalloc area size (hole ignored): 3072MB - 128MB = 2944MB
		PTE pages needed (with PAE):      2944MB / 2MB per page = 1472 4k pages
		Memory needed:                    1472 * 4kB = 5888kB

	   So on such a machine the pre-allocation would need 5.75MB of
	   the 128MB of RAM. Without PAE it is half of that. This is an
	   exotic configuration and I am not sure it matters much in
	   practice. It could also be worked around by setting limits,
	   for example, by not making the vmalloc area larger than the
	   available memory in the system.
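For what it's worth, the arithmetic above can be sketched as a quick back-of-the-envelope calculation (a hypothetical Python helper for illustration only, not kernel code; the function name and the simplified model that ignores the vmalloc hole are my own assumptions):

```python
MB_PER_PTE_PAGE_PAE = 2     # with PAE, one 4k PTE page maps 2MB of VA space
MB_PER_PTE_PAGE_NOPAE = 4   # without PAE, one 4k PTE page maps 4MB

def prealloc_cost_mb(kernel_va_mb, ram_mb, mb_per_pte_page=MB_PER_PTE_PAGE_PAE):
    """MB of RAM needed to pre-allocate all PTE pages for the vmalloc area.

    The vmalloc area is approximated as the kernel address space minus
    the directly mapped RAM (holes ignored).
    """
    vmalloc_mb = kernel_va_mb - ram_mb          # 3072 - 128 = 2944
    pte_pages = vmalloc_mb // mb_per_pte_page   # 2944 / 2 = 1472 PTE pages
    return pte_pages * 4 / 1024                 # 1472 * 4kB = 5.75MB

# 1G/3G split (3GB of kernel address space), 128MB of RAM:
print(prealloc_cost_mb(3072, 128))                         # 5.75 (MB, PAE)
print(prealloc_cost_mb(3072, 128, MB_PER_PTE_PAGE_NOPAE))  # 2.875 (half)
```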

So pre-allocating has its implications. If we decide to pre-allocate on
x86-32 too, then we should be prepared for the fallout of the higher
memory usage.

Regards,

	Joerg


Thread overview: 5+ messages
2020-10-12 17:24 [GIT PULL] x86/mm changes for v5.10 Ingo Molnar
2020-10-12 22:07 ` Linus Torvalds
2020-10-13  8:05   ` Joerg Roedel [this message]
2020-10-13 15:53     ` Linus Torvalds
2020-10-12 22:34 ` pr-tracker-bot
