public inbox for linux-kernel@vger.kernel.org
From: Baoquan He <bhe@redhat.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: linux-kernel@vger.kernel.org, dave.hansen@linux.intel.com,
	luto@kernel.org, peterz@infradead.org, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
	kirill.shutemov@linux.intel.com, keescook@chromium.org,
	thgarnie@google.com
Subject: Re: [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline
Date: Mon, 25 Feb 2019 21:20:17 +0800	[thread overview]
Message-ID: <20190225132017.GO14858@MiWiFi-R3L-srv> (raw)
In-Reply-To: <20190225123150.muyzsycmyrbimzqd@kshutemo-mobl1>

On 02/25/19 at 03:31pm, Kirill A. Shutemov wrote:
> On Sun, Feb 24, 2019 at 09:22:30PM +0800, Baoquan He wrote:
> > The current code builds an identity mapping for the real mode trampoline
> > by borrowing page tables from the direct mapping section if KASLR is
> > enabled. It copies the present entries of the first PUD table in 4-level
> > paging mode, or of the first P4D table in 5-level paging mode.
> > 
> > However, only a very small area under the low 1 MB is reserved for the
> > real mode trampoline in reserve_real_mode(), so it makes no sense to
> > build such a large mapping for it. Since the randomization granularity
> > is 1 GB in 4-level paging and 512 GB in 5-level, copying one PUD entry
> > is enough.
> 
> Can we get more of this info into comments in code?

Sure, I will add this as a comment above init_trampoline(). Thanks.

> 
> > Hence, copy only the one PUD entry covering the area where physical
> > address 0 resides. This is also preparation for later changing the
> > randomization granularity of 5-level paging mode from 512 GB to 1 GB.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  arch/x86/mm/kaslr.c | 72 ++++++++++++++++++---------------------------
> >  1 file changed, 28 insertions(+), 44 deletions(-)
> > 
> > diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> > index 754b5da91d43..6b2a06c36b6f 100644
> > --- a/arch/x86/mm/kaslr.c
> > +++ b/arch/x86/mm/kaslr.c
> > @@ -226,74 +226,58 @@ void __init kernel_randomize_memory(void)
> >  
> >  static void __meminit init_trampoline_pud(void)
> >  {
> > -	unsigned long paddr, paddr_next;
> > +	unsigned long paddr, vaddr;
> >  	pgd_t *pgd;
> > -	pud_t *pud_page, *pud_page_tramp;
> > -	int i;
> >  
> > +	p4d_t *p4d_page, *p4d_page_tramp, *p4d, *p4d_tramp;
> > +	pud_t *pud_page, *pud_page_tramp, *pud, *pud_tramp;
> > +
> > +
> > +	p4d_page_tramp = alloc_low_page();
> 
> I believe this line should be under
> 
> 	if (pgtable_l5_enabled()) {
> 
> Right?

Yeah, you are right. There is no need to waste a page in the 4-level case.

I will wait for any other comments, then repost an updated version.

Thanks
Baoquan

> 
> >  	pud_page_tramp = alloc_low_page();
> >  
> >  	paddr = 0;
> > +	vaddr = (unsigned long)__va(paddr);
> >  	pgd = pgd_offset_k((unsigned long)__va(paddr));
> > -	pud_page = (pud_t *) pgd_page_vaddr(*pgd);
> >  
> > -	for (i = pud_index(paddr); i < PTRS_PER_PUD; i++, paddr = paddr_next) {
> > -		pud_t *pud, *pud_tramp;
> > -		unsigned long vaddr = (unsigned long)__va(paddr);
> > +	if (pgtable_l5_enabled()) {
> > +		p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
> > +		p4d = p4d_page + p4d_index(vaddr);
> >  
> > -		pud_tramp = pud_page_tramp + pud_index(paddr);
> > +		pud_page = (pud_t *) p4d_page_vaddr(*p4d);
> >  		pud = pud_page + pud_index(vaddr);
> > -		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
> > -
> > -		*pud_tramp = *pud;
> > -	}
> > -
> > -	set_pgd(&trampoline_pgd_entry,
> > -		__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
> > -}
> > -
> > -static void __meminit init_trampoline_p4d(void)
> > -{
> > -	unsigned long paddr, paddr_next;
> > -	pgd_t *pgd;
> > -	p4d_t *p4d_page, *p4d_page_tramp;
> > -	int i;
> >  
> > -	p4d_page_tramp = alloc_low_page();
> > +		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
> > +		pud_tramp = pud_page_tramp + pud_index(paddr);
> >  
> > -	paddr = 0;
> > -	pgd = pgd_offset_k((unsigned long)__va(paddr));
> > -	p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
> > +		*pud_tramp = *pud;
> >  
> > -	for (i = p4d_index(paddr); i < PTRS_PER_P4D; i++, paddr = paddr_next) {
> > -		p4d_t *p4d, *p4d_tramp;
> > -		unsigned long vaddr = (unsigned long)__va(paddr);
> > +		set_p4d(p4d_tramp,
> > +			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));
> >  
> > -		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
> > -		p4d = p4d_page + p4d_index(vaddr);
> > -		paddr_next = (paddr & P4D_MASK) + P4D_SIZE;
> > +		set_pgd(&trampoline_pgd_entry,
> > +			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
> > +	} else {
> > +		pud_page = (pud_t *) pgd_page_vaddr(*pgd);
> > +		pud = pud_page + pud_index(vaddr);
> >  
> > -		*p4d_tramp = *p4d;
> > +		pud_tramp = pud_page_tramp + pud_index(paddr);
> > +		*pud_tramp = *pud;
> > +		set_pgd(&trampoline_pgd_entry,
> > +			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
> >  	}
> > -
> > -	set_pgd(&trampoline_pgd_entry,
> > -		__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
> >  }
> >  
> >  /*
> > - * Create PGD aligned trampoline table to allow real mode initialization
> > - * of additional CPUs. Consume only 1 low memory page.
> > + * Create PUD aligned trampoline table to allow real mode initialization
> > + * of additional CPUs. Consume only 1 or 2 low memory pages.
> >   */
> >  void __meminit init_trampoline(void)
> >  {
> > -
> >  	if (!kaslr_memory_enabled()) {
> >  		init_trampoline_default();
> >  		return;
> >  	}
> >  
> > -	if (pgtable_l5_enabled())
> > -		init_trampoline_p4d();
> > -	else
> > -		init_trampoline_pud();
> > +	init_trampoline_pud();
> >  }
> > -- 
> > 2.17.2
> > 
> 
> -- 
>  Kirill A. Shutemov

Thread overview: 5+ messages
2019-02-24 13:22 [PATCH 0/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
2019-02-24 13:22 ` [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline Baoquan He
2019-02-25 12:31   ` Kirill A. Shutemov
2019-02-25 13:20     ` Baoquan He [this message]
2019-02-24 13:22 ` [PATCH 2/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
