linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Peter Zijlstra <peterz@infradead.org>
To: Nadav Amit <namit@vmware.com>
Cc: Ingo Molnar <mingo@redhat.com>,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Masami Hiramatsu <mhiramat@kernel.org>
Subject: Re: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
Date: Sun, 11 Nov 2018 15:59:36 +0100	[thread overview]
Message-ID: <20181111145936.GA3021@worktop> (raw)
In-Reply-To: <20181110231732.15060-7-namit@vmware.com>

On Sat, Nov 10, 2018 at 03:17:28PM -0800, Nadav Amit wrote:
> @@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
>  
>  static int __text_poke(void *addr, const void *opcode, size_t len)
>  {
> +	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
> +	temporary_mm_state_t prev;
> +	struct page *pages[2] = {NULL};
>  	unsigned long flags;
> +	pte_t pte, *ptep;
> +	spinlock_t *ptl;
> +	int r = 0;
>  
>  	/*
> +	 * While boot memory allocator is running we cannot use struct pages as
> +	 * they are not yet initialized.
>  	 */
>  	BUG_ON(!after_bootmem);
>  
>  	if (!core_kernel_text((unsigned long)addr)) {
>  		pages[0] = vmalloc_to_page(addr);
> +		if (cross_page_boundary)
> +			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
>  	} else {
>  		pages[0] = virt_to_page(addr);
>  		WARN_ON(!PageReserved(pages[0]));
> +		if (cross_page_boundary)
> +			pages[1] = virt_to_page(addr + PAGE_SIZE);
>  	}
> +
> +	if (!pages[0] || (cross_page_boundary && !pages[1]))
>  		return -EFAULT;
> +
>  	local_irq_save(flags);
> +
> +	/*
> +	 * The lock is not really needed, but this allows to avoid open-coding.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +
> +	/*
> +	 * If we failed to allocate a PTE, fail. This should *never* happen,
> +	 * since we preallocate the PTE.
> +	 */
> +	if (WARN_ON_ONCE(!ptep))
> +		goto out;

Since we hard rely on init getting that right; can't we simply get rid
of this?

> +
> +	pte = mk_pte(pages[0], PAGE_KERNEL);
> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
> +
> +	if (cross_page_boundary) {
> +		pte = mk_pte(pages[1], PAGE_KERNEL);
> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
> +	}
> +
> +	/*
> +	 * Loading the temporary mm behaves as a compiler barrier, which
> +	 * guarantees that the PTE will be set at the time memcpy() is done.
> +	 */
> +	prev = use_temporary_mm(poking_mm);
> +
> +	kasan_disable_current();
> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
> +	kasan_enable_current();
> +
> +	/*
> +	 * Ensure that the PTE is only cleared after the instructions of memcpy
> +	 * were issued by using a compiler barrier.
> +	 */
> +	barrier();
> +
> +	pte_clear(poking_mm, poking_addr, ptep);
> +
> +	/*
> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +	 * as it also flushes the corresponding "user" address spaces, which
> +	 * does not exist.
> +	 *
> +	 * Poking, however, is already very inefficient since it does not try to
> +	 * batch updates, so we ignore this problem for the time being.
> +	 *
> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +	 * more unwarranted TLB flushes.
> +	 *
> +	 * There is a slight anomaly here: the PTE is a supervisor-only and
> +	 * (potentially) global and we use __flush_tlb_one_user() but this
> +	 * should be fine.
> +	 */
> +	__flush_tlb_one_user(poking_addr);
> +	if (cross_page_boundary) {
> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +	}
> +
> +	/*
> +	 * Loading the previous page-table hierarchy requires a serializing
> +	 * instruction that already allows the core to see the updated version.
> +	 * Xen-PV is assumed to serialize execution in a similar manner.
> +	 */
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);
> +out:
> +	if (memcmp(addr, opcode, len))
> +		r = -EFAULT;

How could this ever fail? And how can we reliably recover from that?

I mean, we can move that BUG_ON() we have in text_poke() down a level,
but for example the static_key/jump_label code has no real option on
failing this.

> +
>  	local_irq_restore(flags);
>  	return r;
>  }

Other than that, this looks really good!

  reply	other threads:[~2018-11-11 14:59 UTC|newest]

Thread overview: 29+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-11-10 23:17 [PATCH v4 00/10] x86/alternative: text_poke() fixes Nadav Amit
2018-11-10 23:17 ` [PATCH v4 01/10] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()" Nadav Amit
2018-11-12  2:54   ` Masami Hiramatsu
2018-11-12 10:59     ` Jiri Kosina
2018-11-10 23:17 ` [PATCH v4 02/10] x86/jump_label: Use text_poke_early() during early init Nadav Amit
2018-11-12 20:12   ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 03/10] x86/mm: temporary mm struct Nadav Amit
2018-11-10 23:17 ` [PATCH v4 04/10] fork: provide a function for copying init_mm Nadav Amit
2018-11-10 23:17 ` [PATCH v4 05/10] x86/alternative: initializing temporary mm for patching Nadav Amit
2018-11-11 14:43   ` Peter Zijlstra
2018-11-11 20:38     ` Nadav Amit
2018-11-12  0:34       ` Peter Zijlstra
2018-11-10 23:17 ` [PATCH v4 06/10] x86/alternative: use temporary mm for text poking Nadav Amit
2018-11-11 14:59   ` Peter Zijlstra [this message]
2018-11-11 20:53     ` Nadav Amit
2018-11-11 23:52       ` Peter Zijlstra
2018-11-12  0:09         ` Nadav Amit
2018-11-12  0:41           ` Peter Zijlstra
2018-11-12  0:36         ` Peter Zijlstra
2018-11-12  3:46         ` Ingo Molnar
2018-11-12  8:50           ` Peter Zijlstra
2018-11-11 19:11   ` Damian Tometzki
2018-11-11 20:41     ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 07/10] x86/kgdb: avoid redundant comparison of code Nadav Amit
2018-11-10 23:17 ` [PATCH v4 08/10] x86: avoid W^X being broken during modules loading Nadav Amit
2018-11-10 23:17 ` [PATCH v4 09/10] x86/jump-label: remove support for custom poker Nadav Amit
2018-11-11 15:05   ` Peter Zijlstra
2018-11-11 20:31     ` Nadav Amit
2018-11-10 23:17 ` [PATCH v4 10/10] x86/alternative: remove the return value of text_poke_*() Nadav Amit

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20181111145936.GA3021@worktop \
    --to=peterz@infradead.org \
    --cc=bp@alien8.de \
    --cc=dave.hansen@intel.com \
    --cc=dave.hansen@linux.intel.com \
    --cc=hpa@zytor.com \
    --cc=keescook@chromium.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=luto@kernel.org \
    --cc=mhiramat@kernel.org \
    --cc=mingo@redhat.com \
    --cc=namit@vmware.com \
    --cc=tglx@linutronix.de \
    --cc=x86@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).