From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932834AbcGOMAl (ORCPT );
	Fri, 15 Jul 2016 08:00:41 -0400
Received: from terminus.zytor.com ([198.137.202.10]:55752 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932565AbcGOMAj (ORCPT );
	Fri, 15 Jul 2016 08:00:39 -0400
Date: Fri, 15 Jul 2016 04:59:47 -0700
From: tip-bot for Andy Lutomirski
Message-ID:
Cc: mingo@kernel.org, torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	bp@alien8.de, brgerst@gmail.com, hpa@zytor.com, luto@kernel.org,
	jpoimboe@redhat.com, tglx@linutronix.de, dvlasenk@redhat.com,
	peterz@infradead.org
Reply-To: peterz@infradead.org, dvlasenk@redhat.com, tglx@linutronix.de,
	jpoimboe@redhat.com, luto@kernel.org, hpa@zytor.com, brgerst@gmail.com,
	bp@alien8.de, torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	mingo@kernel.org
In-Reply-To:
References:
To: linux-tip-commits@vger.kernel.org
Subject: [tip:x86/mm] x86/mm/cpa: In populate_pgd(), don't set the PGD entry
 until it's populated
Git-Commit-ID: 360cb4d15567a7eca07a5f3ade6de308bbfb4e70
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  360cb4d15567a7eca07a5f3ade6de308bbfb4e70
Gitweb:     http://git.kernel.org/tip/360cb4d15567a7eca07a5f3ade6de308bbfb4e70
Author:     Andy Lutomirski
AuthorDate: Thu, 14 Jul 2016 13:22:50 -0700
Committer:  Ingo Molnar
CommitDate: Fri, 15 Jul 2016 10:26:25 +0200

x86/mm/cpa: In populate_pgd(), don't set the PGD entry until it's populated

This avoids pointless races in which another CPU or task might see a
partially populated global PGD entry.
These races should normally be harmless, but, if another CPU propagates
the entry via vmalloc_fault() and then populate_pgd() fails (due to memory
allocation failure, for example), this prevents a use-after-free of the
PGD entry.

Signed-off-by: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/bf99df27eac6835f687005364bd1fbd89130946c.1468527351.git.luto@kernel.org
Signed-off-by: Ingo Molnar
---
 arch/x86/mm/pageattr.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 7514215..26aa487 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1104,8 +1104,6 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 		pud = (pud_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
 		if (!pud)
 			return -1;
-
-		set_pgd(pgd_entry, __pgd(__pa(pud) | _KERNPG_TABLE));
 	}
 
 	pgprot_val(pgprot) &= ~pgprot_val(cpa->mask_clr);
@@ -1113,11 +1111,16 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 
 	ret = populate_pud(cpa, addr, pgd_entry, pgprot);
 	if (ret < 0) {
-		unmap_pgd_range(cpa->pgd, addr,
+		if (pud)
+			free_page((unsigned long)pud);
+		unmap_pud_range(pgd_entry, addr,
 			        addr + (cpa->numpages << PAGE_SHIFT));
 		return ret;
 	}
 
+	if (pud)
+		set_pgd(pgd_entry, __pgd(__pa(pud) | _KERNPG_TABLE));
+
 	cpa->numpages = ret;
 	return 0;
 }
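
[Editor's note: the ordering idiom the patch applies — allocate, fully populate, and only then publish the pointer into the shared table, freeing on failure before publication — can be sketched in userspace C as below. This is a minimal illustration, not kernel code; the names `install_table`, `populate` and `parent_entry` are hypothetical stand-ins for the PGD/PUD machinery.]

```c
#include <stdlib.h>

/* Stand-in for a lower-level page table (the "pud"). */
struct table {
	long entries[4];
};

/* Stand-in for the shared PGD slot that other CPUs may observe. */
struct table *parent_entry;

/* Fill in the child table; may fail (here, on demand for the demo). */
static int populate(struct table *t, int fail)
{
	if (fail)
		return -1;
	for (int i = 0; i < 4; i++)
		t->entries[i] = i;
	return 0;
}

/*
 * Allocate and populate a child table, publishing it into
 * parent_entry only once it is fully populated.  On failure the
 * table is freed before it was ever visible, so no other observer
 * can be left holding a pointer to freed memory.
 */
static int install_table(int fail)
{
	struct table *child = calloc(1, sizeof(*child));

	if (!child)
		return -1;

	if (populate(child, fail) < 0) {
		/* Never published, so freeing it is safe. */
		free(child);
		return -1;
	}

	/* Publish only now, after the table is complete. */
	parent_entry = child;
	return 0;
}
```

The pre-patch code did the equivalent of assigning `parent_entry` immediately after `calloc()`: harmless in the common case, but if `populate()` then failed and the memory was freed, a concurrent reader that had already copied the published entry (as `vmalloc_fault()` can for real PGDs) would be left with a use-after-free.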