From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, alan@lxorguk.ukuu.org.uk, Nicolas Pitre,
	Catalin Marinas, Will Deacon
Subject: [ 096/123] ARM: mm: use pteval_t to represent page protection values
Date: Wed,  9 Jan 2013 12:35:35 -0800
Message-Id: <20130109201510.925282646@linuxfoundation.org>
In-Reply-To: <20130109201458.392601412@linuxfoundation.org>
References: <20130109201458.392601412@linuxfoundation.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

3.7-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon

commit 864aa04cd02979c2c755cb28b5f4fe56039171c0 upstream.

When updating the page protection map after calculating the user_pgprot
value, the base protection map is temporarily stored in an unsigned long
type, causing truncation of the protection bits when LPAE is enabled.
This effectively means that calls to mprotect() will corrupt the upper
page attributes, clearing the XN bit unconditionally.

This patch uses pteval_t to store the intermediate protection values,
preserving the upper bits for 64-bit descriptors.

Acked-by: Nicolas Pitre
Acked-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm/mm/mmu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -488,7 +488,7 @@ static void __init build_mem_type_table(
 #endif
 
 	for (i = 0; i < 16; i++) {
-		unsigned long v = pgprot_val(protection_map[i]);
+		pteval_t v = pgprot_val(protection_map[i]);
 		protection_map[i] = __pgprot(v | user_pgprot);
 	}
 
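
As an illustration of the truncation described above: with LPAE the page
table descriptors are 64 bits wide and the execute-never (XN) attribute
sits in the upper half, so round-tripping a protection value through a
32-bit unsigned long silently drops it. The following minimal standalone
C sketch demonstrates that; the pteval_t/L_PTE_XN/L_PTE_RDONLY names
mirror the kernel's LPAE definitions but the program is only an
illustration, not kernel code.

/*
 * Standalone sketch of the LPAE truncation bug, not kernel code.
 * Bit positions follow the LPAE descriptor layout (XN at bit 54).
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;                  /* 64-bit with LPAE enabled */

#define L_PTE_XN      ((pteval_t)1 << 54)   /* execute-never            */
#define L_PTE_RDONLY  ((pteval_t)1 << 7)    /* read-only                */

int main(void)
{
	pteval_t prot = L_PTE_XN | L_PTE_RDONLY;

	/* On 32-bit ARM "unsigned long" is 32 bits wide; uint32_t stands
	 * in for it here so the truncation shows up on any host. */
	uint32_t v = (uint32_t)prot;

	printf("full value: 0x%016llx  XN %s\n",
	       (unsigned long long)prot,
	       (prot & L_PTE_XN) ? "set" : "lost");
	printf("truncated : 0x%016llx  XN %s\n",
	       (unsigned long long)(pteval_t)v,
	       (((pteval_t)v) & L_PTE_XN) ? "set" : "lost");
	return 0;
}

Running it prints the XN bit as present in the full 64-bit value and
lost after the 32-bit intermediate, which is exactly the corruption the
patch avoids by keeping the intermediate in pteval_t.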