From: Laurent Dufour
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com, npiggin@gmail.com, bsingharora@gmail.com
Subject: [RFC v4 13/20] mm/spf: Add check on the VMA's flags
Date: Fri, 9 Jun 2017 16:21:02 +0200
Message-Id: <1497018069-17790-14-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1497018069-17790-1-git-send-email-ldufour@linux.vnet.ibm.com>
References: <1497018069-17790-1-git-send-email-ldufour@linux.vnet.ibm.com>

When handling a speculative page fault, the VMA's access permissions
must be checked, as is done in handle_mm_fault() or in access_error()
in x86's fault handler.

Signed-off-by: Laurent Dufour
---
 mm/memory.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 75d24e74c4ff..27e44ebc5440 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3966,6 +3966,30 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	if (address < vma->vm_start || vma->vm_end <= address)
 		goto unlock;
 
+	/* XXX Could we handle huge pages here? */
+	if (unlikely(is_vm_hugetlb_page(vma)))
+		goto unlock;
+
+	/*
+	 * The following three checks are copied from access_error() in
+	 * arch/x86/mm/fault.c.
+	 * XXX They may not be applicable to all architectures.
+	 */
+	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+				       flags & FAULT_FLAG_INSTRUCTION,
+				       flags & FAULT_FLAG_REMOTE))
+		goto unlock;
+
+	/* This one is required to check that the VMA has write access set. */
+	if (flags & FAULT_FLAG_WRITE) {
+		if (unlikely(!(vma->vm_flags & VM_WRITE)))
+			goto unlock;
+	} else {
+		/* XXX This may not be required. */
+		if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+			goto unlock;
+	}
+
 	/*
 	 * We need to re-validate the VMA after checking the bounds, otherwise
 	 * we might have a false positive on the bounds.
-- 
2.7.4
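
For readers following along outside the kernel tree, the hunk above boils
down to the vm_flags test sketched below. This is a minimal user-space
model, not kernel code: struct vma_model and vma_access_ok() are
hypothetical names, the arch_vma_access_permitted() step is omitted, and
the flag values mirror the kernel's definitions.

/*
 * Stand-alone model of the access check added by this patch (not
 * kernel code). Flag values mirror include/linux/mm.h.
 */
#include <stdbool.h>
#include <stdio.h>

#define VM_READ		0x00000001UL
#define VM_WRITE	0x00000002UL
#define VM_EXEC		0x00000004UL

#define FAULT_FLAG_WRITE	0x01

/* Hypothetical stand-in for the few vm_area_struct fields we need. */
struct vma_model {
	unsigned long vm_flags;
};

/* Return true if the faulting access is permitted by the VMA's flags. */
static bool vma_access_ok(const struct vma_model *vma, unsigned int flags)
{
	if (flags & FAULT_FLAG_WRITE)
		/* A write fault needs VM_WRITE on the VMA. */
		return vma->vm_flags & VM_WRITE;
	/* Otherwise any access right at all lets the fault proceed. */
	return vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE);
}

int main(void)
{
	struct vma_model ro = { .vm_flags = VM_READ };

	/* A write fault on a read-only mapping must be refused. */
	printf("write on read-only VMA: %s\n",
	       vma_access_ok(&ro, FAULT_FLAG_WRITE) ? "allowed" : "refused");
	printf("read on read-only VMA:  %s\n",
	       vma_access_ok(&ro, 0) ? "allowed" : "refused");
	return 0;
}

In the patch itself, a failed check jumps to the unlock label, which
abandons the speculative attempt so the fault can be retried through the
regular, mmap_sem-protected path.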