From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Aneesh Kumar K.V"
Subject: [PATCH -V8 05/16] hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for hugetlb
Date: Sat, 9 Jun 2012 14:29:50 +0530
Message-Id: <1339232401-14392-6-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
In-Reply-To: <1339232401-14392-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
References: <1339232401-14392-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: linux-mm@kvack.org, kamezawa.hiroyu@jp.fujitsu.com, dhillf@gmail.com, rientjes@google.com, mhocko@suse.cz, akpm@linux-foundation.org, hannes@cmpxchg.org
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, "Aneesh Kumar K.V"

From: "Aneesh Kumar K.V"

The i_mmap_mutex lock was added to unmap_single_vma() by commit 502717f4e ("hugetlb: fix linked list corruption in unmap_hugepage_range()"), but page->lru is no longer used in unmap_hugepage_range(). Moreover, in some code paths the lock is already taken higher up the stack, so taking it again here would result in a deadlock:
  unmap_mapping_range (i_mmap_mutex)
   -> unmap_mapping_range_tree
    -> unmap_mapping_range_vma
     -> zap_page_range_single
      -> unmap_single_vma
       -> unmap_hugepage_range (i_mmap_mutex)

For shared page table support for huge pages, no lock is needed during huge_pmd_unshare, because page table pages are reference counted. We do take i_mmap_mutex in huge_pmd_share() while walking the vma_prio_tree in the mapping (commit 39dde65c9940c97f ("shared page table for hugetlb page")).

Signed-off-by: Aneesh Kumar K.V
---
 mm/memory.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 545e18a..f6bc04f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1326,11 +1326,8 @@ static void unmap_single_vma(struct mmu_gather *tlb,
		 * Since no pte has actually been setup, it is
		 * safe to do nothing in this case.
		 */
-		if (vma->vm_file) {
-			mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
+		if (vma->vm_file)
			__unmap_hugepage_range(tlb, vma, start, end, NULL);
-			mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
-		}
	} else
		unmap_page_range(tlb, vma, start, end, details);
}
--
1.7.10

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM, see:
http://www.linux-mm.org/ .
Don't email: email@kvack.org