Date: Wed, 23 Sep 2015 17:34:16 +0200
From: Oleg Nesterov
To: "Kirill A. Shutemov"
Cc: Andrey Konovalov, Sasha Levin, Rik van Riel, Andrew Morton,
	Dmitry Vyukov, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Vlastimil Babka, Hugh Dickins
Subject: Re: Multiple potential races on vma->vm_flags
Message-ID: <20150923153416.GA18973@redhat.com>
References: <55EC9221.4040603@oracle.com>
	<20150907114048.GA5016@node.dhcp.inet.fi>
	<55F0D5B2.2090205@oracle.com>
	<20150910083605.GB9526@node.dhcp.inet.fi>
	<20150911103959.GA7976@node.dhcp.inet.fi>
In-Reply-To: <20150911103959.GA7976@node.dhcp.inet.fi>

On 09/11, Kirill A. Shutemov wrote:
>
> This one is tricky. I *assume* the mm cannot be generally accessible after
> mm_users drops to zero, but I'm not entirely sure about it.
> procfs? ptrace?

Well, all I can say is that proc/ptrace look fine afaics...

This is off-topic, but how about the patch below? Different threads can
expand different vmas at the same time under read_lock(mmap_sem), so
vma_lock_anon_vma() can't serialize the "locked_vm += grow" updates.

Oleg.
--- x/mm/mmap.c
+++ x/mm/mmap.c
@@ -2146,9 +2146,6 @@ static int acct_stack_growth(struct vm_a
 	if (security_vm_enough_memory_mm(mm, grow))
 		return -ENOMEM;
 
-	/* Ok, everything looks good - let it rip */
-	if (vma->vm_flags & VM_LOCKED)
-		mm->locked_vm += grow;
 	vm_stat_account(mm, vma->vm_flags, vma->vm_file, grow);
 	return 0;
 }
@@ -2210,6 +2207,8 @@ int expand_upwards(struct vm_area_struct
 	 * against concurrent vma expansions.
 	 */
 	spin_lock(&vma->vm_mm->page_table_lock);
+	if (vma->vm_flags & VM_LOCKED)
+		mm->locked_vm += grow;
 	anon_vma_interval_tree_pre_update_vma(vma);
 	vma->vm_end = address;
 	anon_vma_interval_tree_post_update_vma(vma);
@@ -2281,6 +2280,8 @@ int expand_downwards(struct vm_area_stru
 	 * against concurrent vma expansions.
 	 */
 	spin_lock(&vma->vm_mm->page_table_lock);
+	if (vma->vm_flags & VM_LOCKED)
+		mm->locked_vm += grow;
 	anon_vma_interval_tree_pre_update_vma(vma);
 	vma->vm_start = address;
 	vma->vm_pgoff -= grow;