From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ding Tianhong
To: Andrea Arcangeli, Zhou Chengming
Subject: Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
Date: Fri, 6 May 2016 10:54:11 +0800
Message-ID: <572C0753.1010300@huawei.com>
In-Reply-To: <20160505215731.GK28755@redhat.com>
References: <1462452176-33462-1-git-send-email-zhouchengming1@huawei.com>
 <20160505215731.GK28755@redhat.com>

Good catch. The original code looks quite old. Using ksm_mmlist_lock to
protect the mm_list looks like it will hurt performance, though. Should we
use RCU to protect the list instead, and not free the mm until the RCU grace
period has passed?

On 2016/5/6 5:57, Andrea Arcangeli wrote:
> Hello Zhou,
>
> Great catch.
>
> On Thu, May 05, 2016 at 08:42:56PM +0800, Zhou Chengming wrote:
>>  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
>> +	up_read(&mm->mmap_sem);
>>
>>  	spin_lock(&ksm_mmlist_lock);
>>  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
>> @@ -1666,16 +1667,12 @@ next_mm:
>>  	 */
>>  	hash_del(&slot->link);
>>  	list_del(&slot->mm_list);
>> -	spin_unlock(&ksm_mmlist_lock);
>>
>>  	free_mm_slot(slot);
>>  	clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> -	up_read(&mm->mmap_sem);
>>  	mmdrop(mm);
>
> I thought the mmap_sem for reading prevented a race of the above
> clear_bit against a concurrent madvise(MADV_MERGEABLE), which takes the
> mmap_sem for writing. After this change, can't __ksm_enter run
> concurrently with the clear_bit above, introducing a different SMP race
> condition?
>
>> -	} else {
>> -		spin_unlock(&ksm_mmlist_lock);
>> -		up_read(&mm->mmap_sem);
>
> The strict, obviously safe fix is just to invert the above two:
> up_read; spin_unlock.
>
> Then I found another instance of this same SMP race condition in
> unmerge_and_remove_all_rmap_items() that you didn't fix.
>
> Actually, for the other instance of the bug, the implementation above
> that releases the mmap_sem early sounds safe, because there it is
> ksm_test_exit() that takes the clear_bit path, not just the fact that we
> didn't find a vma with VM_MERGEABLE set and we garbage collect the
> mm_slot while the "mm" may still be alive. In the other case the "mm"
> isn't alive anymore, so the race with MADV_MERGEABLE shouldn't be
> possible to materialize.
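
To make sure I follow the race you mean: the old code holds the mmap_sem for
read across the whole mm_slot teardown plus clear_bit, so a MADV_MERGEABLE
caller (who needs mmap_sem for write) can only run entirely before or entirely
after it. With the patch there is a window like the one below, if I read it
right. This is only an illustration of one interleaving (maybe a different
flavour of the same problem you describe), the left column follows the patched
scan_get_next_rmap_item(), and it is not meant to compile on its own:

    CPU 0 (ksmd, with the patch)             CPU 1 (task calling madvise)
    ----------------------------             ----------------------------
    up_read(&mm->mmap_sem);
                                             down_write(&mm->mmap_sem);
                                             ksm_madvise(MADV_MERGEABLE):
                                               test_bit(MMF_VM_MERGEABLE) is
                                               still set, so it only marks the
                                               vma VM_MERGEABLE and skips
                                               __ksm_enter()
                                             up_write(&mm->mmap_sem);
    spin_lock(&ksm_mmlist_lock);
    hash_del(&slot->link);
    list_del(&slot->mm_list);
    free_mm_slot(slot);
    clear_bit(MMF_VM_MERGEABLE, &mm->flags);
    mmdrop(mm);

After that the mm has a VM_MERGEABLE vma but no mm_slot and the flag cleared,
so ksmd forgets about it until another madvise() comes along, which I think is
the kind of inconsistency the old mmap_sem ordering was there to exclude.
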
>
> Could you fix it by just inverting the up_read/spin_unlock order, in
> the place you patched, and add this comment:
>
> 	} else {
> 		/*
> 		 * up_read(&mm->mmap_sem) first because after
> 		 * spin_unlock(&ksm_mmlist_lock) run, the "mm" may
> 		 * already have been freed under us by __ksm_exit()
> 		 * because the "mm_slot" is still hashed and
> 		 * ksm_scan.mm_slot doesn't point to it anymore.
> 		 */
> 		up_read(&mm->mmap_sem);
> 		spin_unlock(&ksm_mmlist_lock);
> 	}
>
> And in unmerge_and_remove_all_rmap_items() same thing, except there
> you can apply your up_read() early and you can just drop the "else"
> clause.
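
If I read the last suggestion right, the unmerge_and_remove_all_rmap_items()
side would end up looking roughly like this. Only a sketch against the layout
of that function as I see it in my tree (variable names and the ksm_test_exit()
branch are assumed from there), not a tested patch:

	remove_trailing_rmap_items(mm_slot, &mm_slot->rmap_list);
	up_read(&mm->mmap_sem);		/* applied early, before ksm_mmlist_lock */

	spin_lock(&ksm_mmlist_lock);
	ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
				      struct mm_slot, mm_list);
	if (ksm_test_exit(mm)) {
		/*
		 * mm_users already dropped to zero, so no MADV_MERGEABLE
		 * can race with the clear_bit below; that is why dropping
		 * mmap_sem early is safe on this path.
		 */
		hash_del(&mm_slot->link);
		list_del(&mm_slot->mm_list);
		spin_unlock(&ksm_mmlist_lock);

		free_mm_slot(mm_slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		mmdrop(mm);
	} else
		spin_unlock(&ksm_mmlist_lock);

while scan_get_next_rmap_item() keeps its "else" clause with the inverted
up_read/spin_unlock order and the comment you quoted above.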