From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752041AbbJLSzw (ORCPT );
	Mon, 12 Oct 2015 14:55:52 -0400
Received: from mx2.suse.de ([195.135.220.15]:53628 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751081AbbJLSzu (ORCPT );
	Mon, 12 Oct 2015 14:55:50 -0400
Date: Mon, 12 Oct 2015 11:55:33 -0700
From: Davidlohr Bueso 
To: "Kirill A. Shutemov" 
Cc: Dmitry Vyukov , Andrew Morton , dave.hansen@linux.intel.com,
	Hugh Dickins , Joe Perches , sds@tycho.nsa.gov, Oleg Nesterov ,
	"Kirill A. Shutemov" , Rik van Riel , mhocko@suse.cz,
	gang.chen.5i5j@gmail.com, Peter Feiner , aarcange@redhat.com,
	"linux-mm@kvack.org" , LKML , syzkaller@googlegroups.com,
	Kostya Serebryany , Alexander Potapenko , Andrey Konovalov ,
	Sasha Levin 
Subject: Re: GPF in shm_lock ipc
Message-ID: <20151012185533.GD3170@linux-uzut.site>
References: <20151012122702.GC2544@node>
 <20151012174945.GC3170@linux-uzut.site>
 <20151012181040.GC6447@node>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
In-Reply-To: <20151012181040.GC6447@node>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 12 Oct 2015, Kirill A. Shutemov wrote:

>On Mon, Oct 12, 2015 at 10:49:45AM -0700, Davidlohr Bueso wrote:
>> diff --git a/ipc/shm.c b/ipc/shm.c
>> index 4178727..9615f19 100644
>> --- a/ipc/shm.c
>> +++ b/ipc/shm.c
>> @@ -385,9 +385,25 @@ static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
>>  static int shm_mmap(struct file *file, struct vm_area_struct *vma)
>>  {
>> -	struct shm_file_data *sfd = shm_file_data(file);
>> +	struct file *vma_file = vma->vm_file;
>> +	struct shm_file_data *sfd = shm_file_data(vma_file);
>> +	struct ipc_ids *ids = &shm_ids(sfd->ns);
>> +	struct kern_ipc_perm *shp;
>>  	int ret;
>>
>> +	rcu_read_lock();
>> +	shp = ipc_obtain_object_check(ids, sfd->id);
>> +	if (IS_ERR(shp)) {
>> +		ret = -EINVAL;
>> +		goto err;
>> +	}
>> +
>> +	if (!ipc_valid_object(shp)) {
>> +		ret = -EIDRM;
>> +		goto err;
>> +	}
>> +	rcu_read_unlock();
>> +
>
>Hm. Isn't it racy? What prevents IPC_RMID from happening after this
>point?

Nothing, but that is later caught by shm_open() doing similar checks. We
basically end up doing a check between ->mmap() calls, which is fair imho.
Note that this can occur anywhere in ipc, as IPC_RMID is a user
request/cmd and we try to respect it -- thus you can argue this race
anywhere, which is why we have EIDRM/EINVAL. Ultimately the user should
not be doing such hacks _anyway_, so I'm not really concerned about it.

Another, similar alternative would perhaps be to make shm_lock() return an
error, and propagate that error to the mmap return value. That way we
would have a silent way out of the warning scenario (afterward we cannot
race, as we hold the ipc object lock). However, the callers would now have
to take this into account...

	[validity check lockless]
	->mmap()
	[validity check lock]

>Shouldn't we bump shm_nattch here? Or some other refcount?

At least not shm_nattch, as that would acknowledge a new attachment after
a valid IPC_RMID. But the problem is also with how we check for
marked-for-deletion segments -- ipc_valid_object() checking the deleted
flag.
As such, we always rely on explicitly checking against the deleted flag.