From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934008AbcIUWVc (ORCPT );
	Wed, 21 Sep 2016 18:21:32 -0400
Received: from mx2.suse.de ([195.135.220.15]:50298 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932337AbcIUWVb (ORCPT );
	Wed, 21 Sep 2016 18:21:31 -0400
Date: Wed, 21 Sep 2016 15:21:22 -0700
From: Davidlohr Bueso
To: Manfred Spraul
Cc: Linux Kernel Mailing List, Peter Zijlstra, akpm@linux-foundation.org
Subject: Re: [PATCH 4/5] ipc/msg: Lockless security checks for msgsnd
Message-ID: <20160921222122.GA13358@linux-80c1.suse>
References: <58c2536e-aebd-e21e-6d78-003dcd10443d@colorfullife.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
In-Reply-To: <58c2536e-aebd-e21e-6d78-003dcd10443d@colorfullife.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, 18 Sep 2016, Manfred Spraul wrote:

>> Just as with msgrcv (along with the rest of sysvipc since a few years
>> ago), perform the security checks without holding the ipc object lock.
>
> Thinking about it: isn't this wrong?
>
> CPU1:
> * msgrcv()
> * ipcperms()
>
>
> CPU2:
> * msgctl(), change permissions
> ** msgctl() returns, new permissions should now be in effect
> * msgsnd(), send secret message
> ** msgsnd() returns, new message stored.
>
> CPU1: resumes, receives secret message

Hmm, wouldn't this apply to anything IPC_SET can change? We do lockless
ipcperms() all over the place.

> Obviously, we could argue that the msgrcv() was already ongoing and
> therefore the old permissions still apply - but then we don't need to
> recheck after sleeping at all.

There is that, and furthermore we make no such guarantees under
concurrency.
Another way of looking at it could perhaps be IPC_SET returning EPERM if
there's an unserviced msgrcv -- but I'm not suggesting doing this btw ;)

>> This also reduces the hogging of the lock for the entire duration of a
>> sender, as we drop the lock upon every iteration -- and this is exactly
>> why we also check for racing with RMID in the first place.
>
> Which hogging do you mean? The lock is dropped upon every iteration,
> the schedule() is in the middle.
> With your patch, the lock is now dropped twice:
>
>> -
>>  	for (;;) {
>>  		struct msg_sender s;
>>
>>  		err = -EACCES;
>>  		if (ipcperms(ns, &msq->q_perm, S_IWUGO))
>> -			goto out_unlock0;
>> +			goto out_unlock1;
>> +
>> +		ipc_lock_object(&msq->q_perm);
>>
>>  		/* raced with RMID? */
>>  		if (!ipc_valid_object(&msq->q_perm)) {
>> @@ -681,6 +681,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
>>  			goto out_unlock0;
>>  		}
>>
>> +		ipc_unlock_object(&msq->q_perm);
>>  	}
>
> This means the lock is dropped, just for ipcperms().
> This doubles the lock acquire/release cycles.

The effectiveness all depends on the workload and degree of contention.
But I have no problem dropping this patch either, although this is
standard for all things ipc.

Thanks,
Davidlohr