Date: Fri, 21 Jun 2019 17:54:55 +0200
From: Michal Hocko
To: Alexander Potapenko
Cc: Andrew Morton, Christoph Lameter, Kees Cook, Masahiro Yamada,
 James Morris, "Serge E. Hallyn", Nick Desaulniers, Kostya Serebryany,
 Dmitry Vyukov, Sandeep Patil, Laura Abbott, Randy Dunlap, Jann Horn,
 Mark Rutland, Marco Elver, Linux Memory Management List,
 linux-security-module, Kernel Hardening
Subject: Re: [PATCH v7 1/2] mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options
Message-ID: <20190621155455.GG3429@dhcp22.suse.cz>
References: <20190617151050.92663-1-glider@google.com>
 <20190617151050.92663-2-glider@google.com>
 <20190621070905.GA3429@dhcp22.suse.cz>
 <20190621151210.GF3429@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri 21-06-19 17:24:21, Alexander Potapenko wrote:
> On Fri, Jun 21, 2019 at 5:12 PM Michal Hocko wrote:
> >
> > On Fri 21-06-19 16:10:19, Alexander Potapenko wrote:
> > > On Fri, Jun 21, 2019 at 10:57 AM Alexander Potapenko wrote:
> > [...]
> > > > > > diff --git a/mm/dmapool.c b/mm/dmapool.c
> > > > > > index 8c94c89a6f7e..e164012d3491 100644
> > > > > > --- a/mm/dmapool.c
> > > > > > +++ b/mm/dmapool.c
> > > > > > @@ -378,7 +378,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
> > > > > >  #endif
> > > > > >         spin_unlock_irqrestore(&pool->lock, flags);
> > > > > >
> > > > > > -       if (mem_flags & __GFP_ZERO)
> > > > > > +       if (want_init_on_alloc(mem_flags))
> > > > > >                 memset(retval, 0, pool->size);
> > > > > >
> > > > > >         return retval;
> > > > >
> > > > > Don't you miss dma_pool_free and want_init_on_free?
> > > > Agreed.
> > > > I'll fix this and add tests for DMA pools as well.
> > > This doesn't seem to be easy though.
> > > One needs a real DMA-capable
> > > device to allocate using DMA pools.
> > > On the other hand, what happens to a DMA pool when it's destroyed,
> > > isn't it wiped by pagealloc?
> >
> > Yes, it should be returned to the page allocator AFAIR. But it is when
> > we are returning an object to the pool that you want to wipe the data,
> > no?
> My concern was that DMA allocation is something orthogonal to the heap
> and page allocators.
> I also don't know how many other allocators are still left out, e.g.
> we don't do anything to lib/genalloc.c yet.

Well, that really depends on what you would like to achieve with this
functionality. There are likely to be all sorts of allocators built on
top of the core ones (e.g. the mempool allocator). The question is
whether you really want to cover them all. Are they security-relevant?

> > Why can't you do it alongside the already existing poisoning?
> I can sure keep these bits.
> Any idea how the correct behavior of dma_pool_alloc/free can be tested?

Well, I would say that you have to rely on the review process here more
than on any specific testing. In any case, other allocators can be
handled incrementally; this is not an all-or-nothing kind of thing. I
pointed out dma_pool because the patch only addresses one half of the
work there and it was not clear why. If you want to drop dma_pool
entirely, that is fine by me. As this is a hardening feature, you want
coverage to be as large as possible rather than insisting on 100%.

-- 
Michal Hocko
SUSE Labs
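
For reference, the free-side counterpart that the review asks about
would look roughly like the sketch below. It mirrors the
dma_pool_alloc() hunk quoted in the thread and uses the
want_init_on_free() helper introduced by the same series; the exact
context lines inside dma_pool_free() are assumed for illustration and
are not a quote from the eventual patch:

--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 	offset = vaddr - page->vaddr;
+	/* Wipe the object as it goes back onto the pool's free list. */
+	if (want_init_on_free())
+		memset(vaddr, 0, pool->size);
 #ifdef	DMAPOOL_DEBUG

The helpers themselves behave roughly as follows (a simplified sketch
of the series, which gates the zeroing behind static branches toggled
by the init_on_alloc=1 and init_on_free=1 boot options):

	/* Zero on allocation if globally enabled or __GFP_ZERO was passed. */
	static inline bool want_init_on_alloc(gfp_t flags)
	{
		if (static_branch_unlikely(&init_on_alloc))
			return true;
		return flags & __GFP_ZERO;
	}

	/* Zero on free only if globally enabled; there is no per-call flag. */
	static inline bool want_init_on_free(void)
	{
		return static_branch_unlikely(&init_on_free);
	}

This is why dma_pool_alloc() takes the gfp mask into account while the
free side does not: __GFP_ZERO only ever requests zeroing at
allocation time.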