From: Ard Biesheuvel <ardb@kernel.org>
Date: Tue, 19 Apr 2022 23:50:11 +0200
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
To: Catalin Marinas
Cc: Herbert Xu, Will Deacon, Marc Zyngier, Arnd Bergmann, Greg Kroah-Hartman,
    Andrew Morton, Linus Torvalds, Linux Memory Management List, Linux ARM,
    Linux Kernel Mailing List, "David S. Miller"
Miller" Content-Type: text/plain; charset="UTF-8" X-Rspam-User: X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: D4EF0100019 X-Stat-Signature: c8fn3ee6scuwfsq8atmj6gq1gnhstc7m Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=N9vV7hIQ; spf=pass (imf14.hostedemail.com: domain of ardb@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=ardb@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-HE-Tag: 1650405025-113242 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, 18 Apr 2022 at 18:44, Catalin Marinas wrote: > > On Mon, Apr 18, 2022 at 04:37:17PM +0800, Herbert Xu wrote: > > On Sun, Apr 17, 2022 at 05:30:27PM +0100, Catalin Marinas wrote: > > > Do you mean as per Ard's proposal here: > > > > > > https://lore.kernel.org/r/CAMj1kXH0x5Va7Wgs+mU1ONDwwsazOBuN4z4ihVzO2uG-n41Kbg@mail.gmail.com > > > > > > struct crypto_request { > > > union { > > > struct { > > > ... fields ... > > > }; > > > u8 __padding[ARCH_DMA_MINALIGN]; > > > }; > > > void __ctx[] __aligned(CRYPTO_MINALIGN); > > > }; > > > > > > If CRYPTO_MINALIGN is lowered to, say, 8 (to be the same as lowest > > > ARCH_KMALLOC_MINALIGN), the __alignof__(req->__ctx) would be 8. > > > Functions like crypto_tfm_ctx_alignment() will return 8 when what you > > > need is 128. We can change those functions to return ARCH_DMA_MINALIGN > > > instead or always bump cra_alignmask to ARCH_DMA_MINALIGN-1. > > > > OK, at this point I think we need to let the code do the talking :) > > > > I've seen Ard's patches already and I think I understand what your > > needs are. So let me whip up some code to show you guys what I > > think needs to be done. > > BTW before you have a go at this, there's also Linus' idea that does not > change the crypto code (at least not functionally). Of course, you and > Ard can still try to figure out how to reduce the padding but if we go > with Linus' idea of a new GFP_NODMA flag, there won't be any changes to > the crypto code as long as it doesn't pass such flag. So, the options: > > 1. Change ARCH_KMALLOC_MINALIGN to 8 (or ARCH_SLAB_MINALIGN if higher) > while keeping ARCH_DMA_MINALIGN to 128. By default kmalloc() will > honour the 128-byte alignment, unless GDP_NODMA is passed. This still > requires changing CRYPTO_MINALIGN to ARCH_DMA_MINALIGN but there is > no functional change, kmalloc() without the new flag will return > CRYPTO_MINALIGN-aligned pointers. > > 2. Leave ARCH_KMALLOC_MINALIGN as ARCH_DMA_MINALIGN (128) and introduce > a new GFP_PACKED (I think it fits better than 'NODMA') flag that > reduces the minimum kmalloc() below ARCH_KMALLOC_MINALIGN (and > probably at least ARCH_SLAB_MINALIGN). It's equivalent to (1) but > does not touch the crypto code at all. > > (1) and (2) are the same, just minor naming difference. Happy to go with > any of them. They still have the downside that we need to add the new > GFP_ flag to those hotspots that allocate small objects (Arnd provided > an idea on how to find them with ftrace) but at least we know it won't > inadvertently break anything. > I'm not sure GFP_NODMA adds much here. 
I'm not sure GFP_NODMA adds much here.

The way I see it, the issue in the crypto code is that we are relying on
an ARCH_KMALLOC_MINALIGN-aligned zero-length __ctx[] array for three
different things:
- ensuring/informing the compiler that top-level request/TFM structures
  are aligned to ARCH_KMALLOC_MINALIGN,
- adding padding to ensure that driver context structures that are
  embedded in those top-level request/TFM structures are sufficiently
  aligned so that any member C types appear at the expected alignment
  (but those structures are not usually defined as being aligned to
  ARCH_KMALLOC_MINALIGN),
- adding padding to ensure that these driver context structures do not
  share cache lines with the preceding top-level struct.

One thing to note here is that the padding is only necessary when the
driver context size is > 0, and has nothing to do with the alignment of
the top-level struct.

Using a single aligned ctx member was a nice way to accomplish all of
these when it was introduced, but I think it might be better to get rid
of it, and move the padding logic to the static inline helpers instead.

So something like

  struct skcipher_request {
          ...
  } CRYPTO_MINALIGN_ATTR;

which states/ensures the alignment of the struct, and

  void *skcipher_request_ctx(struct skcipher_request *req)
  {
          return (void *)PTR_ALIGN(req + 1, ARCH_DMA_MINALIGN);
  }

to get at the context struct, instead of using a struct field.

Then, we could update skcipher_request_alloc() to only round up
sizeof(struct skcipher_request) to ARCH_DMA_MINALIGN if the reqsize is
> 0 to begin with, and if it is, to also round reqsize up to
ARCH_DMA_MINALIGN when accessed (rough sketch below). That way, we spell
out the DMA padding requirements without relying on aligned struct
members.

If we do it this way, we have a clear distinction between expectations
about what kmalloc returns in terms of alignment, and adding padding to
influence the placement of the context struct. It also makes it easier
to either apply the changes I proposed in the series I sent out a couple
of weeks ago, or get rid of DMA alignment for request/TFM structures
altogether, if we manage to identify and fix the drivers that are
relying on this.

In any case, it decouples these two things in a way that allows Catalin
to make his kmalloc changes without having to redefine CRYPTO_MINALIGN
to ARCH_DMA_MINALIGN.
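For the skcipher_request_alloc() side, I mean something along these lines
(only a sketch to show the shape of it, not a finished patch; error paths
and the other request/TFM types are ignored):

  static inline struct skcipher_request *skcipher_request_alloc(
                  struct crypto_skcipher *tfm, gfp_t gfp)
  {
          unsigned int reqsize = crypto_skcipher_reqsize(tfm);
          size_t size = sizeof(struct skcipher_request);
          struct skcipher_request *req;

          /*
           * Only add DMA padding if there is a driver context to begin
           * with: round the base struct up to ARCH_DMA_MINALIGN so the
           * context gets its own cache lines, and round the context
           * size up as well.
           */
          if (reqsize)
                  size = ALIGN(size, ARCH_DMA_MINALIGN) +
                         ALIGN(reqsize, ARCH_DMA_MINALIGN);

          req = kmalloc(size, gfp);
          if (req)
                  skcipher_request_set_tfm(req, tfm);

          return req;
  }

--
Ard.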