Date: Sun, 13 Nov 2022 12:42:50 +0200
From: Mike Rapoport
To: Song Liu
Cc: "Edgecombe, Rick P", "peterz@infradead.org", "bpf@vger.kernel.org",
	"linux-mm@kvack.org", "hch@lst.de", "x86@kernel.org",
	"akpm@linux-foundation.org", "mcgrof@kernel.org", "Lu, Aaron"
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
References: <20221107223921.3451913-1-song@kernel.org>
 <9e59a4e8b6f071cf380b9843cdf1e9160f798255.camel@intel.com>
On Wed, Nov 09, 2022 at 09:43:50AM -0800, Song Liu wrote:
> On Wed, Nov 9, 2022 at 3:18 AM Mike Rapoport wrote:
> >
> [...]
> > > >
> > > > The proposed execmem_alloc() looks to me very much tailored for x86 to
> > > > be used as a replacement for module_alloc(). Some architectures have a
> > > > module_alloc() that is quite different from the default or x86 version,
> > > > so I'd expect at least some explanation of how modules etc. can use the
> > > > execmem_ APIs without breaking !x86 architectures.
> > >
> > > I think this is fair, but I think we should ask ourselves - how much
> > > should we do in one step?
> >
> > I think that at least we need evidence that execmem_alloc() etc. can
> > actually be used by modules/ftrace/kprobes. Luis said that RFC v2 didn't
> > work for him at all, so having a core MM API for code allocation that
> > only works with BPF on x86 seems not right to me.
>
> While using execmem_alloc() et al. in module support is difficult, folks
> are making progress with it. For example, the prototype would have been
> more difficult before CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
> (introduced by Christophe).
>
> We also have other users that we can onboard soon: BPF trampoline on
> x86_64, BPF JIT and trampoline on arm64, and maybe also on powerpc and
> s390.

Caching of large pages won't make any difference on arm64 and powerpc
because they do not support splitting of the direct map, so the only
potential benefit there is centralized handling of text loading, and I'm
not convinced execmem_alloc() will get us there.

>
> > With execmem_alloc() as the first step I'm failing to see the big
> > picture. If we want to use it for modules, how will we allocate RO data?
> > With a similar rodata_alloc() that uses yet another tree in vmalloc?
> > How can the caching of large pages in vmalloc be made useful for use
> > cases like secretmem and PKS?
>
> If RO data causes problems with direct map fragmentation, we can use
> similar logic. I think we will need another tree in vmalloc for this case.
> Since the logic will be mostly identical, I personally don't think adding
> another tree is a big overhead.

Actually, it would be interesting to quantify the memory savings/waste that
result from using execmem_alloc().

> Thanks,
> Song

-- 
Sincerely yours,
Mike.
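
For concreteness, here is a minimal sketch of what a JIT-style consumer of
the proposed API could look like. The execmem_alloc()/execmem_fill()/
execmem_free() signatures and the <linux/vmalloc.h> location are assumed
from the RFC posting, and bpf_jit_commit() is a made-up helper name; treat
this as an illustration of the intended usage pattern, not as what the
final series will necessarily ship.

    /*
     * Illustration only -- the signatures below are assumed from the RFC:
     *   void *execmem_alloc(unsigned long size, unsigned long align);
     *   void *execmem_fill(void *dst, void *src, size_t len);
     *   void execmem_free(void *addr);
     */
    #include <linux/vmalloc.h>

    void *bpf_jit_alloc_exec(unsigned long size)
    {
    	/* ROX memory served from a shared, large-page-backed region;
    	 * the alignment choice here is arbitrary. */
    	return execmem_alloc(size, PAGE_SIZE);
    }

    void bpf_jit_free_exec(void *addr)
    {
    	execmem_free(addr);
    }

    void bpf_jit_commit(void *rox_image, void *tmp_image, size_t len)
    {
    	/*
    	 * The allocation is already read-only+executable, so the JITed
    	 * instructions have to be copied in with execmem_fill()
    	 * (text_poke()-based on x86) rather than a plain memcpy().
    	 */
    	execmem_fill(rox_image, tmp_image, len);
    }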