Date: Sun, 13 Nov 2022 11:58:57 +0200
From: Mike Rapoport
To: Song Liu
Cc: bpf@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
	x86@kernel.org, peterz@infradead.org, hch@lst.de,
	rick.p.edgecombe@intel.com, aaron.lu@intel.com, mcgrof@kernel.org
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
References: <20221107223921.3451913-1-song@kernel.org>

On Tue, Nov 08, 2022 at 10:41:53AM -0800, Song Liu wrote:
> On Tue, Nov 8, 2022 at 3:27 AM Mike Rapoport wrote:
> >
> > Hi Song,
> >
> > On Mon, Nov 07, 2022 at 02:39:16PM -0800, Song Liu wrote:
> > > This patchset tries to address the following issues:
> > >
> > > 1. Direct map fragmentation
> > >
> > > On x86, STRICT_*_RWX requires the direct map of any RO+X memory to
> > > also be RO+X. These set_memory_* calls cause 1GB page table entries
> > > to be split into 2MB and 4kB ones. This fragmentation of the direct
> > > map results in bigger and slower page tables, and in pressure on both
> > > the instruction and data TLBs.
> > >
> > > Our previous work on bpf_prog_pack tries to address this issue from
> > > the BPF program side. Based on the experiments by Aaron Lu [4],
> > > bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> > > programs.
> >
> > Usage of the set_memory_* APIs on memory allocated from the
> > vmalloc/modules virtual range does not change the direct map; it only
> > updates the permissions in the vmalloc range. The direct map splits
> > occur in vm_remove_mappings() when the memory is *freed*.
> >
> > That said, both bpf_prog_pack and these patches do reduce the
> > fragmentation, but this happens because the memory is freed to the
> > system in 2M chunks and there are no splits of 2M pages. Besides, since
> > the same 2M page is used for many BPF programs, there should be far
> > fewer vfree() calls.
> >
> > > 2. iTLB pressure from BPF programs
> > >
> > > Dynamic kernel text such as modules and BPF programs (even with the
> > > current bpf_prog_pack) uses 4kB pages on x86. When the total size of
> > > modules and BPF programs is big, we can see a visible performance
> > > drop caused by a high iTLB miss rate.
> >
> > Like Luis mentioned several times already, it would be nice to see
> > numbers.
> >
> > > 3. TLB shootdown for short-lived BPF programs
> > >
> > > Before bpf_prog_pack, loading and unloading BPF programs required a
> > > global TLB shootdown. This patchset (and bpf_prog_pack) replaces it
> > > with a local TLB flush.
> > >
> > > 4. Reduce memory usage by BPF programs (in some cases)
> > >
> > > Most BPF programs and various trampolines are small, and they often
> > > occupy a whole page. On a random server in our fleet, 50% of the
> > > loaded BPF programs are less than 500 bytes in size, and 75% of them
> > > are less than 2kB. Allowing these BPF programs to share 2MB pages
> > > would yield some memory savings for systems with many BPF programs.
> > > For systems with only a small number of BPF programs, this patch may
> > > waste a little memory by allocating one 2MB page but using only part
> > > of it.
> >
> > I'm not convinced there are memory savings here. Unless you have
> > hundreds of BPF programs, most of the 2M page will be wasted, won't it?
> > So for systems with moderate use of BPF, most of the 2M page will be
> > unused, right?
>
> There will be some memory waste in such cases. But it will get better with:
> 1) With 4/5 and 5/5, BPF programs will share this 2MB page with the
>    kernel .text section (_stext to _etext);
> 2) modules, ftrace and kprobes will also share this 2MB page;

Unless I'm missing something, what will be shared is the virtual space; the
actual physical pages will still be allocated the same way as for any
vmalloc() allocation.

> 3) There are bigger BPF programs in many use cases.

With the statistics you provided above, one will need hundreds if not
thousands of BPF programs to fill a 2M page.

I didn't do the math, but it seems that to see memory savings there should
be several hundred BPF programs.

> Thanks,
> Song

-- 
Sincerely yours,
Mike.
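
For context, a minimal sketch of the allocation pattern discussed above,
using existing kernel APIs (vmalloc(), set_vm_flush_reset_perms(),
set_memory_ro()/set_memory_x(), vfree()). This is not the patchset's code;
the helper names alloc_rox()/free_rox() are made up for illustration and
error handling is omitted. It shows why the set_memory_* calls themselves
only touch the vmalloc alias, while the direct-map reset (and hence the
large-page splits) happens at free time:

/*
 * Sketch only, not the patchset's implementation: allocate from the
 * vmalloc range and make the mapping read-only + executable.  The
 * set_memory_* calls change only the permissions of the vmalloc alias;
 * the direct map is reset when the region is freed.
 */
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include <linux/mm.h>

static void *alloc_rox(size_t size)
{
	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	void *p = vmalloc(PAGE_ALIGN(size));

	if (!p)
		return NULL;

	/* Ask vfree() to reset permissions, including the direct map. */
	set_vm_flush_reset_perms(p);

	/* ... copy generated code into p here ... */

	set_memory_ro((unsigned long)p, npages);
	set_memory_x((unsigned long)p, npages);

	return p;
}

static void free_rox(void *p)
{
	/*
	 * This is where the direct map is touched: freeing restores the
	 * default permissions on both the vmalloc alias and the direct
	 * map, which is when splits of large direct-map pages occur.
	 */
	vfree(p);
}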
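
And a back-of-the-envelope check of the "hundreds if not thousands"
estimate, based on the size distribution quoted above (illustrative
arithmetic, not measurements from the thread):

	2 MiB = 2,097,152 bytes
	2,097,152 / 2,048 bytes = 1024 programs of ~2kB each
	2,097,152 /   512 bytes = 4096 programs of ~512 bytes each

So completely filling a single shared 2MB page takes on the order of a
thousand or more typical BPF programs, and meaningful savings over
per-program 4kB pages only show up once there are at least several hundred
of them.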